phase shift
Phase shift of all-pass bi-quadratic filter
https://dsp.stackexchange.com/questions/53777/phase-shift-of-all-pass-bi-quadratic-filter
<p>Does anybody know how to derive this phase function for that all-pass filter structure? Many papers reprint it, along with the corresponding chart.</p> <p><a href="https://i.sstatic.net/xo8bQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xo8bQ.png" alt="enter image description here"></a></p> <p>But nobody mentions that at normalized frequency 0.25, tan(pi/2) goes to +/- infinity, and atan jumps from +pi/2 to -pi/2:</p> <p><a href="https://i.sstatic.net/CObfL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CObfL.png" alt="enter image description here"></a></p> <p>Is the phase shift periodic here with period pi, so that we can move the right curve down to glue the two branches together at 0.25? What would that mean in real filter operation?</p>
<p>First of all, that formula for the phase has a sign error, at least if <span class="math-container">$\alpha$</span> is defined as in the diagram of the filter structure. Second, your phase plot shows a jump from <span class="math-container">$-\pi$</span> to <span class="math-container">$\pi$</span>, which is no jump at all because a phase of <span class="math-container">$\pi$</span> is the same as a phase of <span class="math-container">$-\pi$</span>. The jump occurs only because the arctangent function returns the principal value of the phase in the interval <span class="math-container">$[-\pi,\pi]$</span>. You can get rid of the jump by using the phase unwrapping function <code>unwrap</code> in Matlab/Octave. But note that this is just a cosmetic thing, there is no phase jump in the physical sense of the word.</p> <p>Now for the derivation of the formula for the phase. From the given filter structure we get the following equation for the <span class="math-container">$\mathcal{Z}$</span>-transforms of the input and output signals:</p> <p><span class="math-container">$$Y(z)=X(z)z^{-2}+\alpha\left(X(z)-Y(z)z^{-2}\right)\tag{1}$$</span></p> <p>From this the transfer function can be computed as</p> <p><span class="math-container">$$H(z)=\frac{Y(z)}{X(z)}=\frac{\alpha+z^{-2}}{1+\alpha z^{-2}}=\frac{\alpha z+z^{-1}}{z+\alpha z^{-1}}=\frac{B(z)}{B\left(\frac{1}{z}\right)}\tag{2}$$</span></p> <p>with <span class="math-container">$B(z)=\alpha z+z^{-1}$</span>. <span class="math-container">$H(z)$</span> as given by <span class="math-container">$(2)$</span> is indeed an allpass transfer function with <span class="math-container">$|H(z)|=1$</span> for <span class="math-container">$|z|=1$</span>. 
Note that the system described by <span class="math-container">$(2)$</span> is only stable for <span class="math-container">$-1&lt;\alpha &lt;1$</span>.</p> <p>On the unit circle <span class="math-container">$z=e^{j\omega}$</span>, <span class="math-container">$H(z)$</span> can be written as</p> <p><span class="math-container">$$H(e^{j\omega})=\frac{B(e^{j\omega})}{B^*(e^{j\omega})}\tag{3}$$</span></p> <p>where <span class="math-container">$^*$</span> denotes complex conjugation. Consequently, the phase of <span class="math-container">$H(e^{j\omega})$</span> is given by</p> <p><span class="math-container">$$\phi_H(\omega)=2\phi_B(\omega)\tag{4}$$</span></p> <p>where <span class="math-container">$\phi_B(\omega)$</span> is the phase of the numerator <span class="math-container">$B(e^{j\omega})$</span>:</p> <p><span class="math-container">$$\begin{align}\phi_B(\omega)&amp;=\arg\big\{\alpha e^{j\omega}+e^{-j\omega}\big\}\\&amp;=\arg\big\{(1+\alpha)\cos(\omega)-j(1-\alpha)\sin(\omega)\big\}\\&amp;=-\arctan\left(\frac{1-\alpha}{1+\alpha}\tan(\omega)\right)\end{align}\tag{5}$$</span></p> <p>The phase of <span class="math-container">$H(e^{j\omega})$</span> is thus given by</p> <p><span class="math-container">$$\phi_H(\omega)=-2\arctan\left(\frac{1-\alpha}{1+\alpha}\tan(\omega)\right)\tag{6}$$</span></p> <p>which differs from the given formula in the sign of <span class="math-container">$\alpha$</span>.</p>
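<p>The derivation in Eqs. (2)–(6) can be sanity-checked numerically, e.g. with SciPy. Below is a quick sketch (α = 0.5 is an arbitrary stable choice); the phases are compared as unit phasors, i.e. modulo 2π, to sidestep exactly the arctangent branch jumps discussed above:</p>

```python
import numpy as np
from scipy.signal import freqz

alpha = 0.5  # any value in (-1, 1) for stability

# H(z) = (alpha + z^-2) / (1 + alpha z^-2), Eq. (2)
w, H = freqz([alpha, 0, 1], [1, 0, alpha], worN=512)

# Eq. (6): phi_H(w) = -2 * arctan(((1 - alpha) / (1 + alpha)) * tan(w))
phi = -2 * np.arctan((1 - alpha) / (1 + alpha) * np.tan(w))

# allpass: |H| = 1 everywhere on the unit circle
assert np.allclose(np.abs(H), 1)
# the phases agree modulo 2*pi (compare as unit phasors)
assert np.allclose(np.exp(1j * phi), H / np.abs(H))
```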
1,134
phase shift
Time shift and Phase Examples
https://dsp.stackexchange.com/questions/82329/time-shift-and-phase-examples
<p>Given are three cosines according to the following formula <span class="math-container">$x_i(t) = \cos(2\pi f_i t)$</span> with <span class="math-container">$f_1 = 1\,\text{Hz}$</span>, <span class="math-container">$f_2 = 2\,\text{Hz}$</span> and <span class="math-container">$f_3 = 3\,\text{Hz}$</span>.</p> <p>The cosines are delayed by <span class="math-container">$\tau=0.1\,\text{s}$</span> to yield <span class="math-container">$y_i(t) = \cos(2\pi f_i(t-0.1\,\text{s}))$</span>. This corresponds to a phase shift, and the delayed cosines can also be written as <span class="math-container">$y_i(t) = \cos(2\pi f_it + \phi_i)$</span>.</p> <p>Calculate the phase shifts <span class="math-container">$\phi_i$</span> for each cosine and verify that this corresponds to the Time Shift theorem of the Fourier Transform.</p> <p>My work:</p> <p>I found online a formula that is supposed to calculate the <span class="math-container">$\phi_i$</span>'s. It is written as <span class="math-container">$\phi_i = 2\pi f_i \tau$</span>, and I calculated that <span class="math-container">$\phi_1 =\frac{2\pi}{10}$</span>, <span class="math-container">$\phi_2 =\frac{4\pi}{10}$</span> and <span class="math-container">$\phi_3 =\frac{6\pi}{10}$</span>.</p> <p>The Time Shift Theorem says: if the original function g(t) is shifted in time by a constant amount, it should have the same magnitude of the spectrum, G(f). That is, a time delay doesn't cause the frequency content of G(f) to change at all. This should make sense. Since the complex exponential always has a magnitude of 1, we see the time delay alters the phase of G(f) but not its magnitude.</p> <p>So the phase for these examples has changed but not the magnitude.</p> <p>First of all, are the calculations and the formula correct? Do my arguments make sense for the Time Shift theorem in regard to these three examples?</p> <p>Could someone please explain what the difference is between the time-delayed signal and the phase-shifted signal?</p> <p>Any help is much appreciated! Thanks!</p>
<p>Any signal <span class="math-container">$x(t)$</span> can be time-shifted: simply calculate <span class="math-container">$x(t + \Delta t)$</span>.</p> <p>A sinusoid can also be phase-shifted. Consider the cosine signal with phase <span class="math-container">$\phi$</span>: <span class="math-container">$$x(t) = \cos(2\pi f_0 t + \phi).$$</span> Now, time shift it: <span class="math-container">$$x(t + \Delta t) = \cos(2\pi f_0 (t + \Delta t) + \phi) = \cos(2\pi f_0 t + 2\pi f_0 \Delta t + \phi).$$</span> The phase of this shifted cosine is <span class="math-container">$2\pi f_0 \Delta t + \phi$</span>. The takeaway here is: for periodic sinusoids, a time shift has a direct and straightforward relationship with a phase shift, and vice-versa. This is also true for complex sinusoids <span class="math-container">$x(t) = \exp(j(2\pi f_0 t + \phi))$</span>.</p> <p>The definition of phase for non-sinusoidal signals is not as simple as that of sinusoids. For example, many signals can be written in the form <span class="math-container">$A(t)e^{j\phi(t)}$</span> where <span class="math-container">$A(t) &gt; 0$</span> and their phase is defined as <span class="math-container">$\phi(t)$</span>. Here, a time shift of <span class="math-container">$\Delta t$</span> results in a new phase <span class="math-container">$\phi(t + \Delta t)$</span>. 
See a full discussion <a href="https://dsp.stackexchange.com/q/75064/11256">here</a> and also <a href="https://dsp.stackexchange.com/q/31394/11256">here</a>.</p> <p>As an example of a slightly more complicated relationship between time shift and phase shift, consider the signal <span class="math-container">$$x(t) = \cos(2\pi f_0 t + \phi_0) + \cos(2\pi f_1 t + \phi_1).$$</span> The time-shifted signal is <span class="math-container">$$x(t + \Delta t) = \cos(2\pi f_0 t + 2\pi f_0 \Delta t + \phi_0) + \cos(2\pi f_1 t + 2\pi f_1 \Delta t + \phi_1).$$</span> You can see that the time shift resulted in a different phase shift for each of the sinusoidal components of <span class="math-container">$x(t)$</span>. Fourier tells us that all signals are made up of sums of sinusoids, and each one of them has a phase, so this approach can be generalized to all signals, even non-periodic ones, whose Fourier transform is a continuous sum of sinusoids.</p>
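<p>The delay/phase equivalence is easy to verify numerically with the values from the question (τ = 0.1 s, f = 1, 2, 3 Hz). A minimal NumPy sketch; note the sign convention: a delay x(t − τ) corresponds to adding φ = −2πfτ to the phase:</p>

```python
import numpy as np

tau = 0.1                       # delay in seconds
t = np.linspace(0, 2, 2001)     # dense time grid

for f in (1.0, 2.0, 3.0):       # f_1, f_2, f_3 from the question
    delayed = np.cos(2 * np.pi * f * (t - tau))
    phi = -2 * np.pi * f * tau  # phase shift equivalent to the delay
    assert np.allclose(delayed, np.cos(2 * np.pi * f * t + phi))
```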
1,135
phase shift
resilient codes to phase shift
https://dsp.stackexchange.com/questions/36228/resilient-codes-to-phase-shift
<p>I need to define an encoding for beacon IDs. The codes are going to be periodic, but on the receiver side I can have phase shifts, so I'm wondering about the best way to decode. 8-bit code groups that represent the same ID:</p> <pre><code>Group1: 00000001, 00000010, 00000100 ..... 10000000
Group2: 00000011, 00000110, ......11000000, 10000001
Group3: 00000101, 00001010, .............., 01000001
...
</code></pre> <p>How can I verify that a code belongs to a group? Maybe the real part of an 8-bit DFT? Is there a group/code generator?</p>
<blockquote> <p>How can I verify that a code belongs to a group?</p> </blockquote> <p>Assuming these are really just sub-words that are common to all elements in a group, simply trying out all 8 possible (cyclic) bitshifts of the received sequence to see whether you can match any group representative (e.g. the first column in your <code>GroupN</code> table) would very likely be the most efficient approach – because 8 bit is really not that much, and bit shifts are very efficient on CPUs or on FPGAs (whatever you're using), and even more importantly (since "beacons" surely doesn't sound like high data throughput), the code is easy to understand, and premature optimization is the root of all evil.</p>
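<p>The brute-force rotation check described above is only a few lines of code. A sketch in Python (the helper names and the group-representative table are made up for illustration):</p>

```python
def rotations(code, nbits=8):
    """All cyclic bit-rotations of an nbits-wide code word."""
    mask = (1 << nbits) - 1
    return {((code << k) | (code >> (nbits - k))) & mask for k in range(nbits)}

def find_group(received, representatives):
    """Return the group whose representative is a cyclic rotation of `received`."""
    for name, rep in representatives.items():
        if rep in rotations(received):
            return name
    return None

groups = {"Group1": 0b00000001, "Group2": 0b00000011, "Group3": 0b00000101}
assert find_group(0b10000001, groups) == "Group2"   # a rotation of 00000011
assert find_group(0b01000001, groups) == "Group3"   # a rotation of 00000101
```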
1,136
phase shift
Accounting for phase shift in time-dependent signals
https://dsp.stackexchange.com/questions/82456/accounting-for-phase-shift-in-time-dependent-signals
<p><strong>Background:</strong></p> <p>I am using a vibration sensor and arduino to record signals and log them with timestamps, where the signals need to be time accurate so that I can glean information about which frequencies are present.</p> <p>To eliminate aliasing I am connecting a low-pass Butterworth filter between the sensor and arduino.</p> <p><strong>Question:</strong></p> <p>Will the phase shift introduced negatively affect the time accuracy of the signal?</p> <p>If so, how can this be accounted for? Can one connect an all-pass filter with a given phase response in series?</p>
<p>All filters will introduce delay; the sharper the roll-off, the more delay is required, since the two are directly related. It is the variation in group delay specifically that can cause distortion in the signal when we are concerned about the time alignment of signals that occupy different frequencies. Group delay is the negative derivative of phase with respect to frequency, so a changing phase over frequency is associated with time delay. When the phase slope is linear, the delay is constant at all frequencies, which causes no distortion since all frequency components stay aligned in time. When the phase slope varies over frequency, we get group delay variation and what we refer to as &quot;group delay distortion&quot;.</p> <p>When concerned about group delay in an analog filter, a Bessel filter is a better choice as it provides constant group delay across the passband, at the expense of significantly less stop-band roll-off and out-of-band attenuation. Alternatively, the signal can be equalized for the group delay variation, and doing this in the digital domain as an all-pass filter is a simple and attractive solution, as we can determine and equalize for all distortion effects that may be introduced elsewhere in the analog system as well (see <a href="https://dsp.stackexchange.com/questions/31318/compensating-loudspeaker-frequency-response-in-an-audio-signal">this post</a> for details on determining the filter coefficients; for a fixed equalizer solution, the computation can be done off-line such that the implementation is just an FIR filter). Alternatively, an all-pass filter can be implemented as an analog filter as well.</p> <p>I suggest first reviewing the actual expected group delay variation for the specific filter used (either compute or test) and determining whether the specific time variation involved is sufficient to even be a concern.</p>
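<p>Computing the expected group-delay variation is straightforward, e.g. with SciPy. A sketch (the Butterworth order and cutoff are assumptions, not values from the question):</p>

```python
import numpy as np
from scipy.signal import butter, group_delay

b, a = butter(4, 0.2)                 # 4th-order lowpass, cutoff at 0.2*Nyquist
w, gd = group_delay((b, a), w=2048)   # w in rad/sample, gd in samples

# group delay rises toward the cutoff: this spread is the potential
# "group delay distortion" over the band of interest
passband = w <= 0.2 * np.pi
variation = gd[passband].max() - gd[passband].min()
print(f"group delay varies by {variation:.2f} samples across the passband")
```

If the variation is small compared to the time resolution needed, no equalization is required.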
1,137
phase shift
Does resampling cause phase shifts?
https://dsp.stackexchange.com/questions/66961/does-resampling-cause-phase-shifts
<p>I have a signal sampled at 40 MHz and I would like to resample it to 37 MHz. The signal is <em>not periodic</em>. I did the resampling with the Matlab resample function and it doesn't cause a phase shift (as far as I understood): Matlab applies an anti-aliasing FIR filter and compensates for the delay introduced by the process.</p> <p>I want to do the same process in Python. I know there is a resample function in scipy.signal, but the documentation is not clear to me. Does the resample function introduce phase shifts?</p> <p>If so, there is also a decimate function in scipy which applies a similar FIR filter. Should I use the decimate function over resample?</p>
<p>Unfortunately, although <code>scipy.signal.decimate</code> has a zero phase shift argument, the decimation factor can only be an integer, so you won't be able to downsample from 40 MHz to 37 MHz.</p> <p><code>scipy.signal.resample</code>, on the other hand, can do the resampling you want but may (and most likely will) introduce phase shifts in your signal. </p> <p>There are other options, ( e.g <a href="https://ogrisel.github.io/scikit-learn.org/sklearn-tutorial/modules/generated/sklearn.utils.resample.html" rel="nofollow noreferrer">sklearn.resample</a>), so have a look at them as well.</p>
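<p>Since 40/37 is a rational ratio, <code>scipy.signal.resample_poly</code> is also worth a look: it resamples by a rational factor up/down using a linear-phase FIR anti-aliasing filter whose delay is compensated, which is close to what MATLAB's <code>resample</code> does. A sketch:</p>

```python
import numpy as np
from scipy.signal import resample_poly

# 40 MHz -> 37 MHz is the rational ratio 37/40
x = np.random.randn(4000)           # stand-in for the 40 MHz signal
y = resample_poly(x, up=37, down=40)

# output length is ceil(len(x) * up / down)
assert len(y) == 3700
```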
1,138
phase shift
Nyquist Frequency Phase Shift
https://dsp.stackexchange.com/questions/14125/nyquist-frequency-phase-shift
<p>The figure below shows in dashed lines sinusoidal signals of the same frequency at three different phase shifts. The signals are then sampled such that the sinusoidal frequency is exactly a half of the sampling frequency, i.e. the frequency of all the sinusoids is the Nyquist frequency. The samples taken from this signal are represented by the circles.</p> <p><img src="https://i.sstatic.net/9Zm4e.png" alt="Nyquist frequency signal with samples"></p> <p>From the figure, it seems that the amplitude of the digital sinusoidal signals is dependent upon the sampling rate and instant of sampling. In fact if the sampling times coincide with the zero-crossings of the sinusoid, then no signal will be detected at all.</p> <p>I had initially thought that sampling a bandlimited analog signal at the appropriate sampling frequency would enable perfect reconstruction, but this counter-example has left me stumped. It seems that this sinusoid will generally not be reconstructed properly if digitized and then reconstructed at this rate. Have I gone wrong in my understanding, and if so, can someone please point me in the right direction?</p> <p>Cheers!</p>
<p>The sample rate needs to be GREATER than (NOT just equal to) twice the highest non-zero frequency content of the signal being sampled. Just a little bit greater might work, but the closer the sample rate is to twice the signal frequency, the longer in time you may need to sample to raise the signal above the noise and complex conjugate image in a DFT/FFT result.</p>
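<p>The ambiguity at exactly half the sample rate is easy to see numerically: sampling sin(2πf₀t + φ) at f<sub>s</sub> = 2f₀ gives x[n] = (−1)<sup>n</sup>·sin(φ), so amplitude and phase collapse into a single scale factor. A quick NumPy sketch:</p>

```python
import numpy as np

n = np.arange(16)                      # sample index, with fs = 2*f0
for phi in (0.0, np.pi / 4, np.pi / 2):
    x = np.sin(np.pi * n + phi)        # 2*pi*f0*n/fs = pi*n at Nyquist
    # every sampled sinusoid collapses to (-1)^n * sin(phi):
    # phi = 0 gives all zeros, and different (amplitude, phase) pairs
    # can produce identical sample sequences
    assert np.allclose(x, (-1.0) ** n * np.sin(phi))
```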
1,139
phase shift
Zero-phase-shift filters must be symmetric about the origin
https://dsp.stackexchange.com/questions/68063/zero-phase-shift-filters-must-be-symmetric-about-the-origin
<p>In reading Rafael Gonzalez Digital Image Processing, second edition, section 4.10.2, the author writes:</p> <blockquote> <p>Notch filters are the most useful of the selective filters. A notch filter rejects (or passes) frequencies in a predefined neighborhood about the center of the frequency rectangle. Zero-phase-shift filters must be symmetric about the origin, so a notch with center at <span class="math-container">$(u_o, v_o)$</span> must have a notch at location <span class="math-container">$-(u_o, v_o)$</span>. </p> </blockquote> <p>Two questions:</p> <p>1) Why does a zero phase shift filter have to be symmetric about the origin (with real coefficients I am assuming)?</p> <p>2) What does the author mean here when he says a "notch"</p> <p>Any insights appreciated. </p>
<p>Consider the Inverse Fourier Transform as the impulse response of the filter. In the case of an ideal brick-wall filter with zero phase, the impulse response would be a Sinc function centered about zero in time (given by the Inverse Fourier Transform).</p> <p><a href="https://i.sstatic.net/VbI44.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VbI44.png" alt="Ideal Low pass filter"></a></p> <p>This is non-causal (the time domain response is nonzero for <span class="math-container">$t&lt;0$</span>), but in order to be zero-phase as shown, the time domain response must be complex conjugate symmetric. It need not be real; in order for the frequency response to be real, the real components of the time response must be even and the imaginary components must be odd (complex conjugate symmetric).</p> <p>But to visualize the above plot further and what &quot;zero-phase&quot; means: a delay in time is a linear phase in frequency, so if we shift the impulse response to the right (as done prior to truncation for a causal solution), the magnitude response is unchanged but the phase is no longer &quot;zero-phase&quot;. If we then truncated the impulse response by the dashed window shown in the plot below, we would see that the magnitude response of the filter is also distorted, but that is beyond the question being asked here.</p> <p><a href="https://i.sstatic.net/VjxHu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VjxHu.png" alt="non zero-phase"></a></p> <p>I summarize the properties of odd and even functions as well as other universal Fourier Transform properties in the figure below. The time and frequency domains are interchangeable for these properties; for example, if a function is periodic in one domain, it is discrete in the other. 
If a function has only real components in one domain (zero-phase), then all real components in the other domain must be an even function about 0, and all imaginary components must be odd about zero (inverse sign). This results in complex conjugate symmetry. </p> <p>The relationship of the Hilbert Transform mentioned for the causal and anticausal functions applies to minimum phase systems.</p> <p><a href="https://i.sstatic.net/drVtk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/drVtk.png" alt="Universal FT properties"></a></p> <p>What is interesting is how these tie together and why I choose to describe the above relationships in terms of even and odd functions rather than simply saying the other domain will be complex conjugate symmetric. This way we can see how a causal function in time (or a single sided function in frequency) is the sum of even and odd functions, and therefore MUST be complex in the other domain given that we need both real and imaginary components to create such even and odd functions. </p> <p>Regarding the OP’s second question: a notch filter usually implies a very narrow band of rejection to remove certain frequency components. For more details see this post here: <a href="https://dsp.stackexchange.com/questions/31028/transfer-function-of-second-order-notch-filter/31030#31030">Transfer function of second order notch filter</a></p>
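<p>The symmetry property is easy to demonstrate with a small DFT experiment (a sketch; the impulse-response values are arbitrary):</p>

```python
import numpy as np

N = 16
h = np.zeros(N)
# real impulse response, even (circularly symmetric about n = 0)
h[0], h[1], h[-1], h[2], h[-2] = 1.0, 0.5, 0.5, 0.25, 0.25

H = np.fft.fft(h)
assert np.allclose(H.imag, 0)          # symmetric about the origin -> zero phase

# delaying the response (e.g. to make it causal) adds linear phase:
h_delayed = np.roll(h, 2)
H_delayed = np.fft.fft(h_delayed)
assert np.allclose(np.abs(H_delayed), np.abs(H))  # magnitude unchanged
assert not np.allclose(H_delayed.imag, 0)         # but no longer zero phase
```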
1,140
phase shift
How to correct for imperfect 90 degree phase shift before doing IQ demodulation?
https://dsp.stackexchange.com/questions/9377/how-to-correct-for-inperfect-90-degree-phase-shift-before-doing-iq-demodulation
<p>I want to sample two signals, S and R, and later do IQ demodulation on these.</p> <p>I am multiplying S with R, and I also want to multiply S with a 90 degree phase-shifted version of R by shifting the samples in R so that I get a 90 degree phase shift. The problem I encounter with this is that unless I have a sampling rate that is 4*n times the sine signal frequency (where n is an integer), I won't be able to phase shift the R signal by 90 degrees. And this is my case, as the sampling rate and the signal frequency are not in such a ratio. So let's say I am able to do an 82 degree phase shift. Is it then possible to correct for this somehow over the total number of periods I choose to sample?</p> <p>Does it give any additional information at all when doing IQ demodulation when I phase shift the reference signal by shifting the samples to get a 90 degree phase shift?</p> <p>EDIT: The measuring range is 3 kHz to 300 kHz. S is a single-frequency sine wave and R a square wave.</p>
1,141
phase shift
Adaptive filter to scale and phase shift two sensors output
https://dsp.stackexchange.com/questions/49221/adaptive-filter-to-scale-and-phase-shift-two-sensors-output
<p>One way of separating downgoing and upgoing wavefields in offshore seismic processing is to add the signals from the hydrophone and the vertical component of the geophone (they are co-located). The hydrophone only registers a change in pressure, whereas the geophone, as well as registering a change in the seismic field, also reacts to the direction of the arriving wave, i.e. whether it is coming from above or below (the sensors are deployed on the sea floor). But the scales of the two are different, as they record two different physical quantities. In addition, there is a phase difference between the two. To do the summation, I first need to re-scale one to the other and phase shift it. The phase shift is slightly frequency dependent. I would like to make a filter that does the above on a period of data and then use the filter on all incoming data. The figure below shows an example of an upcoming signal (the hydrophone signal has been downscaled for the example; in reality it is of much higher amplitude). If the two were of the same scale and in phase, then summing them would double the signal's amplitude while reducing the noise coming from above. I have some solution in the time domain (assuming a constant phase shift and scaling factor) and in the frequency domain by multiplying one's spectrum with the transfer function between the two (found from spectral division). I have also used 'nlms' adaptive filtering. But some noise still remains, enough to make trouble later on. I wonder if anyone has a suggestion on how to make the filter, or some Matlab code. Thanks <a href="https://i.sstatic.net/4sHvI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4sHvI.png" alt="enter image description here"></a></p>
<p>It seems that you want to equalize one time series to the other or vice versa or some combination. The general problem is then (in shorthand) $$ h_1{\Large *}y_1 \approx h_2{\Large *}y_2 $$ where $\Large *$ denotes convolution, $h_i$ are filters, and $y_i$ are your time series. If you simply equalize one time series to the other, this is $$ y_1 \approx h_2{\Large *} y_2 $$</p> <p>In matrix form, this becomes $$ \left[\begin{array}{c} y_1[p]\\y_1[p+1]\\\vdots\\y_1[p+N-1]\end{array}\right] \approx \left[\begin{array}{cccc}y_2[2p] &amp; y_2[2p-1] &amp; \cdots &amp; y_2[0]\\ y_2[2p+1] &amp; y_2[2p] &amp; \cdots &amp; y_2[1]\\ \vdots &amp; \ddots &amp; \ddots &amp; \vdots \\ y_2[2p+N-1] &amp; y_2[2p+N-2] &amp; \cdots &amp; y_2[N-1] \end{array}\right] \left[\begin{array}{c}h_2[0]\\h_2[1]\\\vdots\\h_2[2p] \end{array}\right] $$ or more concisely $$ \underline{y}_1 \approx {\bf Y_2}\underline{h}_2 $$ The least squares solution for the equalizing filter coefficient vector is $$ \underline{h}_{2,LS} = \left({\bf Y_2^HY_2}\right)^{-1}{\bf Y_2^H}\underline{y}_1 $$</p> <p>You can alternatively use LMS or RLS to do recursive updates to find/track $\underline{h}_2$.</p> <p>One additional comment on equalizing one time series to the other: by choosing to equalize $\underline{y}_2$ to $\underline{y}_1$, we choose one spectral weighting - implicitly. If we instead equalized $\underline{y}_1$ to $\underline{y}_2$, we would have a slightly different spectral weight for the overall least squares fit. If the filters are relatively minor in magnitude variation over the signal band, the spectral weight difference is similarly minor, but if the equalizer filter response is quite different for the two cases (can just as well solve and analyze both), you will need to assess which is preferred/better and why.</p>
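<p>As a toy numerical version of the least-squares solution above (NumPy; here $\underline{y}_1$ is synthesized by filtering $\underline{y}_2$ with a known filter, so the fit can be verified exactly):</p>

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
y2 = rng.standard_normal(500)
h_true = np.array([0.5, 1.0, -0.3])          # "unknown" filter, for testing only
y1 = np.convolve(y2, h_true)[:len(y2)]       # target series

# build the convolution (Toeplitz) matrix Y2 so that Y2 @ h = conv(y2, h)[:len(y2)]
L = len(h_true)                              # equalizer length (assumed known here)
Y2 = toeplitz(y2, np.r_[y2[0], np.zeros(L - 1)])

# least-squares estimate of the equalizing filter
h_ls, *_ = np.linalg.lstsq(Y2, y1, rcond=None)
assert np.allclose(h_ls, h_true)
```

With real data the system is only approximately consistent, and LMS/RLS can be used instead of the batch solve to track a slowly changing filter.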
1,142
phase shift
Phase shifting a noisy signal
https://dsp.stackexchange.com/questions/29671/phase-shifting-a-noisy-signal
<p>I have a signal of the form $s(t)=A(t) \sum \cos(\omega_i(t)t +\phi_0) + n(t)$, where $n$ is gaussian noise.</p> <p>I can only read the <em>signal+noise</em> and thus can not separate them.</p> <p>I want to phase shift the signal to $A(t) \sum\cos(\omega_i(t)t)+n'(t)$ and I am at a loss on how to do this. During lectures / courses I've always done phase shifts simply by multiplying with $e^{i\phi_0}$.</p> <p>My signal is of the form $s(t)=A(t) \sum \cos(\omega_i(t)t +\phi_0) + n(t)$, as opposed to $s(t)=A(t) \sum e^{\omega_i(t)t +\phi_0} + n(t)$, so I can not simply multiply with $e^{i\phi_0}$</p> <p>Is there any way to do this? I'm asking because I am interested in cross-correlating two signals.</p>
<p>With a real cosine, a phase shift is equivalent to a time delay. If you delay the signal by the time corresponding to $\phi_0$, then you will get the result you want.</p> <p>The number of samples that you will need to delay by will likely not be an integer. You can delay by a non-integer number of samples via a <a href="https://dsp.stackexchange.com/questions/9349/how-to-pick-coefficients-for-fractional-delay-filters">fractional delay filter</a>.</p>
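<p>A common fractional-delay filter is a windowed sinc. A sketch in Python (the tap count and window are arbitrary choices, and the accuracy shown holds for signals well below Nyquist):</p>

```python
import numpy as np

def fractional_delay(x, delay, ntaps=81):
    """Delay x by a (possibly non-integer) number of samples
    using a windowed-sinc fractional-delay FIR filter."""
    n = np.arange(ntaps) - (ntaps - 1) // 2
    h = np.sinc(n - delay) * np.hamming(ntaps)
    h /= h.sum()                               # unity gain at DC
    return np.convolve(x, h, mode="same")      # 'same' absorbs the integer bulk delay

# shift a low-frequency cosine by 0.3 samples and compare with the ideal result
t = np.arange(400)
x = np.cos(2 * np.pi * 0.05 * t)
y = fractional_delay(x, 0.3)
ideal = np.cos(2 * np.pi * 0.05 * (t - 0.3))
# compare away from the edges, where the FIR transient dies out
assert np.allclose(y[100:300], ideal[100:300], atol=1e-2)
```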
1,143
phase shift
Does FFT filtering preserve the phase shift of a PSK signal?
https://dsp.stackexchange.com/questions/94842/does-fft-filtering-preserve-the-phase-shift-of-a-psk-signal
<p>I'm working with the wonderful Direwolf sound card TNC software, and am attempting to use the OpenCL clFFT GPGPU acceleration library to do an FFT on the signal, to replace the bandpass and lowpass filters with a much more efficient GPGPU-accelerated FFT filter.</p> <p>In PSK (phase-shift keying) the signal waveform is shifted to encode information. This phase shift does not alter the frequency of the signal; instead, the identical frequency just has its waveform shifted.</p> <p>Under an FFT the frequency is the same in the frequency domain, and if we cancel out all other frequencies besides the desired frequency by multiplying them by zero, and then do the inverse FFT, we should get a single, very powerful waveform.</p> <p>So does this FFT and inverse-FFT filter process destroy the phase-shift information in the signal? If so, what are some things we could do to preserve the phase-shift keying, such as using a specific sized filter or some other method?</p>
<p>&quot;FFT filtering&quot; is just filtering (if not buggy); it's the actual filter that defines the signal properties of that. In other words, it's the filter coefficients, and not the algorithmic shortcuts you take.</p> <p>In communications we typically use linear-phase filters, since they have constant group delay and hence don't introduce any phase distortion.</p> <p>So, if you use an appropriate filter design, and use a correct implementation of a convolution algorithm, then there will be no problem with your PSK phase.</p> <blockquote> <p>if we cancel out all other frequencies besides the desired frequency by multiplying them by zero</p> </blockquote> <p>That, however, is <strong>not</strong> how you build a convolution algorithm. See this <a href="https://dsp.stackexchange.com/questions/6220/why-is-it-a-bad-idea-to-filter-by-zeroing-out-fft-bins/6224#6224">very popular answer</a> to the question &quot;Why is it a bad idea to filter by zeroing out FFT bins?&quot;.</p> <p>A note on computation: I'm all for accelerating computations, but isn't TNC something with single- to two-digit kilohertz bandwidths and relatively large channel spacing? I don't really think anything computationally stronger than a toaster will benefit much from calculating the FFTs for these channels on a GPU (the overhead of getting the signal in and out of the GPU will outweigh the power saving compared to doing the same computations on a CPU); I'd expect that at these rates, a power-efficient method is actually to compute fast convolution (&quot;FFT filtering&quot;) plain on the CPU. I might be wrong, and you may need extremely steep frequency responses for some reason, but considering the age of everything involved, I'd be <em>more</em> than surprised if that was actually a way to get a better signal out of it.</p>
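<p>To make the contrast concrete: FFT-based fast convolution with a properly designed linear-phase FIR is numerically identical to direct convolution, unlike bin zeroing. A sketch with SciPy (the filter length and band edges are arbitrary):</p>

```python
import numpy as np
from scipy.signal import firwin, fftconvolve

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)

# linear-phase FIR bandpass; odd length gives an integer group delay of 127 samples
h = firwin(255, [0.1, 0.3], pass_zero=False)

# FFT-based convolution is just an algorithmic shortcut: same result as direct
assert np.allclose(fftconvolve(x, h), np.convolve(x, h))
```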
1,144
phase shift
$360^\circ$ Phase shift between Bode plots of two similar systems
https://dsp.stackexchange.com/questions/96533/360-circ-phase-shift-between-bode-plots-of-two-similar-systems
<p>I am trying to identify the characteristics of an FIR filter in an ideal case as a sanity check. I observe that there is no strange behaviour, but there is an unexplainable issue in the phase plot of the original and identified filters. To make it simple, I've provided the transfer function coefficients below:</p> <pre><code>a1 = 1;
b1 = [-0.0228015616127155 0.445214656159413 0.776226769351428 0.445214656159413 -0.0228015616127155];
b2 = [-0.0228015616127150 0.445214656159413 0.776226769351428 0.445214656159413 -0.0228015616127150];
G1 = filt(b1,a1);
G2 = filt(b2,a1);

fmin = 1e-1;
fmax = 0.5;
Bodeoptions = bodeoptions;
Bodeoptions.FreqUnits = 'Hz';
Bodeoptions.Xlim = [fmin,fmax];
Bodeoptions.Ylim = {[-100,10],[0,720]};
Bodeoptions.Grid = 'on';

bode(G1,'-b',G2,':g',Bodeoptions)
h = findobj(gcf,'type','line');
set(h,'LineWidth',2);

wmin = 2*pi*fmin;
H1 = freqresp(G1, wmin);
H2 = freqresp(G2, wmin);
gain1_dB = 20*log10(abs(H1));
gain2_dB = 20*log10(abs(H2));
phase1_deg = angle(H1)*180/pi;
phase2_deg = angle(H2)*180/pi;
fprintf('\nAt f = %.4f Hz:\n', fmin);
fprintf('G1: Gain = %.16f dB, Phase = %.4f degrees\n', gain1_dB, phase1_deg);
fprintf('G2: Gain = %.16f dB, Phase = %.4f degrees\n', gain2_dB, phase2_deg);

[mag1, phase1, wout] = bode(G1, wmin);
[mag2, phase2, ~] = bode(G2, wmin);
gain1_dB = 20*log10(squeeze(mag1));
gain2_dB = 20*log10(squeeze(mag2));
phase1_deg = squeeze(phase1);
phase2_deg = squeeze(phase2);
fprintf('\nFrom bode():\n');
fprintf('G1: Gain = %.16f dB, Phase = %.4f degrees\n', gain1_dB, phase1_deg);
fprintf('G2: Gain = %.16f dB, Phase = %.4f degrees\n', gain2_dB, phase2_deg);
</code></pre> <p>Output:</p> <pre><code>At f = 0.1000 Hz:
G1: Gain = 3.4199354940498718 dB, Phase = -72.0000 degrees
G2: Gain = 3.4199354940498772 dB, Phase = -72.0000 degrees

From bode():
G1: Gain = 3.4199354940498852 dB, Phase = 648.0000 degrees
G2: Gain = 3.4199354940498634 dB, Phase = 288.0000 degrees
</code></pre> <p><a href="https://i.sstatic.net/ALAgOm8J.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ALAgOm8J.jpg" alt="G1 is true, G2 is estimation" /></a></p> <p>The difference between <code>b1</code> and <code>b2</code> is only at the level of machine precision, yet I observe a 360<span class="math-container">$^\circ$</span> phase shift in the phase plot. I understand that a 360<span class="math-container">$^\circ$</span> phase shift has no physical significance, but I would like to understand why this happens with the <code>bode</code> function in MATLAB. Also, it would be great if you could tell me how to correct for this when plotting.</p>
<blockquote> <p>but I would like to understand why this happens with the bode function in MATLAB.</p> </blockquote> <p>That's difficult to answer, since you're basically asking about the inner workings of a commercial piece of software.</p> <blockquote> <p>Also, it would be great if you could tell me how to correct this when plotting?</p> </blockquote> <p>Both answers are correct. So it's not about &quot;correctness&quot; but about a &quot;consistent visual experience&quot;. This partially depends on personal preference and there is no single right or wrong answer.</p> <p>Here is what I do:</p> <ol> <li>Find a good &quot;anchor frequency&quot;: a frequency that's in the band of interest, has good SNR, and doesn't have any crazy features in the magnitude response. Often, that's simply the location of the maximum of the magnitude.</li> <li>Calculate the phase at this frequency in the range <span class="math-container">$[-\pi,+\pi]$</span>.</li> <li>Use this as a starting point and then unwrap downwards to DC and unwrap upwards to Nyquist.</li> </ol> <p>In 95% of all cases that gets me what I want, but occasionally I will add or subtract integer multiples of <span class="math-container">$2\pi$</span> to make it look prettier.</p>
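<p>The three steps translate directly into a few lines of NumPy. A sketch (using the question's <code>b1</code> coefficients, rounded; <code>np.unwrap</code> leaves its first sample unchanged, so unwrapping each half outward from the anchor keeps both halves pinned there):</p>

```python
import numpy as np
from scipy.signal import freqz

b = [-0.0228, 0.44521, 0.77623, 0.44521, -0.0228]  # rounded b1 from the question
w, H = freqz(b, 1, worN=1024)
raw = np.angle(H)                        # principal value in (-pi, pi]

k0 = np.argmax(np.abs(H))                # 1. anchor: magnitude maximum
anchored = np.empty_like(raw)
anchored[k0:] = np.unwrap(raw[k0:])      # 3. unwrap upward to Nyquist...
anchored[:k0 + 1] = np.unwrap(raw[k0::-1])[::-1]   # ...and downward to DC

# result: continuous phase curve that equals the principal value at the anchor
assert anchored[k0] == raw[k0]
assert np.all(np.abs(np.diff(anchored)) <= np.pi + 1e-9)
```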
1,145
phase shift
Phase shifting in frequency domain
https://dsp.stackexchange.com/questions/63568/phase-shifting-in-frequency-domain
<p>I am aware this topic has already been addressed in at least one question here, but despite that I could not find the solution to my problem. Also bear patience with me since I am not a mathematics guru but a coder, so I am not really able to explain my problem with formulas but I will try to expose it to my best anyway.</p> <p>Suppose I have the FFT spectrum of a real audio signal and I want to carry phase shift by a constant amount s by multiplying all spectrum bins by <span class="math-container">$k= e^{js}$</span>. The matter turns out more complex than expected, because the full complex spectrum of a real signal is composed of the signal spectrum summed with its mirrored conjugate, therefore any attempt at multiplying as described above (despite taking care to multiply the negative frequencies by <span class="math-container">$-k$</span>) will produce a time domain signal with distortions close to the beginning and end of frame (using Hann windowed signals can mask the error but that is not a good solution for me, it is like hiding dust under the carpet). This happens simply because <span class="math-container">$k(Z+z^{*})$</span>, what one actually gets, is not the same as <span class="math-container">$kZ + (kz)^{*}$</span>, what instead it should be, where <span class="math-container">$z$</span> is the mirrored spectrum.</p> <p>After studying and experimenting, I understood that the only way is starting with a spectrum WITHOUT conjugates added. For example, such a spectrum for a pure frequency would contain the whole peak body for that frequency as a simple asinc function from <span class="math-container">$0$</span> to <span class="math-container">$z-1$</span> (where <span class="math-container">$z$</span> is the frame size) without any mirrored conjugate added. 
That way you can actually carry out the phase shift by multiplying the whole spectrum by <span class="math-container">$k$</span>, then add it to its mirrored conjugate and convert back to the time domain, and the result is a perfectly phase shifted signal (tested).</p> <p>Unfortunately you cannot get rid of the mirrored conjugates, because once two things are added you can't separate them anymore. The only way to obtain such a spectrum without mirrored conjugates added is feeding the FFT with a complex signal, where real and imaginary parts are in quadrature. That is trivial when supplying pure frequencies, as one just has to feed it a complex sinusoid, but WHAT to do with a plain audio signal? One would have to supply a <span class="math-container">$\frac{\pi}{2}$</span> shifted version for the imaginary part, which is something I can't do because I cannot phase shift to start with (circular problem)! Can you help me? My brain is stopping... </p> <p>For completeness I must add that to carry out frequency shifting in the spectral domain (getting rid of the "twin" sideband of course) the <em>same</em> problem arises: one has to perform spectral shifting (by multiplying by twiddle factors, either switching to the cepstral domain or back to the time domain) on a spectrum devoid of its mirrored conjugate added, otherwise edge distortions would result in the time domain signal for obvious reasons.</p> <p>How to obtain such a spectrum devoid of conjugates starting from a real audio signal then?
</p> <p>The picture below shows the distortion, most noticeable at the edges, that results from shifting the phase of a pure frequency by multiplying positive and negative frequencies by k and -k respectively, as many authors suggest.</p> <p><img src="https://i.sstatic.net/a3irx.png" alt="The distorced result of a sinewave frequency-shifted by multiplying the pos frequencies by k and the negative ones by -k"></p> <p>TO REFORMULATE MY QUESTION: which is the least CPU-intensive approach to convert the FFT spectrum of a real signal to the spectrum of an analytic signal?</p> <p>UPDATE: I understood the problem from a different perspective. Even the very act of multiplying a spectrum by a phasor to apply a phase shift is indeed a multiplication, implying a circular convolution in the time domain, which therefore will always produce edge artifacts unless working with integer frequencies. Therefore, even an apparently simple thing like phase shifting a spectral frame requires an overlap/save scheme or the like, as used for actual filtering. I tried this approach and it works as expected. In addition, by shifting this way by <span class="math-container">$\frac{\pi}{2}$</span> and merging the original spectral frames with the shifted versions to form a complex spectrum, you end up with an analytic spectrum devoid of negative frequencies, on which one can carry out operations like precise frequency shifting or peak analysis much better and more accurately than when working on a non-analytic spectrum. The only drawback is the complexity and relative CPU load of the whole process, but apparently no simpler solutions exist</p>
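<p>For the reformulated question, one cheap option is the trick commonly used to build the analytic signal: starting from the FFT of a real frame, zero the negative-frequency bins and double the positive ones. A NumPy sketch (the function name is mine):</p>

```python
import numpy as np

def analytic_spectrum(X):
    """Spectrum of the analytic signal, from the FFT X of a REAL frame:
    keep DC (and Nyquist), double positive bins, zero negative bins."""
    N = len(X)
    H = np.zeros(N)
    H[0] = 1.0                       # DC stays
    if N % 2 == 0:
        H[N // 2] = 1.0              # Nyquist bin stays
        H[1:N // 2] = 2.0            # positive frequencies doubled
    else:
        H[1:(N + 1) // 2] = 2.0
    return X * H                     # negative frequencies are now zero

# Example: a real cosine becomes a complex exponential.
n = np.arange(64)
x = np.cos(2 * np.pi * 5 * n / 64)
Z = analytic_spectrum(np.fft.fft(x))
z = np.fft.ifft(Z)                   # analytic signal exp(j*2*pi*5*n/64)
```

<p>This only costs one masked multiply per frame; note it is exact for whole-bin frequencies and, as the update points out, leakage at the frame edges still has to be handled by windowing or overlap schemes.</p>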
1,146
phase shift
What is the minimum phase shift for Quadrature Amplitude Modulation (QAM)?
https://dsp.stackexchange.com/questions/58258/what-is-the-minimum-phase-shift-for-quadrature-amplitude-modulation-qam
<p>I'm a bit of a noob here, looking for a simple formula or just a table which can tell me the absolute <em>minimum</em> phase shift angle possible for different variations of QAM. Particularly QAM 16, 64, 256 and 1024. If the answer is zero (is it?), then what is the minimum phase shift possible in each case other than 0?</p> <p>Additionally, what is the limiting factor which governs the rate at QAM can switch between different phases? In other words, what is the maximum rate at which QAM can switch between different phases, and how is that determined for a particular device? Is there a relation to frequency of transmission (say a 1 GHz signal), or are frequency of transmission and phase-switching rate completely independent? </p>
<p>The minimum phase shift in 16-QAM, 64-QAM, 256-QAM etc is 0 (same value regardless of whether you are using degrees or radians!) and occurs when the symbol transition is from one constellation point on some radius vector to another constellation point on the same radius vector. If we take the constellation points as having coordinates <span class="math-container">$(i,j)$</span> where <span class="math-container">$i, j \in \{-3, -1, +1, +3\}$</span> in 16-QAM, then <span class="math-container">$(1,1)$</span> and <span class="math-container">$(3,3)$</span> lie on the same radius vector (line of slope 1 through the origin) and so the RF phase remains unchanged in the transition from one constellation point to the other. Obviously, the same example will also work for 64-QAM, 256-QAM, and 1024-QAM.</p> <p>What is the minimum <em>non-zero</em> phase shift?</p> <p>Well, for 16-QAM, the constellation points <span class="math-container">$(1,3)$</span> and <span class="math-container">$(3,3)$</span> are distance 2 apart, and together with the origin, form an obtuse triangle with sides of lengths <span class="math-container">$2$</span>, <span class="math-container">$\sqrt{10}$</span>, and <span class="math-container">$\sqrt{18}$</span>. From the standard formula <span class="math-container">$$c^2 = a^2 + b^2 - 2ab\cos C$$</span> relating the sides <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, <span class="math-container">$c$</span> of a triangle opposite to angles <span class="math-container">$A$</span>, <span class="math-container">$B$</span> and <span class="math-container">$C$</span>, we get that <span class="math-container">$$2^2 = 10 + 18 - 2\sqrt{180} \cos C$$</span> giving <span class="math-container">$\arccos \frac{2}{\sqrt{5}}$</span> as the minimum nonzero phase shift in 16-QAM. Similar calculations for 64-QAM, 256-QAM and 1024-QAM are left as an exercise for the reader.</p>
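<p>The 16-QAM result can be checked exhaustively in a few lines of NumPy (my own sketch; the phase change is computed as the angle of <span class="math-container">$b\,\bar{a}$</span> so wrap-around at <span class="math-container">$\pm\pi$</span> is handled correctly):</p>

```python
import numpy as np

# All 16 constellation points of 16-QAM.
levels = [-3, -1, 1, 3]
points = [complex(i, q) for i in levels for q in levels]

shifts = []
for a in points:
    for b in points:
        # phase change of the RF carrier when moving from point a to point b
        shifts.append(abs(np.angle(b * np.conj(a))))

min_nonzero = min(s for s in shifts if s > 1e-9)
# min_nonzero equals arccos(2/sqrt(5)), about 26.57 degrees
```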
1,147
phase shift
help to calculate phase shift in certain frequency(identified from FFT)
https://dsp.stackexchange.com/questions/64555/help-to-calculate-phase-shift-in-certain-frequencyidentified-from-fft
<p>I am designing a 24ghz doppler radar, with 1TX 2RX. Four adc channel are recording at 25khz, applying 256point FFT on adc of both of RX I and Q, I could identify the target in certain doppler frequency(speed) . The next step is to calculate the phase shift to get the angle of the target, that's where I stuck. Since I locate the target on FFT, how to trace it back to the time domain and calculate the phase shift between the different RXs. Any input will be very welcomed. I am a computer engineer with limited knowledge of dsp though, but facinated about radar. </p>
<p>A phase shift in the time domain translates to a frequency shift in the Fourier transform:</p> <p><span class="math-container">$$\mathcal{F}\left\{e^{j2\pi f_0 t}\,x(t)\right\} = X(f-f_0).$$</span></p> <p>If there is a phase shift in time, the Fourier transform that you take is already shifted in frequency. Find the shift in frequency <span class="math-container">$f_0$</span> and you should get the phase change.</p>
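<p>The modulation property quoted above is easy to verify numerically: multiplying by a complex exponential moves the spectral peak by exactly that many bins. The tone and shift values below are my own example:</p>

```python
import numpy as np

N = 128
n = np.arange(N)
x = np.exp(1j * 2 * np.pi * 10 * n / N)            # a tone sitting in FFT bin 10
f0_bins = 7
y = x * np.exp(1j * 2 * np.pi * f0_bins * n / N)   # time-domain phase ramp
peak = int(np.argmax(np.abs(np.fft.fft(y))))       # spectrum moved to bin 10+7
```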
1,148
phase shift
why do low pass and high pass filters generate a phase shift?
https://dsp.stackexchange.com/questions/28750/why-do-low-pass-and-high-pass-filters-generate-a-phase-shift
<p>I just learned following notation for a sine wave:</p> <p>$A e^{j\phi t}$</p> <p>A low passfilter and a highpass filter respectively generate a phase shift in the complex plane of the sine wave as follows:</p> <p>$A e^{-j\phi t}$ and $A e^{+j\phi t}$</p> <p>I don't understand what phase shifting has to do with letting higher or lower frequencies intact... How do these things( filtering and phase shifting) relate to each other? They just seem like to completely and independent things to me</p>
<blockquote> <p>I just learned following notation for a sine wave: $Ae^{j\phi t}$</p> </blockquote> <p>well, that's not really a sine wave; that's a complex sinusoid. The real and imaginary parts of that are cosine and sine, respectively.</p> <blockquote> <p>A low passfilter and a highpass filter respectively generate a phase shift in the complex plane of the sine wave as follows:</p> </blockquote> <p>No. There are all kinds of LPFs and HPFs, and only the linear phase ones do what you claim.</p> <p>The point is that (although you're basically asking us to write a textbook on filter and signal theory) you'd typically understand filters as systems with poles and zeros, as the transfer function of LTI (linear, time-invariant) systems can usually be written as a fraction of polynomials (which have roots).</p> <p>Now, to find something that actually fulfills the boundaries these representations set, your system needs to move its response in relation to frequency over the complex plane; "moving over the complex plane" is something that can be understood as changing a complex number over a variable; the rate of change of the argument of that number is a phase shift.</p> <p>It can be shown (and you'll probably learn this later on) that you cannot change the magnitude of a signal over frequency (i.e. do the filtering) without changing its argument.</p>
1,149
phase shift
Phase shift in higher order filters
https://dsp.stackexchange.com/questions/35055/phase-shift-in-higher-order-filters
<p>I was doing some research on filters, So I designed butterworth filter based on pole locations. I applied to a simple sine signal to check for the outputs. I cannot find the reason for what I noticed.</p> <p><a href="https://i.sstatic.net/A7t6h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A7t6h.png" alt="enter image description here"></a></p> <p>Why does higher order filter have alot of phase shifts. Is it trying to adjust for more noise reduction? If, Yes how is it mathematically possible? I have only changed the order of the filter, I have kept cut off frequency same</p>
<p>Discrete filters are divided into two major classes: <a href="https://en.wikipedia.org/wiki/Finite_impulse_response" rel="nofollow">Finite Impulse Response (FIR)</a> filters and <a href="https://en.wikipedia.org/wiki/Infinite_impulse_response" rel="nofollow">Infinite Impulse Response Filters (IIR)</a>.</p> <p>Both types of filters are structured around a fundamental unit, a fundamental operation that is responsible for the filtering: the delay unit. Of course, besides the delay unit you also have the summation points. But summation, on its own, without delay, doesn't produce filtering.</p> <p>So, both filters produce outputs that depend on the past of the inputs (and / or outputs).</p> <p>The concept of "filter order" is defined for both types of filters but in a slightly different way (it's still the same thing really).</p> <p>Let's look at an FIR filter:</p> <p>$$y_n = \sum_{m=0}^{N_h-1} x_{n-m} \cdot h_m$$</p> <p>Here, to produce one value for $y_n$ we sum $N_h$ products of the past of the signal and some impulse response $h$. $N_h$ is the total length of $h$.</p> <p>The order of the filter here is $N_h$, directly proportional to <strong>the number of delay units in the filter</strong>.</p> <p>So, by increasing the order, we do get a sharper filter but also a <strong>longer filter</strong>, and the longer the filter is, <strong>the longer it takes for one of the $x_n$ to propagate through the series of delays</strong>.</p> <p>And <strong>that</strong> is where the increased <a href="https://en.wikipedia.org/wiki/Group_delay_and_phase_delay" rel="nofollow">group delay</a> is coming from! Furthermore, in a linear-phase FIR filter the phase shift grows in proportion to frequency.
So, bass frequencies accumulate less phase shift than treble frequencies in a predictable (and linear) way, because the time it takes for the sinusoids to propagate <strong>through</strong> the delay line is fixed.</p> <p>Now, let's look at an IIR filter:</p> <p>$$y_n = \sum_{m=0}^{N_h-1} x_{n-m} \cdot h_m - \sum_{k=1}^{N_g-1}y_{n-k} \cdot g_k$$</p> <p>Which, if you take a closer look, is like two FIR filters back-to-back. The left part we have seen before: it is the FIR filter from the first equation. The right part is an FIR too, but this time it operates on past values <strong>of the output</strong>. We have therefore introduced <strong>feedback</strong>...and all the <a href="https://www.youtube.com/watch?v=O1xwKRqft8E" rel="nofollow">"joys" that come with it.</a></p> <p>The order of an IIR filter is now the length of the feedback delay line (i.e. the $N_g$ here) and, because of this interaction between the input and output, the group delay of an IIR filter <strong>varies</strong> with frequency. So, treble frequencies might be going through much faster than bass frequencies in an IIR filter.</p> <p>OK, so now let us look at your experiment. You design a filter and you pass through it a simple sinusoid at some <strong>fixed</strong> frequency. Then you increase the order and you observe a longer delay.</p> <p>Which is perfectly reasonable because, irrespective of the type of filter (FIR / IIR), increasing the order means <strong>adding more delay units</strong>.</p> <p>Hope this helps.</p>
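<p>The fixed delay of a linear-phase FIR can be measured directly from its phase response. This NumPy sketch (my own illustration, using simple moving-average filters) shows the group delay roughly doubling when the filter length roughly doubles, matching the (length − 1)/2 rule for symmetric FIRs:</p>

```python
import numpy as np

def group_delay(h, nfft=4096):
    """Group delay in samples: the negative derivative of the unwrapped phase."""
    H = np.fft.rfft(h, nfft)
    phase = np.unwrap(np.angle(H))
    w = np.linspace(0, np.pi, len(phase))
    return -np.gradient(phase, w)

# Moving-average FIRs of length 9 and 17:
# expected delays are (9-1)/2 = 4 and (17-1)/2 = 8 samples.
gd_short = group_delay(np.ones(9) / 9)
gd_long = group_delay(np.ones(17) / 17)
# evaluated well inside the passband (low-frequency bin 10):
# gd_short[10] is about 4 samples, gd_long[10] about 8 samples
```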
1,150
phase shift
What are the drawbacks of using the phase shift method (Hilbert Transformer) for SSB modulation?
https://dsp.stackexchange.com/questions/95548/what-are-the-drawbacks-of-using-the-phase-shift-method-hilbert-transformer-for
<p>The phase shift method (Hilbert Transformer) can be used for single sideband (SSB) generation. The box marked (−𝝅)/𝟐 is a 𝝅/𝟐 phase shifter, which delays the phase of every spectral component by 𝝅/𝟐. Hence, it is a Hilbert Transformer. An ideal phase shifter is also unrealizable. It is easy to build a circuit to shift the phase of a single frequency component by 𝝅/𝟐. But a device to achieve this phase shift for all the spectral components over a band of frequencies is unrealizable. We can, at best, approximate it over a finite band.</p> <ol> <li><p>I think all baseband signals are finite in band, then where is the problem? give me please examples of signals that cannot be used with this circuit, and thus are not suitable with SSB modulation and needs other types of modulation.</p> </li> <li><p>Why there is a problem with achieving the phase shift with all the spectral components?</p> </li> </ol> <p><a href="https://i.sstatic.net/mLGkM6ND.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mLGkM6ND.png" alt="Hilbert transformer for SSB generation" /></a></p>
<p>An example of common signals that cannot be used with this circuit as the modulator itself is any complex baseband modulation where the quadrature component is independent of the in-phase component. This includes most modern waveforms including QPSK, QAM, etc. This is because the “Q” or quadrature component is completely independent of the “I” or in-phase component rather than being related by the Hilbert Transform as shown above for the <span class="math-container">$m(t)$</span> signal. For those waveforms we can still use the Hilbert on the narrow band carrier frequency as shown and then drive the I and Q components into each multiplier (mixer) directly. This is the form of the “IQ modulator”. We can also use the circuit above for any of these waveforms once modulated to be a frequency translator from a IF or intermediate frequency to RF or radio frequency and with that simplify some of the other filtering that would be needed.</p> <p>If you are doing true “Single Sideband Modulation” which is an analog modulation technique from 100 years ago not in high use today (still used by amateur radio enthusiasts but I don’t know of other practical applications), the modulation waveform does not extend to DC and is band limited and there is no issue in implementing the modulator and would be trivial to do so digitally as long as we can trade complexity and delay for the bandwidth and performance needed. For this, the analog waveform would be modulated to one side of the carrier frequency as an AM modulation.</p> <p>Hilbert Transformers can not be implemented to cover all frequencies and they don’t pass DC. To cover all frequencies the impulse response would need to be infinitely long and hence is unrealizable. However we can cover any band of frequencies we need and come as close to DC as needed at the expense of filter complexity and delay. As mentioned, when the Hilbert is just on a single higher frequency carrier frequency, its implementation is trivial.</p>
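<p>The realizability point can be illustrated by truncating the ideal Hilbert impulse response <span class="math-container">$h[m]=\frac{1-(-1)^m}{\pi m}$</span> (zero for even <span class="math-container">$m$</span>) to a finite number of taps and inspecting its magnitude response. This sketch is my own illustration, not from the answer above:</p>

```python
import numpy as np

M = 50                                  # keep taps for m = -M .. M
m = np.arange(-M, M + 1)
h = np.zeros(2 * M + 1)
odd = m % 2 != 0
h[odd] = 2.0 / (np.pi * m[odd])         # (1-(-1)^m)/(pi*m) = 2/(pi*m) for odd m

# Storing the taps in a plain array just delays the response by M samples
# (makes it causal); the magnitude response is unaffected.
H = np.fft.fft(h, 4096)
mag = np.abs(H)
gain_dc = mag[0]          # essentially 0: a Hilbert approximation cannot pass DC
gain_mid = mag[1024]      # at w = pi/2: close to the ideal gain of 1
```

<p>Mid-band the truncated filter is within a few percent of unit gain, but the gain at DC is zero by the odd symmetry of the taps, which is exactly the "cannot pass DC, can only approximate over a finite band" limitation described above.</p>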
1,151
phase shift
Calculating the phase shift between two signals based on samples
https://dsp.stackexchange.com/questions/41291/calculating-the-phase-shift-between-two-signals-based-on-samples
<p>I am comparing two signals in MATLAB Simulink for finding the phase between them. To do this I am inspired by using the code found <a href="https://www.researchgate.net/post/How_can_one_find_the_angle_between_voltage_and_current_sinusoidal_waves_in_MATLAB_SIMULINK_environment" rel="nofollow noreferrer">here</a>.</p> <p>I have two vectors of the same size which are a collection of samples of the two signals (sampling is more than fast enough). They are sine signals with mostly the same frequency. To find the phase between them the code below is used:</p> <pre><code>signal_one; % A vector with values of signal 1
signal_two; % A vector with values of signal 2
dot_product = dot(signal_one,signal_two);
norm_product = norm(signal_one)*norm(signal_two);
phase_shift_in_radians = acos(dot_product/norm_product)
</code></pre> <p>This code works satisfactorily, and I am able to get the phase shift from it, I just don't understand why. Can anyone show me why this gives the phase shift? A mathematical demonstration or a reference to an equation would be great. </p>
<p>It's an <strong>approximation</strong>, as also admitted by your statement "satisfactory performance". Here is why.</p> <p>Let $a[n]=\cos(w_0 n)$ and $b[n]=\cos(w_0 n + \theta)$ be two <strong>sequences</strong> of size $N$, represented by $1 \times N$ row <strong>matrices</strong> $\bar{a}$ and $\bar{b}$ in a program (such as Octave or Matlab).</p> <p>Treating those two sequences $a[n]$ and $b[n]$, which are represented by the two row matrices $\bar{a}$ and $\bar{b}$, as <strong>components</strong> of <strong>vectors</strong> $\vec{a}$ and $\vec{b}$, we can take advantage of the <strong>dot product</strong> defined for vectors and (approximately) compute the phase angle $\theta$ between those two sinusoidal sequences $a[n]$ and $b[n]$.</p> <p>Geometric evaluation of the dot product between two vectors of the same size is: $$ \vec{a} \cdot \vec{b} = |\vec{a}||\vec{b}| \cos(\phi)$$</p> <p>And the angle between those two vectors therefore is: $$\cos(\phi) = \frac{\vec{a}\cdot \vec{b}}{|\vec{a}||\vec{b}|}$$ Where $\phi$ is the <strong>angle</strong> between those vectors, which will be shown to be (approximately) equal to $\theta$:</p> <p>The <strong>dot product</strong> between the <strong>vectors</strong> $\vec{a}$ and $\vec{b}$ can also be algebraically computed from the MAC (multiply-accumulate) sum of their <strong>components</strong>, which are contained in the elements of the matrices $\bar{a}$ and $\bar{b}$: $$ \vec{a} \cdot \vec{b} = \sum_{i=1}^{N} \bar{a}(i)\bar{b}(i) $$</p> <p>The product $\bar{a}(i)\bar{b}(i)$ can be shown to be equal to the following (using trigonometry): $$\bar{a}(i)\bar{b}(i) = \cos(w_0 i) \cos(w_0 i + \theta)= 0.5\cos(2 w_0 i + \theta) + 0.5 \cos(\theta)$$</p> <p>And the dot-product summation becomes: $$ \vec{a} \cdot \vec{b} = \sum_{i=1}^{N} a(i)b(i) = 0.5 N \cos(\theta) + \sum_{i=1}^{N} 0.5 \cos(2 w_0 i + \theta)$$</p> <p>The <strong>absolute</strong> values $|\vec{a}|$ and $|\vec{b}|$ of the vectors $\vec{a}$ and $\vec{b}$ can also be computed from their <strong>norms</strong> computed
over the matrices $\bar{a}$ and $\bar{b}$ as follows:</p> <p>$$ |\vec{a}| = ||\bar{a}||= ( \sum_{i=1}^{N} a(i)^2 )^{0.5}$$ $$ |\vec{b}| = ||\bar{b}||= ( \sum_{i=1}^{N} b(i)^2 )^{0.5}$$</p> <p>Again, expanding the squared terms results in: $$ |\vec{a}| = ||\bar{a}||= ( 0.5 \sum_{i=1}^{N} 1 + \cos(2 w_0 i) )^{0.5}$$ $$ |\vec{b}| = ||\bar{b}||= ( 0.5 \sum_{i=1}^{N} 1 + \cos(2 w_0 i + 2\theta) )^{0.5}$$</p> <p>Now, it can easily be shown that the oscillating cosine sums are small compared to $N$, so that: $$ N+\sum_{i=1}^{N} \cos(2 w_0 i) \approx N , ~~~ N+\sum_{i=1}^{N} \cos(2 w_0 i + 2\theta) \approx N $$ for large $N$ and suitable $w_0$.</p> <p>Simplify the above equations to get the angle: $$\cos(\phi) = \frac{\vec{a}\cdot \vec{b}}{|\vec{a}||\vec{b}|}$$ $$\cos(\phi) \approx \frac{ 0.5 N \cos(\theta)}{|\vec{a}||\vec{b}|}$$ $$\cos(\phi) \approx \frac{ 0.5 N \cos(\theta)}{0.5 N}$$ $$\cos(\phi) \approx \cos(\theta)$$ $$\phi \approx \theta$$</p> <p>The quality of the approximation depends on the length $N$.</p>
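<p>A quick numerical check of the approximation (the values of <span class="math-container">$N$</span>, <span class="math-container">$w_0$</span> and <span class="math-container">$\theta$</span> are my own illustration):</p>

```python
import numpy as np

N = 10000
w0 = 0.1
theta = 0.7
n = np.arange(N)
a = np.cos(w0 * n)
b = np.cos(w0 * n + theta)

# the dot-product / norm-product estimate from the question
est = np.arccos(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
# est is close to theta; the error shrinks roughly like 1/N
```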
1,152
phase shift
How can I apply a phase shift to an LFM pulse?
https://dsp.stackexchange.com/questions/88933/how-can-i-apply-a-phase-shift-to-an-lfm-pulse
<p>I am modelling radar scenarios in python and am trying to make a model which retains phase information. To do so, I'm envisioning something which works like this:</p> <ol> <li>Generate samples of an LFM waveform at RF in a numpy array</li> <li>Calculate the distance to the target (two way)</li> <li>Divide the time taken for the echo to return by the RF sample period to figure out the number of samples which the delay corresponds to</li> <li>Model the returned waveform as some SNR coefficient (unimportant for context of this question) multiplied by a copy of the transmitted waveform, and shift the elements in the array such that the echo is preceded by a number of zero elements equal to the number of samples the delay corresponds to</li> <li>Do something with the phase...</li> </ol> <p>It is this step 5 I am a little stuck with. In the steps leading up to this, I have accounted for the delay in a very 'discrete' way. That is, in time increments of the entire sample length. However, the true delay does not correspond to an integer number of samples. Therefore, I need to perform the sample shifting AND apply some phase shift to account for the fact that, due to the non-integer number of samples the shift corresponds to, the additional amount of shift can be represented by an offset in where in the waveform the sample is taken.</p> <p>For example: say the delay corresponds to 37.465 samples. I can't shift the elements by this much, but I can shift them by 38 samples and say that, in each element, the point of the waveform at which the sample is measured is offset by a phase angle equivalent to this additional proportion of the wavelength.</p> <p>As an example to illustrate what I mean: Assume the carrier frequency is 1 GHz. In this case, the period of the wavelength is 1/1e9. If we say the sample rate is 2 GHz, each sample takes 1/2e9 seconds. Previously, I gave the example of a delay corresponding to 37.465 samples.
0.465 of a sample corresponds to a time of 0.465*1/2e9 seconds. In this length of time, the waveform cycles through (0.465 * 1/2e9) / (1/1e9) of a wavelength. I can calculate this as a phase angle and apply a phase shift to the array representing the returned signal to account for this.</p> <p>Assuming this makes sense, my problem is then two-fold: Firstly, how do I apply such a phase shift? Secondly, how would this work for the LFM pulse where the frequency is changing over time (since this means that the wavelength is changing over time and therefore the proportion of the wavelength which corresponds to that 0.465 of a sample changes from sample to sample within the waveform)?</p> <p>Any help with how to approach this, in particular any actual code implementations, would be highly appreciated. I will include the code I have so far in case that helps to understand how I'm modelling things.</p> <p>Side note: I know real systems don't digitize at RF but I'm trying to create a model which allows the retention of full phase information, so I can't model at baseband.</p> <p>CODE:</p> <pre><code>from numpy import *

# This function determines the number of samples which will be needed to model the waveform to allow a target up to the maximum range of interest
def find_chirp_sampling_requirements(max_range, carrier, bw):
    sample_rate = 2*(carrier+bw)
    time_per_sample = 1/sample_rate
    num_samples_required = int(ceil(((2*max_range)/299792458)/time_per_sample))
    return sample_rate, time_per_sample, num_samples_required

# This function returns the waveform as a numpy array
def generate_lfm(samples, time_per_sample, duty_cycle, bandwidth, carrier_frequency):
    pulse_width_in_samples = int(samples*duty_cycle)
    pulse_width_time = pulse_width_in_samples*time_per_sample
    t_s = linspace(0,pulse_width_time,pulse_width_in_samples)
    chirp_rate = bandwidth/pulse_width_time
    f_low = carrier_frequency
    initial_phase_offset = 0
    phase_angle = 2*pi*(((chirp_rate*t_s**2)/2)+(f_low*t_s))+initial_phase_offset
    complex_chirp = exp(1j*phase_angle)
    waveform = zeros((samples), dtype=complex)
    waveform[:shape(complex_chirp)[0]] += complex_chirp
    return waveform

# This function returns the amount of phase angle which corresponds to the two way distance to the target
def calculate_two_way_phase_angle(distance, carrier_frequency):
    wavelength = 299792458/carrier_frequency
    wavelengths_in_distance = distance/wavelength
    return wavelengths_in_distance*2*pi

# This function returns the time taken for the signal to return to the radar
def calculate_two_way_delay(distance):
    return distance/299792458

# This function creates a time and phase shifted copy of the transmitted signal to model the received signal
def get_phase_and_sample_shifted_waveform(original_waveform, sample_shift, phase_shift):
    # Perform phase shifting step
    phase_shifted_waveform = DO SOMETHING TO 'original_waveform' HERE
    # Perform integer number of samples shifting step
    new_waveform = zeros_like(phase_shifted_waveform, dtype=complex)
    new_waveform[sample_shift:] = phase_shifted_waveform[:-sample_shift]
    return new_waveform

max_range = 10000
bandwidth = 0.05*10**9
carrier_frequency = 1*10**9
duty_cycle = 0.1

# Generate chirp waveform
sample_rate, time_per_sample, num_samples_required = find_chirp_sampling_requirements(max_range, carrier_frequency, bandwidth)
my_waveform = generate_lfm(num_samples_required, time_per_sample, duty_cycle, bandwidth, carrier_frequency)

distance_to_target = 5000
two_way_phase_angle = calculate_two_way_phase_angle(distance_to_target, carrier_frequency)
two_way_delay = calculate_two_way_delay(distance_to_target)
two_way_delay_in_samples = two_way_delay/time_per_sample
sample_remainder = int(ceil(two_way_delay_in_samples))-two_way_delay_in_samples
remainder_time = sample_remainder*time_per_sample
time_per_wavelength = 1/carrier_frequency
fraction_of_wavelength = remainder_time/time_per_wavelength
phase_shift = 2*pi*fraction_of_wavelength
phase_and_sample_shifted_waveform = get_phase_and_sample_shifted_waveform(my_waveform, two_way_delay_in_samples, phase_shift)
</code></pre>
<p>You are off in your approach. First things first, however:</p> <blockquote> <p>For example: say the delay corresponds to 37.465 samples. I can't shift the elements by this much, but I can shift them by 38 samples and say that, in each element, the point of the waveform at which the sample is measured is offset by a phase angle equivalent to this additional proportion of the wavelength.</p> </blockquote> <p>This is not true in general and, moreover, there is no reason why you can't shift by <span class="math-container">$37.465$</span> samples. Simply apply the appropriate linear phase in the frequency domain:</p> <p><span class="math-container">$$ x(t - \tau) = x(t) \ast \delta( t - \tau) = \mathcal{F}^{-1} \left( \mathcal{F}(x) \cdot e^{-j2\pi f \tau} \right) $$</span></p> <p>As to how to simulate this kind of thing ... You can do it at the carrier if you want, but that is usually memory intensive and unnecessary. You transmit some baseband waveform <span class="math-container">$w(t)$</span> on some carrier frequency <span class="math-container">$f_0$</span>, that is, <span class="math-container">$w(t) \cdot e^{j2\pi f_0 t}$</span>. The signal that comes back to the radar is delayed by <span class="math-container">$\tau$</span> seconds: <span class="math-container">$w(t-\tau) \cdot e^{j2\pi f_0 (t-\tau) }$</span>. The first thing the radar will do is baseband this signal so that it can be digitized. Assuming you have a coherent radar, which nearly all are nowadays, this means that what gets digitized is</p> <p><span class="math-container">$$ w(t-\tau) \cdot e^{j2\pi f_0 (t-\tau) } \cdot e^{-j2\pi f_0 t} = w(t-\tau) \cdot e^{-j2\pi f_0 \tau}. $$</span></p> <p>So all you have to do is shift your waveform by <span class="math-container">$\tau$</span> seconds (see above) and then mix it with the carrier phase term <span class="math-container">$e^{-j2\pi f_0 \tau}$</span>.</p>
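<p>The fractional-delay step can be sketched in NumPy (the tone and delay values are my own example; note the <span class="math-container">$e^{-j2\pi f\tau}$</span> sign for a delay, and that an FFT-based shift is circular):</p>

```python
import numpy as np

def fractional_delay(x, delay_samples):
    """Delay x by a (possibly non-integer) number of samples via a
    linear phase in the frequency domain. The shift is circular."""
    N = len(x)
    f = np.fft.fftfreq(N)                       # in cycles/sample
    return np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * f * delay_samples))

# Example: delay a complex tone by 37.465 samples.
N = 1024
n = np.arange(N)
x = np.exp(1j * 2 * np.pi * 5 * n / N)          # periodic in the frame
y = fractional_delay(x, 37.465)
# y[n] equals x evaluated at n - 37.465 (circularly)
```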
1,153
phase shift
Relating phase shift to the ROL/ROR operations
https://dsp.stackexchange.com/questions/31074/relating-phase-shift-to-the-rol-ror-operations
<p>Is phase-shifting in continuous-time analogous to the rotate-left (ROL) and rotate-right (ROR) operations in discrete-time? </p>
<p>If you mean the phase shift of a sampled sinusoid, then yes: a phase shift corresponds to a (circular) translation of the sample buffer, which is what ROL/ROR implement. The correspondence is exact when the buffer holds a whole number of periods of the sinusoid.</p>
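<p>A quick numerical sanity check of the analogy (my own example):</p>

```python
import numpy as np

# For a sinusoid sampled over a whole number of periods, a circular
# rotation of the samples (ROL/ROR) is exactly a phase shift.
N = 32
n = np.arange(N)
x = np.cos(2 * np.pi * 4 * n / N)        # 4 full periods in N samples
rolled = np.roll(x, 2)                    # rotate right by 2 samples
phased = np.cos(2 * np.pi * 4 * (n - 2) / N)   # same thing as a phase shift
```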
1,154
phase shift
Does filtering by wavelet decomposition and reconstruction introduce a phase shift in the filtered signal?
https://dsp.stackexchange.com/questions/88953/does-filtering-by-wavelet-decomposition-and-reconstruction-introduce-a-phase-shi
<p>I am removing low frequency noise from a signal in Matlab using the DWT and then reconstruction at a specific level:</p> <p>I am removing the approximation signal at level 10, which approximates the low frequency noise.</p> <pre><code>% signal sampled at 500 Hz
[a,b] = wavedec(signal,13,'db8');   % DWT to max level
lfnoise = wrcoef('a',a,b,'db8',10); % Take approximation at lvl 10 as estimate of low freq noise
final = signal - lfnoise;           % Subtract noise to remove it from the signal
</code></pre> <p>Does this introduce a phase shift in the final filtered signal? Visually it does not seem to add a significant phase shift because the major and high frequency features of the signal remain aligned, but I want to be able to tell if this introduces a phase shift or if it is equivalent to a zero phase filter.</p> <p>Thanks for the help!</p>
1,155
phase shift
How can I extract a sinusoid from a very noisy signal for computing the phase shift?
https://dsp.stackexchange.com/questions/24941/how-can-i-extract-a-sinusoid-from-a-very-noisy-signal-for-computing-the-phase-sh
<p>I have done some measurements and I got two very noisy signals. I want to extract a certain sinusoid from the signals, and I want to compute the phase shift between them.</p> <p>I ran some simulations and saw that the noise strongly distorts the estimated value of the phase shift.</p> <p>How can I get a trustworthy value for the phase shift?</p>
<p>You are correct. Estimating the instantaneous phase of a noisy sinusoid is NOT easy. I suggest you design a narrow bandpass filter such that your sinusoid-of-interest is in the filter's passband. (The better the filter the more noise that will be eliminated.) Pass your two signals through the bandpass filter to generate filtered signals $x_1[n]$ and $x_2[n]$. Next, pass your $x_1[n]$ and $x_2[n]$ signals through a Hilbert transformer to generate $\hat{x}_1[n]$ and $\hat{x}_2[n]$. Create two analytic (complex) signals as:</p> <p>$$z_1[n] = x_1[n] + j \, \hat{x}_1[n],$$</p> <p>and $$z_2[n] = x_2[n] + j \, \hat{x}_2[n]$$</p> <p>where</p> <p>$$ \begin{align} \hat{x}[n] &amp; = \mathcal{H}\{ x[n] \} \\ &amp; = \sum\limits_{m=-\infty}^{+\infty} \frac{1 - (-1)^{m}}{\pi \, m} x[n-m] \\ \end{align} $$</p> <p>Next, compute two instantaneous phase sequences: </p> <p>$$\phi_1[n] = \arg\{z_1[n]\}$$ </p> <p>and $$\phi_2[n] = \arg\{z_2[n]\}.$$</p> <p>Finally, compare the instantaneous phase difference between the $\phi_1[n]$ and $\phi_2[n]$ sequences. </p>
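The instantaneous-phase step of this recipe can be sketched with `scipy.signal.hilbert` (an FFT-based analytic-signal routine standing in for the Hilbert transformer above); the test frequency and phase offset are assumed values, and the bandpass filtering is omitted since the test signals are clean:

```python
import numpy as np
from scipy.signal import hilbert   # FFT-based analytic signal

fs = 1000.0
t = np.arange(0, 1, 1/fs)
f0, true_shift = 50.0, 0.7         # assumed test values

x1 = np.sin(2*np.pi*f0*t)          # stand-ins for the bandpass-filtered signals
x2 = np.sin(2*np.pi*f0*t - true_shift)

z1 = hilbert(x1)                   # z = x + j*H{x}
z2 = hilbert(x2)

# Instantaneous phase difference phi1[n] - phi2[n]
dphi = np.angle(z1 * np.conj(z2))
print(np.median(dphi))             # ~0.7
```

Using the median makes the estimate robust against any residual edge artifacts of the finite-length Hilbert transform.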
1,156
phase shift
How to find phase shift and process I/Q channel
https://dsp.stackexchange.com/questions/24161/how-to-find-phase-shift-and-process-i-q-channel
<p>I am currently working on 24GHz FMCW radar signal processing. The module I'm using has 2 receiving antennas and each has 2 outputs (I and Q channel), so overall I have 4 channels. I'm using a combination of hardware and software for the processing in the frequency domain. Range and velocity are fine; if I sample any one channel out of those 4 (either I or Q) I can find the R and V values.</p> <p>Now I'm interested in finding the direction of an object when it is moving (away from the sensor or towards the sensor) and the position of the object (right of the sensor or left of the sensor, or at best the horizontal angle).</p> <p>In the datasheet of the radar sensor I read 2 lines in the features that give me some idea about finding the direction and location. Below are the lines:</p> <ol> <li>"two receiving antenna for phase comperision operation"</li> <li>"I/Q channels for direction of motion discrimination"</li> </ol> <p>Now I'm wondering how to find the position or the horizontal angle of the target using phase comparison of the 2 antennas. Below is the diagram I made for analysis.</p> <p><img src="https://i.sstatic.net/tz8s1.png" alt="enter image description here"></p> <p>Surely the 2 antennas receive the reflection from a single target with a phase shift. Now the question is: how do I find the phase shift by processing the I channel (let's say) from the 2 antennas? I'm not aware of the relevant DSP techniques, so I want suggestions.</p> <p>Also, for the direction of motion: what technique can be used with the I and Q channels of an antenna to deduce it? My DSP knowledge is again limited; how do I process the I and Q channels?</p> <p>I appreciate any suggestions to improve my knowledge and to learn more DSP techniques for signal analysis.</p>
<p>The way to do it is to correlate the received signal with the signal you sent out- likely a chirp or something. You correlate it with the signal from antenna 1, which will give you a time (location of the cross correlation peak) and phase (phase of the complex value of the peak). You do the same with the signal from antenna 2, getting a time and phase. If all is right the time should be the same (or, depending on your time resolution, very, very close). The phase difference is simply the difference of the phases of the cross correlation peaks from the two antennas.</p>
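A toy version of this procedure in Python (the chirp waveform, delay, and per-antenna phases are assumed for illustration):

```python
import numpy as np

# Reference chirp (assumed waveform) and two received copies that share
# one delay but have different carrier phases, as for two antennas
N = 256
n = np.arange(N)
ref = np.exp(1j * np.pi * n**2 / N)        # complex LFM chirp

delay = 40
phi1, phi2 = 0.3, 1.1                       # assumed per-antenna phases
rx1 = np.concatenate([np.zeros(delay), ref]) * np.exp(1j * phi1)
rx2 = np.concatenate([np.zeros(delay), ref]) * np.exp(1j * phi2)

def peak(rx, ref):
    c = np.correlate(rx, ref, mode='full')  # np.correlate conjugates ref
    k = np.argmax(np.abs(c))
    return k - (len(ref) - 1), np.angle(c[k])

lag1, p1 = peak(rx1, ref)
lag2, p2 = peak(rx2, ref)
print(lag1, lag2)        # same lag for both antennas
print(p2 - p1)           # ~0.8 = phi2 - phi1
```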
1,157
phase shift
Bode plot phase shift equation when poles and zeros are not at the origin
https://dsp.stackexchange.com/questions/89778/bode-plot-phase-shift-equation-when-poles-and-zeros-are-not-at-the-origin
<p>Let <span class="math-container">$$H(s)=\frac{s^{n}}{s^{m}}.$$</span> For <span class="math-container">$n \ne m$</span> the phase shift between output and input will be <span class="math-container">$\frac{\pi}{2}(n-m)$</span>.</p> <p>For situations where the poles and zeros are not at the origin, I could find the phase as <span class="math-container">$\Phi(s)= \arctan\frac{\Im[H(s)]}{\Re[H(s)]}$</span>, but I am asking if there is a shorter process.</p> <p>For example, if the transfer function were <span class="math-container">$$\frac{(s-a_{1})(s-a_{2})...(s-a_{n})}{(s-b_{1})(s-b_{2})...(s-b_{n})}$$</span> what would be the equation for the phase shift?</p>
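The usual shortcut for a rational transfer function: the phase of a product is the sum of the phases of its factors, so at <span class="math-container">$s=j\omega$</span> the phase is the sum of the zero-vector angles <span class="math-container">$\arg(j\omega-a_i)$</span> minus the sum of the pole-vector angles <span class="math-container">$\arg(j\omega-b_k)$</span> (modulo <span class="math-container">$2\pi$</span>). A quick numerical check in Python, with an assumed example transfer function:

```python
import numpy as np

def bode_phase(zeros, poles, w):
    """Phase of H(jw) = prod(jw - a_i) / prod(jw - b_k), as the sum of
    the zero-vector angles minus the pole-vector angles."""
    s = 1j * w
    return (sum(np.angle(s - a) for a in zeros)
            - sum(np.angle(s - b) for b in poles))

# Check against direct evaluation for H(s) = (s+1)/((s+2)(s+3)) at w = 4
w = 4.0
H = (1j*w + 1) / ((1j*w + 2) * (1j*w + 3))
print(bode_phase([-1], [-2, -3], w), np.angle(H))  # both ~ -0.709 rad
```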
1,158
phase shift
Why doesn&#39;t this complex multiplication in the frequency domain produce my expected phase shift?
https://dsp.stackexchange.com/questions/47063/why-doesnt-this-complex-multiplication-in-the-frequency-domain-produce-my-expec
<p>I know how to change the phase of a complex number by multiplying by $\cos \theta + i \sin \theta$. And I understand that the phase of a sine wave is reflected in its Fourier transform. So, I am trying to phase-shift a signal by changing the phase of its Fourier transform..</p> <p>This works for "synthetic" Fourier transforms, but when I try to FFT a signal, apply the phase change, and invert the FFT, I don't get the expected result. I am using MATLAB in the examples below.</p> <pre><code>Fs = 1000; Tmax = 2; L = Tmax * Fs; </code></pre> <p>For example, I'll build a synthetic Fourier sequence. The frequency intervals are 0.5 Hz, so a 10Hz component is at the 20th position after DC. (I'm just spitballing here; I know there's better ways to pick frequency bins than hand-jamming <code>21</code>.)</p> <pre><code>Ysynth = zeros(1, L); Ysynth(21) = 1000; </code></pre> <p>And rotate it by 45°:</p> <pre><code>Ysynth = Ysynth * exp(j * pi/4); </code></pre> <p>It has a 45° phase on the 10-Hz component:</p> <pre><code>polarscatter(angle(Ysynth), abs(Ysynth)) </code></pre> <p><img src="https://i.sstatic.net/lYwmg.png" alt="1"></p> <p>Now take its inverse FFT:</p> <pre><code>Xsynth = ifft(Ysynth); </code></pre> <p>So far so good. The generated signal has a phase shift of 45°. MATLAB does complain about the presence of an imaginary part when I plot it; I <em>think</em> this is because I didn't bother with the negative frequency component.</p> <pre><code>t = (0:L-1)/Fs; plot(t, Xsynth) </code></pre> <p><img src="https://i.sstatic.net/SPifu.png" alt="2"></p> <p>Now, I would expect the same principle to apply to a frequency spectrum taken from an actual signal. But I do not get similar results. 
Quickly:</p> <pre><code>X = cos(2*pi * 10 * t); Y = fft(X); Yshift = Y * exp(j * pi/4); Xshift = ifft(Yshift); </code></pre> <p>There is an imaginary component in Xshift, which I do not expect, and again MATLAB complains about.</p> <pre><code>plot(t, [Xshift; X]) </code></pre> <p><img src="https://i.sstatic.net/YXqvY.png" alt="3"></p> <p>There is no phase shift here, and the amplitude (of the real part) is different.</p> <p>I must be misunderstanding something about phase representation in FFTs. Why doesn't my transformation produce a phase-shifted version of the original?</p>
<p>If you want a phase-shifted strictly-real result, then you have to make sure the data you feed to the IFFT is conjugate symmetric. So make sure to reflect the complex conjugate of any phase changes you make. e.g. Make the upper or negative half of your IFFT input vector mirror the complex conjugate of the bottom or positive half of the input vector. </p> <p>Added: (Also, "mirror" usually means the array index goes in the opposite direction.)</p>
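In Python, the fix looks like this (the sample rate, length, and tone frequency mirror the MATLAB example; the rotation is applied with conjugate symmetry):

```python
import numpy as np

fs, L = 1000, 2000
t = np.arange(L) / fs
x = np.cos(2*np.pi*10*t)            # 10 Hz: an exact number of periods

X = np.fft.fft(x)
theta = np.pi/4

# Rotate positive frequencies by +theta and negative frequencies by the
# conjugate, -theta, so the IFFT input stays conjugate symmetric
rot = np.ones(L, dtype=complex)
rot[1:L//2] = np.exp(1j*theta)
rot[L//2+1:] = np.exp(-1j*theta)
x_shift = np.fft.ifft(X * rot)

print(np.max(np.abs(x_shift.imag)))                                  # ~0: real
print(np.max(np.abs(x_shift.real - np.cos(2*np.pi*10*t + theta))))   # ~0
```

The DC and Nyquist bins are left untouched; any phase applied there must itself be real (0 or π) to keep the spectrum conjugate symmetric.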
1,159
phase shift
Playrate change based on sinc interpolation generating phase shift?
https://dsp.stackexchange.com/questions/46043/playrate-change-based-on-sinc-interpolation-generating-phase-shift
<p>I want to simulate the play-rate change function of a modern digital sampler. For the purpose I have constructed a sinc-resampling based algorithm of my own which changes the play-rate of a digital audio signal.</p> <p>There is some issue with the algorithm, as the output of the algorithm is far from nulling with the output of my sampler software, when same play-rate is applied. In particular, the high-frequencies do not null. The audio produced by the sampler and my algorithm sounds identical and the magnitude of the FFT is extremely close, so the issue seems to be with the phase. Why does the algorithm not produce similar output to my sampler, in specific is there some error that introduces phase-shift?</p> <p>The algorithm is simple $y_i$ being the i+1th sample, $n$ the total amount of samples and $a$ the play-rate:</p> <p>$$f(x)=\sum_{i=0}^{n-1}y_i \text{ sinc}(x-i)$$</p> <p>$$g(a,x)=f(ax)$$</p> <p>And the ouput can be had as $\{g(a,0),g(a,1)...g(a,n-1)\}$.</p> <p>Python code:</p> <pre><code>import numpy def ratechange(data,rate): Y=numpy.zeros(len(data)) for i in range(len(data)): for j in range(len(data)): Y[i]+=data[j]*numpy.sinc((-j+i*rate)) return Y </code></pre>
1,160
phase shift
Nyquist Theorem adding two same frequency near to Nyquist Frequency with phase shift
https://dsp.stackexchange.com/questions/53605/nyquist-theorem-adding-two-same-frequency-near-to-nyquist-frequency-with-phase-s
<p>This is my first question on this platform. Sorry if I made mistakes.</p> <p>What happens if we add two or more signals of the same frequency, near the Nyquist frequency, with a phase shift between them, and then sample the sum?</p> <p>For example, assume that we have a signal of 18000Hz, and add another signal with the same frequency but with a 180 degree phase shift relative to our first signal. Then, sample this total signal at 36000Hz. What happens if we reconstruct this signal? Does this situation result in aliasing?</p> <p>Note: I'm trying to figure out how exactly sound recordings are sampled when musicians play their instruments. For example, assume that two keyboard players play a note at 20000Hz which will be sampled at 44100Hz. Since they are human beings, they are not supposed to hold the rhythm like in electronic music. Therefore, there will be a phase shift in their playing, and wouldn't this cause aliasing in the reconstructed signal?</p> <p>Thanks for the answers.</p>
<p>When you add 2 or more sinusoids at the same frequency <span class="math-container">$f_o$</span> but with different phase shifts, you get a sinusoid at the same frequency but with an additional attenuation term. Mathematically, you have the following: <span class="math-container">$$\cos(2\pi f_ot + \phi)+\cos(2\pi f_ot + \theta) = 2\cos\left(\frac{\phi - \theta}{2}\right)\cdot\cos\left(\frac{2\pi f_ot + \phi + 2\pi f_ot + \theta}{2}\right)$$</span><span class="math-container">$$= 2\cos\left(\frac{\phi - \theta}{2}\right)\cdot\cos\left(2\pi f_ot + \frac{\phi + \theta}{2}\right).$$</span> Basically, you have an attenuation term <span class="math-container">$2\cos(\frac{\phi-\theta}{2})$</span> which changes the amplitude of the sum of the sinusoids, but the frequency of the sinusoid remains at <span class="math-container">$f_o$</span>. Hence, if you were sampling it with <span class="math-container">$f_s&gt;2f_o$</span>, then even after adding them you will not have any aliasing.</p> <p>The only thing of concern is that, because the attenuation term drives the amplitude of the sinusoid, your signal might become 0, as in your example of a <span class="math-container">$2^{nd}$</span> sinusoid at the same frequency <span class="math-container">$f_o$</span> but with a phase difference (<span class="math-container">$\phi - \theta$</span>) of <span class="math-container">$\pi$</span> compared to the <span class="math-container">$1^{st}$</span> signal. That means the attenuation is <span class="math-container">$2\cos(\frac{\pi}{2})$</span>, which is equal to 0, and hence you actually have nothing to sample; the two signals are completely out of phase and cancel each other.</p>
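The sum-to-product identity (and the complete cancellation in the <span class="math-container">$\pi$</span> case) is easy to confirm numerically; the phases below are arbitrary assumed values:

```python
import numpy as np

f0, fs = 18000.0, 36000.0 * 4        # oversampled just for the check
t = np.arange(0, 0.001, 1/fs)
phi, theta = 0.9, 0.1                # assumed phases

lhs = np.cos(2*np.pi*f0*t + phi) + np.cos(2*np.pi*f0*t + theta)
rhs = 2*np.cos((phi - theta)/2) * np.cos(2*np.pi*f0*t + (phi + theta)/2)
print(np.max(np.abs(lhs - rhs)))     # ~0: the identity holds

# Completely out-of-phase case: the sum vanishes entirely
lhs2 = np.cos(2*np.pi*f0*t) + np.cos(2*np.pi*f0*t + np.pi)
print(np.max(np.abs(lhs2)))          # ~0
```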
1,161
phase shift
Calculating phase shift between two sines of equal, unknown, amplitude and frequency
https://dsp.stackexchange.com/questions/63904/calculating-phase-shift-between-two-sines-of-equal-unknown-amplitude-and-frequ
<p>I've got a source signal <span class="math-container">$s_1(t)$</span> and a system affecting that, yielding a phase-shifted version <span class="math-container">$s_2(t)$</span>;</p> <p><span class="math-container">\begin{align} \color{blue}{s_1(t)} &amp;= A\sin(t+\phi_1)\\ \color{red}{s_2(t)} &amp;= A\sin(t+\phi_2)\\ \end{align}</span></p> <p>The amplitude <span class="math-container">$A$</span> is unknown. How many samples (sampling happens periodically) would I need to verify that <span class="math-container">$\phi_1-\phi_2 = \frac32\pi$</span>?</p> <p><a href="https://i.sstatic.net/4Orce.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4Orce.png" alt="2 points on the signal and the filtered signal"></a></p> <p>As a practical example, given these 2 points on each curve: on the purple curve, the original signal: </p> <ul> <li>point A (231,3.533151)</li> <li>point B: (230,3.614015) </li> </ul> <p>and on the the filtered signal: the red curve:</p> <ul> <li>point C (231,1.398873)</li> <li>point D (230,1.174265)</li> </ul> <p>How would you confirm the phase shift of <span class="math-container">$\frac32\pi$</span>, given an equal -unknown- amplitude and equal -unknown- frequency of the 2 signals? Is a numerical solution possible for online, causal analysis? Or do you need 3 points on each curve? Thank you.</p>
<p>Two samples is sufficient and we can determine if the phase difference is indeed <span class="math-container">$\frac{3\pi}{2}$</span> as follows:</p> <p>Start with the hypothesis that the phase difference <span class="math-container">$\phi_1-\phi_2$</span> is <span class="math-container">$\frac{3\pi}{2}$</span>, which is equivalent to <span class="math-container">$-\frac{\pi}{2}$</span></p> <p>If and only if the phase is in such quadrature, then <span class="math-container">$s_1(t)$</span> and <span class="math-container">$s_2(t)$</span> will be the real and imaginary components of a single vector on the unit circle with radius A which can be given as <span class="math-container">$-jAe^{j(t+\phi_1)} = As_1(t) + jAs_2(t)$</span>. This starts at the angle <span class="math-container">$\phi_1-\pi/2$</span> when t = 0 and rotates counter-clock-wise with increasing t, which matches the plot shown with the purple line representing the real axis and the red line representing the imaginary.</p> <p>Starting from a position on the unit circle at time = <span class="math-container">$t_1$</span> with position given by the real and imaginary components <span class="math-container">$s_1(t_1)$</span> and <span class="math-container">$s_2(t_1)$</span>, we can move forward in time to <span class="math-container">$t_2$</span> over a determined angle which should predict <span class="math-container">$s_1(t_2)$</span> and <span class="math-container">$s_2(t_2)$</span>. 
If this prediction matches the given result, then this will confirm the hypothesis:</p> <p>The starting angle using the first sample as given by B and D is <span class="math-container">$\tan^{-1}(s_1(t_1)/s_2(t_1))$</span> <span class="math-container">$= \tan^{-1}(3.614015/1.174265) = 1.256637 $</span> radians.</p> <p>The change in angle to the second sample is simply the difference in time, since the frequency unit is 1 radian/sec given by <span class="math-container">$\sin(t)$</span>:</p> <p><span class="math-container">$\Delta\phi = t_2-t_1 = 231-230 = 1$</span> radian (NOTE! This alone clearly does not match the plot visually, so the numbers given cannot actually be derived from the plot! There must be a frequency factor multiplied by t in the actual formulas for this to match the plot).</p> <p>Therefore, if the phase difference were <span class="math-container">$-\frac{\pi}{2}$</span>, then the second angle would be at <span class="math-container">$1.256637 + 1$</span> radian or <span class="math-container">$2.256637$</span> radians, and the ratio <span class="math-container">$(s_1(t_2)/s_2(t_2))$</span> would be <span class="math-container">$\tan(2.256637)= -1.22195$</span>.</p> <p>This does not match the ratio for the second sample given by A and C, and therefore the phase difference cannot be <span class="math-container">$\frac{3\pi}{2}$</span> for the formulas as given. </p>
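As a sanity check on the arithmetic, the numbers above can be reproduced in Python (sample values copied from points B, D and A, C):

```python
import numpy as np

s1_t1, s2_t1 = 3.614015, 1.174265    # t = 230: points B and D
s1_t2, s2_t2 = 3.533151, 1.398873    # t = 231: points A and C

ang1 = np.arctan2(s1_t1, s2_t1)
print(ang1)                           # ~1.256637 rad

predicted_ratio = np.tan(ang1 + 1.0)  # advance the angle by t2 - t1 = 1 rad
print(predicted_ratio)                # ~ -1.222

print(s1_t2 / s2_t2)                  # ~2.526: does not match, so reject
```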
1,162
phase shift
Best method to extract phase shift between 2 sinusoids, from data provided
https://dsp.stackexchange.com/questions/8673/best-method-to-extract-phase-shift-between-2-sinosoids-from-data-provided
<p>I have been asking around if the way I was extracting phase shift (lag) was correct, and I ran into some trouble.</p> <p>So in general, given 2 arrays of data of the same length, representing:</p> <ol> <li>the input sinusoid and</li> <li>the response, also a sinusoid. </li> </ol> <p>So not knowing the functions that produced the input (but knowing the frequency, starting phase of the input, fs, and so on), what is the best way to find the phase lag?</p> <ul> <li>should I do it "manually" by looking at the graphs and finding the phase lag at each component?</li> <li>should I use FFT, like I have been trying?</li> </ul> <p>any suggestions, links, books are welcome.</p>
<p>You can use the <a href="http://en.wikipedia.org/wiki/Cross_correlation">cross-correlation</a> function to determine the lag between the two signals.</p>
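A minimal Python sketch of this (the frequency is an assumed example, and the phase lag is chosen to be an integer number of samples so the correlation peak is exact):

```python
import numpy as np

fs, f0 = 1000.0, 50.0
t = np.arange(0, 0.5, 1/fs)
x = np.sin(2*np.pi*f0*t)                 # input
y = np.sin(2*np.pi*f0*t - np.pi/5)       # response lagging by pi/5
                                         # (= exactly 2 samples here)

c = np.correlate(y, x, mode='full')
lags = np.arange(-len(x) + 1, len(x))

# The cross-correlation of sinusoids is periodic, so restrict the
# search to lags within half a period
period = int(fs / f0)
mask = np.abs(lags) <= period // 2
best = lags[mask][np.argmax(c[mask])]

phase_lag = 2*np.pi*f0*best/fs
print(best, phase_lag)                   # 2 samples -> ~pi/5 radians
```

For non-integer sample lags, interpolating around the correlation peak gives sub-sample resolution.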
1,163
phase shift
Moving Average Filter. General Form of a Filtered Signal. How to Determine the Phase Shift?
https://dsp.stackexchange.com/questions/95436/moving-average-filter-general-form-of-a-filtered-signal-how-to-determine-the-p
<p>Let's consider the following filter <span class="math-container">$$y[n]=\frac{1}{3}(u[n-2]+u[n-1]+u[n])$$</span> and <span class="math-container">$M=3$</span>, <span class="math-container">$k=\overline{0,2}$</span> with <span class="math-container">$$u[n]=\sin\left(\frac{\pi}{10}n\right)+\sin\left(\frac{\pi}{4}n\right)+\sin\left(\frac{3\pi}{4}n\right).$$</span> Next, I computed the <strong>frequency response</strong> <span class="math-container">$z \longrightarrow e^{j\omega}:$</span> <span class="math-container">$$H(\omega)=\frac{1}{3}e^{-jw}(2\cos(\omega)+1).$$</span></p> <p>Next step was to compute the <strong>magnitude phase</strong>: <span class="math-container">$\left|H(\omega)\right|$</span>. I have computed the magnitude for each frequency component of the input signal:</p> <ol> <li><p>For <span class="math-container">$\displaystyle \omega =\frac{\pi}{10} \Rightarrow \left|H(\omega)\right|=0.967$</span></p> </li> <li><p>For <span class="math-container">$\displaystyle \omega =\frac{\pi}{4} \Rightarrow \left|H(\omega)\right|=0.805$</span></p> </li> <li><p>For <span class="math-container">$\displaystyle \omega =\frac{3\pi}{4} \Rightarrow \left|H(\omega)\right|=0.138$</span></p> </li> </ol> <p>Is it correct to conclude that the filtered signal representation is:</p> <p><span class="math-container">$$y[n]=0.967\sin\left(\frac{n\pi}{10}\right)+0.805\sin\left(\frac{n\pi}{4}\right)+0.138\sin\left(\frac{3n\pi}{4}\right)$$</span></p> <p>or should be taken into consideration also the phase shift?</p> <p>If yes, how to determine the phase shift? Or even better how to understand what phase shift is?</p>
<p>You correctly computed the frequency response, but your output signal isn't correct (as you already suspected), because you didn't take the phase shift into account.</p> <p>The response of an LTI system with frequency response</p> <p><span class="math-container">$$H(\omega)=\big|H(\omega)\big|e^{j\phi(\omega)}\tag{1}$$</span></p> <p>to a sinusoidal input <span class="math-container">$x(t)=\sin(\omega_0t)$</span> is given by</p> <p><span class="math-container">$$y(t)=\big|H(\omega_0)\big|\sin\big(\omega_0t+\phi(\omega_0)\big)\tag{2}$$</span></p> <p>Since the system is linear, the response to a (weighted) sum of sinusoids</p> <p><span class="math-container">$$x(t)=\sum_na_n\sin(\omega_nt)$$</span></p> <p>is</p> <p><span class="math-container">$$y(t)=\sum_na_n\big|H(\omega_n)\big|\sin\big(\omega_nt+\phi(\omega_n)\big)$$</span></p> <p>The phase shift <span class="math-container">$\phi(\omega)$</span> results in a frequency dependent delay <span class="math-container">$\tau(\omega)=-\phi(\omega)/\omega$</span>:</p> <p><span class="math-container">$$\sin\big(\omega_0t+\phi(\omega_0)\big)=\sin\left(\omega_0\left(t+\frac{\phi(\omega_0)}{\omega_0}\right)\right)=\sin\big(\omega_0\big(t-\tau(\omega_0)\big)\big)$$</span></p> <p>This frequency dependent delay <span class="math-container">$\tau(\omega)$</span> is referred to as <a href="https://en.wikipedia.org/wiki/Group_delay_and_phase_delay#Phase_delay" rel="nofollow noreferrer">phase delay</a>.</p>
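To see the phase delay in action, here is a Python check that the closed-form <span class="math-container">$H(\omega)$</span> from the question predicts the steady-state filter output exactly:

```python
import numpy as np
from scipy.signal import lfilter

b = np.ones(3) / 3.0                      # the 3-tap moving-average filter
n = np.arange(2000)
freqs = [np.pi/10, np.pi/4, 3*np.pi/4]
u = sum(np.sin(w*n) for w in freqs)

y = lfilter(b, [1.0], u)                  # actual filter output

# Closed-form frequency response from the question
H = lambda w: (1/3) * np.exp(-1j*w) * (2*np.cos(w) + 1)
y_pred = sum(np.abs(H(w)) * np.sin(w*n + np.angle(H(w))) for w in freqs)

print(np.max(np.abs(y[10:] - y_pred[10:])))   # ~0 after the 2-sample transient
```

Note that at <span class="math-container">$\omega=3\pi/4$</span> the factor <span class="math-container">$2\cos\omega+1$</span> is negative, so `np.angle` automatically absorbs the sign flip of <span class="math-container">$\pi$</span> into the phase.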
1,164
phase shift
Phase-shift filter design
https://dsp.stackexchange.com/questions/2934/phase-shift-filter-design
<p>If a time-domain signal has sharp corners, its frequency spectrum will contain high-frequency components. Truncating the spectrum results in Gibbs' phenomenon. So if you're trying to design an FIR, you really want the target frequency response to be nice and smooth so that windowing the impulse response down to a finite length doesn't distort the frequency response too much.</p> <p>Currently I'm contemplating trying to design a very strange filter: One that has unit gain at all frequencies, but <em>non-zero</em> phase. I'm wondering whether or not a similar phenomenon occurs: If the filter has unit gain at all frequencies, then what does truncating the impulse response do to the phase alignment?</p>
<p>This would be an allpass filter. Except for the trivial cases of unity gain and integer-sample delays, these can't be done as FIR filters, and in general an IIR filter is required. However, they are easy to make. The zeroes of an allpass are simply the reciprocals of the poles (and vice versa). If you have the poles in polynomial form, you can simply flip the coefficient order to get the zero polynomial. For example, a second-order allpass looks like this: $$H(z)=\frac{a_{2}\cdot z^{0} + a_{1}\cdot z^{-1} + a_{0}\cdot z^{-2}}{a_{0}\cdot z^{0} + a_{1}\cdot z^{-1} + a_{2}\cdot z^{-2}}$$ Strict allpass filters have $\left \|H(e^{j\cdot \omega })\right \|=1 $ for all frequencies. You can certainly design approximations using FIR filters if you only need this property for a limited frequency range and if the magnitude doesn't have to be exactly unity.</p>
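A quick numerical check of the flipped-coefficient construction (the pole polynomial below is an assumed stable example, with poles at <span class="math-container">$0.3 \pm 0.4j$</span>):

```python
import numpy as np
from scipy.signal import freqz

# Denominator (pole) polynomial, assumed stable; the allpass numerator
# is just the same coefficients in reversed order
a = np.array([1.0, -0.6, 0.25])
b = a[::-1]

w, H = freqz(b, a, worN=1024)
print(np.max(np.abs(np.abs(H) - 1.0)))   # ~0: unit magnitude everywhere
```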
1,165
phase shift
Is the magnitude of the phase shift independent of the volume of the sound?
https://dsp.stackexchange.com/questions/84054/is-the-magnitude-of-the-phase-shift-independent-of-the-volume-of-the-sound
<p>Sound waves can cause vibration of the particles/objects that are scattering/reflecting/emitting the light. Since vibration is spatial displacement, it causes a phase shift by affecting the value of &quot;<span class="math-container">$x$</span>&quot; in the phase formula for light (an electromagnetic wave): <span class="math-container">$\phi = k x - \omega t$</span>.</p> <p>Is it true that the louder the sound, the larger the phase shift? Or is the magnitude of the shift independent of the volume of the sound?</p> <p>This question could be rather tricky. In light of the answers I have received, let me put together the whole experiment.</p> <p>First of all, we need a large enough source of electromagnetic waves, so large that there will always be a portion of the waves that can escape the scattering, diffusion, and other factors that destroy coherence, and make it to its destination.</p> <p>Second, we need a mirror that is large enough and smooth enough to produce enough reflected signal (e.g., reflected light) to be captured by a receiver that is insignificantly small relative to both the signal source and the mirror.</p> <p>Finally, we need receivers that are light enough to move because of the oscillations of the sound waves. When all three are put together, we will be able to modulate the position of the receiver through sound waves and thus indirectly modulate the phase of the reflected electromagnetic wave.</p> <p>If this understanding is correct, then the phase shift will actually depend only on the frequency of the sound, not on the volume. Unless, that is, the sound is loud enough to destroy those sophisticated receivers.</p> <p>My question is: is this understanding correct? If we want to run such an experiment, what settings should we pay attention to?</p>
<p>In most cases the answer will be &quot;no&quot;. Talking about a &quot;phase&quot; of light implies a mono-chromatic coherent light wave. Most scattering, diffusion or reflecting processes will destroy the coherence and there is no way to define the phase of the reflected light. It doesn't matter if the reflecting/scattering particles are moving or not.</p> <p>A potential exception would be a reflective (or diffractive) object that's large and smooth enough to create a specular reflection. In this case the coherence of the light is maintained and you can determine the phase of the reflected light.</p> <p>What exactly happens in this case, will depend a lot on the specific setup, but you could construct a setup where a sound wave could modulate the position of a (very light weight) mirror to modulate the phase of a light wave.</p>
1,166
phase shift
Phase shifts of QPSK
https://dsp.stackexchange.com/questions/60159/phase-shifts-of-qpsk
<p>I'm honestly lost on creating timing diagrams for bit sequences. I understand for QPSK there are symbols 00, 01, 10, 11 with phase shifts of 45,135,225 and 315. </p> <p><a href="https://i.sstatic.net/ztioA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ztioA.png" alt="enter image description here"></a></p> <p>I'm just not understanding how you get these diagrams. I graphed an example on paper but it doesn't seem consistent (using the bit sequence below). </p> <p>Maybe I'm a bit lost on the whole even and odd bit sequences. </p> <p>For example, I'm looking at a bit stream of 01100101 0 (also I assume that lone bit at the end can be written as 00)</p> <p>So b(t) = 011001010</p> <p>be(t)=X0100</p> <p>bo(t)=X1011</p> <p>E denoting Even, O denoting Odd</p> <p>Thanks </p> <p>Edit: Also again, for example, I grabbed this from Wikipedia <a href="https://i.sstatic.net/cJsdv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cJsdv.png" alt="enter image description here"></a></p> <p><a href="https://en.wikipedia.org/wiki/Phase-shift_keying#Bit_error_rate_2" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Phase-shift_keying#Bit_error_rate_2</a></p> <p>I feel I'm just missing something that will make it all click. Thanks!</p>
<p>In QPSK, the bit information is embedded in the phase of the carrier signal. For example, for bits 00, the phase shift is 45 degrees. This means that for the carrier signal <span class="math-container">$x(t) = \cos(2\pi f_c t + \phi), \ 0 \lt t \lt T$</span>, the phase shift is <span class="math-container">$\phi = \pi / 4$</span>. So <span class="math-container">$x(t) = \cos(2\pi f_c t + \pi / 4) = \cos(2\pi f_c t) \cos(\pi/4) - \sin(2\pi f_ct)\sin(\pi/4)$</span>. The in-phase component corresponding to <span class="math-container">$\cos(2\pi f_c t)$</span> has a factor of <span class="math-container">$\cos(\pi/4)$</span>, which is what you see in the I component. Similar logic applies to the Q component corresponding to <span class="math-container">$\sin(2\pi f_c t)$</span>. If the bits map to a phase of <span class="math-container">$3\pi/4$</span>, then the I component will have a factor of <span class="math-container">$\cos(3 \pi/ 4)$</span>, which is <span class="math-container">$-\frac{1}{\sqrt{2}}$</span>, so you will see a phase shift of 180 degrees due to the <span class="math-container">$-1$</span> factor. </p>
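A small sketch of the bit-to-I/Q mapping in Python. The dictionary below follows the listing in the question (00, 01, 10, 11 against 45°, 135°, 225°, 315°); Gray-coded constellations order these differently, so treat this particular mapping as an assumption:

```python
import numpy as np

# Assumed dibit-to-phase mapping, per the question's listing
phase = {'00': np.pi/4, '01': 3*np.pi/4, '10': 5*np.pi/4, '11': 7*np.pi/4}

bits = '011001010' + '0'                 # pad the lone trailing bit with 0
symbols = [bits[i:i+2] for i in range(0, len(bits), 2)]

for s in symbols:
    p = phase[s]
    I, Q = np.cos(p), np.sin(p)          # x(t) = I*cos(wc*t) - Q*sin(wc*t)
    print(s, round(I, 3), round(Q, 3))   # every I, Q is +/- 0.707
```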
1,167
phase shift
I can&#39;t understand why there is a phase shift in phase modulation of a sawtooth wave
https://dsp.stackexchange.com/questions/73601/i-cant-understand-why-there-is-a-phase-shift-in-phase-modulation-of-a-sawtooth
<p>This is regarding question 5.1-2 (see picture attached) from <em>Modern digital and analog communications, Lathi &amp; Zhi Ding (2010)</em>.</p> <p><a href="https://i.sstatic.net/ZG49j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZG49j.png" alt="Question" /></a></p> <p>I do not understand what the behavior should be at the discontinuity (sudden jump from 1 to -1). I ran the following code on MATLAB (using simpler values):</p> <pre><code>a = 0.25; t = -a:0.0000001:a; % time wc = 2*pi*1e2; % carrier frequency m = sawtooth(2*pi*5*t,0.9); % message signal phi_am = cos(wc*t+(pi/2)*m); % phase modulated signal plot(t,phi_am); ylim([-2 2]); hold on; plot(t,m); hold off; </code></pre> <p>and it showed me a phase shift by <span class="math-container">$\pi$</span>, and that is what the solution is. But I do not understand how.</p> <p>Also, if some light on why must <span class="math-container">$k_p &lt; \pi$</span> in part (b) then I'd appreciate it.</p>
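One way to see where the <span class="math-container">$\pi$</span> comes from without plotting: at the sawtooth's discontinuity the message jumps by <span class="math-container">$\Delta m = -2$</span>, so the instantaneous carrier phase <span class="math-container">$k_p\,m(t)$</span> jumps by <span class="math-container">$k_p\,\Delta m$</span>; with <span class="math-container">$k_p=\pi/2$</span> that is <span class="math-container">$-\pi$</span>, i.e. a 180° flip. It also suggests why one wants <span class="math-container">$k_p<\pi$</span>: the largest possible jump <span class="math-container">$|k_p\,\Delta m| = 2k_p$</span> then stays below <span class="math-container">$2\pi$</span>, so the phase change remains unambiguous. A one-line check of this reading:

```python
import numpy as np

kp = np.pi/2                      # phase sensitivity from the MATLAB snippet
m_before, m_after = 1.0, -1.0     # sawtooth value across the discontinuity
dphi = kp * (m_after - m_before)  # instantaneous phase jump of cos(wc*t + kp*m)
print(dphi % (2*np.pi))           # pi -> the observed 180-degree shift
```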
1,168
phase shift
Confusion understanding phase shift/delay?
https://dsp.stackexchange.com/questions/75059/confusion-understanding-phase-shift-delay
<p>I am reading Proakis. As shown highlighted in the attached snapshot, there is a '-' sign along with j, but it is still written (underlined in red) that this implies a shift in the positive n direction. Why and how is the shift positive when we have a negative sign with j?</p> <p><a href="https://i.sstatic.net/kMsb7.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kMsb7.jpg" alt="enter image description here" /></a></p>
<p>Delay is the negative derivative of phase with respect to frequency. Phase that increases linearly negative as frequency increases will result in a positive (later) delay in time. This is given by the Fourier Transform property of time delay:</p> <p><span class="math-container">$$\mathscr{F} \{f(t-\tau)\} = e^{-j\tau \omega}F(\omega)$$</span></p> <p>And is easy to prove from the Fourier Transform:</p> <p><span class="math-container">$$\mathscr{F} \{f(t-\tau)\} = \int_{-\infty}^\infty f(t-\tau)e^{-j\omega t}dt$$</span></p> <p><span class="math-container">$$ = \int_{-\infty}^\infty f(t-\tau)e^{-j\tau \omega}e^{j\tau \omega}e^{-j\omega t}dt$$</span></p> <p><span class="math-container">$$ = e^{-j\tau \omega}\int_{-\infty}^\infty f(t-\tau)e^{-j\omega (t-\tau)}dt$$</span></p> <p>If we substitute <span class="math-container">$u=t-\tau$</span>, then <span class="math-container">$du = dt$</span> and we get:</p> <p><span class="math-container">$$ = e^{-j\tau \omega}\int_{-\infty}^\infty f(u)e^{-j\omega u}du =e^{-j\tau \omega}F(\omega) $$</span></p> <p>Also without going into the details of the Fourier Transform, consider how we subtract phase in the time domain to shift a sine wave in time to the right:</p> <p><a href="https://i.sstatic.net/rGJQA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rGJQA.png" alt="enter image description here" /></a></p>
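The time-delay property is easy to verify numerically with the DFT (an integer delay is used so the circular shift matches exactly):

```python
import numpy as np

N = 128
x = np.random.randn(N)
tau = 5                                   # delay in samples

X = np.fft.fft(x)
w = 2 * np.pi * np.fft.fftfreq(N)         # radians per sample
X_delayed = np.exp(-1j * w * tau) * X     # e^{-j tau w} F(w)

# Matches the spectrum of the (circularly) delayed signal
print(np.max(np.abs(np.fft.fft(np.roll(x, tau)) - X_delayed)))   # ~0
```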
1,169
phase shift
How to calculate the phase shift AND time delay of non-periodic signals
https://dsp.stackexchange.com/questions/44057/how-to-calculate-the-phase-shift-and-time-delay-of-non-periodic-signals
<p>Say I have two non-periodic signals, $f_1(t)$ and $f_2(t)$, with Fourier transforms $F_1(\omega)$ and $F_2(\omega)$. Basically, I need to line up $f_1(t)$ and $f_2(t)$ as close as possible, and I am allowed to shift the signals in time and multiply their Fourier transforms by a constant phase.</p> <p>What is the most efficient method to find the values of the phase shift $\Delta \phi$ and the time delay $\Delta t$ such that, when the Fourier transform of $f_2$ is modified into</p> <p>$$ F_2(\omega) e^{i (\Delta \phi + \omega \Delta t)}, $$</p> <p>the modified $f_2$ best "lines up" with $f_1$ according to their cross-correlation (or some other measurement)?</p> <p>I am interested in simple and computationally efficient methods, even if they are not necessarily perfect.</p>
<p>A pure time delay could be determined by looking for a peak in the cross correlation. But in your case $f_2$ might also have an overall phase offset.</p> <p>You could try to compute two cross correlations:</p> <p>$$ \begin{align} x &amp;= cross(f_1,f_2) \\ y &amp;= cross(f_1,hilbert(f_2)) \\ \end{align} $$</p> <p>where $hilbert(f_2)$ refers to an overall 90° phase shifted version of $f_2$. If you combine those two like this</p> <p>$$ z = \sqrt{x^2 + y^2} $$</p> <p>you should get something that is independent of the phase shift and shows you a peak at the correct time delay $\Delta t$. The "phase" at that peak, $atan2(y,x)$, should give you the phase offset $\Delta\phi$.</p> <p>I don't know if such a problem is usually solved this way and I have not tried it myself. But it might work.</p>
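As a proof of concept for the recipe above (which the answer itself hedges as untried), here is a NumPy/SciPy sketch. The band-limited test signal, the 7-sample delay, and the 40° offset are invented values, the correlations are computed circularly via the FFT for convenience, and the argument order is chosen so that the peak index equals the delay of f2 relative to f1.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
n = 512
m = np.arange(n)

# Band-limited test signal f1, and f2 = a delayed, phase-rotated copy of it
env = np.convolve(rng.standard_normal(n), np.ones(16) / 16, mode="same")
f1 = env * np.cos(2 * np.pi * 0.2 * m)
f1_a = hilbert(f1)                         # analytic signal of f1
delay_true, phi_true = 7, np.deg2rad(40.0)
f2 = np.roll((f1_a * np.exp(1j * phi_true)).real, delay_true)

def ccorr(a, b):
    """Circular cross-correlation of a against b, via the FFT."""
    return np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real

x = ccorr(f2, f1)                          # cross(f1, f2)
y = ccorr(np.imag(hilbert(f2)), f1)        # cross(f1, 90°-shifted f2)
z = np.sqrt(x**2 + y**2)                   # phase-independent envelope

k = np.argmax(z)
delay_est = k                              # lag of the envelope peak
phi_est = np.arctan2(y[k], x[k])           # phase offset at that lag

assert delay_est == delay_true
assert abs(phi_est - phi_true) < 0.05
```

Note that x + jy is just the cross-correlation of f1 with the analytic signal of f2, which is why its magnitude is insensitive to the phase offset.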
1,170
phase shift
How do I perform a time domain phase shift in the frequency domain?
https://dsp.stackexchange.com/questions/64389/how-do-i-perform-a-time-domain-phase-shift-in-the-frequency-domain
<p>I've seen some information on this topic around, but I don't quite understand it.</p> <p>I have a time domain signal. I understand that if I want to time shift this signal, I can do so by multiplying its Fourier transform by <span class="math-container">$\exp(-j\omega\delta t)$</span>, where <span class="math-container">$\delta t$</span> is the time delay. </p> <p>However, what if I want to introduce a phase shift in the time domain signal? What would I multiply its Fourier transform by? Or do I have to do this transformation in the time domain?</p> <p>For context, I am trying to apply both a time delay and phase difference to a time series to correlate with another one, given that the detectors that produced the data are a distance apart and have different orientation towards the source of the signal. </p>
<p>Let’s be clear on what we will refer to as time delay and phase shift. Due to the common association of individual frequencies as sinusoids many confuse delay and phase shift as being equivalent. However a phase shift in time is simply multiplying a time sample by <span class="math-container">$e^{j\phi}$</span> while a time delay is displacing the sample in time such as done with <span class="math-container">$x(t-\tau)$</span>.</p> <p>The same thing occurs in frequency, where a phase shift of a particular carrier is done by multiplying that carrier by <span class="math-container">$e^{j\phi}$</span>, while a frequency displacement is done by moving the carrier to a new frequency such as <span class="math-container">$X(\omega-\omega_c)$</span>.</p> <p>As you showed, a delay in time is a linear phase versus frequency in the frequency domain.</p> <p>Similarly due to the bidirectional properties of the Fourier Transform, a displacement in the frequency domain would be a linear phase versus time in the time domain, specifically <span class="math-container">$e^{j\omega t}$</span>. Multiplying a time domain signal by <span class="math-container">$e^{j\omega t}$</span> with <span class="math-container">$\omega t$</span> as a linear phase ramp, results in a frequency translation, while multiplying a time domain signal by <span class="math-container">$e^{j\phi}$</span> with <span class="math-container">$\phi$</span> as a constant, results in a static phase shift (or if it makes it clearer: a phase displacement, or a phase rotation). This phase displacement is not directly a time displacement.</p> <p>Further, any multiplication in the frequency domain would be equivalent to a convolution in the time domain (and vice versa). 
If you want a frequency-domain operation that results in a phase change in the time domain (equivalent to a multiplication of the time-domain samples), it would need to be done by a convolution in the frequency domain.</p> <p>To provide further details we need to understand more precisely what is meant or intended by a “phase shift in the time domain”: specifically, whether this means all time samples are rotated by the same phase (which is done by convolving the frequency domain function with a complex impulse at f=0 with magnitude = 1 and associated phase, such as <span class="math-container">$\delta(\omega)e^{j\phi}$</span>), or whether the time samples have a linearly increasing or decreasing phase versus time (which is done by convolving the frequency domain function with an impulse at a frequency offset, <span class="math-container">$\delta(\omega-\omega_c)$</span>).</p>
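The distinction between a constant phase rotation and a time delay can be seen in a few lines of NumPy (the tone frequencies and the π/4 phase are arbitrary): a uniform rotation by e^{-jφ} matches a time delay only when the signal contains a single frequency.

```python
import numpy as np

t = np.arange(1000) / 1000.0
f1, f2 = 5.0, 13.0
phi = np.pi / 4

# Phase shift: every sample rotated by the same constant e^{-j*phi}
x = np.exp(2j * np.pi * f1 * t) + np.exp(2j * np.pi * f2 * t)
rotated = x * np.exp(-1j * phi)

# Time delay: each tone is rotated in proportion to its own frequency
tau = phi / (2 * np.pi * f1)            # the delay that rotates the f1 tone by phi
delayed = (np.exp(2j * np.pi * f1 * (t - tau))
           + np.exp(2j * np.pi * f2 * (t - tau)))

# With two tones present the two operations differ...
assert not np.allclose(rotated, delayed)

# ...but for a single tone they coincide exactly
x1 = np.exp(2j * np.pi * f1 * t)
assert np.allclose(x1 * np.exp(-1j * phi), np.exp(2j * np.pi * f1 * (t - tau)))
```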
1,171
phase shift
Behavior at DC and Nyquist of an ideal phase shifter
https://dsp.stackexchange.com/questions/84914/behavior-at-dc-and-nyquist-of-an-ideal-phase-shifter
<p>In Matt L's <a href="https://dsp.stackexchange.com/a/31616/59893">answer</a> he states that an ideal phase shifter with a phase shift <span class="math-container">$\theta$</span> has a frequency response</p> <p><span class="math-container">$$ H(\omega)= \begin{cases} e^{-j\theta},&amp;\omega&gt;0 \\ e^{j\theta},&amp;\omega&lt;0 \end{cases} $$</span></p> <p>But what about DC and Nyquist? If I want to shift the phase of a real-valued signal <span class="math-container">$x[n]$</span>, I suppose to get another real-valued signal <span class="math-container">$y[n]$</span> rather than a complex-valued signal. So <span class="math-container">$H(\omega)$</span> should be real at DC and Nyquist (actually in discrete world, <span class="math-container">$H[k]$</span>) as well as <span class="math-container">$X(\omega)$</span> and <span class="math-container">$Y(\omega)$</span>.</p> <p>If we just let the frequency responses at these two frequencies equal to <span class="math-container">$1$</span>, I can simply come up with an anti example that <span class="math-container">$\theta=\pi$</span>, in which case <span class="math-container">$y[n] = -x[n]$</span>, <span class="math-container">$Y(\omega) = -X(\omega)$</span> and thus <span class="math-container">$H(\omega) = -1$</span> for all frequencies.</p> <p>Can anyone point my mistake out, thanks.</p>
<p>The frequency response of the ideal phase shifter can be written as</p> <p><span class="math-container">$$H(\omega)=\cos(\theta)-j\,\textrm{sgn}(\omega)\sin(\theta)\tag{1}$$</span></p> <p>where <span class="math-container">$\textrm{sgn}(\omega)$</span> is the <a href="https://en.wikipedia.org/wiki/Sign_function" rel="nofollow noreferrer">signum function</a>.</p> <p>Since <span class="math-container">$\textrm{sgn}(0)=0$</span>, we have</p> <p><span class="math-container">$$H(0)=\cos(\theta)\tag{2}$$</span></p> <p>which is purely real.</p> <p>Note that for a Hilbert transformer we have <span class="math-container">$\theta=\pi/2$</span>, and, according to <span class="math-container">$(2)$</span>, <span class="math-container">$H(0)=0$</span>.</p> <p>In discrete time, the same is true at Nyquist.</p>
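A quick numerical check of this frequency response (the DFT-based implementation and the random test signal are my own illustrative choices): taking sgn = 0 at both DC and Nyquist, as the answer states, keeps a real input real for any θ, and θ = π reduces to plain negation, matching the question's example.

```python
import numpy as np

def phase_shift(x, theta):
    """Ideal phase shifter H = cos(theta) - j*sgn(w)*sin(theta),
    applied via the DFT; sgn is taken as 0 at DC and at Nyquist."""
    N = len(x)
    s = np.sign(np.fft.fftfreq(N))
    if N % 2 == 0:
        s[N // 2] = 0.0              # Nyquist bin treated like DC
    H = np.cos(theta) - 1j * s * np.sin(theta)
    return np.fft.ifft(np.fft.fft(x) * H)

rng = np.random.default_rng(1)
x = rng.standard_normal(256)

y = phase_shift(x, np.deg2rad(60))
assert np.max(np.abs(y.imag)) < 1e-10                 # real in, real out
assert np.allclose(phase_shift(x, np.pi).real, -x)    # theta = pi negates
```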
1,172
phase shift
How do you relate imaginary numbers with phase shift? How to imagine this?
https://dsp.stackexchange.com/questions/11173/how-do-you-relate-imaginary-numbers-with-phase-shift-how-to-imagine-this
<p><img src="https://i.sstatic.net/eCpbp.gif" alt="enter image description here"> So I found this answer on one of the questions. I wanted to ask how the complex part can be related with the phase shift. I wanted to ask this on the same question but it didn't allow me. Could anyone please help me understand this concept. Even if I imagine this as a spiral with the value of the imaginary part changing, the starting time of the wave remains the same doesn't it? So how is it related to phase shift?</p> <p>"Negative frequency doesn't make much sense for sinusoids, but the Fourier transform doesn't break up a signal into sinusoids, it breaks it up into complex exponentials:</p> <p>$F(\omega)=\int_{-\infty}^{\infty} f(t)e^{−jωt}dt$</p> <p>These are actually spirals, spinning around in the complex plane:</p> <p>complex exponential showing time and real and imaginary axes</p> <p>Spirals can be either left-handed or right-handed (rotating clockwise or counterclockwise), which is where the concept of negative frequency comes from. You can also think of it as the phase angle going forward or backward in time.</p> <p>In the case of real signals, there are always two equal-amplitude complex exponentials, rotating in opposite directions, so that their real parts combine and imaginary parts cancel out, leaving only a real sinusoid as the result. This is why the spectrum of a sine wave always has 2 spikes, one positive frequency and one negative. Depending on the phase of the two spirals, they could cancel out leaving a purely real sine wave or a real cosine wave or a purely imaginary sine wave, etc."</p>
<p>Paint a red dot on the blue complex spiral at t=0, where the value is 1.0. Now roll the spiral upside-down until the red dot is at -1.0, still at t=0. This roll would be equivalent to a phase shift of Pi. Note that the real part no longer corresponds to a cosine wave that starts at t=0. </p> <p>But the real part of the rolled spiral does correspond to a cosine wave that starts (with value 1.0) at +Pi or -Pi, which is a different starting time, contradicting one of your assumptions above.</p> <p>Thus phase is related to starting time.</p>
1,173
phase shift
Does phase shift introduce high frequencies?
https://dsp.stackexchange.com/questions/89358/does-phase-shift-introduce-high-frequencies
<p>In PSK phase modulation scheme, the signal phase is shifted, hence it looks something like this: <a href="https://i.sstatic.net/jU5bj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jU5bj.png" alt="enter image description here" /></a></p> <p>I cannot understand why this doesn't introduce a set of unwanted high-frequencies, as the steep changes are supposed to reflect as so in the frequency domain.</p>
<p>The steps in the time domain for unshaped PSK refer to a rectangular baseband modulation waveform. The frequency response of this is a Sinc function which does indeed contain high frequency components, compared to a pulse shaped version where the time domain transitions are tapered rather than stepped.</p> <p>When modulated to a carrier, the spectrum at baseband is simply frequency translated to the carrier frequency. Thus at a carrier the spectrum associated with the unshaped BPSK waveform will have wider occupied bandwidth, due to those higher frequency components.</p> <p>It is typical to shape the PSK waveform such as with root-raised cosine pulse shaping, to limit the occupied bandwidth and therefore improve the spectral efficiency. The resulting waveform the OP is seeing is not shaped. To see the significance of this, I added a plot showing the frequency spectrum for PSK waveform without pulse-shaping (blue) and with pulse-shaping (red). The step in phase shift introduces significant high frequency components that can be significantly reduced if stepping is not allowed.</p> <p><a href="https://i.sstatic.net/N0ACx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N0ACx.png" alt="spectrum" /></a></p> <p>And here is a portion of the modulation waveform in the time domain with and without pulse shaping. It's rather straightforward: A slow change in time corresponds to lower frequencies, a rapid change in time corresponds to high frequencies:</p> <p><a href="https://i.sstatic.net/EdSHw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EdSHw.png" alt="time domain" /></a></p>
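The effect can also be checked numerically without plots. In this sketch a Hann pulse stands in for the root-raised-cosine shaping described above (it is simpler to construct and shows the same qualitative effect); the symbol count and samples per symbol are arbitrary. Tapering the transitions removes most of the power outside a couple of symbol-rate bandwidths.

```python
import numpy as np

sps = 8                        # samples per symbol
nsym = 256
rng = np.random.default_rng(2)
bits = rng.integers(0, 2, nsym) * 2 - 1      # BPSK symbols, +/-1

# Unshaped: rectangular pulses, i.e. stepped phase transitions
rect = np.repeat(bits, sps).astype(float)

# Shaped: the same symbols through a Hann pulse (stand-in for RRC)
pulse = np.hanning(2 * sps)
up = np.zeros(nsym * sps)
up[::sps] = bits
shaped = np.convolve(up, pulse, mode="same")

def oob_fraction(x, bw):
    """Fraction of signal power outside |f| < bw (cycles/sample)."""
    p = np.abs(np.fft.fft(x)) ** 2
    f = np.fft.fftfreq(len(x))
    return p[np.abs(f) >= bw].sum() / p.sum()

# Beyond twice the symbol rate, the tapered waveform leaks far less power
oob_rect = oob_fraction(rect, 2.0 / sps)
oob_shaped = oob_fraction(shaped, 2.0 / sps)
assert oob_shaped < 0.2 * oob_rect
```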
1,174
phase shift
Why would someone need a Spatial Light Modulator with a maximum phase shift $&gt; 2\pi$
https://dsp.stackexchange.com/questions/35092/why-would-someone-need-a-spatial-light-modulator-with-a-maximum-phase-shift-2
<p>I am working with <a href="http://holoeye.com/spatial-light-modulators/slm-pluto-phase-only/" rel="nofollow">Spatial Light Modulators</a> for a project and I was wondering why does the producer think someone would need a maximum phase shift $&gt;2 \pi$ since the device works as a monitor and every phase-element monitored on the device could be reduced between $[0,2\pi]$ using a modulo? </p> <p>Marko</p>
<p>That will be useful at least for pulse shaping, because then you are not dealing with just a single wavelength. Since the modulo $2\pi$ is actually implemented as a modulo of physical displacement, it is exact only for a single wavelength, creating jumps in the phase for nearby frequencies.</p>
1,175
phase shift
What exactly is a 90 degree phase shift of a digital signal in FM demodulation appraoches?
https://dsp.stackexchange.com/questions/51889/what-exactly-is-a-90-degree-phase-shift-of-a-digital-signal-in-fm-demodulation-a
<p>I am working on an FSK demodulator (1200/2200Hz, 1200 baud) featuring a 90 degree phase shift operation. I don't exactly understand what 360 degrees mean for a digital signal.</p> <p>My sampling frequency is 8 kHz. Is a 90 degree shift a delay by 8000/4 samples, or is it a delay by 4000/4 samples?</p>
<p>The OP asked about a phase shift specifically but from the written details without seeing the actual demodulator implementation, I suspect he may possibly be asking how to implement the delay that is used for some simple FM demodulation approaches. I want to mention the possibility that this may actually be a delay element and not a phase shift in the sense that phase shift implies a shift of that phase at all frequencies, while a true delay has a phase shift that is proportional to frequency. It is this property that is exploited to form simple FM demodulator structures. </p> <p>For example, a very simple non-coherent digital (or analog!) FM discriminator is implemented by a delay and multiply. As introduced above, this is truly a delay resulting in a phase shift of 90 degrees only at the center frequency and NOT a phase shift of 90 degrees at all frequencies within the signal bandwidth (as would be provided by the Hilbert Transform). </p> <p>As the OP commented in another answer to this question, a delay of 1.176 samples (which is indeed conveniently a true delay!) would work for this approach; the details and trade space involved will be clearer once the demodulation process of converting frequency change to amplitude change is understood. </p> <p>To understand the operation, first it is useful to review how a multiplication in time between two signals results in a phase measurement:</p> <p><a href="https://i.sstatic.net/mfLcg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mfLcg.png" alt="Phase Detector"></a></p> <p>In the case of a real signal (which I assume you may be doing), the product creates the sum and difference of the two input frequencies (and phase), and once low pass filtered, the difference signal results in a voltage that is proportional to the cosine of the phase. 
The complex case is also shown (and in this case either the real or the imaginary output as shown could be used, or both for non-ambiguous phase from 0 to 360 degrees).</p> <p>The ideal operating point for a sensitive phase to voltage (or magnitude regardless of units) for the case of a real signal is when the two input signals are in quadrature phase; at this point the output signal is crossing through zero, but importantly, has the highest rate of change to any change of phase in the input. This is particularly true for analog demodulation, where we desire a linear phase to voltage translation to minimize harmonic distortion (so not as critical in FSK demodulation). </p> <p>The next important item to understand is how a fixed delay is a "frequency to phase converter". A fixed delay passes all frequencies with equal magnitude, but the phase at the output (relative to the input) is linearly proportional to the frequency (in a fixed delay a higher frequency signal may take several cycles to transition the delay while a lower frequency signal may only shift fractions of a cycle). Therefore with regards to a signal that is changing its frequency over time (the FM modulated signal including FSK), the delay and multiply will first convert frequency variation to phase variation via the delay and then phase variation to amplitude variation via the multiplier!</p> <p>It can be further determined from this understanding what delay would be optimum for two FSK frequency choices that will provide a peak maximum and minimum signal at each location (refer to the figure below which shows the complex multiplier approach but applies equally to the real signal approach with the additional low pass filter as given in the figure above.) 
</p> <p><a href="https://i.sstatic.net/y2AM0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y2AM0.png" alt="Frequency Discriminator"></a></p> <p>So for the OP’s case specifically with a 1 sample delay as the delay element:</p> <p>Frequency Phase Shift for 1 sample delay </p> <p>1200 Hz: 360*1.2/8 = 54° cos(54°) = 0.59</p> <p>2200 Hz: 360*2.2/8 = 99° cos(99°) = -0.16</p> <p>This will work in the sense that it provides a bipolar output ranging from -0.16 to +0.59 out of the possible range of +/-1. So not ideal in terms of best SNR but will certainly work.</p> <p>Interestingly consider the case where a 4 sample delay is used for this example:</p> <p>1200 Hz: 360*1.2/8 * 4 = 216° cos(216°) = -0.8</p> <p>2200 Hz: 360*2.2/8 * 4 = 396° cos(396°) = +0.8</p> <p>This is much closer to the ideal result for this non-coherent demodulation approach of +/-1 output and very simple to implement given the sampling rate and FSK frequencies used. As an even further simplification in a mixed digital/analog approach, a single XOR gate could be used at the multiplier, followed by an RC low pass filter (and then a Schmidt trigger as your decision slicer)! It is important in all of these implementations that the input signal is hard limited (after bandpass filtering) to remove any incidental AM modulation/noise on the signal prior to demodulation, since these demodulation approaches are equally sensitive to amplitude variation on the input.</p> <hr> <p>To add to this because I like the simplicity and the similar design completely in the analog domain, below shows a commonly used non-coherent FM demodulator circuit. 
This is not an optimized FM demodulation solution but commonly used due to its simplicity.</p> <p><a href="https://i.sstatic.net/PP76Y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PP76Y.png" alt="Analog FM Demodulator"></a></p> <p>Observe from the sketch below the expected magnitude and phase response of just the LC resonator portion of the circuit (the capacitor and inductor in parallel and connected to ground):</p> <p><a href="https://i.sstatic.net/LrE1y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LrE1y.png" alt="LC tank response"></a></p> <p>At low frequencies the inductor impedance ($j\omega L$) dominates resulting in a positive 90° phase shift (+j), and at high frequencies the capacitor impedance dominates ($1/(j\omega C)$), resulting in a -90° phase shift (1/j). The important thing to observe is what the phase shift is doing as the response passes through resonance at $f_c$. Near this location the phase change versus frequency is nearly linear with a slope defined by the Q of the resonator ($\sqrt{L/C}$). The slope, or derivative of phase with regard to frequency, is time delay! So the tank is creating a delay line in proximity of this location (resonance), and conveniently at resonance the impedances cancel resulting in no loss (in the ideal circuit). However unfortunately the phase shift at this frequency, as shown in the response, is 0°. This is solved by adding another smaller capacitor (smaller to dominate in impedance) in series which serves to shift the entire phase response down an additional -90°! So the circuit gives us the desired delay line with a 90° phase shift at the center frequency (carrier) of our modulated signal. 
We choose the slope (Q) of the tank circuit based on the discriminator sensitivity needed, and possible linearity specs that we need to meet (as for larger frequency excursions relative to the bandwidth of the tank circuit used the phase versus frequency will no longer be linear and there will be FM to AM translation effects due to the amplitude response of the tank). </p>
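The phase-to-amplitude numbers worked out above are easy to confirm by simulation. A minimal sketch, where a simple average over the product stands in for the low-pass filter:

```python
import numpy as np

fs = 8000.0   # sample rate, Hz

def disc(f, d, n=8000):
    """Averaged delay-and-multiply discriminator output for a unit tone
    at f Hz with a delay of d samples. After lowpass filtering (here,
    averaging), the product settles to 0.5*cos(2*pi*f*d/fs); the factor
    of 2 rescales so a tone yields cos(2*pi*f*d/fs) directly."""
    t = np.arange(n) / fs
    x = np.cos(2 * np.pi * f * t)
    y = x[d:] * x[:-d]            # delay and multiply
    return 2 * np.mean(y)

# One-sample delay: weakly bipolar output (about +0.59 vs -0.16)
assert abs(disc(1200, 1) - np.cos(np.deg2rad(54))) < 0.01
assert abs(disc(2200, 1) - np.cos(np.deg2rad(99))) < 0.01

# Four-sample delay: near-ideal swing of about +/-0.81
assert abs(disc(1200, 4) + 0.809) < 0.01
assert abs(disc(2200, 4) - 0.809) < 0.01
```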
1,176
phase shift
DSP based phase shifting in Phased Array systems
https://dsp.stackexchange.com/questions/82231/dsp-based-phase-shifting-in-phased-array-systems
<p>Is there a downside to doing phase shifting at baseband in the DSP section of phased array systems?</p> <p>I suppose you trade off analog components for digital, which may be more costly, but I suspect modern systems give each antenna its own ADC and DAC regardless.</p>
<p>The downside is power consumption due to the combination of more data converters and additional digital processing required for each antenna, in comparison to phase shifters and gain control implemented at RF together with analog combining- which can then be done with a single ADC (for receiver) and DAC (for transmitter). Consider how dynamic power across the capacitance of every gate grows as <span class="math-container">$P= C V^2 f$</span>. (Also motivating lower voltage solutions!). Highly integrated <a href="https://www.analog.com/en/products/adar1000.html" rel="nofollow noreferrer">RF MMIC solutions</a> are becoming available which are compelling when power dissipation is an ultimate concern.</p> <p>When power dissipation and cost is not a concern, the performance and flexibility of an all digital solution would be very attractive.</p>
1,177
phase shift
Relation between Normal Phase Shift of a wave and Phase Modulation
https://dsp.stackexchange.com/questions/42803/relation-between-normal-phase-shift-of-a-wave-and-phase-modulation
<p>I am a little confused with the Phase Modulation and the phase of a sine wave. I get the phase modulated wave from the google images as below:</p> <p><a href="https://i.sstatic.net/QE7Uv.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QE7Uv.jpg" alt="enter image description here" /></a></p> <p>Phase modulation has the form: <span class="math-container">$$PM(t)=A_c \cos\big(2\pi f_c t+(xA_m)\big)$$</span></p> <p>Considering the message signal as <span class="math-container">$$ m(t)=A_m\cos\big(2\pi f t\big)\text,$$</span> and <span class="math-container">$x$</span> as the phase modulation index.</p> <p>Basically, I see <span class="math-container">$\cos(\theta)$</span> denotes a wave with some frequency. If we consider it in this form, <span class="math-container">$\cos(2\pi f_1 t)$</span>, it is a cosine wave with frequency <span class="math-container">$f_1$</span>. And if we add a constant time factor with the unit of seconds, we get something like: <span class="math-container">$\cos(2\pi f_2 t+t_1)$</span>. I am able to see this as a right phase shifted cosine wave with time <span class="math-container">$t_1$</span>.</p> <p>But during phase modulation, we actually change the phase of the wave as per this form: <span class="math-container">$$PM(t)=A_c\cos\big(2\pi f_c t+(x A_m)\big)$$</span></p> <ul> <li><p>Now what I am confused about is whether the term <span class="math-container">$(x A_m)$</span> has the unit of radians or seconds.</p> <p>Based on the figure, <strong>I am able to assume <span class="math-container">$x$</span> has the unit of radians/Volt</strong> making the unit of <span class="math-container">$(x A_m)$</span> as radians, so that the final phase modulated wave has an abstract form <span class="math-container">$\cos(\theta_1+\theta_2)$</span> which could be some other frequency of the original carrier wave. I am just guessing this but I am not sure. 
If you could please let me know if this assumption is valid, it would be helpful.</p> </li> <li><p>If it is valid, then why do we call it phase modulation?</p> <p>For a phase modulation, I see that the modulated waveform should be something like <span class="math-container">$PM(t)=A_c\cos(2\pi f_c t+(x A_m))$</span> <strong>where <span class="math-container">$x$</span> should be in the unit of (seconds/volt).</strong> Now the phase of the carrier wave changes but I am not able to imagine the waveform.</p> </li> <li><p>This does not exist but isn't this the pure phase modulation?</p> </li> </ul> <p>Now, the next part that concerns me is this: I see that phase modulation is an analog modulation scheme. Let's assume that at a time instant <span class="math-container">$t_1$</span>, the Phase modulation system calculates the amplitude of the message signal and changes the frequency of the carrier wave proportionally. At time instant <span class="math-container">$t_2$</span>, this frequency changes to the next value depending upon the message signal.</p> <ul> <li>Now, how fast does the system notice the phase changes of the message signal?</li> <li>Is that given by the carrier frequency or something else? (Since it is looking at the amplitude of the message signal at a specific time instant, can we see it like this: the message signal is somehow sampled, its amplitude is quantized to some constant (having the unit of radians) and then added to the angle of the carrier wave?)</li> </ul> <p>The reason why I am curious about this is that we can see the phase modulation as the frequency of the carrier wave being changed, but that frequency should hold for at least one cycle to make it significant.</p>
<ul> <li><p>When adding a constant time shift, the equation is then $\cos(2\pi f (t + t_1))$ (note the parentheses around the time values).</p></li> <li><p>$(xA_m)$ should really be $xm(t)$ in your phase modulation equation - it is a time varying quantity.</p></li> <li><p>$xm(t)$ has units of radians.</p></li> <li><p>It is 'phase modulation' because what is changing directly with the message signal is the phase of the transmitted signal. With amplitude modulation, the amplitude envelope of the transmitted signal carries the message; with frequency modulation, the change in frequency of the transmitted signal (as compared to the carrier) holds the message; and in phase modulation it is the phase of the carrier signal that holds the message. FM and PM tend to look similar because they both affect the argument of the sine wave, but they do so in two different ways.</p></li> <li><p>It is not necessary to have several cycles of the carrier at any given phase shift in order to make that shift significant. Since we know the frequency of the carrier signal, that can be removed from the received signal to recover the message signal. Don't consider it from the perspective of a single time instant (as you point out, this is an analogue scheme, not digital), but rather consider operating on the signal as a signal.</p></li> <li><p>"Now, how fast does the system notice the phase changes of the message signal?"</p> <ul> <li>Continuously. That's how analogue systems work.</li> </ul></li> </ul>
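The units point in the bullets above (x in radians per volt, so x·m(t) is in radians) can be made concrete with a short NumPy/SciPy sketch. All values here are made up for illustration; the instantaneous frequency recovered from the analytic signal swings by ±x·A_m·f_m around the carrier, which is exactly the derivative of the phase term.

```python
import numpy as np
from scipy.signal import hilbert

fs = 48000.0                  # sample rate, Hz (illustrative)
t = np.arange(4800) / fs
fc, fm = 4000.0, 200.0        # carrier and message frequencies, Hz
Ac, Am = 1.0, 2.0             # carrier amplitude and message amplitude (V)
x_idx = 0.5                   # modulation index, radians per volt

m = Am * np.cos(2 * np.pi * fm * t)                # message m(t), volts
pm = Ac * np.cos(2 * np.pi * fc * t + x_idx * m)   # x_idx*m(t) is in radians

# Instantaneous frequency from the analytic signal: for this PM signal it
# swings fc +/- x_idx*Am*fm (here 4000 +/- 200 Hz)
inst_freq = np.diff(np.unwrap(np.angle(hilbert(pm)))) * fs / (2 * np.pi)
mid = inst_freq[200:-200]     # trim Hilbert edge effects

assert abs(mid.max() - (fc + x_idx * Am * fm)) < 5
assert abs(mid.min() - (fc - x_idx * Am * fm)) < 5
```

This also illustrates the last bullet: the phase, and hence the equivalent frequency, varies continuously with the message rather than being held for whole carrier cycles.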
1,178
phase shift
Depth estimation using phase shifted sine waves
https://dsp.stackexchange.com/questions/86606/depth-estimation-using-phase-shifted-sine-waves
<p>For 3 phase shifted sine waves using the projected pattern: I = sin(x + δi) where δ ∈ [-2pi/3, 0, +2pi/3] which is then projected onto a scene. The resulting image is captured with a camera. I am using the method mentioned below as <em>Robust sine patterns</em>: <a href="https://i.sstatic.net/d7Fdx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d7Fdx.png" alt="enter image description here" /></a></p> <p>I have the following questions:</p> <ol> <li>From what I understand, phi (obtained using inverse tangent) provides reconstructed 3D position (correct me if I am wrong)? What is confusing for me is that phi is a single scalar value. How does it provide 3D position(s)? What am I missing?</li> <li>Ld (direct component) and Lg (global component) can help get different representations of the acquired scene?</li> <li>How can I get depth estimation using these 3 phase shifted sinusoids?</li> </ol>
1,179
phase shift
How to estimate the delay in time domain when knowing the phase shift in frequency domain
https://dsp.stackexchange.com/questions/83225/how-to-estimate-the-delay-in-time-domain-when-knowing-the-phase-shift-in-frequen
<p>I am struggling with how to compute the exact value of delay in the time domain when knowing the phase shift in the frequency domain. I have an analog circuit; I swept about 30 single-tone frequencies ranging from 250 MHz to 8 GHz, and obtained the amplitude (A) and phase (<span class="math-container">$\phi$</span>) response of the circuit. I then used that transfer curve (denoted H) (combining amplitudes and phases) to generate the response in the time domain (denoted h) using the iFFT. Finally, I want to confirm that h is correctly constructed by taking the convolution of a single tone with that h to compare the output power and phase. However, I observed some phase shifts. For instance, the phase rotated by a value of [65 degrees + 2.3*n], where n = [0, 1, 2, ..., 30]. I know that phase rotation in the frequency domain will induce a delay in the time domain, and I need to include that delay in the verification process. But I do not know how to estimate the delay in the time domain. Please help me with this issue.</p> <p>Thank you very much.</p>
<p>Group delay is the negative derivative of phase with respect to frequency:</p> <p><span class="math-container">$$GD = \frac{-d\phi}{d\omega} \tag{1} \label{1}$$</span></p> <p>When the delay in time is fixed, the phase in frequency will be linear versus frequency, increasing at a constant rate in the negative direction (&quot;linear phase&quot;), however when the phase versus frequency is non-linear, the delay will be different for each frequency component leading to group-delay distortion.</p> <p>To estimate the time delay, compute the negative derivative of the phase versus frequency as given in equation \ref{1}. This is a function available in Matlab, Octave as <code>grpdelay</code> and in Python scipy.signal as <code>group_delay</code>.</p> <p>For example, the delay of 1 sample will have a phase versus frequency starting at <span class="math-container">$0$</span> radians at <span class="math-container">$f=0$</span> and going to <span class="math-container">$-2\pi$</span> radians at <span class="math-container">$f=f_s$</span> where <span class="math-container">$f_s$</span> is the sampling rate. 
When <span class="math-container">$f_s$</span> is in normalized units of radians/sample, the sampling rate is <span class="math-container">$2\pi $</span> radians/sample.</p> <p><a href="https://i.sstatic.net/JFKml.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JFKml.png" alt="Group Delay of a unit sample" /></a></p> <p>In the OP's case with phase linearly increasing as <span class="math-container">$2.3n$</span> with <span class="math-container">$n$</span> from <span class="math-container">$0$</span> to <span class="math-container">$30$</span> over a frequency range of 250 MHz to 8 GHz, assuming the indices are linearly spaced over that frequency range, the delay would be:</p> <p><span class="math-container">$$GD = \frac{-\Delta \phi}{\Delta \omega}$$</span></p> <p><span class="math-container">$\Delta \phi$</span> in radians is <span class="math-container">$(2.3\times 30)\frac{2\pi}{360}$</span></p> <p><span class="math-container">$\Delta \omega$</span> in radians/sec is <span class="math-container">$2\pi (8E9-250E6)$</span></p> <p>Therefore, the Group Delay in this case, in units of seconds, would be:</p> <p><span class="math-container">$$GD = \frac{-69/360}{8E9-250E6} \approx -24.7 \text{ ps}$$</span></p> <p>Either the phase measurements were actually increasingly negative in contrast to what was provided in the OP, or the group delay is actually negative which is feasible without violating causality, as explained in <a href="https://dsp.stackexchange.com/a/66593/21048">this post</a>.</p>
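The same estimate can be reproduced in a few lines of NumPy, working directly from the phase values reported in the question (65 + 2.3·n degrees, n = 0..30, spread linearly from 250 MHz to 8 GHz):

```python
import numpy as np

# Phase measurements as described in the question
f = np.linspace(250e6, 8e9, 31)                 # Hz
phi = np.deg2rad(65 + 2.3 * np.arange(31))      # radians

# Group delay: negative derivative of phase w.r.t. angular frequency
gd = -np.gradient(phi, 2 * np.pi * f)           # seconds

# Linear phase -> the same delay at every frequency
assert np.allclose(gd, gd[0])

# -(2.3*30 degrees in radians) / (2*pi * 7.75 GHz) ~ -24.7 ps
assert abs(gd[0] + 24.7e-12) < 0.1e-12
```

The constant 65° offset drops out of the derivative, so only the 2.3° per point slope sets the delay.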
1,180
phase shift
Can a causal filter without phase shifts exist?
https://dsp.stackexchange.com/questions/13970/can-a-causal-filter-without-phase-shifts-exist
<p>When I was studying dispersion of refraction index in semiconductors and dielectrics, my professor tried to explain that if a filter (like a dielectric absorbing some light frequencies, or an electric RC-filter) removes some frequencies, then the remaining ones must be phase shifted to compensate for those frequencies (which are infinitely spread in time as usual monochromatic signals) being subtracted from the whole signal, to preserve causality.</p> <p>I intuitively understand what he was talking about, but what I'm not sure of is whether his argument is really justified - i.e. whether there can exist a non-trivial filter, which absorbs some frequencies and leaves remaining ones not shifted, but still preserving causality. I can't seem to construct one, but can't prove it doesn't exist as well.</p> <p>So the question is: how can it be (dis)proved that a causal filter <em>must</em> shift phases of frequencies relative to each other?</p>
<p>Suppose that a linear filter has impulse response <span class="math-container">$h(t)$</span> and frequency response/transfer function <span class="math-container">$H(f) = \mathcal F [h(t)]$</span>, where <span class="math-container">$H(f)$</span> has the property that <span class="math-container">$H(-f) = H^*(f)$</span> (conjugacy constraint). </p> <p>Now, the response of this filter to complex exponential input <span class="math-container">$x(t) = e^{j2\pi f t}$</span> is <span class="math-container">$$y(t) = H(f)e^{j2\pi f t} = |H(f)|e^{j(2\pi f t + \angle H(f))}$$</span> and if we want this filter to cause <em>no</em> phase shift, it must be that <span class="math-container">$\angle H(f) = 0$</span> for all <span class="math-container">$f$</span>. </p> <p>How about if, instead of no phase shift, we are willing to allow a fixed constant phase shift for all frequencies? That is, <span class="math-container">$\angle H(f) = \theta$</span> for <em>all</em> <span class="math-container">$f$</span> is acceptable to us where <span class="math-container">$\theta$</span> need not be <span class="math-container">$0$</span>? The extra latitude does not help very much, because <span class="math-container">$\angle H(-f) = -\angle H(f)$</span>, and so <span class="math-container">$\angle H(f)$</span> cannot have fixed constant value for all <span class="math-container">$f$</span> unless that value is <span class="math-container">$0$</span>. </p> <p>We conclude that if a filter does not change the phase at all, then <span class="math-container">$H(f)$</span> is a real-valued function, and because of the conjugacy constraint, it is also an <em>even</em> function of <span class="math-container">$f$</span>. 
But then its Fourier transform <span class="math-container">$h(t)$</span> is an <em>even</em> function of time, and thus the filter cannot be causal (except in trivial cases): if its impulse response is nonzero for any particular <span class="math-container">$t &gt; 0$</span>, then it is also nonzero for <span class="math-container">$-t$</span> (where <span class="math-container">$-t &lt; 0$</span>). </p> <p>Note that the filter need not be doing any frequency suppression, that is, we did not need the assumption that some frequencies are "removed" by the filter (as the OP's professor's filter does) to prove the claim that zero phase shift is not possible with a causal filter, frequency suppressor or not.</p>
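A quick numerical illustration of this argument (a sketch using an arbitrarily chosen real, even <span class="math-container">$H(f)$</span>, not any particular physical filter):

```python
import numpy as np

N = 256
f = np.fft.fftfreq(N)                  # frequencies in cycles/sample
H = 1.0 / (1.0 + (f / 0.1) ** 2)       # real, even H(f): zero phase at every f

h = np.fft.ifft(H).real                # impulse response
print(np.allclose(h[1:], h[1:][::-1])) # True: h is even in time ...
print(abs(h[-1]) > 0.01)               # ... with energy at n = -1, so non-causal
```

Any nonzero sample at negative time is exactly the non-causality the argument predicts.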
1,181
phase shift
Estimates sub-pixel shift directly in phase region
https://dsp.stackexchange.com/questions/38312/estimates-sub-pixel-shift-directly-in-phase-region
<p>I am trying to estimate the shift directly in the phase region, by following the method proposed in <em>Sub-pixel Shift Estimation of Image based on the Least Squares Approximation in Phase Region</em> by Fujimoto, Fujisawa and Ikehara (<em>Proceedings of 26th European Signal Processing Conference, EUSIPCO '16</em>, pp.&nbsp;91&ndash;95. IEEE, 2016 <a href="https://mega.nz/#!vp9WFDbb!pEG8pem9n2SaX8IHxbUxnD4JT6ngp0nF608T3zAeiJY" rel="nofollow noreferrer">PDF</a>). Here is the flow chart described in the paper:</p> <p><a href="https://i.sstatic.net/Od10d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Od10d.png" alt="enter image description here"></a></p> <p>If you're still with me, my issue arose when the authors proceed with the subtraction of the integer shift from the phase difference $θ(k1,k2)$ (Equation&nbsp;(14)) to smooth the phase difference. The authors did mention that they obtained the slope $(a′,b′)$ of the integer shift using conventional phase-only correlation (POC) (Section III, paragraph above Equation&nbsp;(14)).</p> <p>How can this step be done? The phase difference is a $60\times60$ matrix (assuming that is the image dimension), while the integer shift consists of only TWO values.
How exactly were the slopes obtained?</p> <p>Full matlab code I implemented:</p> <pre><code>function [ output_args ] = phasecorrlsa( refIm, shifIm ) F=(fft2(double(refIm))); G=(fft2(double(shifIm))); [m, n]=size(refIm); [M,N] = meshgrid(1:m,1:n); X = [M(:), N(:)]; R=(F.*conj(G))./abs(F.*conj(G)); r=(ifft2(R)); [ap, bp, rhat]=lsa(angle(r)); %this is my slope a' and b' [~,w] = max(r(:)); [del_hat2p, del_hat1p] = ind2sub(size(r),w); del_hat2p=del_hat2p-1; del_hat1p=del_hat1p-1; theta=angle(F)-angle(G); R=exp(1j*theta); %E6 theta=atan2(imag(R),real(R)); %E9 [a, b, thetahat]=lsa(theta); del_hat1_ts=(m/(2*pi))*a del_hat2_ts=(n/(2*pi))*b thetapp=theta-rhat; %equation 14, I guess something amiss here %figure; %surf(theta); %figure; %surf(thetapp); [app, bpp, thetapphat]=lsa(thetapp); del_hat1=(m/(2*pi))*app+del_hat1p; del_hat2=(n/(2*pi))*bpp+del_hat2p; if del_hat1&gt;n/2, del_hat1=del_hat1p-m; end if del_hat2&gt;m/2, del_hat2=del_hat1p-n; end output_args=struct('a',del_hat1, 'b', del_hat2); end function [ a, b, hat ] = lsa( theta ) [m, n]=size(theta); [M,N] = meshgrid(1:m,1:n); X = [M(:), N(:)]; B=regress(theta(:), X); a=B(1); b=B(2); hat=reshape(X*B,m,n); end </code></pre> <p>Please lighten me on this issue. Thank you so much!</p>
<p>I am unable to comment on your Matlab code, but eq. (14) seems straightforward to me. You have a phase difference field $\theta$ which depends on the two spatial wavenumber components $k_1$ and $k_2$, and which forms a wrapped plane. The shift you seek is the slope of this plane, expressed as two scalar components $a$ and $b$. Since phase wraps, the slopes are decomposed into integer and fractional parts, $a = a' + a''$ etc. In eq. (14) you obtain the residual phase field by subtracting the phase due to the integer part $(a', b')$. You need the actual $k$ values when you compute $(a'k_1 + b'k_2)$, not just the $k$ indices. Computing the $k$ values is similar to making a frequency axis for a 1D Fourier spectrum analysis.</p>
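A sketch of what "actual $k$ values" means in NumPy terms (the shift amounts here are hypothetical; `np.fft.fftfreq` plays the role of the 1-D frequency axis):

```python
import numpy as np

m, n = 60, 60
# Actual wavenumber values in radians/sample -- not the 0..59 indices:
k1 = 2 * np.pi * np.fft.fftfreq(m)
k2 = 2 * np.pi * np.fft.fftfreq(n)
K1, K2 = np.meshgrid(k1, k2, indexing='ij')

a_int, b_int = 3, -2                      # hypothetical integer shift from the POC peak
plane = a_int * K1 + b_int * K2           # phase plane to subtract in eq. (14)

# Check against the phase difference of an actually shifted image:
img = np.random.default_rng(0).random((m, n))
shifted = np.roll(img, (a_int, b_int), axis=(0, 1))
theta = np.angle(np.fft.fft2(img) * np.conj(np.fft.fft2(shifted)))
# theta equals `plane` wrapped into (-pi, pi]:
print(np.max(np.abs(np.angle(np.exp(1j * (theta - plane))))))  # ~0
```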
1,182
phase shift
How does time shift correspond to phase change in a discrete signal?
https://dsp.stackexchange.com/questions/15807/how-does-time-shift-correspond-to-phase-change-in-a-discrete-signal
<p>I was watching <a href="https://www.youtube.com/watch?v=6xaaeop7gJ8#t=1030" rel="nofollow">this</a> video where the presenter remarks: </p> <blockquote> <p>For a discrete signal, time shift corresponds to phase change in a discrete signal but not vice versa.</p> </blockquote> <p>I was trying to figure out how this could be proved. For example, if our discrete signal is</p> <p>$f[n] = A \cos(\Omega_o n + n_o)$ where $ n,n_o \in Z$</p> <p>If the signal undergoes a time shift $n'$, we have</p> <p>$ A \cos(\Omega_o (n + n') + n_o) = A \cos(\Omega_o n + \Omega_o n' + n_o)$</p> <p>This would correspond to a phase shift only if $\Omega_o n'$ is an integer. But that means $\Omega_o$ needs to be an integer, which is not always the case. </p> <p>What is the correct explanation here ?</p>
<p>If you have a signal</p> <p>$$f[n]=\cos(\Omega_0n)$$</p> <p>and you apply a time shift of $n_0$ you get</p> <p>$$f[n+n_0]=\cos(\Omega_0(n+n_0))=\cos(\Omega_0n+\Omega_0n_0)=\cos(\Omega_0n+\phi)$$</p> <p>where $\phi=\Omega_0n_0$ is the phase shift.</p> <p>The other way around, if you have a phase shift of $\phi$, this is not always equivalent to a time shift of the original signal:</p> <p>$$g[n]=\cos(\Omega_0n+\phi)=\cos(\Omega_0(n+\phi/\Omega_0))$$</p> <p>which only corresponds to an integer time shift if $\phi/\Omega_0$ is integer, i.e. if $\phi/\Omega_0=n_0$ which results in</p> <p>$$g[n]=f[n+n_0]$$</p> <p><strong>EDIT:</strong> Re-reading your question, I think that the misunderstanding lies in the fact that you believe that the phase of a discrete-time signal must be integer. This is not the case. Imagine two continuous-time signals $$x(t)=\cos(\omega_0t)\quad\textrm{and}\quad y(t)=\cos(\omega_0t+\phi),\quad\phi\in\mathbb{R}$$</p> <p>Note that the following holds for any value of $\phi$:</p> <p>$$y(t)=x(t+\phi/\omega_0)$$</p> <p>So we can always express $y(t)$ as a shifted version of $x(t)$. Now imagine that we construct two discrete-time signals by sampling $x(t)$ and $y(t)$ at times $t_n=nT$ with some real-valued $T&gt;0$:</p> <p>$$f[n]=x(nT)=\cos(\omega_0nT)=\cos(\Omega_0n)\\ g[n]=y(nT)=\cos(\omega_0nT+\phi)=\cos(\Omega_0n+\phi) $$ with $\Omega_0=\omega_0T$. Note that the phase $\phi$ is of course the same as before (and for this reason not necessarily integer). The difference between the continuous-time and the discrete-time case is now that $g[n]$ cannot in general be obtained from $f[n]$ by time-shifting because we can only shift by integers. The condition under which $g[n]$ <em>can</em> be obtained by shifting $f[n]$ is if $\phi/\Omega_0$ is integer because then we can write</p> <p>$$g[n]=\cos(\Omega_0(n+\phi/\Omega_0))=f[n+\phi/\Omega_0]=f[n+n_0],\quad n_0\in\mathbb{Z}$$</p>
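A two-line numerical check of the forward direction (every integer time shift is a phase shift $\phi=\Omega_0 n_0$, whatever $\Omega_0$ is — the values below are arbitrary):

```python
import numpy as np

Omega0 = 0.3                   # rad/sample -- need not be a "nice" number
n = np.arange(50)
n0 = 4                         # integer time shift

f_shifted = np.cos(Omega0 * (n + n0))
g = np.cos(Omega0 * n + Omega0 * n0)   # phase shift phi = Omega0 * n0
print(np.allclose(f_shifted, g))       # True: the two views coincide
```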
1,183
phase shift
Is it possible to find precise peak frequency using FFT phase shifts?
https://dsp.stackexchange.com/questions/61922/is-it-possible-to-find-precise-peak-frequency-using-fft-phase-shifts
<p>Been experimenting with FFT on a generated sinusoid and found something strange that doesn't seem to be described anywhere (though I may be missing something of course).</p> <p>A sinusoid that exactly corresponds to a bin gives what we all know: an amplitude and a phase shift of the sinusoid in that bin.</p> <p>However, for a frequency that falls between two bins the phase shift of the strongest bin expressed in polar angle seems to be shifted by almost exactly the amount by which the peak frequency is shifted from the strongest bin number. The phase shifts of the neighbouring bins also seem to show some correlation but it is not as strong.</p> <p>For example, if the distance between bins is <span class="math-container">$\Delta = 44Hz$</span>, I generate <span class="math-container">$f = 440Hz$</span>, I get a clear sinusoid at bin <span class="math-container">$i = 10$</span> that yields phase angle <span class="math-container">$\phi = 0$</span> (the sinusoid itself is shifted by a quarter period for simplicity, hence <span class="math-container">$0$</span>).</p> <p>If I generate say</p> <p><span class="math-container">$f = 440Hz + k·\Delta$</span>, where <span class="math-container">$k \in [-0.5, 0.5]$</span></p> <p>then <span class="math-container">$\phi$</span> for the strongest bin 10 <em>looks like</em> <span class="math-container">$k·\pi$</span>.</p> <p>Some sample measurements:</p> <p><span class="math-container">$k=-0.5 \Rightarrow \phi=-0.499·\pi$</span></p> <p><span class="math-container">$k=-0.25 \Rightarrow \phi=-0.252·\pi$</span></p> <p><span class="math-container">$k=-0.15 \Rightarrow \phi=-0.151·\pi$</span></p> <p><span class="math-container">$k=-0.05 \Rightarrow \phi=-0.054·\pi$</span></p> <p>The deviation gets bigger as you move towards the bin, though it's pretty good near the middle. Is this some kind of a parabolic correlation?</p> <p>So what's the math behind this? 
Why do these values seem correlated?</p> <p>And if they are correlated, could this be used for finding the peak frequency with some precision?</p>
<p>For a set of exact equations that describe how a pure real tone works in a DFT, check out equations (23)-(25) in my blog article:</p> <ul> <li><a href="https://www.dsprelated.com/showarticle/771.php" rel="nofollow noreferrer">DFT Bin Value Formulas for Pure Real Tones</a></li> </ul> <p>The phenomenon you have found can be seen more clearly if you consider the complex definition of a real-valued sinusoid.</p> <p><span class="math-container">$$ \cos( \theta ) = \frac{e^{i\theta}+e^{-i\theta}}{2} $$</span></p> <p>So, when you are near a peak bin, one of the numerator values is dominant, and you have an approximation of a complex tone. Using equation (24) in this article shows how the distance to the bin <span class="math-container">$\delta$</span> rotates the bin value.</p> <ul> <li><a href="https://www.dsprelated.com/showarticle/1038.php" rel="nofollow noreferrer">DFT Bin Value Formulas for Pure Complex Tones</a></li> </ul> <p>Hope this helps.</p> <p>To answer your last question: Yes, I've done it in different ways with more coming.</p> <p>Check out:</p> <ul> <li><a href="https://www.dsprelated.com/showarticle/773.php" rel="nofollow noreferrer">Exact Frequency Formula for a Pure Real Tone in a DFT</a></li> <li><a href="https://www.dsprelated.com/showarticle/1095.php" rel="nofollow noreferrer">Two Bin Exact Frequency Formulas for a Pure Real Tone in a DFT</a></li> <li><a href="https://www.dsprelated.com/showarticle/1108.php" rel="nofollow noreferrer">Improved Three Bin Exact Frequency Formula for a Pure Real Tone in a DFT</a></li> <li><a href="https://www.dsprelated.com/showarticle/1039.php" rel="nofollow noreferrer">A Two Bin Exact Frequency Formula for a Pure Complex Tone in a DFT</a></li> <li><a href="https://www.dsprelated.com/showarticle/1043.php" rel="nofollow noreferrer">Three Bin Exact Frequency Formulas for a Pure Complex Tone in a DFT</a></li> <li>[Jacobsen's Frequency Formula as an Approximation]</li> <li>[Candan's Tweak of Jacobsen's Frequency Formula]</li> 
<li>[Tweaking Macleod's Frequency Formula]</li> </ul> <hr> <p>Your <span class="math-container">$k$</span> is the fraction of a bin distance from a bin on the frequency scale of cycles per frame.</p> <p>Thus, from (20):</p> <p><span class="math-container">$$ k = -\frac{N}{2\pi} \delta $$</span></p> <p>The negative sign is unfortunate; I should have done it differently in the article.</p> <p><span class="math-container">$$ \delta = -\frac{2\pi}{N} k $$</span></p> <p>Your <span class="math-container">$\phi$</span> is the angle of the nearest complex bin value, so I will use <span class="math-container">$\theta$</span> in the formula for the phase angle from the signal definition in (24):</p> <p><span class="math-container">$$ \phi = \arg(X[]) = -\delta (N-1)/2 + \theta $$</span></p> <p><span class="math-container">$$ \phi = \frac{2\pi}{N} k (N-1)/2 + \theta $$</span></p> <p><span class="math-container">$$ \phi = \frac{N-1}{N} k \pi + \theta $$</span></p> <p>So, for N being large, and <span class="math-container">$\theta$</span> being zero, <span class="math-container">$\phi$</span> looks like <span class="math-container">$k \pi$</span>.</p> <p>Remember, this is based on using a complex tone as an approximation for the real-valued tone, so it won't be exact. I am not sure why your ratios are slightly larger than one when they should be slightly smaller.</p>
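The observation can be reproduced directly (a sketch with assumed values $N=64$, peak bin $m=10$ and offset $k=0.25$; the small deviation from $k\pi$ is the leakage from the negative-frequency term described above):

```python
import numpy as np

N = 64
m, k = 10, 0.25                        # peak bin and fractional bin offset
n = np.arange(N)
x = np.cos(2 * np.pi * (m + k) * n / N)

phi = np.angle(np.fft.fft(x)[m])       # phase of the strongest bin
print(phi / np.pi)                     # close to k*(N-1)/N, i.e. roughly k
```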
1,184
phase shift
Obtaining the phase component of an integer shift using phase only correlation
https://dsp.stackexchange.com/questions/81648/obtaining-the-phase-component-of-an-integer-shift-using-phase-only-correlation
<p>I'm trying to implement a sub-pixel estimation approach for my stereo matching project that I'm working on, by using the proposed method in the following paper &quot;<em><a href="https://ieeexplore.ieee.org/document/7760216" rel="nofollow noreferrer">Sub-pixel Shift Estimation of Image based on the Least Squares Approximation in Phase Region</a></em>&quot; by Fujimoto, Fujisawa and Ikehara.</p> <p>From the discrete Fourier transform of my two shifted images, I can according to the paper split up the phase component from the normalized cross power spectrum function R <span class="math-container">$$ R(k_1,k_2) = \frac{F(k_1,k_2)G^*(k_1,k_2)}{|F(k_1,k_2)G^*(k_1,k_2)|} = e^{j\theta(k_1,k_2)} $$</span> into two separate components (integer and decimal components). Given by: <span class="math-container">$$ \theta(k_1,k_2) \approx ak_1+bk_2 = (a'+a'')k_1+(b'+b'')k_2 $$</span> Where a and b are the coefficients for the slope of the phase component of each direction k1 and k2. By subtracting the integer shift, I can then find the phase component (theta'') with only the decimal shift. <span class="math-container">$$ \theta ''(k_1,k_2) = \theta(k_1,k_2)-(a'k_1+b'k_2) \approx a''k_1+b''k_2 $$</span> Here the slopes of the integer shift (a’ and b’) are according to the paper found by using the conventional phase only correlation (POC). However, I can’t seem to figure out how to get a’ and b’ from the conventional POC. I am able to find the integer shift in pixels, by finding the location of the max POC peak:</p> <p><a href="https://i.sstatic.net/rAFYf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rAFYf.png" alt="Image of max POC peak" /></a></p> <p>But from my understanding the integer shift from the peak would be in the spatial domain, while a’ and b’ is the slope of the phase component of the integer shift and is defined in the frequency domain. 
As they are defined in two different domains I’m having a hard time understanding how I can obtain the slope values a' and b' from the integer shift that I have found using POC.</p> <p>I’m therefore wondering if someone could help me understand how can I obtain the phase component of the integer shift, and how I can get the slopes a’ and b’ by using phase only correlation?</p> <p>Any help would be greatly appreciated!<br /> Thanks in advance!</p>
1,185
phase shift
Shifting phase with boolean logic gate
https://dsp.stackexchange.com/questions/95591/shifting-phase-with-boolean-logic-gate
<p>Given 4 foundational identical square wave with different phases:</p> <ol> <li>Yellow: (Ground/Phase Reference)</li> <li>Cyan: 45 degree relative to Yellow</li> <li>Purple: 90 degree relative to Yellow (a.k.a. the quadrature version of Yellow)</li> <li>Green: 135 degree relative to Yellow</li> </ol> <p><a href="https://i.sstatic.net/19dRrQe3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19dRrQe3.png" alt="enter image description here" /></a></p> <p>How do I shift unknown arbitrary square signal based on those 4 foundational square signal?</p> <p>For example, the arbitrary signal is the cyan (in this case I pretend to don't know the phase of cyan is 45 degree relative to yellow)</p> <ul> <li>If I perform an operator X once, it will shift (+45) to 90 degree relative to yellow (equivalent to purple)</li> <li>shifting the same operator X, will shift to 135 degree relative to yellow (equivalent to green)</li> <li>shifting it again 3rd, will shift to 180 degree relative to yellow (logical NOT of yellow)</li> <li>shifting it again 4th, will shift to 225 degree relative to yellow (logical NOT of cyan itself)</li> <li>shifting it again 5th, will shift to 270 degree relative to yellow (logical NOT of purple)</li> <li>shifting it again 6th, will shift to 315 degree relative to yellow (logical NOT of green)</li> </ul> <p>The only allowed operator is boolean operator, such as AND, OR, XOR, NOR, NOT, XNOR, IMPLY, NIMPLY. So I would expect the interference between 4 signals using those operator only.</p> <ul> <li>Input: a square signal with unknown phase</li> <li>Output: +45 shift version of input signal</li> <li>Operator: X (performing +45 signal)</li> <li>Phase Reference for measure: Yellow (0 degree phase)</li> </ul> <p>E.g. Performing OR interference between green and purple yielding new square signal with higher PWM instead of default 50%. 
PWM manipulation is another topic, but perhaps it can be useful for defining the operator that I expect to perform the phase shift. Let's name the operator/function <code>SHIFT_45(INPUT)</code>.</p> <p>Here is a clue to how CYAN, PURPLE, and GREEN come from the yellow signal:</p> <p><a href="https://i.sstatic.net/cwDbqCMg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cwDbqCMg.png" alt="enter image description here" /></a></p>
<p>Using these conventions:</p> <blockquote> <ol> <li>the frequency of the input signal is &quot;1&quot;, &quot;2&quot;, &quot;3&quot; or &quot;4&quot;, where the frequency of the yellow wave is &quot;1&quot;. and</li> <li>The phase of the input signal is one of 0⋅,1⋅,2⋅,…,7⋅π/2, where the yellow signal has phase 0</li> </ol> </blockquote> <p>Let's start with the highest-frequency signal: f=4. To delay that by 45°, you would need a delay that is 1/4 of that wave's period, so 1/16 of the reference period.</p> <p>But that's shorter than any period that you can get by logically combining your four input signal.</p> <p>So, first thing you need is a <em>frequency synthesizer</em> that synthesizes a sufficiently fast clock. You can implement that however you want – the only condition is that there's a rising edge that coincides with the yellow rising edge. Hence, a VCO and a simple PLL will do – <strong>but a VCO is not a logic gate</strong>, so this is impossible to do within the framework you're giving yourself.</p> <p>So, you can conclude here that it's impossible, and stop. Or you loosen your restrictions that you can only use logic gates.</p> <p>Assuming you have a clock that runs at a period of &quot;shortest possible delay necessary&quot; (i.e. at frequency 4·4=16), call it &quot;high speed clock&quot; (HSC), you realize that depending on your input square wave's frequency, you need to delay that input by 1, 2, 3 or 4 of the periods of that wave.</p> <p>And the phase of the input wave doesn't matter at all – all that matters is the frequency, because that selects whether you delay by 1, 2, 3 or 4.</p> <p>To determine the frequency, you can simply count the edges of HSC) between two edges of the input. Whether it's 1, 2, 3, or 4 defines the delay.</p> <p>You then use that knowledge and build a delay chain (that's D flip flops, in a chain, where you select the output of the 1., 2., 3., or 4. 
flip flop depending on your count).</p> <p>And that's it.</p> <p>So, to conclude:</p> <ol> <li>synthesize your HSC from your yellow wave. <span class="math-container">$f_{\text{HSC}} = 16, \phi_{\text{HSC}} = 0$</span>.</li> <li>build a 2-bit counter, triggered on HSC ⤴ (rising edge only), reset on input wave edge ⤴, count &quot;latched&quot; into two D-flip flops (A, B) on input wave ⤵ (falling edge only).</li> <li>build a four-stage shift register out of D-flip flops (C, D, E, and F) clocked on HSC ⤴. Input connected to input signal.</li> <li>build a mux out of logic gates that selects the output of the C, D, E, or F flip flop based on the state of (A, B).</li> </ol> <p>That's it. As you can see, none of your cyan, purple or green square waves were of any use in this. You <em>could</em> use them for the PLL that generates HSC from yellow, but since they are themselves generated from yellow, that's a bad idea, since it would not decrease jitter.</p>
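The delay-line part of this can be sketched behaviorally (not at gate level) in a few lines; the tick counts below assume the HSC and frequency conventions used in this answer, and `np.roll` stands in for the chain of D flip-flops:

```python
import numpy as np

HSC = 16                                  # high-speed clock ticks per reference period
t = np.arange(6 * HSC)                    # six reference periods of HSC samples

def square(freq, shift_ticks=0):
    # Square wave of frequency `freq` (in multiples of the reference),
    # sampled on HSC rising edges, delayed by `shift_ticks` HSC ticks.
    return (((t - shift_ticks) * freq) % HSC) < HSC // 2

for freq in (1, 2, 4):
    x = square(freq)
    taps = HSC // (4 * freq)              # quarter-period delay in HSC ticks
    y = np.roll(x, taps)                  # shift-register output at tap `taps`
    # the tapped output is the same wave, delayed by a quarter period:
    assert np.array_equal(y, square(freq, taps))
```

Note that the number of taps depends only on the input frequency, as the answer states — the input wave's phase never enters the computation.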
1,186
phase shift
Why dont sin and phase shifted cosine overlap?
https://dsp.stackexchange.com/questions/15798/why-dont-sin-and-phase-shifted-cosine-overlap
<p>I plotted a sine wave (blue) and a 90-degree phase-shifted cosine wave (red), expecting them to overlap each other.</p> <pre><code>x = 0:0.001:1; %plotting sine y = sin(2*pi*x); plot(x, y); hold('on'); %plotting cosine z = cos(2*pi*x-90); plot(x, z); </code></pre> <p><img src="https://i.sstatic.net/muXzq.png" alt="enter image description here"></p> <p>The two graphs do not overlap. What's wrong with my assumption?</p>
<p>You're confusing degrees and radians.</p> <p>$$\sin(x)=\cos(x-\pi/2)$$</p> <p>$\pi/2$ radians corresponds to 90 degrees.</p>
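In code, with the degrees-to-radians fix applied (a NumPy equivalent of the Matlab snippet above):

```python
import numpy as np

x = np.linspace(0, 1, 1001)
y = np.sin(2 * np.pi * x)
z = np.cos(2 * np.pi * x - np.pi / 2)   # 90 degrees = pi/2 radians
print(np.allclose(y, z))                # True: the curves now overlap
```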
1,187
phase shift
OFDM complex symbols have dynamic phase shift after the fft-block &amp; before the BPSK demodulator at the receiver ,
https://dsp.stackexchange.com/questions/89491/ofdm-complex-symbols-have-dynamic-phase-shift-after-the-fft-block-before-the-b
<p><a href="https://i.sstatic.net/e1pVG.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e1pVG.jpg" alt="enter image description here" /></a><a href="https://i.sstatic.net/OWhKc.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OWhKc.jpg" alt="enter image description here" /></a>I have transmitted and received an OFDM signal using the same ADALM-Pluto-SDR's Tx/Rx antenna.</p> <p>I have used BPSK baseband modulation with symbol freq=312.5 kHz, L (length of ifft)=64, fsampling=fsym*L=20 MHz. I sent 4 OFDM symbols, each 64 samples long (52 subchannels and 12 null subcarriers). The cyclic prefix is 0.25 of the ifft length. I used a Barker code with my signal for synchronization.</p> <p>At the transmitter, I have noticed that many symbols after the ifft block are approaching zero and spreading around! After correcting the time delay and phase shift at the receiver, I have observed that my baseband complex symbols, after the fft block at the receiver and before the BPSK demodulator, have an obvious dynamic phase shift, as you see below in figure (18) and also in the constellation diagram. At this stage, I was supposed to get real values of +1s and -1s with imaginary parts equal to zero, as I used BPSK baseband modulation. Instead I got rotating amplitudes of +1s and -1s.</p> <p>My question is: what is the cause of this problem, and how can I solve it in MATLAB? I have tried to implement several algorithms after calculating the dynamic phase shift, but none worked.</p> <p>I would highly appreciate your support.
</p> <p><a href="https://i.sstatic.net/hk301.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hk301.jpg" alt="The complex symbols after fft-block at the Rx before the BPSK demodulator" /></a></p> <p><a href="https://i.sstatic.net/kBUmZ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kBUmZ.jpg" alt="The complex symbols after fft-block at the Rx before the BPSK demodulator" /></a></p> <p><a href="https://i.sstatic.net/ZPO0s.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZPO0s.jpg" alt="The ideal complex modulated symbol after BPSK modulator at Tx" /></a></p>
1,188
phase shift
How to minimize peak-to-peak amplitude of periodic waveform through FFT phase shifts?
https://dsp.stackexchange.com/questions/46706/how-to-minimize-peak-to-peak-amplitude-of-periodic-waveform-through-fft-phase-sh
<p>For one of my music projects, I'm playing back periodic audio signals (by looping single periods). Unfortunately, one of my waveforms sounds too quiet (even at maximum volume).</p> <p>I'm trying to use FFT to obtain harmonic strengths, then phase-shift each harmonic to minimize the peak-to-peak amplitude, and then amplify the signal.</p> <p>Is there any algorithm to find the optimal phase shifts, or should I use <code>scipy.optimize.basinhopping</code> (or generating thousands of random phases) for an approximate result?</p>
<p>There are algorithms for determining low-peak-factor signals, for example:</p> <blockquote> <p>"Synthesis of low-peak-factor signals and binary sequences with low autocorrelation (Corresp.)", M. Schroeder, 1970</p> </blockquote> <p>I have used this for flat-spectra signals to minimise peak to peak amplitude.</p> <p>However I think this may be an XY problem. Do you have a limit on how much distortion you can introduce to the signals? Have you tried using a dynamic range compression algorithm? There are limiting algorithms which can increase the RMS amplitude significantly without introducing significant audio artifacts.</p>
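For the flat-spectrum case, Schroeder's phase rule can be sketched in a few lines (this uses the common equal-amplitude form $\phi_n = -\pi n(n-1)/N$; the harmonic count below is arbitrary):

```python
import numpy as np

N = 31                                     # number of equal-amplitude harmonics
t = np.linspace(0, 1, 4096, endpoint=False)
k = np.arange(1, N + 1)

def crest(sig):
    # crest factor: peak amplitude over RMS
    return np.max(np.abs(sig)) / np.sqrt(np.mean(sig**2))

carriers = 2 * np.pi * k[:, None] * t[None, :]
zero_phase = np.sum(np.cos(carriers), axis=0)
schroeder = np.sum(np.cos(carriers - (np.pi * k * (k - 1) / N)[:, None]), axis=0)

print(crest(zero_phase))   # ~ sqrt(2N) ~ 7.9: all peaks align at t = 0
print(crest(schroeder))    # much lower, leaving headroom for amplification
```

The phase-shifted waveform has the same harmonic magnitudes, so it sounds the same, but its lower peak lets you apply more gain before clipping.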
1,189
phase shift
How to remove phase shift for a baseband signal given an estimated CFO (Bluetooth)
https://dsp.stackexchange.com/questions/88505/how-to-remove-phase-shift-for-a-baseband-signal-given-an-estimated-cfo-bluetoot
<p>I am working with Bluetooth specification 5.1, where the advertisement packets can send a constant tone extension (CTE) over the baseband signal to estimate the angle of arrival with an antenna array.</p> <p>Due to inaccuracy of the transceiver and receiver clocks of Bluetooth devices, the CTE is not exact. This leads to a phase shift in the received signal of the antennas.</p> <p>Based on my last discussion over <a href="https://dsp.stackexchange.com/questions/88253/how-to-calculate-phase-rotation-of-i-q-reference-samples-for-bluetooth-angle-of">here</a>, I have understood that I need to estimate the CFO and phase rotation to compensate for the inaccuracy of the received signal.</p> <p>I have simulated the I/Q sampling for the Bluetooth baseband model with a CTE signal of 250 kHz with a sampling of 125 kHz.</p> <p><a href="https://i.sstatic.net/Ppt71.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ppt71.png" alt="enter image description here" /></a></p> <p>As expected, the phase of the I/Q samples remains fairly constant.</p> <p>In this case, as I am trying to understand how to compensate CFO, I have introduced a frequency deviation of 5 kHz to the signal.</p> <p><a href="https://i.sstatic.net/hKAJn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hKAJn.png" alt="Phase shift of 5Khz" /></a></p> <p>Based on this, I am now trying to figure out how to compensate this deviation I introduced, to return to the first figure where the phase is constant. What method do I need to apply to use the knowledge of the 5 kHz frequency deviation from the original 250 kHz signal?</p>
<p>Such carrier frequency offsets are typically compensated for digitally by using a numerically controlled oscillator (NCO): the complex signal at or near baseband is multiplied by the complex NCO output with a full complex multiplier, consisting of 4 real multipliers and 2 adders, according to:</p> <p><span class="math-container">$$I_o = I_1I_2-Q_1Q_2$$</span> <span class="math-container">$$Q_o = I_1Q_2+I_2Q_1$$</span></p> <p>Where <span class="math-container">$I_o+jQ_o$</span> is the corrected baseband signal, <span class="math-container">$I_1+jQ_1$</span> is the received signal with a carrier offset, and <span class="math-container">$I_2+jQ_2$</span> is the NCO. All are functions of time, as <span class="math-container">$I_o[n]$</span> etc. with sample index <span class="math-container">$n$</span>.</p> <p>This can be done in a carrier recovery loop when combined with a phase detector to measure the phase error from symbol to symbol and a loop filter.</p>
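A sketch of that multiply in NumPy, with a hypothetical 5 kHz offset and an arbitrary sample rate, showing that the four-real-multiplier form equals the complex product and removes the rotation:

```python
import numpy as np

fs = 125e3                     # hypothetical sample rate
f_off = 5e3                    # estimated CFO to remove
n = np.arange(1000)

rx = np.exp(1j * (2 * np.pi * f_off * n / fs + 0.7))   # rotating received signal
nco = np.exp(-1j * 2 * np.pi * f_off * n / fs)         # NCO at minus the offset

I1, Q1 = rx.real, rx.imag
I2, Q2 = nco.real, nco.imag
Io = I1 * I2 - Q1 * Q2         # 4 real multipliers ...
Qo = I1 * Q2 + I2 * Q1         # ... and 2 adders

print(np.allclose(Io + 1j * Qo, rx * nco))       # True: same as the complex product
print(np.allclose(np.angle(Io + 1j * Qo), 0.7))  # True: rotation removed, phase constant
```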
1,190
phase shift
What&#39;s the difference between frequency shift, frequency offset, phase offset, and phase noise?
https://dsp.stackexchange.com/questions/86997/whats-the-difference-between-frequency-shift-frequency-offset-phase-offset-a
<p>I'm confused about these terms: frequency shift, frequency offset, phase offset, and phase noise. My understanding is that frequency shift and frequency offset are the same thing, caused by Doppler shift. Phase noise is caused by the instability of the local oscillator, and it changes per symbol. However, some books say this is nonrandom but unknown; I don't know what that means. I have no idea what phase offset is, or what causes it.</p>
<p>Frequency shift and frequency offset could reasonably refer to the same thing, but a shift suggests something that has moved (from one frequency to the next as would occur in a changing Doppler) while a frequency offset suggests something more static such as a difference in frequency from a frequency reference after a shift has occurred.</p> <p>Phase noise is clearly a random process, any book that says otherwise is wrong, or the statement as written is misinterpreted.</p> <p>Modulations can change the phase, frequency or amplitude of a carrier frequency, and similarly unintentional noise and disturbances can modulate the carrier as well (in addition to additive noise sources that sum in power with the modulated signal). If we change the phase, that is &quot;PM&quot;; if we change the frequency, that is &quot;FM&quot;; and if we change the amplitude, that is &quot;AM&quot;. AM and FM/PM are completely independent of each other but FM and PM are directly related. Instantaneous frequency is specifically the time derivative of phase (a change in phase versus a change in time):</p> <p><span class="math-container">$$f(t) = \frac{d\phi(t)}{dt}$$</span></p> <p>If phase was in radians, the units of frequency would be radians/sec, or we can divide the result by <span class="math-container">$2\pi$</span> to have frequency in units of Hz. So if we change phase with time, we are also changing the frequency. A frequency offset therefore would be a phase ramp in time. Many confuse delay with phase given its explanation using sine waves, but that can be very misleading. A time delay will cause a phase offset for a specific frequency, and more specifically a phase that is directly proportional to carrier frequency (given a fixed time delay, a low frequency will have a much smaller phase shift than a higher frequency). 
Ultimately viewing waveforms as complex signals (a single tone at <span class="math-container">$e^{j\omega t}$</span> rather than sinusoids <span class="math-container">$\cos(\omega t)$</span>) removes this confusion and provides much clarity of phase and frequency offsets and phase noise. Note that, as given by Euler's formula, a sinusoid consists of two such tones, each at half amplitude:</p> <p><span class="math-container">$$\cos(\omega t) = \frac{e^{j\omega t} + e^{-j\omega t}}{2}$$</span></p> <p>And, if it wasn't clear, the expression <span class="math-container">$Ke^{j\phi}$</span> is a complex phasor with magnitude <span class="math-container">$K$</span> and angle <span class="math-container">$\phi$</span> where <span class="math-container">$K$</span> and <span class="math-container">$\phi$</span> are both real.</p> <p><span class="math-container">$e^{j\omega t}$</span> is a spinning phasor and frequency is its rate of rotation (like a bicycle wheel, and we then have the notion of &quot;positive&quot; and &quot;negative&quot; frequencies since the rotation can now have direction). Phase is simply a rotation on the complex plane, and a Phase Offset is a static rotation. Phase noise is the random fluctuation of phase with time and is non-stationary.</p> <p>Here are some graphical examples to clarify these different impairments (frequency and phase offsets, phase noise). The first graphic shows a waveform of phase versus time <span class="math-container">$\phi(t)$</span> as a noise free signal (the reference clean signal of what we actually want, which I chose arbitrarily as a fixed slope), and how it is received with the different impairments. The noise-free signal in this example is a fixed slope in phase versus time representing a constant frequency. This could for example be one symbol in a simple FSK (Frequency Shift Keying) transmission.
The signal as received had a <strong>frequency shift</strong> which could have been due to Doppler if the transmitter or receiver were moving, or a <strong>frequency offset</strong> between the transmitter and receiver reference clocks (in practice these are both the primary contributors).</p> <p><a href="https://i.sstatic.net/vvT8V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vvT8V.png" alt="phase vs time" /></a></p> <p>If our objective is to measure the frequency over an observation time (such as one symbol duration in FSK) then we would be interested in the average frequency which would be the best fit linear slope of phase over that duration of time. The difference of the two slopes (ideal vs actual) would be the frequency offset. The <strong>phase offset</strong> is the difference in phase at any point in time, typically given as a difference from the linear slopes: So if a constant frequency offset exists, the phase offset is linearly changing with time. The instantaneous deviations of phase from the noise free signal is <strong>phase noise</strong>. All oscillators have phase noise. Many of the sources of phase noise and its characteristics are captured in <a href="https://en.wikipedia.org/wiki/Leeson%27s_equation" rel="nofollow noreferrer">Leeson's equation</a>.</p> <p>The graphic below shows a QAM (Quadrature Amplitude Modulation) receiver with the effects of different noise impairments: AWGN, Phase Noise, Frequency and Phase Offset and how it would appear on the QAM constellation:</p> <p><a href="https://i.sstatic.net/dWDBI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dWDBI.png" alt="QAM Simulation" /></a></p> <p>On the far left the noise free constellation is shown as symbols on a complex plane, here for 16-QAM. AWGN (Additive White Gaussian Noise) is added, and not shown but this would make each of those dots become circular clouds since the noise would be equally distributed on the real and imaginary axis. 
Phase noise adds noise in phase only, so it stretches these circular clouds along the angular axis. A static phase offset (not shown by itself) would rotate the constellation by a fixed amount, while a frequency offset (which we do see) would cause the constellation to spin. Further of interest, the carrier tracking loop which is depicted would be able to track out the slower frequency variations of the random phase noise and thus reduce its impact (as we see in the constellation after carrier tracking). Finally, a decision block converts the noisy symbols to the decided (noise-free) result.</p> <p>Other common impairments not shown are DC offsets which would move the origin, and amplitude and phase imbalance (also called IQ imbalance) as shown in the constellations below for 16-QAM with AWGN:</p> <p><a href="https://i.sstatic.net/Egcks.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Egcks.png" alt="IQ imbalance" /></a></p>
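The offsets described above are easy to reproduce numerically. The following sketch is my own illustration (not part of the original answer, with arbitrarily assumed symbol rate and offset values): it applies a static phase offset and a frequency offset to QPSK symbols and confirms that the first is a fixed rotation of the constellation while the second is a phase ramp that makes it spin.

```python
# Illustration (assumed parameters): static phase offset vs. frequency offset.
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 4, 200)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))  # unit-energy QPSK

phase_offset = np.pi / 8          # static rotation of the constellation
f_off, fs = 50.0, 1e4             # assumed 50 Hz offset at 10 kHz symbol rate
n = np.arange(symbols.size)

rx_static = symbols * np.exp(1j * phase_offset)
rx_spin = symbols * np.exp(1j * (2 * np.pi * f_off * n / fs + phase_offset))

# the static offset rotates every symbol by the same angle ...
rot = np.angle(rx_static / symbols)
# ... while the frequency offset makes the rotation grow linearly (a phase ramp)
ramp = np.unwrap(np.angle(rx_spin / symbols))
```

The per-symbol slope of `ramp` is exactly the frequency offset expressed in radians per symbol, which is the "phase ramp in time" mentioned above.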
1,191
phase shift
What exactly is a frequency component, and what is the phase shift from the argument of the Fourier Transform relative to?
https://dsp.stackexchange.com/questions/91503/what-exactly-is-a-frequency-component-and-what-is-the-phase-shift-from-the-argu
<p>I'm an EE undergrad who struggles heavily with the intuition behind the Fourier Transform (most likely due to a shoddy mathematical foundation). Specifically:</p> <ol> <li><p>From what I understand, the real part of the Fourier Transform is the Fourier Transform of the even part of the function in the time domain, and the imaginary part is the transform of the odd part. But in the context of Fourier, we often say that a time domain function has n / an infinite amount of frequency components. <strong>But what exactly is a frequency component?</strong> I.e., is a function <span class="math-container">$f(t)$</span> represented by convention as a sum / integral of sines, cosines, or both? Or maybe it is so that we refer to the sine and cosine terms of <span class="math-container">$e^{-j\omega} = \cos(\omega) - j\sin(\omega)$</span> when we say &quot;frequency component?&quot;</p> </li> <li><p>I get that the absolute value of the Fourier Transform yields the magnitude of a certain frequency component, and that the argument yields the phase shift of some frequency component. <strong>But what is that phase shift relative to?</strong> Is that phase shift relative to a sine or a cosine?</p> </li> </ol> <p>Thank you for taking your time to read this.</p>
<p>Granted a precise answer to your question would (and has) fill(ed) a multitude of books, here is a stripped down answer.</p> <ol> <li><p>In the context of Fourier analysis, when we refer to a &quot;frequency component&quot; of a function <span class="math-container">$f(t)$</span>, we are typically talking about the constituent sine and cosine waves that, when summed together in a specific way, form the original function <span class="math-container">$f(t)$</span>. The Fourier Transform decomposes <span class="math-container">$f(t)$</span> into these sine and cosine components, each associated with a specific frequency.</p> <p>The Fourier Transform of a real-valued function, such as <span class="math-container">$f(t)$</span>, <em>can</em> be complex-valued (can be real as well, depending on the symmetries of <span class="math-container">$f(t)$</span> - <span class="math-container">$f(t)$</span> even for example). This complex result encodes both the amplitude and phase information of each frequency component. The real part of this result corresponds to the coefficients of the cosine terms (even function component), and the imaginary part corresponds to the coefficients of the sine terms (odd function component).</p> <p>When we express <span class="math-container">$f(t)$</span> in terms of its Fourier Transform, we are essentially representing it as a sum (or integral, in the case of the continuous Fourier Transform) of these sine and cosine waves. Each sine and cosine wave is a &quot;frequency component&quot; of <span class="math-container">$f(t)$</span>. The complex exponential form <span class="math-container">$e^{-j\omega t}$</span> (where <span class="math-container">$j$</span> is the imaginary unit) is often used because it compactly represents both sine and cosine terms through Euler's formula.</p> </li> <li><p>The phase shift obtained from the Fourier Transform is the phase angle of the complex number representing a particular frequency component. 
This phase shift indicates how much a sine or cosine wave of that particular frequency is shifted in time relative to a reference point, typically the origin <span class="math-container">$t=0$</span>.</p> <p>Whether the phase shift is relative to a sine or a cosine wave can depend on the convention used in the Fourier Transform definition. However, in the standard form, the cosine function is often taken as the reference. This is because the cosine wave starts at its maximum value at time zero, making it a natural reference point for measuring phase shifts.</p> <p>The argument (or angle) of the complex number from the Fourier Transform gives this phase shift. Denote the phase shift by <span class="math-container">$\phi$</span>. A positive <span class="math-container">$\phi$</span> typically means the wave is shifted to the left (leading phase), and a negative <span class="math-container">$\phi$</span> means the wave is shifted to the right (lagging phase), relative to the reference cosine wave.</p> </li> </ol>
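As a quick numerical check of this cosine-reference convention (my own sketch, not part of the original answer): the angle of an on-bin DFT coefficient returns the phase of a cosine at that bin's frequency directly, and a pure sine, being a cosine delayed by a quarter cycle, shows up with a phase of $-\pi/2$.

```python
# Illustration: DFT bin angles are phases measured relative to a cosine.
import numpy as np

N, k, phi = 64, 5, np.pi / 3
n = np.arange(N)

# cosine with a known phase: the angle of bin k returns phi
x = np.cos(2 * np.pi * k * n / N + phi)
measured = np.angle(np.fft.fft(x)[k])

# a pure sine is cos(theta - pi/2), so its measured phase is -pi/2
xs = np.sin(2 * np.pi * k * n / N)
measured_sin = np.angle(np.fft.fft(xs)[k])
```

This also shows why the sign of the phase follows the convention above: a positive `phi` here corresponds to a cosine shifted to the left.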
1,192
phase shift
Why is there a phase shift when taking the average of the Hilbert transformed signal for all trials, but not in individual trials?
https://dsp.stackexchange.com/questions/96234/why-is-there-a-phase-shift-when-taking-the-average-of-the-hilbert-transformed-si
<p>As you can see in the figure attached below, for individual trials (which is the first row), the bandpass filtering using complex Morlet wavelet convolution is in phase with the result of the real part of the Hilbert transform applied after filtering the signal. However, when I take the average across all trials (which is the second row), there is a significant phase shift.</p> <p><a href="https://i.sstatic.net/zYFYDH5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zYFYDH5n.png" alt="enter image description here" /></a></p> <p>This is the MATLAB code used to generate this figure:</p> <pre><code>% filter-Hilbert Method
% Designing the filter
nyquist = EEG.srate/2;
lower_filter_bound = 4; % Hz
upper_filter_bound = 6; % Hz
transition_width = 0.2;
filter_order = round(3*(EEG.srate/lower_filter_bound));

% create the filter shape (this is explained more in the text around figure 14.4)
ffrequencies = [ 0 (1-transition_width)*lower_filter_bound lower_filter_bound upper_filter_bound (1+transition_width)*upper_filter_bound nyquist ]/nyquist;
idealresponse = [ 0 0 1 1 0 0 ];
filterweights = firls(filter_order,ffrequencies,idealresponse);

nyquist = EEG.srate/2;
lower_filter_bound = 25; % Hz
upper_filter_bound = 26; % Hz
transition_width = 0.2;
filter_order = round(3*(EEG.srate/lower_filter_bound));

% create the filter shape (this is explained more in the text around figure 14.4)
ffrequencies = [ 0 (1-transition_width)*lower_filter_bound lower_filter_bound upper_filter_bound (1+transition_width)*upper_filter_bound nyquist ]/nyquist;
idealresponse = [ 0 0 1 1 0 0 ];
filter2weights = firls(filter_order,ffrequencies,idealresponse);

% 2) Applying filter-Hilbert Method
filtered_signal = filtfilt(filterweights,1,signal);
hf_result1 = hilbert(filtered_signal);
filtered_signal = filtfilt(filter2weights,1,signal);
hf_result2 = hilbert(filtered_signal);

figure;
subplot(221)
plot(EEG.times, real(hf_result(:,23)))
hold on
plot(EEG.times, real(fliplr(conv_result(23,:,1))))
hold off
legend(&quot;filter-Hilbert&quot;, &quot;Convolution&quot;)
title(&quot;Bandpass filtering freq = 5Hz&quot;)

subplot(222)
plot(EEG.times, real(hf_result2(:,23)))
hold on
plot(EEG.times, real(fliplr(conv_result(23,:,2))))
hold off
legend(&quot;filter-Hilbert&quot;, &quot;Convolution&quot;)
title(&quot;Bandpass filtering freq = 25Hz&quot;)

subplot(223)
mean_conv_result = squeeze(mean(conv_result));
mean_hf_result = squeeze(mean(hf_result,2));
plot(EEG.times, real(mean_hf_result));
hold on
plot(EEG.times, real(mean_conv_result(:,1)))
hold off
legend(&quot;filter-Hilbert&quot;, &quot;Convolution&quot;)

subplot(224)
mean_conv_result = squeeze(mean(conv_result));
mean_hf_result2 = squeeze(mean(hf_result2,2));
plot(EEG.times, real(mean_hf_result2));
hold on
plot(EEG.times, real(mean_conv_result(:,2)))
hold off
legend(&quot;filter-Hilbert&quot;, &quot;Convolution&quot;)
</code></pre> <p>What creates the phase shift? I tried plotting the imaginary part and it seems like there is a phase shift of more than 90 degrees.</p> <p>Also, as you might see in my code, I had also reversed my convolved data to fit it with the filter-Hilbert data. What could the potential reason for this reversal be?</p> <p>EDIT: Here is the fixed data: <a href="https://i.sstatic.net/wjikRovY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wjikRovY.png" alt="enter image description here" /></a></p> <p>I got this after scaling the convolved result down by sqrt(s)/10. The amplitude goes down only at 25Hz (for both individual trials and average of all trials), indicating the obvious chaotic activity (non-phase-locked) of higher frequencies which cancels each other out?</p>
1,193
phase shift
Phase shift between X and Y components of the same wave
https://dsp.stackexchange.com/questions/94170/phase-shift-between-x-and-y-components-of-the-same-wave
<p>I was told to post this question here, originally posted on Overflow.</p> <p>I have two signals, I1 and I2, which were acquired from a white light source in a spectrally resolved interferometer with a waveplate and a polarizer in one arm, such that they are the X and Y components of the original wave after introducing the shift, and theoretically they should have a phase difference of pi/2. I'm trying to extract said phase difference between the two signals using FFT. I have been trying for a few days and I'm sure this is not that hard and there's something I'm just not understanding about either Fourier transform or the nature of my signals, but I could use some help to realize what that is.</p> <p>I have tried a lot of approaches along the lines of this, which is the code I currently have:</p> <pre><code>k = np.fft.rfftfreq(I1.size, d=num_onda_equi[1]-num_onda_equi[0])
mask = k &gt; 0
F1 = np.fft.rfft(I1)[mask]
F2 = np.fft.rfft(I2)[mask]
fase1 = np.angle(F1)
fase2 = np.angle(F2)
conv = np.conj(F1) * F2

plt.figure(figsize=(12, 16))

# Interferometer signals
plt.subplot(3, 1, 1)
plt.plot(I1, label='I1')
plt.plot(I2, label='I2')
plt.title('Intensidades')
plt.legend()

# Individual phases
plt.subplot(3, 1, 2)
plt.plot(fase1, label='Fase X')
plt.plot(fase2, label='Fase Y')
plt.legend()
plt.title('Fases')
plt.legend()

# Phase difference
plt.subplot(3, 1, 3)
plt.plot(np.angle(conv), 'g-')
plt.title('Diferencia de fases')
plt.show()
</code></pre> <p>This produces the following result:</p> <p><a href="https://i.sstatic.net/1KI5EUv3.png" rel="nofollow noreferrer">1. Original signals, the phase difference between them isn't constant for some reason but should be around pi/2 for most values. 2. Phase from both signals, just to see if they make sense. 3. Phase difference.</a></p> <p>I expected to get a somewhat constant plot for the phase difference around pi/2 since that's the only phase difference the X and Y components are supposed to have. 
I have also tried using Hilbert transforms adapting the code from <a href="https://www.alivelearn.net/?p=3008" rel="nofollow noreferrer">this link</a> without success, and checked a lot of similar questions but no answer has led me to the expected result.</p>
<p>These are narrow band signals. The phase at most frequencies that are outside of the band are undefined or dominated by noise.</p> <p>You need to evaluate the phase difference at the exact center frequency of the signal: at any other frequency the phase will be mostly garbage. A quick way to get started is to find the peak of the magnitude spectrum and look at the phase difference at this frequency only.</p> <p>Another method would be to cross correlate the two signals and low pass filter the result.</p>
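A minimal sketch of the first suggestion above (my own example with assumed signal parameters, not from the original answer): generate two noisy narrow-band signals a quarter cycle apart, then read the phase difference only at the spectral peak.

```python
# Illustration (assumed fs, tone frequency, and noise level).
import numpy as np

fs, N = 1000.0, 4096
f0 = 100 * fs / N                 # assumed tone frequency (placed on a bin)
t = np.arange(N) / fs
rng = np.random.default_rng(1)

# X and Y components: a quarter cycle (pi/2) apart, plus broadband noise
i1 = np.cos(2 * np.pi * f0 * t) + 0.05 * rng.standard_normal(N)
i2 = np.cos(2 * np.pi * f0 * t - np.pi / 2) + 0.05 * rng.standard_normal(N)

F1, F2 = np.fft.rfft(i1), np.fft.rfft(i2)
peak = np.argmax(np.abs(F1))                   # evaluate only at the peak bin
dphi = np.angle(F2[peak] * np.conj(F1[peak]))  # phase of I2 relative to I1
```

Away from `peak` the bin-by-bin phase difference is dominated by noise, which is why the full phase-difference curve in the question looks erratic.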
1,194
phase shift
Analyzing a signal that contains frequency content at Fs/2 doesn&#39;t seem to work unless there is a phase shift
https://dsp.stackexchange.com/questions/59804/analyzing-a-signal-that-contains-frequency-content-at-fs-2-doesnt-seem-to-work
<p>I am trying to write a basic program that samples a 4 kHz sinewave at a sampling rate of 8 kHz and takes the FFT of the signal and plots it.</p> <p>From everything I have read, as long as the signal you are sampling has frequency content that is less than or equal to Fs/2 no aliasing will occur and the results will be accurate. However writing a simple example seems to be more complicated than I thought.</p> <p>Using Python I wrote up a basic example:</p> <pre><code>import matplotlib.pyplot as plt
import numpy as np

Fs = 8000.0
Ts = 1/Fs
N = 8
t = np.arange(0, N*Ts, Ts)

# 4 kHz sinewave
y = np.sin(2*np.pi*4000*t)

# Bandwidth of the signal (Hz)
BW = Fs/2

# Spectral Lines (number of frequency samples)
SL = round(N/2 + 1)

# Frequency scale
f = np.linspace(0, BW, SL)

Y = np.fft.fft(y)*2/N
Y = Y[:SL]

fig, ax = plt.subplots(2, 1)
ax[0].stem(t, y, use_line_collection=True)
ax[0].set_xlabel('Time')
ax[0].set_ylabel('Amplitude')
ax[1].stem(f, abs(Y), use_line_collection=True)
ax[1].set_xlabel('Freq (Hz)')
ax[1].set_ylabel('|Y(freq)|')
plt.show()
</code></pre> <p>Looking at the output you can see that this does not properly capture the frequency content at 4 kHz in the FFT plot.</p> <p><a href="https://i.sstatic.net/oDRz4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oDRz4.png" alt="enter image description here"></a></p> <p>But if I simply add a phase shift to the signal by changing this line</p> <pre><code>y = np.sin(2*np.pi*4000*t)
</code></pre> <p>to</p> <pre><code>y = np.sin(2*np.pi*4000*t + np.pi/2)
</code></pre> <p>I end up with much better results...</p> <p><a href="https://i.sstatic.net/aIQ6p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aIQ6p.png" alt="enter image description here"></a></p> <p>My question is, how come this doesn't work without a phase shift? It makes me wonder how anyone can be confident in the result of an FFT when the signal being sampled has some frequency content that is at the frequency Fs/2. 
How do you ensure that the signal you are analyzing is at a phase alignment that will not cause issues?</p>
<p>the discrete function <span class="math-container">$$x_q[n]=\sin(\pi n)$$</span> is always zero for all of the integers <span class="math-container">$n$</span>.</p> <p>the discrete function <span class="math-container">$$x_i[n]=\cos(\pi n)$$</span> is always <span class="math-container">$(-1)^n$</span> for integer <span class="math-container">$n$</span>.</p> <p>so this general sinusoid at Nyquist that has a phase term:</p> <p><span class="math-container">$$\begin{align} x[n] &amp;= A \cos(\pi n + \theta) \\ &amp;= A \big( \cos(\pi n) \cos(\theta) - \sin(\pi n) \sin(\theta) \big) \\ &amp;= \big(A\cos(\theta)\big) \cos(\pi n) + \big(-A\sin(\theta)\big) \sin(\pi n) \\ &amp;= \big(A\cos(\theta)\big) (-1)^n + \big(-A\sin(\theta)\big) \cdot 0 \\ &amp;= B (-1)^n \\ \end{align}$$</span></p> <p>So now, how do you tell the difference between a sampled sinusoid that had amplitude <span class="math-container">$A$</span> and a phase angle <span class="math-container">$\theta$</span> and another sampled sinusoid that has amplitude <span class="math-container">$B \triangleq\big(A\cos(\theta)\big)$</span> and no phase shift from the cosine?</p>
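This collapse is easy to verify numerically. The sketch below (my own, not part of the answer) shows that every sinusoid sampled at exactly $f_s/2$ reduces to $B(-1)^n$, so the original amplitude/phase pair cannot be recovered from the samples.

```python
# Illustration: at Nyquist, amplitude and phase merge into one number B.
import numpy as np

n = np.arange(8)
A, theta = 1.0, np.pi / 3

# a sinusoid sampled exactly at fs/2 ...
x = A * np.cos(np.pi * n + theta)

# ... is indistinguishable from a phase-free one with amplitude B = A*cos(theta)
B = A * np.cos(theta)
```

Any pair `(A, theta)` with the same product `A*cos(theta)` produces exactly the same samples, which is why the sine case (`theta = pi/2`, so `B = 0`) in the question vanished entirely.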
1,195
phase shift
How to calculate the total signal strength and phase shift of multiple signals in a reciever
https://dsp.stackexchange.com/questions/67139/how-to-calculate-the-total-signal-strength-and-phase-shift-of-multiple-signals-i
<p>I have a simulation to make where I have an array of transmitters that transmit the same signal. At a random point, which I have to consider as a receiver, I have to measure the phase shift of the signals and also measure the total signal strength. The requirement is that the transmitters emit the same signal but with different phase offsets.</p> <ul> <li><p>As far as I understand, because the transmitters are separated apart, the received signals are already offset by some phase because they travel different distances. Is my assumption right?</p></li> <li><p>I know the formula to find the phase offset given the distance travelled of two signals - how do I calculate it for multiple signals? Or am I understanding the question wrong?</p></li> <li><p>How do I calculate the total signal strength for all signals combined? I know for one signal, assuming a free-space, line-of-sight connection, I can use the Friis equation. But how do I use it for multiple signals?</p></li> </ul> <p>Any help is very much appreciated. I am a software engineer who has recently taken up a wireless communications course, and I'm not sure if I understand the motivation and meaning behind the simulation right. Thanks!</p>
<p>This is simplified by assuming there is no multipath occurring in the transmission (where the receiver would typically receive multiple copies of the same transmit signal at different delays, resulting in multipath distortion, which is typically addressed using channel estimation and equalization).</p> <p>The OP's phase shift would all be relative since there is no common reference (clock) with the transmitter, so the comparison would be of the phase of the received signals to each other. A cross-correlation of the received signals in complex notation with the reference signal of what you know was transmitted would provide this.</p> <p>This is accomplished by doing a multiply and accumulate of the received signal with the complex conjugate of the known ideal transmit signal, repeating the computation below for each possible offset <span class="math-container">$m$</span> (this is the cross-correlation function specifically, correlating at each possible offset in time). </p> <p><span class="math-container">$$XCorr[m] = \sum_n r[n+m]t^*[n]$$</span></p> <p>Where (*) indicates a complex conjugate.</p> <p>The magnitude of the correlation peak would be proportional to the signal strength, and the shift <span class="math-container">$m$</span> is proportional to the delay. With two signals at the same shift <span class="math-container">$m$</span> (in the same bin), the phase of the correlation is proportional to the carrier phase, giving a fine delay indication by comparing the phase result from two different receiver signals if they were in the same bin. In this case the signals would need to not be correlated themselves, otherwise the result would be the vector addition of the two (appearing as a single received signal). For this reason the transmit signals from each transmitter would be uncorrelated by separating them in frequency, time or code; and you can identify each one separately in the receiver using correlation. 
</p> <p>Note the above computation for the cross correlation function can be done directly with FFTs given the FFT correlation property (which results in a circular cross-correlation):</p> <p><span class="math-container">$$XCorr[m] = \text{ifft}\big\{\text{fft}(r[n])\cdot\text{fft}^*(t[n])\big\}$$</span></p>
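Here is a small numerical sketch of the FFT-based circular cross-correlation described above (my own example with assumed delay and phase values, not from the original answer): a received signal that is a delayed, phase-rotated copy of the known transmit signal produces a correlation peak whose index gives the delay and whose angle gives the carrier phase.

```python
# Illustration: peak index -> delay, peak angle -> carrier phase.
import numpy as np

rng = np.random.default_rng(2)
t_sig = rng.standard_normal(64) + 1j * rng.standard_normal(64)  # known transmit signal

delay, phase = 11, 0.7                          # assumed channel delay and carrier phase
r = np.roll(t_sig, delay) * np.exp(1j * phase)  # received: delayed, phase-rotated copy

# circular cross-correlation via FFTs
xcorr = np.fft.ifft(np.fft.fft(r) * np.conj(np.fft.fft(t_sig)))

m_hat = int(np.argmax(np.abs(xcorr)))  # estimated delay
phi_hat = np.angle(xcorr[m_hat])       # estimated carrier phase
```

Using a noise-like transmit signal keeps the off-peak correlation low, which is the same reason the answer requires the transmitters' signals to be mutually uncorrelated.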
1,196
phase shift
Why does this reconstruction produce a phase shift from the original signal?
https://dsp.stackexchange.com/questions/29545/why-does-this-reconstruction-produce-a-phase-shift-from-the-original-signal
<p>After an FFT of a signal is done, it is plotted as in the image below, with the original signal on the first subplot.</p> <p><a href="https://i.sstatic.net/Wf23i.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wf23i.jpg" alt="No Ballast"></a></p> <p>Using the magnitude and phase data of each frequency, it's reconstructed, and the produced signal is as in the image below.</p> <p><a href="https://i.sstatic.net/EN5hl.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EN5hl.jpg" alt="No Ballast Reconstruction"></a></p> <p>The reconstructed signal is good except that it's shifted 4 milliseconds earlier than the original signal. I have also done this with another signal, and this 4 millisecond shift is there too. Why does this happen?</p> <p>Additional question: I used this for harmonic analysis of a 50 Hz fundamental frequency (electric power system frequency). I had a slightly hard time determining which phase angle of each harmonic (50*n Hz) frequency to use because, from the FFT result, the phase angle is θ at (50*n∓0.001) Hz but suddenly becomes θ±180° at (50*n±0.001) Hz. For example, at 149.999Hz the phase angle is 223.3° but at 150.001Hz the phase angle is suddenly 403.3°. 
Is this difficulty in determining which angle of each frequency to use for reconstruction normal?</p> <p>By the way, this is the Matlab code I use to do the FFT and the plotting.</p> <pre><code>Fs = 1/(8e-6);       % Sampling frequency
T = 1/Fs;            % Sample time
L = 2^21;            % Length of signal
t = (0:L-1)*T;       % Time vector
SampleNoBallast;     % The original signal
NFFT = 2^nextpow2(L);              % Next power of 2 from length of y
Y = fft(y,NFFT)/L;                 % The Fast Fourier Transform producing FFT complex
f = Fs/2*linspace(0,1,NFFT/2+1);   % Frequencies to plot
P = rad2deg(unwrap(angle(Y)));     % Phase degrees from FFT complex

% Plotting the original signal
subplot(3,1,1);
plot(t(1:12500),y(1:12500))
title('Arus Masukan LED T8 Opple 18 W tanpa Ballast')
ylabel('arus (mA)')
xlabel('waktu (s)')
grid on
set(gca,'ButtonDownFcn','selectmoveresize');

% Plot single-sided amplitude spectrum.
% Plotting the magnitude of each frequency
subplot(3,1,2);
plot(f(1:17000),2*abs(Y(1:17000)))
title('Hasil FFT')
xlabel('Frekuensi (Hz)')
ylabel('Amplitudo (mA)')
set(gca,'ButtonDownFcn','selectmoveresize');

% Plotting the phase of each frequency
subplot(3,1,3);
plot(f(1:17000),P(1:17000))
title('Sudut Komponen Harmonik')
xlabel('Frekuensi (Hz)')
ylabel('Sudut (derajat)')
set(gca,'ButtonDownFcn','selectmoveresize');
</code></pre>
<p>The shift is due to using an FFT with a different length than the length of the data, and likely using a non-symmetric arrangement of zero-padding to increase that original length to the zero-padded length.</p> <p>Non-symmetric zero-padding rotates the phase results of an FFT, spiraling across result bins.</p>
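The rotation the answer describes follows from the DFT shift theorem: delaying the data by $m$ samples multiplies bin $k$ by $e^{-j2\pi km/N}$, a phase that grows linearly ("spirals") across bins. A small sketch of this effect (my own, not part of the answer, with arbitrary bin and shift values):

```python
# Illustration of the DFT shift theorem behind the observed time shift.
import numpy as np

N, k0, shift = 128, 7, 5
n = np.arange(N)
x = np.cos(2 * np.pi * k0 * n / N)

X = np.fft.fft(x)
Xs = np.fft.fft(np.roll(x, shift))  # same samples, circularly delayed by `shift`

# phase picked up at bin k0: -2*pi*k0*shift/N, growing linearly with bin index
extra = np.angle(Xs[k0]) - np.angle(X[k0])
```

A fixed number of samples of effective delay therefore appears as exactly this kind of linear phase across the spectrum, i.e. a constant time shift of the reconstruction.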
1,197
phase shift
How to process/generate a phase shifted frequency varying sine wave?
https://dsp.stackexchange.com/questions/7988/how-to-process-generate-a-phase-shifted-frequency-varying-sine-wave
<p>I want to phase shift an incoming sine wave with varying frequency but I am unsure how to go about doing so in practical terms.</p> <p>A little more info regarding the requirements: I have an encoder producing a sine/cosine pair with a fixed peak-to-peak amplitude, but as the speed changes so obviously does the frequency. I want to effectively advance/retard the signal by altering the phase of the incoming signal.</p> <p>Where I am unsure is how to go about doing this. If I use a look-up table (LUT), fast frequencies would be OK, but at slower frequencies I am going to get a very rough (digitised) signal. I was thinking I could calculate the signal 'on the fly' but again that would require having a LUT, i.e. output = sin(input(amplitude) + offset[amplitude])</p> <p>I think I may be going about this the wrong way and there is possibly a simple solution to this?</p>
<p>Indeed, as Jason R says, that's how all sine oscillators for synthesizers are designed: they must change frequency without changing phase. You control them using only a counter, and you vary the counter increment to change the sine/cosine frequency; that way the sine is always one count onwards from its previous value regardless of speed changes.</p>
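A minimal phase-accumulator sketch of this idea (my own illustration; the sample rate, frequencies, and offset are assumed): the counter is the accumulated phase, which advances by a per-sample increment, so the frequency can change at any sample without a phase discontinuity, and the advanced output is simply the same accumulator with a fixed offset added.

```python
# Illustration: phase accumulator with a speed (frequency) change mid-stream.
import numpy as np

fs = 48000.0
# assumed encoder speed doubling: 440 Hz for 100 samples, then 880 Hz
freqs = np.concatenate([np.full(100, 440.0), np.full(100, 880.0)])
offset = np.pi / 2                  # desired phase advance of the output

phase = np.cumsum(2 * np.pi * freqs / fs)  # the counter / accumulator
y = np.sin(phase)                   # tracks the input
y_shifted = np.sin(phase + offset)  # phase-advanced copy, at any frequency

step = np.diff(phase)               # per-sample phase steps stay small and positive
```

Because the shift is applied to the accumulator rather than to the waveform samples, there is no LUT-resolution problem at low frequencies: the slower the input, the finer the phase steps become automatically.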
1,198
phase shift
Approximate using DFT phase shifting property
https://dsp.stackexchange.com/questions/72828/approximate-using-dft-phase-shifting-property
<p>I have a discrete signal <span class="math-container">$x(n)$</span> having <span class="math-container">$N$</span> samples with DFT <span class="math-container">$X(n)$</span>. Here <span class="math-container">$N$</span> is <em><strong>large</strong></em>, say <span class="math-container">$N=600$</span>. Let the samples of <span class="math-container">$x(n)$</span> be <span class="math-container">$x(n) = \big[x(0), x(1), x(2), ..., x(N-1)\big]$</span>. But suppose that instead of sampling at the zeroth instant, we began at the <span class="math-container">$k^{th}$</span> instant; then, using the phase shifting property, the new DFT will be:</p> <p><span class="math-container">$$x_k(n) = \big[x(k), x(k+1), ..., x(N-1), x(0), x(1), ...,x(k-1)\big] \rightarrow e^{\frac{j2 \pi kn}{N}}X(n)$$</span></p> <p>Now consider <span class="math-container">$k$</span> to be <em><strong>small</strong></em>, say <span class="math-container">$k=2$</span>, but suppose the obtained signal is instead:</p> <p><span class="math-container">$$x_2(n)=\big[x(2), x(3), ..., x(N-1), x(0), x(1)\big]$$</span></p> <p>My question is about what happens when you change <span class="math-container">$k$</span> entries to something else, like this:</p> <p><span class="math-container">$$\tilde{x}_2(n)=\big[x(2), x(3), ..., x(N-1), y(0), y(1)\big]$$</span></p> <p>where <span class="math-container">$y(0) \ne x(0)$</span> and <span class="math-container">$y(1) \ne x(1)$</span>.</p> <p>Thus, given that <span class="math-container">$N$</span> is large and <span class="math-container">$k$</span> is small, can we approximate the DFT of the new sequence w.r.t. <span class="math-container">$X(n)$</span>? Can we say that the DFT of the new signal will still be approximately the same as <span class="math-container">$e^{\frac{j2 \pi kn}{N}}X(n)$</span>? If not, can we have some error bound? 
You may further assume that:</p> <p><span class="math-container">$$\text{max}|y(0) - x(0)| = \text{max}|y(1)-x(1)| =c$$</span> where <span class="math-container">$c$</span> is some real valued constant.</p>
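One simple observation (my own sketch, not an authoritative answer): the circular-shift property holds exactly, and by the triangle inequality, changing $k$ entries, each by at most $c$, perturbs every DFT coefficient by at most $k\cdot c$ in absolute value, so for small $k$ the approximation holds to that absolute error. This can be checked numerically with the question's values $N=600$, $k=2$:

```python
# Illustration: shift property + a triangle-inequality error bound of k*c.
import numpy as np

rng = np.random.default_rng(3)
N, k, c = 600, 2, 0.1
n = np.arange(N)
x = rng.standard_normal(N)

# exact circular shift: sampling begins at instant k
x2 = np.roll(x, -k)
X = np.fft.fft(x)

# the phase shifting property from the question
shift_ok = np.allclose(np.fft.fft(x2), np.exp(2j * np.pi * k * n / N) * X)

# replace the last k entries (the wrapped-around x(0), x(1)) by values
# differing by at most c, as in the question's assumption
x2_tilde = x2.copy()
x2_tilde[-k:] += c * rng.uniform(-1, 1, k)

# each DFT coefficient moves by at most k*c
err = np.abs(np.fft.fft(x2_tilde) - np.fft.fft(x2))
```

Note this bound is absolute, not relative: whether $k\cdot c$ is negligible compared to $|X(n)|$ depends on the spectrum of $x(n)$ at each bin.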
1,199
gene expression
Average number of gene products in (a) eukaryote(s)
https://biology.stackexchange.com/questions/3780/average-number-of-gene-products-in-a-eukaryotes
<p>Due to alternative RNA splicing, it isn't uncommon to ultimately find multiple gene products expressed from one gene in eukaryotes. I'm looking for a reference value for <em>the average number of final gene products expressed per gene</em> for:</p> <ul> <li>... <strong>a particular eukaryote (preferably humans)</strong>.</li> </ul> <p>This can't be too hard to do for one species like humans. I would expect the following formula to provide me with a rough answer. Is this correct?</p> <p>(number of distinct proteins + numbers of distinct non-translated RNAs) / number of genes</p> <ul> <li>... <strong>all eukaryotes as a whole</strong>.</li> </ul> <p>This one is slightly more problematic.</p>
<p>One answer can be found in the UniProt FAQ:</p> <blockquote> <p>What is the human complete proteome?</p> <p>In 2008, a draft of the complete human proteome was released from UniProtKB/Swiss-Prot: the approximately 20,000 putative human protein-coding genes were represented by one UniProtKB/Swiss-Prot entry, tagged with the keyword 'Complete proteome'. This UniProtKB/Swiss-Prot complete H. sapiens proteome (manually reviewed) can be considered as complete in the sense that it contains one representative (canonical) sequence for each currently known human gene. Close to 40% of these 20'000 entries contain manually annotated alternative isoforms representing over 15'000 additional sequences ...</p> </blockquote> <p><a href="http://www.uniprot.org/faq/48" rel="nofollow">http://www.uniprot.org/faq/48</a></p> <p>So, we have 20,000 genes and 35,000 products yielding about 1.75 gene products per gene. Alternatively, the 8,000 genes undergoing alternative splicing give about 2.88 proteins per gene.</p>
0
gene expression
What is meant by &quot;alien&quot; probe in a microarray?
https://biology.stackexchange.com/questions/8210/what-is-meant-by-alien-probe-in-a-microarray
<p>In my lab we did a microarray to analyze differential gene expression in S. cerevisiae treated with UV irradiation. We are now analyzing the results and one of the up-regulated genes is labeled "Alien4_60." I believe this is some kind of control, but I am having trouble understanding what it is. </p> <p>I would think that a probe labeled "alien" in a microarray would be some kind of negative control. The <a href="http://www.microarrays.com/docs/SC4001-aros+ybox.zip" rel="nofollow">probe-list</a> (Excel file) says "Control stringency: 60% identity to oligo Alien4" as the description for this probe. What does this mean? </p> <p>As you can imagine, searches for "Alien4" only bring up sci-fi movies! </p> <p>p.s. perhaps someone with enough rep could tag this question with more meaningful tags.</p>
<p>It's a probe to detect an external 'Alien' RNA standard, a synthetic mRNA commercialized by Stratagene/Agilent.</p> <blockquote> <p>The Alien RNA transcript is a ~500-nt, polyadenylated RNA molecule that is synthesized by in vitro transcription. The Alien RNA transcript is nonhomologous to all known nucleic acid sequences currently in public databases, as determined by BLAST comparisons against NIH sequence databases. <a href="https://www.chem.agilent.com/Library/usermanuals/Public/300602.pdf" rel="nofollow">https://www.chem.agilent.com/Library/usermanuals/Public/300602.pdf</a></p> </blockquote> <p>As with any other external standard, you spike the same amount of Alien RNA standard into your different reactions and then, using the Alien probe, you can normalize for extraction efficiencies etc. It works not only for microarrays, but also for qPCR, RNA-seq etc. </p> <p>If you didn't spike in any of it, I expect your probe gives a very low signal, which is probably why you thought about negative controls.</p>
1
gene expression
2 blue eyed children
https://biology.stackexchange.com/questions/15025/2-blue-eyed-children
<p>My husband had light brown eyes. His father had hazel and his mother, light brown as well. His younger brother however, has blue eyes. Both my children have blue eyes. Is this possible? I thought once the recessive gene was used in my first child, his dominant brown gene would take over?</p>
<p>There is some chance for that. Look at this chart with some probabilities:</p> <p><img src="https://i.sstatic.net/guIuv.jpg" alt="enter image description here"></p> <p>It's from this <a href="http://www.dnaexaminers.com/lib/ckeditor/eye-color-paternity-test.html" rel="nofollow noreferrer">webpage</a>, which also gives some background information. There is also an online calculation tool available, which takes the grandparents into account as well. It can be found <a href="http://genetics.thetech.org/online-exhibits/what-color-eyes-will-your-children-have" rel="nofollow noreferrer">here</a>.</p>
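The percentages in charts like this come from a one-gene Punnett square. A minimal sketch follows; note that this single-locus brown-dominant/blue-recessive model is a deliberate simplification (real eye colour is polygenic), and the function name is mine:

```python
from itertools import product

def offspring_probs(parent1, parent2):
    """Genotype probabilities for offspring of two parent genotypes.

    Genotypes are two-letter strings, e.g. 'Bb': B = brown (dominant),
    b = blue (recessive). Classic single-gene Punnett-square model --
    a simplification, since eye colour is actually polygenic.
    """
    probs = {}
    for a1, a2 in product(parent1, parent2):
        g = ''.join(sorted(a1 + a2))   # 'bB' and 'Bb' are the same genotype
        probs[g] = probs.get(g, 0.0) + 0.25
    return probs

# Two brown-eyed carriers (Bb x Bb): each child independently has a 1/4
# chance of being blue-eyed (bb), so two blue-eyed children in a row has
# probability (1/4)**2 = 1/16 under this model -- unlikely but possible.
probs = offspring_probs('Bb', 'Bb')
print(probs)   # {'BB': 0.25, 'Bb': 0.5, 'bb': 0.25}
```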
2
gene expression
Why is polysome loading affected by double stranded structure?
https://biology.stackexchange.com/questions/35500/why-is-polysome-loading-affected-by-double-stranded-structure
<p>Why are polysomes not able to load properly onto a transcript if the transcript contains a double-stranded structure?</p>
3
gene expression
Is the increased/decreased enzyme activity (tyrosinase) caused by an environmental factor (UV radiation) considered to be gene expression?
https://biology.stackexchange.com/questions/44704/is-the-increased-decreased-enzyme-activity-tyrosinase-caused-by-an-environment
<p>My AP Bio assignment asks that I research the effects of UV radiation on melanin production, but the directions and questions suggest that UV radiation influences gene expression. From what I could find, such as on <a href="http://enhs.umn.edu/current/5103/uv/harmful.html" rel="nofollow">this</a> university site or <a href="http://www.ncbi.nlm.nih.gov/pubmed/8151127" rel="nofollow">this</a> abstract on NCBI, it seems that UV-B radiation directly affects the enzyme tyrosinase, which is responsible for producing melanin from tyrosine. So here is what I'm wondering: is our lesson wrong in stating that UV radiation affects gene expression, since it seems that it actually affects enzyme activity?</p>
<p>Since @anongoodnurse already nicely covered the effects of UV radiation on the cellular level, I will only look at the molecular level.</p> <p>UV radiation of the skin causes increased DNA damage and thus activity of p53 in the keratinocytes of the skin. This causes (amongst others) the production of Kitl (Kit ligand, binds to the Kit receptor) and POMC (<a href="https://en.wikipedia.org/wiki/Proopiomelanocortin" rel="nofollow noreferrer">Proopiomelanocortin</a>) which is later processed to ACTH (<a href="https://en.wikipedia.org/wiki/Adrenocorticotropic_hormone" rel="nofollow noreferrer">Adrenocorticotropic hormone</a>) and the different MSH-forms (<a href="https://en.wikipedia.org/wiki/Melanocyte-stimulating_hormone" rel="nofollow noreferrer">Melanocyte-stimulating hormone</a>). These act on the MC1R (Melanocortin 1 receptor) on the surface of the melanocytes and induce pigmentation. See the figure below from reference 1 as an illustration:</p> <p><a href="https://i.sstatic.net/B8LCS.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B8LCS.jpg" alt="enter image description here"></a></p> <p>In the melanocytes the binding of these ligands activates different signal transduction pathways which all finally lead to the activation of the transcription factor MITF (<a href="https://en.wikipedia.org/wiki/Microphthalmia-associated_transcription_factor" rel="nofollow noreferrer">Microphthalmia-associated transcription factor</a>) which then upregulates the transcription of tyrosinase. See the figure from reference 1 as an illustration of what goes on in the cell:</p> <p><a href="https://i.sstatic.net/8ceQE.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8ceQE.jpg" alt="enter image description here"></a></p> <p>So the activity of the sun directly influences the signal transduction and, later on, the gene expression of tyrosinase via a specific transcription factor. 
This translates an external stimulus (such as UV light) down to the gene-expression level. The stimulus does not affect the activity of the enzyme, only the amount available.</p> <p>This upregulation stops when there is no further signalling down from the keratinocytes, and eventually the tyrosinase will be degraded through normal turnover of the enzyme. Since little or none is made without the right stimulus, this causes a stop or reduction in pigment production. This happens when you get less UV light on your skin, for example in the winter time in Europe. Shedding of the upper layers of the epidermis then leads to the loss of the pigmented keratinocytes, which makes a person's skin lighter.</p> <p>References:</p> <ol> <li><a href="http://www.ncbi.nlm.nih.gov/pubmed/25414302" rel="nofollow noreferrer">The melanoma revolution: From UV carcinogenesis to a new era in therapeutics</a></li> <li><a href="http://www.ncbi.nlm.nih.gov/pubmed/25111671" rel="nofollow noreferrer">The roles of microphthalmia-associated transcription factor and pigmentation in melanoma.</a></li> </ol>
4
gene expression
What is the role of TATA box in transcriptional regulation?
https://biology.stackexchange.com/questions/45783/what-is-the-role-of-tata-box-in-transcriptional-regulation
<p>I know what is the TATA box, but I wish to know whether it has specific roles in transcriptional regulation.</p>
<p>TATA box serves as a binding site for the <a href="https://en.wikipedia.org/wiki/TATA-binding_protein" rel="nofollow">TATA-binding protein</a> (TBP; and its associated factors, together comprising the <a href="https://en.wikipedia.org/wiki/Transcription_factor_II_D" rel="nofollow">TFIID</a>). </p> <p>TATA-box is one of the basic promoters present in many genes and therefore TBP is a general transcription factor i.e. TATA-box and TBP do not have specific roles. TATA-box is like a minimal promoter. </p>
5
gene expression
What is the difference between differentially expressed genes and deregulated genes?
https://biology.stackexchange.com/questions/59131/what-is-the-difference-between-differentially-expressed-genes-and-deregulated-ge
<p>Can anyone explain clearly what is the difference between differentially expressed genes and deregulated genes?</p>
<p>Any gene whose expression differs significantly from some reference is considered to be differentially expressed. I think the most common representation of differential expression is the volcano plot, where you plot the fold change or log2 fold change against the -log10 p value assigned to that gene. </p> <p>This means a couple of things: One, you need a sufficiently large sample size that you can actually <em>get</em> statistics, and two, you need a sufficient reference sample which will act as your baseline. </p> <p>If a gene is deregulated, however, the expression is <strong>aberrant</strong>. In order to characterize aberrant expression you need a <em>normal</em> sample or accepted reference sample, and the sample of interest. This might be as simple as tumor vs normal tissue from the same patient. </p>
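The two quantities on a volcano plot are straightforward to compute. A minimal sketch, with illustrative function names and cutoffs (not taken from any particular package):

```python
import math

def volcano_point(mean_treated, mean_control, p_value):
    """One gene's coordinates on a volcano plot:
    x = log2 fold change, y = -log10(p)."""
    return math.log2(mean_treated / mean_control), -math.log10(p_value)

def is_differentially_expressed(log2_fc, p_value, fc_cutoff=1.0, alpha=0.05):
    """Common (but arbitrary) thresholds: at least 2-fold change
    (|log2 FC| >= 1) and p below alpha."""
    return abs(log2_fc) >= fc_cutoff and p_value < alpha

# 4-fold up-regulation at p = 0.001: x = 2.0, y = 3.0
x, y = volcano_point(400.0, 100.0, 0.001)
print(is_differentially_expressed(x, 0.001))   # True
```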
6
gene expression
Cell type that expresses 90% of the genome
https://biology.stackexchange.com/questions/60568/cell-type-that-expresses-90-of-the-genome
<p>Cells that are differentiated express the genes that are necessary for their own usage. I've heard that some cell type expresses about 90% of 30,000 proteins that are encoded in the genome. Can anybody tell me what this cell type is?</p>
<p>Well, I found this paper:</p> <p><a href="https://genomebiology.biomedcentral.com/articles/10.1186/s13059-016-1079-9" rel="nofollow noreferrer">Danan-Gotthold M, Guyon C, Giraud M, Levanin EY, Abramson J. 2016. Extensive RNA editing and splicing increase immune self-representation diversity in medullary thymic epithelial cells. Genome Biol 17:219. </a></p> <blockquote> <h3>Medullary thymic epithelial cells [mTECs] express ~85 % of the entire coding genome</h3> <p>...to better determine the fraction of genes expressed in mTECs and other mouse tissues, we took advantage of RNA-seq technology... The analysis... revealed that most of the tissues [see below for those tested] express 12,000–14,000 genes (i.e. 60–65 % of the coding genome). The lung and two immunologically privileged sites, brain and testis, were the only tissues that expressed a larger fraction of the genome, in the range of 70–75 %... In line with other recently published studies, the mTEC<sup>hi</sup> population expressed nearly 18,000 genes, which represents ~85 % of the coding genome, while <a href="https://en.wikipedia.org/wiki/Autoimmune_regulator" rel="nofollow noreferrer">Aire</a>-deficient mTECs (AireKO) expressed approximately 15,000 genes, suggesting that Aire is responsible for the induction of ~3000 genes in mTECs. Interestingly, even in the absence of Aire, the mTECs expressed a relatively large fraction of the genome (~75 %), considerably exceeding the overall genome expression in other peripheral tissues. 
Interestingly, neither [cortical thymic epithelial cells] nor [skin epithelial cells] demonstrated higher overall genome expression than other tissues, suggesting that promiscuous gene expression is indeed unique to the mTECs population.</p> </blockquote> <p>These cells express such a wide complement of proteins because they are involved in T-cell maturation, specifically <a href="https://en.wikipedia.org/wiki/Central_tolerance#T_cell_tolerance" rel="nofollow noreferrer">negative selection</a>.</p> <hr /> <p>List of tissues tested in the study:</p> <blockquote> <p>brain, testes, liver, kidney, lung, colon, skeletal muscle, spleen, cortical thymic epithelial cells, skin epithelial cells</p> </blockquote>
7
gene expression
qPCR: Huge variation in fold change of genes between biological replicates
https://biology.stackexchange.com/questions/74310/qpcr-huge-variation-in-fold-change-of-genes-between-biological-replicates
<p>I am trying to validate my RNAseq data by doing qPCR, for which I am looking at the fold change of a few genes across various timepoints of treatment conditions. I am getting huge variation (thousands of fold) between my biological replicates. I thought maybe it was due to genomic DNA contamination, so I repeated all my experiments and did DNase treatment twice, but I still see such variation. I am using two reference genes, EF and GAPDH, and I also have variation in the Ct values of the reference genes across the timepoints. I would really appreciate it if you could suggest possible problems and solutions.</p> <p>Thank you, Ambika</p>
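For reference, fold changes from Ct values are usually computed with the 2^-ΔΔCt (Livak) method, which assumes the reference-gene Ct is stable across conditions; drifting reference Cts like those described here break that assumption and can easily inflate fold changes. A minimal sketch of the calculation (variable names are mine):

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression via the standard 2**(-ddCt) method.

    Assumes ~100% PCR efficiency and a reference gene whose Ct does not
    shift between conditions; a 1-cycle drift in the reference alone
    doubles or halves the apparent fold change.
    """
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    return 2.0 ** (-(dct_treated - dct_control))

# Target comes up 3 cycles earlier (relative to reference) after treatment:
print(fold_change_ddct(22.0, 18.0, 25.0, 18.0))   # 8.0 -> ~8-fold induction

# Same data, but the reference gene drifted by 1 cycle in the treated
# sample: the apparent fold change doubles.
print(fold_change_ddct(22.0, 19.0, 25.0, 18.0))   # 16.0
```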
8
gene expression
Element of promoter responsible for expression power of gene
https://biology.stackexchange.com/questions/80737/element-of-promoter-responsible-for-expression-power-of-gene
<p>I know that promoters are classified as strong, medium, or weak. This reflects how strongly a promoter drives gene expression. So my question is: what part of the promoter affects it? </p> <p>For example, if I want to raise or lower the expression of some gene, what part of the promoter do I need to edit, or can I just replace it with another promoter?</p>
<p>Tldr; In an <em>in vivo</em> context the answer is going to be dependent on DNA sequence and context.</p> <p>Simplistically, eukaryotic gene promoters can be segmented into several parts. The first is a core promoter region where RNA Pol II will bind. This isn't capable of a robust transcriptional response on its own and relies on proximal promoter regions, or more specifically transcription factor binding to the proximal promoters. Proximal promoters are usually 100 or so base pairs upstream of the transcription start site and will bind proteins that assist in Pol II binding and activation (or silencing). Enhancers (and silencers) are conceptually similar to proximal promoter regions with the exception that they can be a looooooooooong way away from the transcription start site in sequence space. See here: <a href="https://www.ncbi.nlm.nih.gov/books/NBK21780/" rel="nofollow noreferrer">1</a>.</p> <p>So, if you just wanted to drive expression of a transgene in a cell line in the lab there are plenty of established cloned promoters and enhancer regions known to work, such as <a href="https://blog.addgene.org/plasmids-101-the-promoter-region" rel="nofollow noreferrer">SV40 or Gal4</a>. This would be an example of "replacing the promoter". That's exactly what it is and why these sequences were chosen to be cloned.</p> <p>If you are asking which regions specifically activate or repress a gene <em>in vivo</em> and/or in context then it will be, of course, context dependent and you'll have to find out yourself if someone hasn't already. Generally, these regions of effect will be <strong>binding sites for sequence-specific transcription factors</strong> which interact with the core transcriptional machinery (or bind other proteins that do) to affect the rate of transcription. So, if the sequence is what can induce an effect, how can you find it? 
You can search for these sequence-specific sites by analyzing your sequence of interest with a <a href="https://molbiol-tools.ca/Transcriptional_factors.htm" rel="nofollow noreferrer">slew of computational tools</a> for potential binding sites and directly assessing any hit's ability to increase/decrease expression as an isolated unit when cloned upstream of an appropriate reporter construct. I guess you could even go so far as to CRISPR single base pairs in or out to "edit" any hits. Alternately, you could use a purely wet-bench technique such as <a href="https://en.wikipedia.org/wiki/Promoter_bashing" rel="nofollow noreferrer">promoter bashing</a> to similar effect - performing serial deletions to find out which part of your promoter is important for strong/medium/weak effects or activation/repression. </p> <p>The Stark Lab developed a cool <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5870828/" rel="nofollow noreferrer">genome-wide approach</a> looking at the ability of DNA to drive expression. Although, something worth bearing in mind is that in a normal <em>in vivo</em> situation everything is context dependent - one piece of sequence may be reliant on another close by (like the core promoter requires proximal promoters) or it may require one or more pieces of DNA that are distant in sequence space to be close by in 3D space (like distal enhancers).</p> <p>People have also taken completely <a href="https://www.pnas.org/content/107/6/2538" rel="nofollow noreferrer">synthetic approaches to making promoters</a>.</p> <p>This would be easier to answer if your question were more specific. HTH.</p>
9
gene expression
How do substances for gene expression work?
https://biology.stackexchange.com/questions/90948/how-do-substances-for-gene-expression-work
<p>How does amplification of gene expression work? I often find in various articles expressions like "Using substance X or peptide X, we enhanced the expression of gene Z, which led to a certain therapeutic effect."</p> <p>How can I find out which substance enhances the expression of a particular gene? Is there a law or pattern for this, or, if all this is found out empirically, a summary table of expression-enhancing substances?</p>
10
gene expression
Why do Alu repeats form secondary structures?
https://biology.stackexchange.com/questions/35385/why-do-alu-repeats-form-secondary-structures
<p>I've been doing a lot of research on Alu repeats and how they mediate the gene expression. I read the following article <a href="http://link.springer.com/article/10.1007%2Fs00018-007-7084-0" rel="nofollow">"Useful junk: Alu RNAs in the human transcriptome"</a>.</p> <p>And it says that alu repeats embedded in 5'UTR makes stable secondary structure that prevents the assembly of ribosomal subunits, thus blocking the process of translation. I've been reading various articles to find out why Alu repeats form this secondary structure and why only in the 5'UTR and not in the 3'UTR. But haven't been able to find an answer.</p>
<p>Single-stranded RNA can easily form secondary structures; very important examples of this are tRNAs. </p> <p>From a quick look at the article you read, it seems that these Alu repeats work similarly to riboswitches in the 5'-UTR. They can form secondary structures that block translation initiation; riboswitches commonly do this by hiding the Shine-Dalgarno sequence in a stable helix.</p> <p>This kind of regulation of translation can obviously only work in the 5'-UTR, as that is the place where translation starts. The 3'-UTR can't affect translation in this way, as translation starts at the other end of the mRNA.</p> <p>The Alu repeats can still form secondary structures in the 3'-UTR, and the article suggests that those might have an effect on mRNA stability.</p>
11
gene expression
Which is more important for protein expression: mRNA structure or codon optimization?
https://biology.stackexchange.com/questions/1152/which-is-more-important-for-protein-expression-mrna-structure-or-codon-optimizat
<p>The field seems extremely divided on the debate. On one hand, artificial experiments have suggested that synonymous mutations don't correlate with gene expression but rather, the mRNA 5' structure is the most important <a href="http://www.ncbi.nlm.nih.gov/pubmed/19359587">1</a>. On the other hand, genome wide analysis suggests that tRNA biases are better associated with high expression <a href="http://www.ncbi.nlm.nih.gov/pubmed/20403328">2</a>. What other works balance out this discussion?</p> <ol> <li><a href="http://www.ncbi.nlm.nih.gov/pubmed/19359587">Coding-sequence determinants of gene expression in Escherichia coli</a></li> <li><a href="http://www.pnas.org/content/107/8/3645.short">Translation efficiency is determined by both codon bias and folding energy</a></li> <li><a href="http://www.ncbi.nlm.nih.gov/pubmed/20403328">An evolutionarily conserved mechanism for controlling the efficiency of protein translation</a></li> </ol>
<p>This is an excellent question! To my knowledge, there hasn't been a definite answer yet. Recently, I did tons of research on which factors influence protein expression and you should definitely check out the following questions which I asked: </p> <ol> <li><p><a href="https://biology.stackexchange.com/q/1/28">What is the criticality of the ribosome binding site relative to the start codon in prokaryotic translation?</a></p></li> <li><p><a href="https://biology.stackexchange.com/q/166/28">What determines a successful protein expression in E. coli?</a></p></li> </ol> <p>I answered my second question by posting a concise version of <a href="http://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=6&amp;ved=0CFoQFjAF&amp;url=http://mpec.ucsf.edu/pdfs_new/Pubs_77.pdf&amp;ei=xkMHT-XmKqbdiAKv4JSHCQ&amp;usg=AFQjCNFcrXypViPECyKyeUvRM2JmZtidGA&amp;sig2=0BtAug3bPEHhzZDqzuFUjA" rel="nofollow noreferrer">a paper</a> from DNA2.0 (a gene synthesis company) that discusses protein expression. It summarizes all the important factors (RBS site, codon frequency match, 5' mRNA secondary structure), but it doesn't discuss the extent to which they influence expression. </p> <p>There is a Nature paper from the Voigt lab about the <a href="http://www.nature.com/nbt/journal/v27/n10/full/nbt.1568.html" rel="nofollow noreferrer">RBS calculator</a>, which takes into account only the 5' mRNA secondary structure and doesn't discuss codon optimization at all. The calculator was good, but far from perfect: there is a 50% chance of predicting an expression level within a 2-fold range of the target one.</p> <p>On the other hand, an <a href="http://www.ncbi.nlm.nih.gov/pubmed/16756672" rel="nofollow noreferrer">older paper from DNA2.0</a> argues that the mRNA is covered with ribosomes practically all the time, so the secondary structure of the mRNA should have little effect. Besides, an actively translating ribosome can break up stem-loop structures.</p> <p>In my opinion, we are still far off from predicting RNA secondary structure formation, although it might be highly implicated in protein expression. From what I read, I think it is equally crucial that there is no strong secondary structure at the 5' end of the mRNA, that the RBS sequence is close to the consensus one and is appropriately spaced upstream of the start codon, and that the first 60-100 codon frequencies match the codon frequencies of your heterologous host. </p>
12
gene expression
Determining potential protease sites within a recombinant protein
https://biology.stackexchange.com/questions/1176/determining-potential-protease-sites-within-a-recombinant-protein
<p>My expressed proteins are frequently truncated and I'm trying to figure out which bands are which. The first thing that comes to mind is using PeptideCutter from ExPASy, but there is just a data deluge of potential sites. I was curious what other strategies exist for determining the potential breaks, aside from using LC-MS.</p>
<p>LC-MS is certainly quantitative and will give you a definitive answer, but it is costly and requires access to such a machine.</p> <p>I presume you're analyzing your protein based on western blotting.</p> <p>The first thing you should always do is verify that your DNA sequence codes for the protein product you want. Once you're sure of this, the western blot will give you an indication as to where your protein is (approximately) being cleaved. Say your protein has a predicted size of 40 kDa and you see a band at 20 kDa; then your cleavage is somewhere in the middle of your sequence. There are a TON of proteases that could potentially be cleaving your peptide and you need a hypothesis as to which it could be. Your peptide could be cleaved by an endopeptidase (cutting within), a carboxypeptidase (C-terminal cleavage), or an aminopeptidase (N-terminal cleavage). To go back to the 20/40 kDa example, it could be that your peptide was cleaved N-terminally or C-terminally down to a size of 20 kDa, or that it was literally cleaved in the middle by an endopeptidase.</p> <p>Something you may consider is the use of a general protease inhibitor cocktail (Roche makes a really good tablet product called Complete and Complete mini tab). These mixes have a bunch of general protease inhibitors which will stop most cleavage events. If you still see cleavage after using an inhibitor cocktail, you can reasonably expect that you are not inhibiting the protease with the tablet (thus narrowing down your search) and then you can better comb through ExPASy's PeptideCutter data. You should also do a literature search to see what has been reported about your peptide or motif.</p>
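To narrow down PeptideCutter's output, it can also help to script just the one or two protease rules you actually suspect. Below is a minimal sketch of the textbook trypsin rule (cleaves C-terminal to K or R, but not when the next residue is P) plus a crude band-size estimate; the example sequence and the average-residue-mass shortcut are illustrative assumptions, not part of the original answer:

```python
def trypsin_sites(seq):
    """1-based positions after which trypsin is expected to cleave:
    C-terminal to K or R, unless the next residue is P (textbook rule,
    as used by tools like ExPASy PeptideCutter; real digests have
    further exceptions)."""
    return [i + 1
            for i, aa in enumerate(seq[:-1])   # never cut after the end
            if aa in 'KR' and seq[i + 1] != 'P']

def fragment_sizes_kda(seq, sites, avg_residue_da=110.0):
    """Very rough fragment masses in kDa (average residue ~110 Da) --
    just enough to match candidate cuts against western-blot bands."""
    bounds = [0] + list(sites) + [len(seq)]
    return [(b - a) * avg_residue_da / 1000.0
            for a, b in zip(bounds, bounds[1:])]

peptide = 'MKWVTFRPSLLLK'           # hypothetical 13-mer
sites = trypsin_sites(peptide)
print(sites)                         # [2]: R7 is followed by P, so no cut there
print(fragment_sizes_kda(peptide, sites))
```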
13
gene expression
Time from stimulus to gene expression
https://biology.stackexchange.com/questions/291/time-from-stimulus-to-gene-expression
<p>My understanding is that gene expression, in response to some stimulus, generally occurs on the order of minutes. I'm curious about the extremes...the quickest and the slowest cases.</p> <p>What is(are) the fastest time(s) recorded for genes being expressed in response to a stimulus? What are the slowest times?</p>
<p>The fastest I know of is the heat shock locus in <em>Drosophila</em>. The transcription factor (HSF) accumulates within about 30 seconds, and RNA Polymerase can be seen to start accumulating within 3 minutes.</p> <p>Katie L. Zobeck, Martin S. Buckley, Warren R. Zipfel, John T. Lis. 2010. <a href="http://www.sciencedirect.com/science?_ob=MiamiImageURL&amp;_cid=272198&amp;_user=10&amp;_pii=S1097276510008889&amp;_check=y&amp;_coverDate=2010-12-22&amp;view=c&amp;_gw=y&amp;wchp=dGLzVlt-zSkWA&amp;md5=8be52df8fda06cadd3072a46310e2659/1-s2.0-S1097276510008889-main.pdf">Recruitment Timing and Dynamics of Transcription Factors at the Hsp70 Loci in Living Cells</a>. <em>Molecular Cell</em> 40(6): 965-975</p>
14
gene expression
Random X-Inactivation and Duchenne
https://biology.stackexchange.com/questions/111596/random-x-inactivation-and-duchenne
<p>I'm reading about X-inactivation and I can't reconcile some things with it being truly random. Duchenne's is expressed in only a small percentage of female carriers. But if inactivation were truly random, wouldn't 50% of female carriers express the disease? As one of the X chromosomes is silenced at random, this seems sort of contradictory. Then, in females that do express the disease phenotype, it's because of non-random X-inactivation. For me, this sounds like it's the other way around, so clearly I'm not up to speed and am in need of a bit of clarification on the subject.</p>
<p>Duchenne muscular dystrophy (DMD) is caused by the body's inability to make the protein dystrophin, which is needed for proper muscle function (<a href="https://www.genome.gov/Genetic-Disorders/Duchenne-Muscular-Dystrophy#:%7E:text=A%20woman%20who%20has%20a%20genetic%20change%20in,genetic%20material%20unless%20they%20have%20a%20family%20history." rel="nofollow noreferrer">1</a>, <a href="https://www.hopkinsmedicine.org/health/conditions-and-diseases/duchenne-muscular-dystrophy" rel="nofollow noreferrer">2</a>). In an individual carrier, half of the cells that would normally make dystrophin cannot, but the other half still can. This leads to two explanations (AFAIK) for why carriers do not express DMD:</p> <ol> <li>The cells that can make dystrophin make twice as much, so the total amount produced is the same as in a normal individual.</li> <li>Only half as much dystrophin is produced, but that is enough to prevent symptoms from manifesting.</li> </ol> <p>This is actually the stock explanation for why female carriers exist for X-linked recessive diseases. You can substitute 'DMD' and 'dystrophin' with another X-linked recessive disease and the associated protein (e.g. 'hemophilia' and 'factor 8 or 9') and it would still be <em>mostly</em> true.</p> <p>I say mostly true because DMD is actually unusual: its carriers do sometimes exhibit symptoms (<a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1399-0004.1974.tb01694.x" rel="nofollow noreferrer">3</a>, <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1399-0004.1981.tb01799.x" rel="nofollow noreferrer">4</a>). Different studies give different rates for how often this happens, but the <a href="https://www.genome.gov/Genetic-Disorders/Duchenne-Muscular-Dystrophy#:%7E:text=A%20woman%20who%20has%20a%20genetic%20change%20in,genetic%20material%20unless%20they%20have%20a%20family%20history." rel="nofollow noreferrer">NIH</a> says it is about 20%.</p> <p>EDIT: I am still a little unclear on what you're asking, but I think you are confused about why any carriers of Duchenne express the disease. The answer to that is a phenomenon called skewed X-inactivation, which is when either the paternal or maternal X chromosome is inactivated and turned into a Barr body more often than the other. This can happen for <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2147673/" rel="nofollow noreferrer">2 reasons</a>:</p> <ol> <li>Primary: When Barr bodies are forming, the paternal or maternal chromosome is selected more than 50% of the time.</li> <li>Secondary: After Barr bodies form, cells with the paternal chromosome active reproduce less than cells with the maternal chromosome active, or vice versa.</li> </ol> <p>Both types of skewing can happen for genetic causes (e.g. the <a href="https://www.cambridge.org/core/journals/genetics-research/article/evidence-of-nonrandom-x-chromosome-activity-in-the-mouse/F2DC9060C92503DF90A138D75C385209" rel="nofollow noreferrer"><em>Xce</em> locus</a> in mice can cause primary skewing) or stochastic causes (i.e. by <a href="https://www.nature.com/articles/s41431-018-0291-3" rel="nofollow noreferrer">pure chance</a> some people will have unlikely things happen to them).</p> <p>I spent the last few hours reading up on which type happens specifically during Duchenne and I am still not 100% sure. What I am confident about is that carriers of Duchenne can have a <a href="https://www.genome.gov/genetics-glossary/Translocation" rel="nofollow noreferrer">translocation</a>, which causes the <a href="https://www.nature.com/articles/s41572-021-00248-3" rel="nofollow noreferrer">skewed X-inactivation</a> and therefore symptoms in carriers:</p> <blockquote> <p>A few cases of translocations involving DMD have been reported22; these translocations will cause DMD in both males and females, the latter owing to non-random X-inactivation of the unaffected X chromosome. In these cases, cells with inactivation of the mutated X chromosome (cells that could, in theory, produce dystrophin) are not viable owing to the inactivation effect of the chromosomal translocation on the autosome. Only the cells where the unaffected X chromosome is inactivated will be viable. However, these will not produce dystrophin owing to the chromosomal translocation affecting DMD. Therefore, females with these translocations are unable to produce any dystrophin.</p> </blockquote> <p>What I cannot figure out is whether the translocation happens on the chromosome with the functioning <em>DMD</em> gene or the chromosome with the defective <em>DMD</em> gene.</p>
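The "pure chance" component of skewing is easy to see in a toy simulation: treat each cell's X-inactivation as an independent coin flip and look at what fraction of cells can still make dystrophin. This is a deliberately simplified model of my own (real inactivation happens early, in a small progenitor pool, so real carriers show far more variance than this cell-by-cell version):

```python
import random

def dystrophin_capable_fraction(n_cells, p_functional_active=0.5, rng=None):
    """Fraction of a carrier's cells with the functional-DMD X active.

    Each cell independently keeps the functional X active with
    probability p_functional_active; values far from 0.5 model skewed
    X-inactivation. Toy model -- see caveat in the lead-in.
    """
    rng = rng or random.Random()
    capable = sum(rng.random() < p_functional_active for _ in range(n_cells))
    return capable / n_cells

rng = random.Random(42)
print(dystrophin_capable_fraction(100_000, 0.5, rng))  # ~0.5: typical carrier
print(dystrophin_capable_fraction(100_000, 0.1, rng))  # ~0.1: heavily skewed,
                                                       # symptoms become likely
```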
15
gene expression
Why are protein-coding regions rich in GC?
https://biology.stackexchange.com/questions/88062/why-are-protein-coding-regions-rich-in-gc
<p>I have been searching for an answer to this question and have some possible explanations, but I am not sure. GC regions are more stable, as there are 3 hydrogen bonds instead of the 2 with AT; however, I am not sure whether this would influence the number of protein-coding genes in GC regions. Protein-coding regions would have to be less condensed than noncoding regions so transcription factors could access them. Would a high GC content influence this as well? </p>
<p>Transcription factors generally bind to promoters, enhancers, silencers, and other regulatory regions that lie outside coding regions, though there are &quot;duons&quot; which code for amino acids and also bind TFs to regulatory effect. <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3967546/" rel="nofollow noreferrer">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3967546/</a></p> <p>One paper suggests GC content helps balance <a href="https://en.wikipedia.org/wiki/Genetic_recombination" rel="nofollow noreferrer">recombination</a> events with genetic stability, and so there may be an evolutionary benefit from organisms that undergo sexual reproduction having GC-rich coding regions. <a href="https://www.frontiersin.org/articles/10.3389/fpls.2016.01433/full" rel="nofollow noreferrer">https://www.frontiersin.org/articles/10.3389/fpls.2016.01433/full</a></p> <blockquote> <p>In eukaryotes, meiotic exchange of genetic information, or recombination, between homologous chromosomes is a critical step in generating genetic diversity required for adaptation. Recombination is also a crucial tool in plant improvement efforts. Local genome architecture is sculpted by the recombination process, and genome architecture, in turn, drives recombination. This interplay helps to create variability in genomic space, defining relatively stable and plastic genomic regions. This fluctuation in genomic stability is critical for balancing adaptation and stability on the phenotypic level.</p> <p>Recombination has direct implications for GC patterns and vice versa. GC content refers to the percentage of guanine and cytosine bases in a DNA sequence, as opposed to adenine and thymidine bases. 
There have been many studies substantiating the positive correlation between recombination and GC content (Ikemura and Wada, 1991; Eyre-Walker, 1993; Fullerton et al., 2001; Galtier et al., 2001; Marais et al., 2001; Duret and Arndt, 2008; Haudry et al., 2008; Escobar et al., 2010; Muyle et al., 2011). Crossovers have been found to be correlated with high GC content in rat, mouse, human, zebrafish, bee, and maize at a broad scale (Jensen-Seaman et al., 2004; Beye et al., 2006; Gore et al., 2009; Backstrom et al., 2010; Giraut et al., 2011), while other studies detected strong correlation only at a fine scale (∼5 kb for yeast, ∼15–128 kb for human) and rather weak correlation at a broad scale (∼30 kb for yeast, ∼1 Mb for human; Gerton et al., 2000; Myers et al., 2006; Marsolier-Kergoat and Yeramian, 2009).</p> </blockquote> <p>GC-richness may offer the ability for organisms to create offspring that evolve to changes to the environment and fend off parasites, while also being viable enough to procreate, themselves.</p>
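Since the correlation studies quoted above compare recombination against GC content measured over windows of various scales (~5 kb to ~1 Mb), here is a minimal sketch of that computation in pure Python. The sequences are toy examples invented for illustration, not from any real genome:

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence (case-insensitive)."""
    seq = seq.upper()
    if not seq:
        return 0.0
    return (seq.count("G") + seq.count("C")) / len(seq)

def gc_windows(seq, window):
    """GC content over consecutive non-overlapping windows, as used when
    correlating GC with recombination at a chosen scale."""
    return [gc_content(seq[i:i + window]) for i in range(0, len(seq), window)]

print(gc_content("ATGCGC"))        # 4 of 6 bases are G or C
print(gc_windows("GGGGAAAA", 4))   # one GC-rich window, one AT-rich window
```

Real analyses would run this over chromosome-scale sequences and correlate the per-window values with crossover maps.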
16
gene expression
Why can a gene lack of a binding site be expressed in skin cells?
https://biology.stackexchange.com/questions/100244/why-can-a-gene-lack-of-a-binding-site-be-expressed-in-skin-cells
<blockquote> <p><a href="https://i.sstatic.net/I4WtA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I4WtA.png" alt="enter image description here" /></a> In order for a specific gene to be expressed in the mammal’s cells, all of the gene’s binding sites must be bound by transcriptional activators. The mammal’s skin cells contain activators that bind to sites B, D, and E, while the mammal’s liver cells contain activators that bind to sites A, C, and E. <a href="https://www.khanacademy.org/science/ap-biology/gene-expression-and-regulation/regulation-of-gene-expression-and-cell-specialization/e/regulation-of-gene-expression-and-cell-specialization" rel="nofollow noreferrer">(From Khan academy)</a></p> </blockquote> <p>Why can both Gene 2 and Gene 4 be expressed in skin cells? I think Gene 4 can't since it doesn't have site E.</p>
<p>You have understood this the wrong way round: all of the binding sites <strong>present</strong> at a gene must be bound by an activator; it is not required that every activator a cell type contains has a site to bind. So when skin cells contain activators for B, D and E, they can activate genes 2 and 4 of your example, but not gene 3, since the activators for A and C are missing.</p> <p>So any gene containing some combination of only B, D and E can be activated in skin cells according to this model. The same is true for any combination of A, C and E in liver cells. The special case would be a gene having E as its only site, as this could be activated in both liver and skin cells.</p>
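The rule above (a gene is activatable when every site present at that gene has a matching activator in the cell) is just a subset test. A minimal sketch in Python, where the site combinations per gene are illustrative since the figure itself isn't reproduced here:

```python
def can_activate(gene_sites, cell_activators):
    """True if every binding site present at the gene is covered
    by an activator available in the cell."""
    return set(gene_sites) <= set(cell_activators)

skin_activators = {"B", "D", "E"}
liver_activators = {"A", "C", "E"}

print(can_activate({"B", "D"}, skin_activators))       # True
print(can_activate({"A", "B"}, skin_activators))       # False: no activator for A
print(can_activate({"E"}, skin_activators)
      and can_activate({"E"}, liver_activators))       # True in both cell types
```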
17
gene expression
How does a gene &quot;know&quot; what to change to?
https://biology.stackexchange.com/questions/66019/how-does-a-gene-know-what-to-change-to
<p>Excuse my ignorance but I've always been curious about this...</p> <p>For example, a frog is red, but it starts living in a green forest. Over time the frog becomes green to camouflage. But a gene can't see and I'm sure there's no mechanism for color info to be transmitted to individual genes from the brain. So how does a gene know to pick green over, say, blue?</p>
<p>Using your example, the gene doesn't know anything. Mutations cause some of the offspring of the red frog to turn green, some to turn blue, some to turn fluorescent yellow, and some stay red. Birds can't see the green ones as well as the others, so more green frogs survive and make more green frogs. The red frogs, the fluorescent yellow ones, the blue ones, mostly get eaten. After a few generations, almost all the frogs are green -- not because the gene knew anything, not because the mutations went in any direction, but because all the other changes were counterproductive and got eaten.</p> <p>The gene doesn't know anything. It's just a bunch of chemicals that randomly react with cosmic rays, chance, whatever. Most of the changes are irrelevant or actively bad, and the frog that's carrying those particular chemicals doesn't survive. But sometimes the change benefits the frog carrying the particular chemicals and then the frog sends those chemicals down to its progeny.</p> <p>Obviously this is hugely over-simplified. A short and simple intro to the basics of evolution is <a href="http://evolution.berkeley.edu/evolibrary/home.php" rel="noreferrer">Understanding Evolution</a>, by UC Berkeley.</p>
18
gene expression
Will a nucleic acid sequence deduced from a protein sequence be expressed from a plasmid?
https://biology.stackexchange.com/questions/94967/will-a-nucleic-acid-sequence-deduced-from-a-protein-sequence-be-expressed-from-a
<p>I have a fasta file containing the amino acid sequence of glycogenin-1: <a href="https://www.rcsb.org/fasta/entry/6EQJ" rel="nofollow noreferrer">https://www.rcsb.org/fasta/entry/6EQJ</a></p> <p>I want to create a plasmid that produces glycogenin-1.</p> <p>Is it possible to use the glycogenin-1 amino acid sequence to produce said plasmid? Or would one need the nucleic acid sequence to do so?</p>
<p>I lack the knowledge to fully answer your question, but I can give some potentially useful pieces of information:</p> <ul> <li><p>In theory you can easily reverse-translate a peptide sequence in silico to an unambiguous nucleotide sequence using the host organism's optimized codon usage. I can share a Python script for that.</p> </li> <li><p>You could also easily search for the 'real' nucleotide sequence in databases. For example, use BLAST to find the gene for your peptide sequence.</p> </li> <li><p>But beware! There are known cases of codon-optimized sequences causing bad protein folding or even (paradoxically) worse expression levels than the native sequence.</p> </li> <li><p>There is software that helps with the design of plasmids, also in light of the use of unique restriction sites, helping you decide which of your restriction enzymes are applicable.</p> </li> </ul>
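The reverse translation mentioned in the first bullet can be sketched in a few lines. The codon table below is a tiny illustrative subset with one arbitrarily chosen codon per amino acid; a real script would use the full table weighted by the host's codon-usage frequencies:

```python
# One codon per amino acid (illustrative subset only, not an authoritative
# codon-usage-optimized table)
CODON = {
    "M": "ATG", "A": "GCG", "G": "GGC", "K": "AAA",
    "L": "CTG", "S": "AGC", "*": "TAA",
}

def reverse_translate(peptide):
    """Back-translate a peptide into one unambiguous DNA sequence
    by always emitting the chosen codon for each residue."""
    return "".join(CODON[aa] for aa in peptide)

print(reverse_translate("MAK*"))  # ATGGCGAAATAA
```

Note that because the genetic code is degenerate, many DNA sequences encode the same peptide; this sketch simply picks one deterministically.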
19
gene expression
Do housekeeping genes vary between tissues
https://biology.stackexchange.com/questions/51800/do-housekeeping-genes-vary-between-tissues
<p>I am working on an RNA-Seq project, and I am aware that some researchers use housekeeping genes as a method of normalization. My project has several different tissues, and I was wondering if housekeeping gene expression is generally invariant across tissue type?</p>
<p>My experience is: Yes they can vary quite a lot (according to differences in the genetic profile of the cells) and you have to test this. As a primer I can recommend reading the articles listed below. These are mostly looking into this in the context of realtime PCR, but this should be valid for RNAseq as well. For some applications it is also a good idea to use more than one gene for normalization.</p> <p>References:</p> <ol> <li><a href="http://www.ncbi.nlm.nih.gov/pubmed/12184808" rel="nofollow">Accurate normalization of real-time quantitative RT-PCR data by geometric averaging of multiple internal control genes.</a></li> <li><a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1609175/" rel="nofollow">Selection of housekeeping genes for gene expression studies in human reticulocytes using real-time PCR</a></li> <li><a href="http://normalisation.gene-quantification.info/" rel="nofollow">Reference Genes / Housekeeping Genes</a></li> <li><a href="http://www.rna-seqblog.com/tag/housekeeping-gene/" rel="nofollow">Gene Ontology Based Housekeeping Gene Selection for RNA-seq Normalization</a></li> </ol>
20
gene expression
How many copies of RNA per cell are usually reached through overexpression in human cell-lines? (any technique)
https://biology.stackexchange.com/questions/96820/how-many-copies-of-rna-per-cell-are-usually-reached-through-overexpression-in-hu
<p>After 8 hours of online research I was unable to find any info at all.</p> <p>I was able to get some concrete copy numbers of DNA (e.g. plasmid) per cell after transfection for diverse transfection techniques, but I was unable to get any concrete copy numbers of RNAs that correspond to the transgene (for any overexpression technique).</p> <p>My main interest would be the HEK293T overexpression RNA levels using the viral plasmid, but concrete RNA copy numbers from any possible overexpression technique would be of interest (for human cells like HeLa, HEK, HepG2).</p> <p>Ideally I would need a reputable source, but 'gut-feeling comments' are welcomed as well.</p> <p>In the end I would like to compare overexpressed RNA concentrations to naturally extremely highly concentrated endogenous RNAs. Are these levels comparable, or are the (viral) transgene promoters that much more effective?</p> <p>EDIT: Please also mind that the nucleus is a strong barrier to foreign DNA. This might complicate the answer.</p>
21
gene expression
How are the various classes of E coli genes determined?
https://biology.stackexchange.com/questions/1758/how-are-the-various-classes-of-e-coli-genes-determined
<p>Looking at some more detailed <a href="http://www.faculty.ucr.edu/~mmaduro/codonusage/codontable.htm">codon usage tables</a>, genes may be further clustered into three gene classes: metabolic genes, genes highly expressed during exponential growth, and horizontal gene transfer. Looking at the original <a href="http://www.ncbi.nlm.nih.gov/pubmed/1762151">paper</a> by Médigue et al., they clustered the genes based on the CAI and then determined 3 classes by a variant of <a href="http://en.wikipedia.org/wiki/K-means_clustering">k-means</a>. Note that this is different from a class II gene, which is determined by the type of RNA polymerase used.</p> <p>How did they end up determining what the three classes are? It seems as if they made this generalization without any proteome data. Would the same genes be classified using protein expression data during exponential growth and stationary growth?</p>
<p>I read through the paper. The author starts by stating that, as of the time of writing, two different classes of codon usage profiles were known (or at least putatively so). All 782 unique CDS sequences used were subjected to a two-step classification method. In step one, each CDS was broken down into a 61-dimensional vector representing each of the 61 possible codons. A factorial cluster analysis (the categorical, multivariate equivalent of principal component analysis) was run on these vectors, condensing 61 dimensions down to 2. Now that the data complexity had been reduced to 2D, it was more manageable for a k-means algorithm to partition the data. In the end, the genes were clustered into 3 orthogonal groups (classes I, II and III, with 502, 191 and 89 CDS, respectively).</p> <p>Only after the authors clustered the gene set were they able to go back and look at the canonical definitions of each gene. It so happened, fortuitously, that each class of genes had a strong bias for subsets of cellular function (e.g., metabolism, protein biosynthesis, transport). They did not use proteome data, but they were able to define the role of a large number of these genes based on the body of literature at the time.</p>
22
gene expression
How extensive is CD47?
https://biology.stackexchange.com/questions/1901/how-extensive-is-cd47
<p>CD47 aka the "don't eat me" signal has recently been claimed <a href="http://www.pnas.org/content/early/2012/03/20/1121623109.short">to be expressed on all tumor cells</a>. This doesn't seem to corroborate with other cell-biology experiments. On what other cells is CD47 expressed?</p>
<p>I don't know how extensive. Let's run a simple data query and find out: Go to <a href="http://www.ncbi.nlm.nih.gov/geo/" rel="nofollow">GEO</a> at NCBI. In the "Gene profiles" window, type CD47, and hit enter to launch the query. At the top of the resulting page, use the link labeled "Limits" to restrict the 7000+ results to human by entering the term "human" in the "DataSet organism" window. So, there are 3700+ results. Look through them to get an idea of which cell types and under which conditions <em>CD47</em> is expressed.</p> <p>Let's try a second method. Go to the <a href="http://biogps.org" rel="nofollow">BioGPS</a> portal. Enter CD47 in the appropriate window and run the query. From the results, select the row with ID # 961 as this represents the human gene <em>CD47</em>. The resulting gene expression/activity chart for Hs (human) will show you in more general terms where <em>CD47</em> is expressed.</p>
23
gene expression
What exactly is meant by the expression &quot;differentially expressed&quot;?
https://biology.stackexchange.com/questions/2176/what-exactly-is-meant-by-the-expression-differentially-expressed
<p>As far as I've seen, this expression is almost always used in relation to gene expression profiling. Unfortunately, I have no background in this area. Can someone please explain this in layman terms?</p>
<p>Although each cell of your body essentially contains the same DNA and the same genes, cells in different tissues express (turn on) different genes under different conditions. Measuring differential gene expression involves looking at the amount of expression for a gene (or set of genes) in two contrasting scenarios. The contrast could be across different times, different tissues, different conditions, different related species, etc.</p> <p>When you say a gene is "differentially expressed", this is very context-specific. The phrase means nothing by itself, and it is only useful in terms of the applicable contrast. For example, the statement "gene A is differentially expressed" is uninformative, while the statement "gene A is differentially expressed in liver and muscle tissue" is descriptive--it tells you that liver tissues and muscle tissues have a significantly different level of gene A products. Often the terms "up-regulated" and "down-regulated" are also used to provide additional detail. In the context of the previous example, the statement "gene A is up-regulated in muscle tissues" tells you that the level of gene A products is higher in muscle tissues than in liver tissues.</p>
24
gene expression
T7 promoter leakiness
https://biology.stackexchange.com/questions/2758/t7-promoter-leakiness
<p>Can a gene be expressed under the T7 promoter in an E. coli strain (e.g. DH5 alpha), which does not have the T7 polymerase gene encoded in its genome? In other words, is T7 promoter leaky? </p> <p>To be more specific, how is it possible that a regular E. coli strain, which does not encode for the T7 polymerase, can grow on kan selective media if it was transformed with a plasmid that has the kanR gene under T7 promoter?</p>
<p>Apparently not: <a href="http://openwetware.org/wiki/E._coli_genotypes#High-Control.28tm.29_BL21.28DE3.29_.28Lucigen.29" rel="nofollow">leakiness can be controlled by tightly regulating the T7 polymerase with a tight promoter (in this case lacUV5)</a>. </p>
25
gene expression
A mathematician&#39;s confusion regarding parametric $t$ tests for gene expression data
https://biology.stackexchange.com/questions/5010/a-mathematicians-confusion-regarding-parametric-t-tests-for-gene-expression-d
<p>I'm a mathematician trying to test some things on gene expression data, and I'm thus skimming over various articles such as <a href="http://www.ncbi.nlm.nih.gov/pubmed?term=Sotiriou%20et.%20al.%2C%20Breast%20cancer%20classification%20and%20prognosis%20based%20on%20gene%20%20expression%20profiles%20from%20a%20population-based%20study." rel="nofollow">Sotiriou et. al.</a> to understand what is typically done with such data sets. Several things confuse me; in particular, a paragraph in <a href="http://www.ncbi.nlm.nih.gov/pubmed?term=Sotiriou%20et.%20al.%2C%20Breast%20cancer%20classification%20and%20prognosis%20based%20on%20gene%20%20expression%20profiles%20from%20a%20population-based%20study." rel="nofollow">Sotiriou et. al.</a> reads:</p> <p><em>"Clinical parameters such as ER status, [...] affect the behavior of breast cancers. We asked whether these clinical/pathologic characteristics were associated with differential gene expression. Parametric t tests identified 606 probe elements of 7,650 elements represented in our array that could segregate ER+ and ER- breast tumors (P &lt; 0.001)."</em></p> <p>As segregation of ER+/- based on gene expressions is one of several things I'm interested in attempting to achieve through novel methods, I have been trying to understand what precisely is meant with the above paragrah. To recap the article, there are 99 patients with 7,650 probe expression values, and one ER+/- value each. The article sets out to determine which of those 7,650 probes successfully segregate the dataset into ER+ and ER-.</p> <p>I've run the above paragraph by a nearby statistician, and he could not for the life of him figure out what was done, and had not even heard of such a thing as a "parametric t test". This leads me to suspect that the term is specific to biology, so I ask: what is meant? It is also unclear to me (and him) what the P-value means in this context.</p> <p>I hope the scope of this question isn't too broad. 
Of course I want to avoid asking "explain this article to me, the outsider, please"; I do believe the paragraph above is relatively self-contained in the context of gene expression.</p> <p>References:</p> <ol> <li><a href="http://www.ncbi.nlm.nih.gov/pubmed?term=Sotiriou%20et.%20al.%2C%20Breast%20cancer%20classification%20and%20prognosis%20based%20on%20gene%20%20expression%20profiles%20from%20a%20population-based%20study." rel="nofollow">Sotiriou et. al., Breast cancer classification and prognosis based on gene expression profiles from a population-based study.</a></li> </ol>
<p>I understand this in the following way:</p> <p>For each probe you have two sets of measurements, one for ER+ and one for ER-. What you do is a t test ("parametric" just emphasizes that the t test is a parametric test) on these two sets, testing whether their means are significantly different (they refer to this as "separated"). You repeat this test for all 7,650 probes, and you get a set of 7,650 p-values. You then do some multiple testing correction, such as a Bonferroni correction (I haven't checked in the paper whether they did, but they obviously should). Finally, they find that 606 of the p-values are significant (for some choice of threshold), suggesting that the corresponding probes can "separate" ER+ from ER-.</p> <p>As a computational biologist I would advise you to look specifically at bioinformatics papers if you are looking into developing new methods, since the analysis in "pure biology" papers can often be lacking and would not give you a good perspective on state-of-the-art analysis methods. Specifically for the question of separating groups from gene expression you should look into the field of Machine Learning, as it has been widely applied to this problem.</p>
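The per-probe test described above can be sketched without any statistics library. Welch's form of the two-sample t statistic is shown (the paper may have used the equal-variance form; this is one reasonable choice), and the expression values below are invented for illustration:

```python
from statistics import mean, variance  # variance() is the sample (n-1) variance

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    return (mean(a) - mean(b)) / ((variance(a) / len(a) + variance(b) / len(b)) ** 0.5)

# Hypothetical expression measurements for one probe
er_pos = [5.1, 5.3, 4.9, 5.2]
er_neg = [3.0, 3.2, 2.9, 3.1]
print(round(welch_t(er_pos, er_neg), 2))
```

In the paper's setting this statistic (and its p-value) would be computed once per probe, giving the 7,650 p-values that then need multiple-testing correction.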
26
gene expression
Confusion related to the DAVID tool
https://biology.stackexchange.com/questions/7299/confusion-related-to-the-david-tool
<p>I am trying to use the DAVID tool to do some gene analysis. I have some probe set intensities for some cancer cell lines. I found this link in the DAVID tool: <a href="http://david.abcc.ncifcrf.gov/tools.jsp" rel="nofollow">http://david.abcc.ncifcrf.gov/tools.jsp</a>. I am a bit confused by the terminology introduced here. It says "gene list" for the probe sets. Why is that? I mean, in the example you can see probe sets like</p> <p>1007_s_at 1053_at 117_at 121_at 1255_g_at 1294_at 1316_at 1320_at 1405_i_at 1431_at 1438_at 1487_at 1494_f_at 1598_g_at</p> <p>But why are they called a gene list and not probe sets?</p>
<p>It's a bit confusing, but DAVID uses the term "Gene List" as a generic term. </p> <p>Looking at Step 2, you can submit many kinds of lists to DAVID, including actual gene symbols, Ensembl or RefSeq accessions, etc... actually nearly 30 kinds of terms, including 'not sure', which probably looks at your list and tries to guess.</p> <p>Affymetrix or Illumina probe set IDs are each designed to measure a gene, ideally, though it's not precisely a one-probe-set-to-one-gene relationship. This is because when the array is designed there may be partial transcript RNA records which turn out later to be parts of a single gene. There are also probe sets which may turn out to hybridize to similar sequences in more than one gene. </p> <p>It's messy, but it's also true that often more than one gene symbol will appear for the same gene because of historical naming conventions...</p>
27
gene expression
What is benjamini
https://biology.stackexchange.com/questions/7354/what-is-benjamini
<p>I was doing some gene expression analysis using this tool: <a href="http://david.abcc.ncifcrf.gov/summary.jsp" rel="nofollow">http://david.abcc.ncifcrf.gov/summary.jsp</a>. However, I am confused about what Benjamini is. I fed it a gene list and it gave me some potential pathways the genes belong to (KEGG pathways). However, I am unsure what this column called Benjamini denotes. Any suggestions?</p>
<p>It is a <a href="http://udel.edu/~mcdonald/statmultcomp.html" rel="nofollow">Benjamini-Hochberg q-value,</a> similar to a p-value corrected for multiple hypothesis testing using the false discovery rate.</p>
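As a concrete illustration of how that adjustment works, here is a minimal Benjamini-Hochberg implementation in Python (the p-values are made up; DAVID's own computation may differ in detail):

```python
def bh_qvalues(pvals):
    """Benjamini-Hochberg adjusted p-values: for the i-th smallest p-value,
    q = min over j >= i of (m * p_(j) / j), which keeps the q-values monotone."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    q = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):          # walk from the largest p-value down
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        q[i] = running_min
    return q

print(bh_qvalues([0.01, 0.04, 0.03, 0.005]))
```

A gene set is then called significant when its q-value falls below the chosen false discovery rate (often 0.05).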
28
gene expression
Complexity in creating transgenic animals (e.g., mice)
https://biology.stackexchange.com/questions/7619/complexity-in-creating-transgenic-animals-e-g-mice
<p>Many papers I have seen describing transgenic rodent models (and presumably applicable to other model organisms) involve the knock-in, or modification to, a single gene, possibly two genes. With respect to recombineering techniques, what prevents targeting multiple genes in a single organism? For instance, if I wanted to simultaneously knock-in some genes and knock-out others within the same mouse, would I be forced to generate individually modified transgenic lines and then do some "fancy" breeding to generate the multiple-modified mice?</p>
<p>One reason is the low likelihood of success. Modifying a gene almost always involves a recombination event of plasmid DNA with a target site in the genome (and I say almost just because there may be some method that I don't know about, but all the ones I'm familiar with do). The likelihood of that decreases <strong>exponentially</strong> with the number of genes you're trying to modify. If you're trying to make several mutants of individual genes the likelihood of success decreases only linearly. </p> <p>Another reason is having more knowledge and experimental power. You can learn little from a double mutant if you don't also have the individual mutants to compare. In fact, most reviewers would ask for individual mutant data if you've made a double mutant in your paper. This is especially true with flies and worms, as crosses take less time with them.</p> <p>Also, the more mutant genes you have, the weaker the animal. Your mutants may not be viable at all with too many mutations.</p>
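The exponential-versus-linear point above can be made concrete with a toy calculation. The per-gene success rate of 1 in 10^6 cells is invented purely for illustration:

```python
p = 1e-6  # hypothetical probability that one targeting event succeeds in a cell

# Modifying n genes simultaneously in the same cell: probability p ** n
simultaneous = [p ** n for n in (1, 2, 3)]   # shrinks exponentially with n

# Making n single-gene lines separately and then crossing them:
# roughly n independent attempts, so effort grows only linearly with n
sequential = [n / p for n in (1, 2, 3)]

print(simultaneous)
print(sequential)
```

With these numbers, a double simultaneous modification already requires on the order of 10^12 cells, while two separate single-gene lines need only about twice the effort of one.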
29
gene expression
Known MicroRNA - Gene Systems?
https://biology.stackexchange.com/questions/9703/known-microrna-gene-systems
<p>Have there been any experimentally-verified systems of microRNAs targeting a gene set (e.g., in cancer, perhaps)?</p>
<p>Yes. Below is a link to a review of ncRNAs (non-coding RNAs) and their role in disease. There are many examples in this review in all sorts of diseases, one of which is miR-200, which is thought to play a role in some cancers. </p> <p>There are also some tables in the paper that list the miRNAs and the diseases they are linked to. They also have a reference for each one, so you could read more about a particular miRNA and its function, including the gene it regulates.</p> <p><a href="http://www.nature.com/nrg/journal/v12/n12/full/nrg3074.html" rel="nofollow">http://www.nature.com/nrg/journal/v12/n12/full/nrg3074.html</a></p> <p>I'm not sure if that article is open to the public or not. If you can't access it, you could always check out the Wikipedia entry for miR-200:</p> <p><a href="http://en.wikipedia.org/wiki/Mir-200" rel="nofollow">http://en.wikipedia.org/wiki/Mir-200</a></p>
30
gene expression
Expression of bidirectional promoters
https://biology.stackexchange.com/questions/9783/expression-of-bidirectional-promoters
<p>How are bidirectional promoters expressed? (Won't RNA Pol have to go in the 3'-5' direction?) Why are they more commonly found in eukaryotes than in prokaryotes?</p>
<p>Genes controlled by bidirectional promoters are in head-to-head configurations, meaning that their 5' ends are facing one another. Remember that DNA is double stranded, so this means one gene is on the 'top' strand and one gene is on the 'bottom' strand. Check out the diagram below; genes are in capitals, bidirectional promoter in parentheses. Both genes are transcribed 5'->3'.</p> <pre><code>                                        --&gt; Gene 1
5'-atgcagtcatga(ctgactaagt...tcagtcatga)CTGACTGACTAGTCAT-3'
   |||||||||||| ||||||||||||||||||||||| ||||||||||||||||
3'-TACGTCAGTACT(gactgattca...agtcagtact)gactgactgatcagta-5'
        Gene 2 &lt;--
</code></pre> <p>As for why they are more commonly found in eukaryotes, those questions are definitely hard to answer. It has been shown that genes on bidirectional promoters are expressed together more often than two genes that are next to each other on different promoters. That means this might be a eukaryotic strategy similar to prokaryotes' usage of polycistronic mRNA, an easy way to have similar expression patterns for genes that should often be expressed in similar amounts.</p>
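Since both genes are read 5'->3' but sit on opposite strands, the bottom-strand gene corresponds to the reverse complement of the sequence as written for the top strand. A minimal sketch, using the left end of the diagram's top strand:

```python
_COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq):
    """Return the 5'->3' sequence of the opposite strand."""
    return seq.translate(_COMPLEMENT)[::-1]

top_left = "atgcagtcatga"                # left end of the top strand above
print(reverse_complement(top_left))      # how that region reads 5'->3' on the bottom strand
```

This is why RNA polymerase never has to run 3'-5': each gene is simply transcribed 5'->3' off its own template strand.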
31
gene expression
The reason why researchers usually use cell lines from &quot;blast cells&quot;?
https://biology.stackexchange.com/questions/13740/the-reason-why-researchers-usually-use-cell-lines-from-blast-cells
<p>What's the reason why researchers usually use cell lines derived from "blast" cells (immature cells, like lymphoblastoid cells) for measuring gene expression data? Is it that they are still proliferating, which would make their expression data higher and more significant?</p>
<p>Some of the reasons why immature blast cells are studied:</p> <ol> <li>They have self-renewal ability. </li> <li>They can be differentiated into different types of cells.</li> <li>They serve as a model for studying development: this is quite pertinent to gene expression studies, because it is important to know what changes take place during differentiation, and simply looking at the differentiated state won't explain that process. </li> </ol>
32
gene expression
Punnett Square Help
https://biology.stackexchange.com/questions/13825/punnett-square-help
<p>In fruit flies, red eyes are dominant over white eyes. Show a cross between two white-eye fruit flies. </p> <p>My question is...</p> <p>How do I know if the white-eye fruit flies are homozygous or heterozygous?</p>
<p>Just to add an extended perspective to all of the answers submitted so far. </p> <p>The classic white-eye phenotype in <em>Drosophila</em> is associated with a gene, <em>white</em> (or <em>w</em>) that is carried on the X chromosome (females XX, males XY) i.e. it is a sex-linked phenotype. Male white-eyed flies are therefore technically not homozygous, they are hemizygous since they only have one X chromosome and thus one copy of the gene.</p> <p>There are other white-eyed phenotypes due to segregation of two genes (brown and scarlet) - this is the standard two-factor cross example in many genetics courses. In this case (no sex linkage is involved) a white-eyed fly is homozygous for the recessive allele at both loci.</p>
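To make the sex-linked bookkeeping concrete, here is a minimal Punnett-square sketch of the cross the question actually asks about: two white-eyed parents, i.e. an Xw Xw female by an Xw Y hemizygous male (the genotype notation is informal):

```python
from itertools import product

def punnett(maternal_gametes, paternal_gametes):
    """All offspring genotypes from one gamete of each parent."""
    return [m + f for m, f in product(maternal_gametes, paternal_gametes)]

# White-eyed female contributes only Xw gametes; white-eyed male gives Xw or Y
offspring = punnett(["Xw", "Xw"], ["Xw", "Y"])
print(offspring)   # every offspring carries only the w allele, so all are white-eyed
```

Swapping in gametes for the two-locus (brown and scarlet) case works the same way, just with longer gamete strings.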
33