| category | title | question_link | question_body | answer_html | __index_level_0__ |
|---|---|---|---|---|---|
implement quantization
|
How to change this javascript code to show a modified image?
|
https://stackoverflow.com/questions/42734830/how-to-change-this-javascript-code-to-show-a-modified-image
|
<p>I am playing with image color quantization algorithms. I have found this link</p>
<p><a href="https://github.com/lokesh/color-thief" rel="nofollow noreferrer">Color Thief</a></p>
<p>where a javascript (a language that I have never studied) implementation of a modified median cut algorithm is presented. But the demo shows just the ten dominant colors.</p>
<p>I would like to see what the quantized image looks like. In the file <code>src/color-thief.js</code> (line 132), there is a call to the quantization function. This function returns a <code>CMap</code> object, which can be used to extract the dominant colors and to map a color to the best one in the reduced color palette. Given the CMap object, how do I modify the original image and show it?</p>
<h2>Edit 1</h2>
<p>This question is not about color quantization algorithms, but about what I need to change in the Color Thief project in order to show a modified image. Suppose that, when I click the 'Click' button, I want to show the image after adding the value 10 to all of its pixels.</p>
|
<p>Here, I made this just for you.</p>
<p>I assume you want to <em>simplify</em> colors of an image to its palette provided by <code>color-thief</code>.</p>
<p>To achieve this I used <a href="https://github.com/lokesh/color-thief" rel="nofollow noreferrer">color-thief</a> and <a href="https://github.com/dtao/nearest-color" rel="nofollow noreferrer">nearest-color</a>.</p>
<p>Basically, you want to generate color-thief's palette, then loop over each pixel and get the closest palette value for the color of that pixel.</p>
<p>Check out this solution on <a href="http://codepen.io/anon/pen/QpvGjr" rel="nofollow noreferrer">codepen</a>, code below:</p>
<pre><code>/* rgbToHex() and leadingZero() functions ripped from nearest-color. */
function rgbToHex(rgb) {
  return '#' + leadingZero(rgb.r.toString(16)) +
    leadingZero(rgb.g.toString(16)) + leadingZero(rgb.b.toString(16));
}

function leadingZero(value) {
  if (value.length === 1) value = '0' + value;
  return value;
}

/* Initialize the image and the canvas. */
var img = document.getElementById("img");
var canvas = document.getElementById("canvas");

/* When the image is loaded */
img.onload = function(){
  /* Initialize color-thief and get a palette from the image. */
  var colorthief = new ColorThief();
  var colorthief_palette = colorthief.getPalette(img, 8);
  var palette = {};

  /* Turn the color-thief palette into a nearest-color-compatible palette. */
  for(var i = 0; i < colorthief_palette.length; i++){
    var r = colorthief_palette[i][0];
    var g = colorthief_palette[i][1];
    var b = colorthief_palette[i][2];
    var o = {r: r, g: g, b: b};
    palette["color_" + i] = rgbToHex(o);
  }

  /* Initialize nearest-color. */
  var clr = nearestColor.from(palette);

  /* Initialize the canvas, draw the image data and hide the default image. */
  var ctx = canvas.getContext("2d");
  canvas.width = img.width;
  canvas.height = img.height;
  ctx.drawImage(img, 0, 0);
  img.style.display = "none";
  var data = ctx.getImageData(0, 0, canvas.width, canvas.height);
  var pixel = data.data;

  /* Loop over each pixel of the image. */
  for (var i = 0, n = pixel.length; i < n; i += 4) {
    var r = pixel[i+0];
    var g = pixel[i+1];
    var b = pixel[i+2];
    var o = {r: r, g: g, b: b};
    var color = rgbToHex(o);
    var nearest = clr(color);
    pixel[i+0] = nearest.rgb.r;
    pixel[i+1] = nearest.rgb.g;
    pixel[i+2] = nearest.rgb.b;
  }
  ctx.putImageData(data, 0, 0);
}
</code></pre>
<p>As a result, this image:</p>
<p><a href="https://i.sstatic.net/7qstL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Es3El.png" alt="enter image description here"></a></p>
<p>Becomes this image:
<a href="https://i.sstatic.net/7qstL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7qstL.png" alt="enter image description here"></a></p>
| 1,534
|
implement quantization
|
Quantizing object detection model
|
https://stackoverflow.com/questions/63771573/quantizing-object-detection-model
|
<pre><code>frozen_graph_file = # path to frozen graph (.pb file)
input_arrays = ["normalized_input_image_tensor"]
output_arrays = ['TFLite_Detection_PostProcess',
                 'TFLite_Detection_PostProcess:1',
                 'TFLite_Detection_PostProcess:2',
                 'TFLite_Detection_PostProcess:3']
input_shapes = {"normalized_input_image_tensor": [1, 300, 300, 3]}

converter = tf.lite.TFLiteConverter.from_frozen_graph(frozen_graph_file,
                                                      input_arrays=input_arrays,
                                                      output_arrays=output_arrays,
                                                      input_shapes=input_shapes)
converter.allow_custom_ops = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
with open(tflite_model_quant_file, "wb") as tflite_file:
    tflite_file.write(tflite_quant_model)
</code></pre>
<p>When quantizing a model, we usually feed it some calibration data to identify the range of the activations, and hence define the scale and zero point. This is done for tensor-wise quantization. How are the quantized values obtained for the object detection bounding box coordinates? Do they follow the same fashion?
In TensorFlow, custom ops are provided for the operations that cannot be quantized in the conventional way. Where can I find their detailed implementation, especially of TFLite_Detection_PostProcess?</p>
|
<p>The implementation for TFLite_Detection_PostProcess is in <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/kernels/detection_postprocess.cc" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/kernels/detection_postprocess.cc</a></p>
<p>Regardless of quantization, the output of TFLite_Detection_PostProcess is always float, so I don't think you need to care about it here.</p>
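As for the calibration question: TFLite-style affine quantization maps an observed float range to integers via a scale and zero point. A minimal sketch of that mapping (hypothetical helper names, not TFLite internals):

```python
import numpy as np

def compute_qparams(rmin, rmax, qmin=0, qmax=255):
    # Extend the range to include zero so that 0.0 is exactly representable,
    # as TFLite-style affine quantization requires (e.g. for zero-padding).
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = int(round(qmin - rmin / scale))
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=0, qmax=255):
    q = np.round(x / scale + zero_point)
    return np.clip(q, qmin, qmax).astype(np.uint8)

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

acts = np.array([-1.0, 0.0, 2.0, 3.0], dtype=np.float32)
scale, zp = compute_qparams(acts.min(), acts.max())
recovered = dequantize(quantize(acts, scale, zp), scale, zp)
```

The round trip recovers each value to within one scale step, which is the best an 8-bit affine grid can do.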
| 1,535
|
implement quantization
|
Lossless jpeg compression in opencv
|
https://stackoverflow.com/questions/55814800/lossless-jpeg-compression-in-opencv
|
<p>Is it possible to achieve lossless compression in opencv without using an API such as libjpeg? I want to modify the DCT coefficients and get the same values when reading the image again. </p>
<p>I've tried to implement it like this : 8x8 RGB pixel blocks -> YCrCb -> DCT -> Quantization -> modify some coefficient values -> deQuantization -> IDCT -> RGB -> save the image. The lossy part, I've noticed, is when I apply the inverse DCT and save.</p>
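Where the loss actually enters can be illustrated with a small numpy sketch (not OpenCV code; a toy flat quantization table is assumed here, whereas real JPEG tables vary per frequency):

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis: C @ block @ C.T is the 2-D DCT.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

C = dct_matrix()
Q = np.full((8, 8), 16.0)     # toy quantization table (illustrative only)
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(np.float64) - 128

coeffs = C @ block @ C.T              # forward 2-D DCT
quantized = np.round(coeffs / Q)      # the lossy step: rounding to multiples of Q
restored = C.T @ (quantized * Q) @ C  # dequantize + inverse DCT

# The DCT alone is invertible; quantization is what loses information.
lossless = C.T @ coeffs @ C
```

Running this shows the DCT/IDCT pair round-trips exactly (up to float precision), while the quantize/dequantize pair introduces a bounded error, which is why the coefficients change when the image is saved and re-read.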
| 1,536
|
|
implement quantization
|
TensorRT/TFlite sample implementation
|
https://stackoverflow.com/questions/56911455/tensorrt-tflite-sample-implementation
|
<p>Having a trained '.h5' Keras model file, I'm trying to optimize inference time:</p>
<p>Explored 2 options:</p>
<ol>
<li>Accelerated inference via TensorRT</li>
<li>'int8' Quantization.</li>
</ol>
<p>At this point I can convert the model file to TensorFlow protobuf '.pb' format, but as a side note, it also contains custom objects for a few layers.</p>
<p>I've seen a few articles on TensorRT conversion and TFLite conversion, but I can't seem to find a robust, readable implementation. Can someone explain how it's done (TFLite/Keras quantization or TensorRT) so the same model can be used for faster inference?</p>
<p>(Open for other suggestions to improve inference speed supported in TensorFlow and Keras)</p>
|
<p>This is the user guide on how to use TensorRT in TF: <a href="https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html" rel="nofollow noreferrer">https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html</a></p>
<p>This talk explains how TensorRT works in TF: <a href="https://developer.nvidia.com/gtc/2019/video/S9431" rel="nofollow noreferrer">https://developer.nvidia.com/gtc/2019/video/S9431</a></p>
<p>Note that TensorRT also supports INT8-quantization (during training or post-training).</p>
<p>This blog post also has kind of the same content: <a href="https://medium.com/tensorflow/high-performance-inference-with-tensorrt-integration-c4d78795fbfe" rel="nofollow noreferrer">https://medium.com/tensorflow/high-performance-inference-with-tensorrt-integration-c4d78795fbfe</a></p>
<p>This repository has a bunch of examples showing how to use it: <a href="https://github.com/tensorflow/tensorrt" rel="nofollow noreferrer">https://github.com/tensorflow/tensorrt</a></p>
| 1,537
|
implement quantization
|
How can I improve fixed-point data type utilization?
|
https://stackoverflow.com/questions/75937617/how-can-i-improve-fixed-point-data-type-utilization
|
<p>I'm trying to use quantization for a convolutional neural network in order to reduce its memory footprint, going from the FP32 data type to Int16. The problem is that I'm obtaining poor results, and since this is the first time I've used this kind of representation, I have some doubts about whether my implementation is correct.</p>
<p>First of all, I'm quantizing both the input data and the weights using the following functions (uniform quantization):</p>
<pre><code>#define FXP 16

int16_t quantize(float a, int fxp){
    int32_t maxVal = ((1 << (FXP-1)) - 1);
    float scaled = a * (1 << fxp); // mapping
    // rounding (must happen in float, before the integer conversion)
    if (a >= 0){
        scaled += 0.5f;
    } else {
        scaled -= 0.5f;
    }
    int32_t value = (int32_t)scaled;
    // clipping
    if (value > maxVal){
        return (int16_t)maxVal;
    } else if (value < -maxVal){
        return -(int16_t)maxVal;
    } else {
        return (int16_t)value;
    }
}

int16_t value = quantize(test_data[i], 10);
</code></pre>
<p>In this case I'm using the Q5.10 format (from the data I have, it seems the best format to use). Once all the numbers have been converted, arithmetic within the network (multiplications and sums/subtractions, used for example in convolutions) is implemented this way:</p>
<pre><code>for(int k = 0; k < output_fea; k++){
    int32_t accumulator = 0;
    for(int l = minimum; l < maximum; l++){
        for(int j = 0; j < input_fea; j++){
            // both data and weights arrays are int16_t
            accumulator += (data[l][j]*weights[k][l][j] + (1 << (FXP_VALUE-1))) >> FXP_VALUE;
        }
    }
    // saturate before going from int32_t to int16_t
    if(accumulator > INT16_MAX){
        accumulator = INT16_MAX;
    }else if(accumulator < INT16_MIN){
        accumulator = INT16_MIN;
    }
    result[i][k] = (int16_t)ReLU(accumulator); // result is int16_t
}
</code></pre>
<p>Is what I am doing correct? Are there any steps I could take to improve the results and reduce the approximation error?</p>
|
<p>You should check how much error is introduced into your values by rounding and clipping. Continue working with floating-point values, but introduce just rounding; then introduce just clipping; then introduce both. How much error is introduced in your results?</p>
<p>Also, regarding fixed-point format: even if it <em>seems</em> the best format to use, maybe it's not the best. Try different formats; check the error in results for each format. Try using different formats at different stages of calculation (i.e. at different layers). Each application has its own problems, so you have to gather intuition for how much rounding and clipping (separately) affect your results.</p>
<p>If your results are very sensitive to rounding errors, you might want to use <code>int16</code> for some stages and <code>float32</code> for others.</p>
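The suggested error measurement can be sketched like this (a numpy model of the question's Q5.10 round-and-saturate quantizer, not the C code itself):

```python
import numpy as np

FRAC_BITS = 10                 # Q5.10, as in the question
SCALE = 1 << FRAC_BITS
MAX_VAL = (1 << 15) - 1        # int16 positive limit

def to_q5_10(x):
    # Round-to-nearest, then saturate: mirrors the question's quantize().
    v = np.round(np.asarray(x, dtype=np.float64) * SCALE)
    return np.clip(v, -MAX_VAL, MAX_VAL).astype(np.int16)

def from_q5_10(q):
    return q.astype(np.float64) / SCALE

# Measure pure rounding error on data that does not clip.
x = np.random.default_rng(1).uniform(-4.0, 4.0, 1000)
err = from_q5_10(to_q5_10(x)) - x
# Rounding error alone is bounded by half an LSB: 1 / (2 * 2**10)
```

Comparing the error distribution per layer against the float results, as the answer suggests, tells you whether rounding or clipping dominates, and hence whether you need more fractional bits or more integer bits.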
| 1,538
|
implement quantization
|
How can I incorporate PReLU in a quantized model?
|
https://stackoverflow.com/questions/62891103/how-can-i-incorporate-prelu-in-a-quantized-model
|
<p>I'm trying to quantize a model which uses <code>PReLU</code>. Replacing <code>PReLU</code> with <code>ReLU</code> is not possible, as it drastically degrades the network's performance to the point of being useless.</p>
<p>As far as I know, <code>PReLU</code> is not supported in Pytorch when it comes to quantization. So I tried to rewrite this module manually and implement the multiplication and additions using <code>torch.FloatFunctional()</code> to get around this limitation.</p>
<p>This is what I have come up so far:</p>
<pre class="lang-py prettyprint-override"><code>class PReLU_Quantized(nn.Module):
    def __init__(self, prelu_object):
        super().__init__()
        self.weight = prelu_object.weight
        self.quantized_op = nn.quantized.FloatFunctional()
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, inputs):
        # inputs = torch.max(0, inputs) + self.weight * torch.min(0, inputs)
        self.weight = self.quant(self.weight)
        weight_min_res = self.quantized_op.mul(self.weight, torch.min(inputs)[0])
        inputs = self.quantized_op.add(torch.max(inputs)[0], weight_min_res).unsqueeze(0)
        self.weight = self.dequant(self.weight)
        return inputs
</code></pre>
<p>and for the replacement :</p>
<pre><code>class model(nn.Module):
    def __init__(self):
        super().__init__()
        ....
        self.prelu = PReLU()
        self.prelu_q = PReLU_Quantized(self.prelu)
        ....
</code></pre>
<p>Basically, I read the learned parameter of the existing prelu module and run the calculation myself in a new module. The module seems to be working, in the sense that it's not failing the whole application.</p>
<p>However, in order to assess whether my implementation is actually correct and yields the same result as the original module, I tried to test it.<br />
Here is a counterpart for normal models (i.e. not quantized model):<br />
For some reason, the error between the actual <code>PReLU</code> and my implementation is very large!</p>
<p>Here are sample diffs in different layers:</p>
<pre class="lang-py prettyprint-override"><code>diff : 1.1562038660049438
diff : 0.02868632599711418
diff : 0.3653906583786011
diff : 1.6100226640701294
diff : 0.8999372720718384
diff : 0.03773299604654312
diff : -0.5090572834014893
diff : 0.1654307246208191
diff : 1.161868691444397
diff : 0.026089997962117195
diff : 0.4205571115016937
diff : 1.5337920188903809
diff : 0.8799554705619812
diff : 0.03827812895178795
diff : -0.40296515822410583
diff : 0.15618863701820374
</code></pre>
<p>and the diff is calculated like this in the forward pass:</p>
<pre class="lang-py prettyprint-override"><code>def forward(self, x):
    residual = x
    out = self.bn0(x)
    out = self.conv1(out)
    out = self.bn1(out)
    out = self.prelu(out)
    out2 = self.prelu2(out)
    print(f'diff : {(out - out2).mean().item()}')
    out = self.conv2(out)
    ...
</code></pre>
<p>This is the normal implementation which I used on ordinary model (i.e. not quantized!) to asses whether it produces correct result and then move on to quantized version:</p>
<pre class="lang-py prettyprint-override"><code>class PReLU_2(nn.Module):
    def __init__(self, prelu_object):
        super().__init__()
        self.prelu_weight = prelu_object.weight
        self.weight = self.prelu_weight

    def forward(self, inputs):
        x = self.weight
        tmin, _ = torch.min(inputs, dim=0)
        tmax, _ = torch.max(inputs, dim=0)
        weight_min_res = torch.mul(x, tmin)
        inputs = torch.add(tmax, weight_min_res)
        inputs = inputs.unsqueeze(0)
        return inputs
</code></pre>
<p>What am I missing here?</p>
|
<p>I figured it out! I made a huge mistake at the very beginning. I needed to calculate</p>
PReLU(x) = max(0, x) + a * min(0, x)
</code></pre>
<p>or<br />
<a href="https://i.sstatic.net/OLWLW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OLWLW.png" alt="enter image description here" /></a><br />
and not the actual <code>torch.min</code> or <code>torch.max</code>, which doesn't make any sense here!
Here is the final solution for normal models (i.e. not quantized):</p>
<pre class="lang-py prettyprint-override"><code>class PReLU_2(nn.Module):
    def __init__(self, prelu_object):
        super().__init__()
        self.prelu_weight = prelu_object.weight
        self.weight = self.prelu_weight

    def forward(self, inputs):
        pos = torch.relu(inputs)
        neg = -self.weight * torch.relu(-inputs)
        inputs = pos + neg
        return inputs
</code></pre>
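The identity behind this rewrite, max(0, x) + a*min(0, x) = relu(x) - a*relu(-x), can be checked numerically with a small numpy sketch (independent of PyTorch):

```python
import numpy as np

def prelu_reference(x, a):
    # Element-wise PReLU definition: max(0, x) + a * min(0, x)
    return np.maximum(0, x) + a * np.minimum(0, x)

def prelu_via_relu(x, a):
    # The rewrite used above: relu(x) - a * relu(-x),
    # valid because min(0, x) == -relu(-x) element-wise.
    relu = lambda t: np.maximum(0, t)
    return relu(x) - a * relu(-x)

x = np.random.default_rng(0).normal(size=(4, 8))
a = 0.25
```

Both forms agree element-wise, which is why the corrected module matches <code>nn.PReLU</code> while the earlier tensor-wide <code>torch.min</code>/<code>torch.max</code> version could not.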
<p>and this is the quantized version :</p>
<pre class="lang-py prettyprint-override"><code>class PReLU_Quantized(nn.Module):
    def __init__(self, prelu_object):
        super().__init__()
        self.prelu_weight = prelu_object.weight
        self.weight = self.prelu_weight
        self.quantized_op = nn.quantized.FloatFunctional()
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, inputs):
        # inputs = max(0, inputs) + alpha * min(0, inputs)
        self.weight = self.quant(self.weight)
        weight_min_res = self.quantized_op.mul(-self.weight, torch.relu(-inputs))
        inputs = self.quantized_op.add(torch.relu(inputs), weight_min_res)
        inputs = self.dequant(inputs)
        self.weight = self.dequant(self.weight)
        return inputs
</code></pre>
<p>Side note:<br />
I also had a typo where I was calculating the diff :</p>
<pre class="lang-py prettyprint-override"><code> out = self.prelu(out)
out2 = self.prelu2(out)
print(f'diff : {( out - out2).mean().item()}')
out = self.conv2(out)
</code></pre>
<p>needs to be</p>
<pre class="lang-py prettyprint-override"><code> out1 = self.prelu(out)
out2 = self.prelu2(out)
print(f'diff : {( out1 - out2).mean().item()}')
out = self.conv2(out1)
</code></pre>
<h2>Update:</h2>
<p>In case you face issues in quantization, you may try this <a href="https://github.com/pytorch/pytorch/issues/41640#issuecomment-667344109" rel="nofollow noreferrer">version</a> :</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.quantized as nnq
from torch.quantization import fuse_modules

class QPReLU(nn.Module):
    def __init__(self, num_parameters=1, init: float = 0.25):
        super(QPReLU, self).__init__()
        self.num_parameters = num_parameters
        self.weight = nn.Parameter(torch.Tensor(num_parameters).fill_(init))
        self.relu1 = nn.ReLU()
        self.relu2 = nn.ReLU()
        self.f_mul_neg_one1 = nnq.FloatFunctional()
        self.f_mul_neg_one2 = nnq.FloatFunctional()
        self.f_mul_alpha = nnq.FloatFunctional()
        self.f_add = nnq.FloatFunctional()
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()
        self.quant2 = torch.quantization.QuantStub()
        self.quant3 = torch.quantization.QuantStub()
        # self.dequant2 = torch.quantization.QuantStub()
        self.neg_one = torch.Tensor([-1.0])

    def forward(self, x):
        x = self.quant(x)
        # PReLU, with modules only
        x1 = self.relu1(x)
        neg_one_q = self.quant2(self.neg_one)
        weight_q = self.quant3(self.weight)
        x2 = self.f_mul_alpha.mul(
            weight_q, self.f_mul_neg_one2.mul(
                self.relu2(
                    self.f_mul_neg_one1.mul(x, neg_one_q),
                ),
                neg_one_q)
        )
        x = self.f_add.add(x1, x2)
        x = self.dequant(x)
        return x

m1 = nn.PReLU()
m2 = QPReLU()

# check correctness in fp
for i in range(10):
    data = torch.randn(2, 2) * 1000
    assert torch.allclose(m1(data), m2(data))

# toy model
class M(nn.Module):
    def __init__(self):
        super(M, self).__init__()
        self.prelu = QPReLU()

    def forward(self, x):
        x = self.prelu(x)
        return x

# quantize it
m = M()
m.qconfig = torch.quantization.default_qconfig
torch.quantization.prepare(m, inplace=True)
# calibrate
m(torch.randn(4, 4))
# convert
torch.quantization.convert(m, inplace=True)
# run some data through
res = m(torch.randn(4, 4))
print(res)
</code></pre>
<p>and make sure to read the related notes <a href="https://github.com/pytorch/pytorch/issues/41640#issuecomment-668108514" rel="nofollow noreferrer">here</a></p>
| 1,539
|
implement quantization
|
Run quantized tensorflow model on FPGA / pure python
|
https://stackoverflow.com/questions/53420994/run-quantized-tensorflow-model-on-fpga-pure-python
|
<p>I have a model trained in keras which is a simple model trained on MNIST dataset.</p>
<p>What I am trying to do is rewrite this model and run it on an FPGA device.
In order to do this, I want to fully understand how a quantized model works.</p>
<p>First I converted this model with post training quantization to .tflite format and UINT8 precision (<a href="https://www.tensorflow.org/lite/performance/post_training_quantization" rel="nofollow noreferrer">https://www.tensorflow.org/lite/performance/post_training_quantization</a>).</p>
<p>So I have quantized model and accuracy is about 90%.</p>
<p>Now I try to get weights from quantized model and implement it in a pure python. I use this tool for visualization and to get model weights: <a href="https://github.com/lutzroeder/netron" rel="nofollow noreferrer">https://github.com/lutzroeder/netron</a>.</p>
<p>Although the simple Python version (matrix multiplication, adding the bias and ReLU) works with float weights, it doesn't work with the quantized weights.</p>
<p>So my question is how to write a feed forward using numpy?</p>
<p>My model in keras looks like this:</p>
<pre><code>model = Sequential()
model.add(Dense(512, input_shape=input_shape))
model.add(Activation(tf.nn.relu))
model.add(Dense(100))
model.add(Activation(tf.nn.relu))
model.add(Dense(num_classes))
model.add(Activation(tf.nn.softmax))

model.compile(
    optimizer=Adam(),
    loss='categorical_crossentropy',
    metrics=['accuracy'],
)
</code></pre>
<p>I converted it with TocoConverter. And it works in tensorflow.</p>
<p>Then I try to write feed forward in pure python:</p>
<pre><code>for img, label in zip(x_test, y_test):
    img = img.astype('uint8')
    total_seen += 1
    label = tf.keras.utils.to_categorical(label, num_classes=num_classes)
    X = img.reshape(1, 784)
    z1 = np.dot(X, W0.T) + b0
    a1 = relu(z1)
    z2 = np.dot(a1, W1.T) + b1
    a2 = relu(z2)
    z3 = np.dot(a2, W2.T) + b2
    prediction = np.argmax(z3)
    label = np.argmax(label)
    if prediction == label:
        num_correct += 1
</code></pre>
<p>But this model's accuracy is about 10%, so something is going wrong.
How can I correct this model?</p>
<p>Thanks in advance.</p>
<p>Edit:
I've read paper about quantization in tensorflow:
<a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Jacob_Quantization_and_Training_CVPR_2018_paper.pdf" rel="nofollow noreferrer">http://openaccess.thecvf.com/content_cvpr_2018/papers/Jacob_Quantization_and_Training_CVPR_2018_paper.pdf</a></p>
<p>And I know almost everything: I know what the S and Z values for the activations and kernels are. But after the matrix multiplication, the result should be multiplied by the factor M := S1*S2/S3.
And I don't know what the S3 scale is or how to get it, because I can't see anything related in the netron graph. Any suggestions?</p>
|
<p>There are two steps you'll need to do:</p>
<ol>
<li><p>Dequantize the input, weights and bias back into full precision (or an integer equivalent):</p>
<p><code>(w - w_offset) * w_scale</code></p></li>
<li><p>After the ReLU, quantize the activations back into integers:</p>
<p><code>a / a_scale + a_offset</code></p>
<p>You can probably skip step 2's quantize-dequantize of the activations, with a minor risk of getting a different result than the TFlite model. This is because ReLU has no upper bound, whereas TFlite will saturate activations to a maximum value.</p></li>
</ol>
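The two steps can be sketched as follows (the scale/offset values here are made up for illustration; the real ones come from the .tflite file, e.g. as shown by netron):

```python
import numpy as np

def dequantize(q, scale, offset):
    # Step 1: integer tensor back to float, per (w - w_offset) * w_scale
    return (q.astype(np.float32) - offset) * scale

def quantize_activation(a, scale, offset, qmax=255):
    # Step 2: float activation back to uint8 after the ReLU
    return np.clip(np.round(a / scale + offset), 0, qmax).astype(np.uint8)

# Toy dense layer with made-up quantization parameters.
rng = np.random.default_rng(0)
Wq = rng.integers(0, 256, (784, 512), dtype=np.uint8)
W = dequantize(Wq, scale=0.01, offset=128)
x = rng.uniform(0, 1, (1, 784)).astype(np.float32)
z = np.maximum(0, x @ W)                           # dense + ReLU in float
a_q = quantize_activation(z, scale=z.max() / 255 + 1e-8, offset=0)
```

Running the whole network this way (dequantize each layer's weights, compute in float, optionally re-quantize activations) should closely track the TFLite interpreter's output.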
<p>You can check out my tutorials on TFlite in <a href="http://www.github.com/soon-yau/QNN" rel="nofollow noreferrer">my Github</a> where I have introduced the concept and training and is about to write out about inference.</p>
| 1,540
|
implement quantization
|
MFCC Vector Quantization for Speaker Verification Hidden Markov Models
|
https://stackoverflow.com/questions/22338123/mfcc-vector-quantization-for-speaker-verification-hidden-markov-models
|
<p>I am currently doing a project on speaker verification using Hidden Markov Models. I chose MFCC for my feature extraction. I also intend to apply VQ to it. I have implemented HMM and tested it on Eisner's data spreadsheet found here: <a href="http://www.cs.jhu.edu/~jason/papers/" rel="nofollow">http://www.cs.jhu.edu/~jason/papers/</a> and got correct results. </p>
<p>Using voice signals, I seem to have missed something, since I was not getting correct acceptance (I did the probability estimation using the forward algorithm, with no scaling applied). I was wondering what I could have done wrong. I used scikits.talkbox's MFCC function for feature extraction and SciPy's cluster module for vector quantization. Here is what I have written:</p>
<pre><code>from scikits.talkbox.features import mfcc
from scikits.audiolab import wavread
from scipy.cluster.vq import vq, kmeans, whiten
(data, fs) = wavread(file_name)[:2]
mfcc_features = mfcc(data, fs=fs)[0]
#Vector Quantization
#collected_feats is a list of spectral vectors taken together from 3 voice samples
random.seed(0)
collected_feats = whiten(collected_feats)
codebook = kmeans(collected_feats, no_clusters)[0]
feature = vq(mfcc_features, codebook)
#feature is then used as the observation for the hidden markov model
</code></pre>
<p>I assumed that the default parameters of scikits' mfcc function are already a good fit for speaker verification. The audio files have sampling rates of 8000 and 22050. Is there something I am lacking here? I chose 64 clusters for VQ. Each sample is an isolated word, at least 1 second in duration. I haven't found a Python function to remove the silences in the voice samples yet, so I use Audacity to manually truncate the silent parts. Any help would be appreciated. Thanks!</p>
|
<p>Well, I am not sure about the HMM approach, but I would recommend using GMMs. ALIZE is a great library for doing that. For silence removal, use the LIUM library. The process is called speaker diarization: the program detects where the speaker is speaking and gives the time stamps.</p>
| 1,541
|
implement quantization
|
How to save quantized DCT coefficients as a JPEG image with Python?
|
https://stackoverflow.com/questions/56442098/how-to-save-quantized-dct-coefficients-as-a-jpeg-image-with-python
|
<p>I am creating a Python 3 application for steganography, specifically JPEG steganography. For this reason, I have to implement some basic JPEG compression to access the quantized DCT coefficients and embed bits into the LSBs. I have implemented all of this, but now I have a problem with saving the coefficients as a JPEG image.</p>
<p>All of the libraries I have found will perform full JPEG compression when saving an image with the .jpg extension. But I don't want it, as I have already done the lossy parts of the compression myself. I want it to only perform the lossless parts and save the image without performing DCT transformation and quantization on it again.</p>
<p>Has anyone tried to do this before? Are there any libraries out there that let you essentially save a 2 or 3 dimensional numpy.ndarray as a JPEG image without performing lossy compression on it again?</p>
<p>Here is an example of how the transformed, quantized, and embedded coefficients look:</p>
<pre><code>[[[-48. -1. -9.]
[ -3. 0. 1.]
[ 0. -0. -0.]
...
[ 0. -0. 0.]
[ -0. -0. 0.]
[ -0. -0. -0.]]
[[ 3. 0. -2.]
[ -0. -0. 0.]
[ 0. 0. 0.]
...
[ -0. 0. -0.]
[ 0. 0. -0.]
[ -0. -0. 0.]]
[[ -0. 0. 0.]
[ -0. 0. 0.]
[ 0. 0. 0.]
...
[ 0. 0. -0.]
[ -0. 0. 0.]
[ 0. 0. -0.]]]
</code></pre>
| 1,542
|
|
implement quantization
|
GLCM Texture analysis in Sentinel-1 SNAP toolbox outputs texture with min and max pixel values not between 0 and 1
|
https://stackoverflow.com/questions/51330883/glcm-texture-analysis-in-sentinel-1-snap-toolbox-outputs-texture-with-min-and-ma
|
<p>I have implemented GLCM Texture analysis on the Sentinel-1 SAR imagery. The imagery is high resolution. The parameters for the GLCM texture analysis are:</p>
<p><strong>Window size: 5x5</strong></p>
<p><strong>Quantizer: Probablistic Quantizer</strong> </p>
<p><strong>Quantization: 64 bit</strong> </p>
<p><strong>Angle: 0 degree</strong> </p>
<p><strong>Displacement: 1</strong></p>
<p>The output is 10 different texture images. However, the range of pixel values is not between 0 and 1; the range for every texture lies between different min and max values. I believe it should be between 0 and 1, as it is a probabilistic analysis with the GLCM being calculated for every pixel.</p>
<p>Am I missing a step?</p>
|
<p>I guess you are getting 10 different images because for each image pixel you are performing the following operations:</p>
<ul>
<li>Define a neighbourhood of 5×5 centered at the considered pixel.</li>
<li>Compute the GLCM corresponding to <code>displacement=1</code> and <code>angle=0</code> of that neighbourhood.</li>
<li>Extract 10 features from the local GLCM.</li>
</ul>
<p>This results in a stack of 10 images, one image for each feature extracted from the local GLCMs.</p>
<p>The problem is that <a href="http://murphylab.web.cmu.edu/publications/boland/boland_node26.html" rel="nofollow noreferrer">Haralick features</a> are not normalized to 1. Consider for example the standard definition of entropy:</p>
<p><a href="https://i.sstatic.net/t1wAV.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t1wAV.gif" alt="Entropy"></a></p>
<p>If you wish to obtain entropy value in the range <code>[0, 1]</code> you should divide the equation above by the maximum entropy (measured in bits), like this:</p>
<p><a href="https://i.sstatic.net/WVtbY.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WVtbY.gif" alt="Normalized entropy"></a></p>
<p>where <a href="https://i.sstatic.net/2kzmo.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2kzmo.gif" alt="N_g"></a> is the number of different grey levels.</p>
<p><a href="https://www.sciencedirect.com/science/article/pii/S016786551400124X?via%3Dihub" rel="nofollow noreferrer">This paper</a> explains how to normalize <em>contrast</em>, <em>correlation</em>, <em>energy</em>, <em>entropy</em> and <em>homogeneity</em> features extracted from GLCM so that they have range [0, 1].</p>
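The entropy normalization described above can be sketched like this (assuming the maximum entropy of an N_g × N_g co-occurrence distribution, 2·log2(N_g)):

```python
import numpy as np

def glcm_entropy_normalized(glcm):
    # glcm: N_g x N_g co-occurrence matrix (counts); normalize to probabilities.
    p = glcm / glcm.sum()
    nz = p[p > 0]                      # skip zero entries (0 * log 0 := 0)
    entropy = -np.sum(nz * np.log2(nz))
    # Maximum entropy of an N_g x N_g distribution is log2(N_g^2) = 2*log2(N_g).
    n_g = glcm.shape[0]
    return entropy / (2 * np.log2(n_g))

# A uniform GLCM attains the maximum, so the normalized value is 1.
uniform = np.ones((64, 64))
```

With 64 quantization levels, a uniform GLCM yields exactly 1.0 and a single-entry GLCM yields 0.0, so all values land in [0, 1] as desired.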
| 1,543
|
implement quantization
|
Video processing Inter-frame Prediction
|
https://stackoverflow.com/questions/25278491/video-processing-inter-frame-prediction
|
<p>I need to perform 'Inter-frame Prediction' and 'Motion Compensation' of a set of 30 frames for video processing in Matlab. I am working with Mother-daughter frames. </p>
<p><img src="https://i.sstatic.net/Ybd3T.jpg" alt="enter image description here"></p>
<p>What I have done so far is to take the very first frame and divided it into </p>
<ul>
<li>8x8 blocks</li>
<li>performed DCT</li>
<li>quantized it</li>
<li>dequantized it</li>
<li>performed inverse DCT.</li>
</ul>
<p>I know that no motion estimation is required for the first frame; from the second frame onwards, the reconstructed frame one is used as the reference for frame two, and so on. For motion estimation I need to implement the 'Full-search Block Matching Algorithm'.</p>
<p><strong>Question 1</strong>: What is meant by reconstruction of a frame? Is it quantization and DCT which I have listed above?</p>
<p><strong>Question 2</strong>: What is 'Full-search Block Matching Algorithm'?</p>
|
<p>I'm going to assume that you are referring to the MPEG consortium of video compression algorithms (MPEG-1, MPEG-2, H.264, etc.). Let's answer each question one at a time:</p>
<h1>Question #1 - Frame Reconstruction</h1>
<p>For a single frame, the forward transformation basically consists of decomposing a frame into 8 x 8 non-overlapping blocks, doing an 8 x 8 DCT transform of each block, quantizing the blocks, and then we perform some more complicated stuff such as zig-zag ordering, run-length encoding, etc.</p>
<p>Basically, your frame is represented as a compressed sequence of bits. A <strong>reconstruction</strong> of the frame is going in the reverse order, so you almost have it right. This consists of reconstructing the sequence and undoing the zig-zag ordering, then de-quantizing the block, then applying the IDCT. The reason why they call this "reconstruction" is because you represented the frame to be in a different format. You are converting the frame back to what it should have been before compressing the frame.</p>
<p>One thing that you may already know is that quantization of the frame is the reason why this methodology is <strong>lossy</strong>. This means that you won't be able to get the <strong>original frame</strong> back, but you can get it to be as close as possible to the original. However, the advantage is that with lossy algorithms, you get high compression ratios, which means that the size of the video will be smaller, and can easily be transmitted. </p>
<p>In fact, if you do a forward transformation of one frame and then a reverse transformation, and compare the frames pixel by pixel, you will see some subtle differences, but not enough to write home about. The parameters and design behind the compression have been tuned so that the visual system of an average person won't notice much of the difference between the original and the reconstructed frame.</p>
<p>So why lossy, you may ask? Because the MPEG consortium favoured making the video highly compressible and transmittable over preserving the exact quality of the video. This trade-off is also motivated by the fact that quality has always been a subjective measure, even when numerical measures (PSNR, for instance) exist to quantify image quality.</p>
<p>So, the moral of this story is that a reconstruction is undoing the forward transformation performed to get the video frame to be compressed, but it will not <strong>exactly</strong> be the same as the original frame, but close enough that a normal human being won't complain.</p>
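<p>As a hedged sketch (not actual MPEG code; the 8x8 block, the single quantization step, and the SciPy calls are illustrative stand-ins for the real quantization matrix and entropy coding), the forward-transform/reconstruction round trip for one block can be tried in Python:</p>

```python
import numpy as np
from scipy.fft import dctn, idctn

# One illustrative 8x8 block; a real encoder uses a full quantization
# matrix plus zig-zag ordering and run-length coding on top of this.
rng = np.random.default_rng(1)
block = rng.integers(0, 256, (8, 8)).astype(float)
q_step = 16.0

coeffs = dctn(block, norm='ortho')         # forward 8x8 DCT
quantized = np.round(coeffs / q_step)      # lossy step: information is discarded here
reconstructed = idctn(quantized * q_step, norm='ortho')  # de-quantize + IDCT

# The reconstruction is close to, but not exactly, the original block.
max_err = np.max(np.abs(block - reconstructed))
```

<p>Comparing <code>block</code> and <code>reconstructed</code> pixel by pixel shows the small quantization-induced differences described above.</p>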
<hr>
<h1>Question #2 - Full-search Block Matching Algorithm</h1>
<p>The basics behind motion estimation are that we don't want to transmit every frame as <strong>full</strong> video frames in order to reduce transmission bandwidth. If you know the basics of the MPEG consortium of video compression algorithms, there are three classes of encoded frames in your video:</p>
<ul>
<li><p>I-Frames - These are what are known as intracoded frames. These frames have the full compression algorithm performed on them (DCT, Quantization, etc.). We don't have a video that consists entirely of I-Frames as that would make the size of the video quite large. Instead, what is done is that I-frames are used as a reference point, and <strong>difference</strong> frames are sent after this point where for each block in an I-Frame, a <strong>motion vector</strong> is transmitted. More to follow.</p></li>
<li><p>P-Frames - Instead of sending another I-Frame, we send a predicted frame or P-Frame instead. For each block from a reference I-Frame, the P-Frame essentially tells us where the block <strong>best moved</strong> from one frame to the next. These are what are known as motion vectors for each block. The rationale behind this is that video is usually captured at such a high frame rate, that successive video frames exhibit very little difference and so most of the blocks should remain the same, or move very little. You will get to a point where the scene will drastically change in the video, or that there is <strong>a lot</strong> of high motion that even with a high frame rate, you can't adequately capture all of the motion only with P-Frames. This is commonly seen when you're watching MPEG video and there is a lot of high motion - you'll see a lot of "blockiness", and that blockiness is explained by this fact. As such, you'll need to encode another I-Frame as a quick refresher and then continue from this point. As such, most video files have the frames encoded such that you have one I-Frame, then have a bunch of P-frames, then have another I-Frame followed by a bunch of P-Frames and so on.</p></li>
<li><p>B-Frames - These are what are known as bi-directional predicted frames. These frames use information from both the frame (or frames) that are ahead and the frame (or frames) from behind. How these exactly work are beyond the scope of this post, but I wanted to talk about this briefly to be self-contained.</p></li>
</ul>
<p>As such, one possible sequence of frames that are encoded follow the following format:</p>
<pre><code>IPPPBPPPIPPPBPPPI...
</code></pre>
<p>However, this all depends on how your encoder is set up, but we'll leave that aside.</p>
<p>How is all of this useful, you might ask? Because your question about the <strong>Full-search Block Matching Algorithm</strong> deals exactly with how P-frames are constructed. For a given block in an I-Frame, <strong>what is the best location this block could have moved to in the next frame</strong>? To answer this, we actually take a look at blocks in the next frame and figure out the block most similar to the one in the I-Frame. You are probably asking yourself: <em>Woah.... aren't there a lot of blocks to search for?</em> and the answer is yes. The Full-search Block Matching algorithm basically searches <strong>the entire frame</strong> for the best matching block. This is quite computationally intensive, and so most encoders actually limit the search to a moderately sized finite window around the block's location. Full-search Block Matching would give you the best results, but it takes too long and is definitely not worth it. We can leverage the fact that most blocks don't really move that far, since we're assuming the video was captured at a high frame rate.</p>
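<p>A hedged NumPy sketch of the idea (illustrative only; real encoders search a limited window, use optimized SAD hardware, and work on luma/chroma planes):</p>

```python
import numpy as np

# Full-search block matching: for one 8x8 block of the reference frame,
# scan every position in the next frame and keep the position with the
# lowest sum of absolute differences (SAD).
def full_search(ref_block, next_frame):
    bh, bw = ref_block.shape
    H, W = next_frame.shape
    best_sad, best_pos = np.inf, (0, 0)
    for y in range(H - bh + 1):
        for x in range(W - bw + 1):
            cand = next_frame[y:y + bh, x:x + bw]
            sad = np.abs(ref_block - cand).sum()
            if sad < best_sad:
                best_sad, best_pos = sad, (y, x)
    # The motion vector is best_pos minus the block's original position.
    return best_pos

# Toy example: take the block from position (5, 7) of the "next" frame,
# so the search should recover exactly that position.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (32, 32)).astype(float)
block = frame[5:13, 7:15]
print(full_search(block, frame))  # (5, 7) — where the block was taken from
```

<p>The nested scan over every (y, x) is exactly why the full search is so expensive, and why practical encoders restrict it to a window.</p>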
<hr>
<p>I hope this has answered your questions!</p>
| 1,544
|
implement quantization
|
Float ops found in quantized TensorFlow MobileNet model
|
https://stackoverflow.com/questions/48121702/float-ops-found-in-quantized-tensorflow-mobilenet-model
|
<p><a href="https://i.sstatic.net/jualG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jualG.png" alt="loat ops found in quantized TensorFlow MobileNet model "></a></p>
<p>As you can see in the screenshot of a quantized MobileNet model implemented in TensorFlow, there are still some float operations. The quantization is done in TensorFlow via the graph_transform tools. </p>
<p>The red ellipse in the image has its description in the right-hand-size text box. The "depthwise" is a "DepthwiseConv2dNative" operation that expects "DT_FLOAT" inputs.</p>
<p>Although the lower Relu6 performs an 8-bit quantized operation, its result has to go through "(Relu6)", which is a "Dequantize" op, in order to produce "DT_FLOAT" inputs for the depthwise convolution. </p>
<p>Why are depthwise conv operations left out by the TF graph_transform tools? Thank you.</p>
|
<p>Unfortunately there isn't a quantized version of depthwise conv in standard TensorFlow, so it falls back to the float implementation with conversions before and after. For a full eight-bit implementation of MobileNet, you'll need to look at TensorFlow Lite, which you can learn more about here:</p>
<p><a href="https://www.tensorflow.org/mobile/tflite/" rel="nofollow noreferrer">https://www.tensorflow.org/mobile/tflite/</a></p>
| 1,545
|
implement quantization
|
Optimize Albert HuggingFace model
|
https://stackoverflow.com/questions/70740565/optimize-albert-huggingface-model
|
<p>Goal: Amend this <a href="https://github.com/microsoft/onnxruntime-inference-examples/blob/main/quantization/notebooks/bert/Bert-GLUE_OnnxRuntime_quantization.ipynb" rel="nofollow noreferrer">Notebook</a> to work with <strong>albert-base-v2</strong> model</p>
<p>Kernel: <code>conda_pytorch_p36</code>.</p>
<p><strong>Section 2.1</strong> exports the finalised model. It too uses a BERT specific function. However, I cannot find an equivalent for Albert.</p>
<p>I've successfully implemented alternatives for Albert up until this section.</p>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code># optimize transformer-based models with onnxruntime-tools
from onnxruntime_tools import optimizer
from onnxruntime_tools.transformers.onnx_model_bert import BertOptimizationOptions
# disable embedding layer norm optimization for better model size reduction
opt_options = BertOptimizationOptions('bert')
opt_options.enable_embed_layer_norm = False
...
</code></pre>
<p><strong>Do functions for Optimizing and Quantizing an Albert model exist?</strong></p>
<p>Update: You can run Quantization in the notebook without running Optimization. You just need to remove '.opt.' from the code, which is indicative of optimised filenames.</p>
|
<p>Optimise any PyTorch model, using <strong>torch_optimizer</strong>.</p>
<p>Installation:</p>
<pre class="lang-sh prettyprint-override"><code>pip install torch_optimizer
</code></pre>
<p>Implementation:</p>
<pre class="lang-py prettyprint-override"><code>import torch_optimizer as optim
# model = ...
optimizer = optim.DiffGrad(model.parameters(), lr=0.001)
optimizer.step()
</code></pre>
<p><a href="https://github.com/jettify/pytorch-optimizer#simple-example" rel="nofollow noreferrer">Source</a></p>
<pre class="lang-py prettyprint-override"><code>torch.save(model.state_dict(), PATH)
</code></pre>
<p><a href="https://pytorch.org/tutorials/beginner/saving_loading_models.html#save-load-state-dict-recommended" rel="nofollow noreferrer">Source</a></p>
| 1,546
|
implement quantization
|
Using custom activation function with TF-Lite
|
https://stackoverflow.com/questions/62194241/using-custom-activation-function-with-tf-lite
|
<p>I am new to TensorFlow Lite and ran into a problem using a custom activation function (f(x) = x^2).</p>
<p>Making the model quantization aware, compiling, training and evaluating works fine. However, trying to convert the model has turned into a problem. I tried following the "Quantization aware training comprehensive guide" and created a custom QuantizeConfig. As the quantization of weights is the part that I am mainly interested in, I used the same technique as in the "Modify parts of layer to quantize" section, and skip quantizing the activations:</p>
<pre><code>def get_activations_and_quantizers(self, layer):
# Skip quantizing activations.
return []
def set_quantize_activations(self, layer, quantize_activations):
    # Empty since `get_activations_and_quantizers` returns
# an empty list.
return
</code></pre>
<p>However, when trying to convert the model, I get the following errors/ exceptions:</p>
<pre><code>File "C:\[..]\lib\site-packages\tensorflow\lite\python\wrap_toco.py", line 32, in wrapped_toco_convert
return _pywrap_toco_api.TocoConvert(
Exception: C:\[...]\lib\site-packages\tensorflow\python\ops\gen_math_ops.py:5793:1: error: 'std.constant' op requires attribute's type ('tensor<20x20xf32>') to match op's return type ('tensor<*xf32>')
_, _, _op, _outputs = _op_def_library._apply_op_helper(
</code></pre>
<p>and</p>
<pre><code>File "C:\[...]\lib\site-packages\tensorflow\lite\python\convert.py", line 183, in toco_convert_protos
raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: C:\[...]\lib\site-packages\tensorflow\python\ops\gen_math_ops.py:5793:1: error: 'std.constant' op requires attribute's type ('tensor<20x20xf32>') to match op's return type ('tensor<*xf32>')
_, _, _op, _outputs = _op_def_library._apply_op_helper(
</code></pre>
<p>Is it possible to only quantize the weights and not the activation function? If not, are there any examples out there describing how to quantize custom activation functions? I haven't found anything among those lines while googling and got lost trying to search for it in the source code.</p>
<p>For reference, this is what I am using to convert the model:</p>
<pre><code>converter = tf.lite.TFLiteConverter.from_keras_model(quant_aware_model)
converter.allow_custom_ops = True
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS,
tf.lite.OpsSet.SELECT_TF_OPS
]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(quantized_tflite_model)
</code></pre>
<p><strong>Edit (what else I tried):</strong></p>
<ul>
<li>Trying to implement a custom layer that does the squaring in the <code>call</code> method. Had the same problems as with the custom activation function.</li>
<li>Using the <code>tf.keras.layers.Multiply()</code> layer instead of an activation function, right after the relevant layers. But the layer is not supported and needs to be passed with a <code>QuantizeConfig</code> into <code>quantize_annotate_layer()</code>. However, doing this led to the exact same problem as above: converting the model doesn't work, with the same error.</li>
<li>Using <code>tf.math.multiply(x, x)</code>. On its own it didn't work because <code>Layer tf_op_layer_Mul:<class 'tensorflow.python.keras.engine.base_layer.TensorFlowOpLayer'> is not supported</code>. The error message stated that, again, a <code>QuantizeConfig</code> and the <code>quantize_annotate_layer()</code> function should be used. I tried that but couldn't make it work at all.</li>
<li>Finally, I tried to make squared versions of the layers (Dense and Conv2D) that would need the squared activation function. I did this by inheriting the respective layer, overriding the parent's <code>call</code> function by first using <code>super</code> to get the original result and then returning the squared output. I used the <code>Default8BitQuantizeConfig</code> and <code>Default8BitConvQuantizeConfig</code> to annotate the layers, respectively. This worked with the Dense layer, but not with Conv2D. Again, the Conv2D version produced the same error as described above when trying to convert the model.</li>
</ul>
| 1,547
|
|
implement quantization
|
Video Signature extraction in matlab
|
https://stackoverflow.com/questions/18787119/video-signature-extraction-in-matlab
|
<p>I am developing an application for visual duplicate detection. First I extract frames from the video, then I select the keyframe with the highest RMS error. Now I have to compute features of different subregions of the frames, configured at various scales, shapes and locations, and then apply ternary quantization to them to compute the frame signature.</p>
<p>I am trying to compute the features but can't get them to match the requirement. Can anyone please help implement this? Any help will be appreciated.</p>
<p>Thanks in advance.</p>
| 1,548
|
|
implement quantization
|
In scipy why doesn't idct(dct(a)) equal to a?
|
https://stackoverflow.com/questions/34890585/in-scipy-why-doesnt-idctdcta-equal-to-a
|
<p>I am trying to implement JPEG compression using python. When I tried to apply the DCT, quantization, IDCT process for a tiff image, I found something strange for scipy.fftpack.dct/idct.</p>
<p>Since there is only 1D dct/idct within scipy package, I was doing this for a 2D dct</p>
<pre><code>import numpy as np
from scipy.fftpack import dct, idct
def dct2(block):
return dct(dct(block.T).T)
def idct2(block):
return idct(idct(block.T).T)
</code></pre>
<p>I tested the 2D dct/idct using a simple 3x3 matrix. I was expecting to get a True matrix with this test case.</p>
<pre><code>a = np.random.randint(0,255,9).reshape(3,3)
print a == idct2(dct2(a))
</code></pre>
<p>However it turned out that after idct2(dct2(a)) the result was scaled by a constant factor compared with the original a matrix.</p>
<p>I would like to ask if there is a way to implement a set of 2D dct/idct such that after a idct(dct(a)) operation I can get the same output as the input.</p>
|
<p>You need to set scaling to <code>ortho</code> for both <code>dct2</code> and <code>idct2</code>:</p>
<pre><code>def dct2(block):
    return dct(dct(block.T, norm='ortho').T, norm='ortho')
</code></pre>
<p>also, you cannot expect the values to be exactly the same, but almost the same within some margin of error:</p>
<pre><code>np.allclose(a, idct2(dct2(a)))
</code></pre>
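<p>For reference, a complete round trip under this fix might look like the following sketch (note <code>norm='ortho'</code> on all four calls, and the comparison via <code>np.allclose</code> rather than <code>==</code>):</p>

```python
import numpy as np
from scipy.fftpack import dct, idct

# Orthonormal 2D DCT/IDCT pair: with norm='ortho' on every call,
# idct2(dct2(a)) recovers a up to floating-point rounding.
def dct2(block):
    return dct(dct(block.T, norm='ortho').T, norm='ortho')

def idct2(block):
    return idct(idct(block.T, norm='ortho').T, norm='ortho')

a = np.random.randint(0, 255, 9).reshape(3, 3).astype(float)
print(np.allclose(a, idct2(dct2(a))))  # prints True
```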
| 1,549
|
implement quantization
|
Approximating dot product between two real-valued vectors in Hamming space
|
https://stackoverflow.com/questions/68890746/approximating-dot-product-between-two-real-valued-vectors-in-hamming-space
|
<p>currently I am reading a paper about quantization in graph neural networks (<a href="https://arxiv.org/pdf/2012.15823.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2012.15823.pdf</a>). On page two they talk about how you can approximate the dot product of two real-valued vectors a and b</p>
<p><a href="https://i.sstatic.net/pc2oh.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pc2oh.gif" alt="approximation of real-valued vectors" /></a></p>
<p>Now I tried to implement this in Python but I get something that is nowhere near a good approximation when there are negative numbers:</p>
<pre><code>dataset = np.random.uniform(-1, 1, (500, 300))
dataset /= np.linalg.norm(dataset, axis=1, keepdims=True)
def sign(vector):
return 2* (vector > 0).astype(int) - 1
def rescaling(vector):
return (1 / vector.size) * np.sum(np.abs(vector))
def binary_product(vector_x, vector_y):
return (vector_x == vector_y).sum()
def calculate_approx(a, b):
alpha = rescaling(a)
beta = rescaling(b)
sign_a = sign(a)
sign_b = sign(b)
return binary_product(sign_a, sign_b) * alpha * beta
print(calculate_approx(dataset[0], dataset[1]))
print(dataset[0] @ dataset[1])
</code></pre>
<p>When I change the dataset to have only values higher than 0 I do get a good approximation.</p>
<p>Anyone an idea what I am doing wrong?</p>
|
<p>You see that the derivation assumes <code>allclose(|x|, 1)</code></p>
<p><a href="https://i.sstatic.net/AKIlq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AKIlq.png" alt="enter image description here" /></a></p>
<p>With the rescaling parameter <code>mean(|x|)</code> you can instead have <code>allclose(|x|, mean(|x|))</code>. The dot product can be rewritten as <code>sum(sign(x) * sign(y) * |x| * |y|)</code>, while what you are calculating is <code>sum(sign(x) * sign(y)) * mean(|x|) * mean(|y|)</code>; if both <code>|x|</code> and <code>|y|</code> are close to their respective averages, then you get a good approximation.</p>
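<p>A hedged NumPy sketch of that approximation (the helper name and the test data are illustrative; note that <code>sum(sign(x) * sign(y))</code> counts matching signs <em>minus</em> mismatching signs, which differs from the raw match count used in the question's <code>binary_product</code>):</p>

```python
import numpy as np

# Approximation from the paper:  x . y  ~=  mean(|x|) * mean(|y|) * sum(sign(x) * sign(y))
def approx_dot(x, y):
    scale = np.mean(np.abs(x)) * np.mean(np.abs(y))
    return scale * np.sum(np.sign(x) * np.sign(y))

rng = np.random.default_rng(0)
# Mixed signs, but magnitudes tightly concentrated around their mean,
# so allclose(|x|, mean(|x|)) roughly holds and the approximation works.
x = rng.choice([-1.0, 1.0], 300) * rng.uniform(0.45, 0.55, 300)
y = rng.choice([-1.0, 1.0], 300) * rng.uniform(0.45, 0.55, 300)

exact = x @ y
approx = approx_dot(x, y)
```

<p>With magnitudes spread over the whole range, as in the question's <code>uniform(-1, 1)</code> data, the <code>|x| ≈ mean(|x|)</code> assumption breaks down and the approximation degrades accordingly.</p>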
| 1,550
|
implement quantization
|
Other compression methods for Federated Learning
|
https://stackoverflow.com/questions/63456963/other-compression-methods-for-federated-learning
|
<p>I noticed that the Gradient Quantization compression method is already implemented in TFF framework. How about non-traditional compression methods where we select a sub-model by dropping some parts of the global model? I come across the "Federated Dropout" compression method in the paper "Expanding the Reach of Federated Learning by Reducing Client Resource Requirements" (<a href="https://arxiv.org/abs/1812.07210" rel="nofollow noreferrer">https://arxiv.org/abs/1812.07210</a>). Any idea if Federated Dropout method is already supported in Tensorflow Federated. If not, any insights how to implement it (the main idea of the method is dropping a fixed percentage of the activations and filters in the global model to exchange and train a smaller sub-model)?</p>
|
<p>Currently, there is no implementation of this idea available in the TFF code base.</p>
<p>But here is an outline of how you could do it; I recommend starting from <a href="https://github.com/tensorflow/federated/tree/v0.16.1/tensorflow_federated/python/examples/simple_fedavg" rel="nofollow noreferrer"><code>examples/simple_fedavg</code></a>.</p>
<ol>
<li>Modify top-level <a href="https://github.com/tensorflow/federated/blob/1dedbe4d91281a210098daa230b0cb0aa8d6a339/tensorflow_federated/python/examples/simple_fedavg/simple_fedavg_tff.py#L54" rel="nofollow noreferrer"><code>build_federated_averaging_process</code></a> to accept two <code>model_fn</code>s -- one <code>server_model_fn</code> for the global model, one <code>client_model_fn</code> for the smaller sub-model structure actually trained on clients.</li>
<li>Modify <a href="https://github.com/tensorflow/federated/blob/1dedbe4d91281a210098daa230b0cb0aa8d6a339/tensorflow_federated/python/examples/simple_fedavg/simple_fedavg_tf.py#L173" rel="nofollow noreferrer"><code>build_server_broadcast_message</code></a> to extract only the relevant sub-model from the <code>server_state.model_weights</code>. This would be the mapping from server model to client model.</li>
<li>The <a href="https://github.com/tensorflow/federated/blob/1dedbe4d91281a210098daa230b0cb0aa8d6a339/tensorflow_federated/python/examples/simple_fedavg/simple_fedavg_tf.py#L192" rel="nofollow noreferrer"><code>client_update</code></a> may actually not need to be changed (I am not 100% sure), as long as only the <code>client_model_fn</code> is provided from <a href="https://github.com/tensorflow/federated/blob/1dedbe4d91281a210098daa230b0cb0aa8d6a339/tensorflow_federated/python/examples/simple_fedavg/simple_fedavg_tff.py#L103" rel="nofollow noreferrer"><code>client_update_fn</code></a>.</li>
<li>Modify <a href="https://github.com/tensorflow/federated/blob/1dedbe4d91281a210098daa230b0cb0aa8d6a339/tensorflow_federated/python/examples/simple_fedavg/simple_fedavg_tf.py#L139" rel="nofollow noreferrer"><code>server_update</code></a> - the <code>weights_delta</code> will be the update to the client sub-model, so you will need to map it back to the larger global model.</li>
</ol>
<p>In general, steps 2. and 4. are tricky, as they depend not only on what layers are in a model, but also on how they are connected. So it will be hard to create an easy-to-use general solution, but it should be OK to write these for a specific model structure you know in advance.</p>
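<p>Not TFF code, but a hedged NumPy sketch of the server-to-client and client-to-server weight mappings in steps 2 and 4, for a single dense layer (the kept-unit indices are illustrative):</p>

```python
import numpy as np

# Global (server) weight matrix and the units kept after "federated dropout".
server_w = np.zeros((8, 8))
kept_in = np.array([0, 2, 5])    # kept input units
kept_out = np.array([1, 3, 4])   # kept output units

# Step 2: extract the smaller client sub-model's weights from the server model.
client_w = server_w[np.ix_(kept_in, kept_out)]

# ... the client trains and produces a weights delta of the same sub-shape ...
client_delta = np.ones_like(client_w)

# Step 4: map the sub-model update back into the global model;
# positions belonging to dropped units receive no update.
update = np.zeros_like(server_w)
update[np.ix_(kept_in, kept_out)] = client_delta
server_w += update
```

<p>For real models the same index bookkeeping has to be carried consistently across connected layers, which is exactly what makes steps 2 and 4 model-specific.</p>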
| 1,551
|
implement quantization
|
TFLite Conversion changing model weights
|
https://stackoverflow.com/questions/52726632/tflite-conversion-changing-model-weights
|
<p>I have a custom built tensorflow graph implementing MobileNetV2-SSDLite which I implemented myself. It is working fine on the PC.</p>
<p>However, when I convert the model to TFLite (all float, no quantization), the model weights are changed drastically. </p>
<p>To give an example, a filter which was initially -
0.13172674179077148,
2.3185202252437188e-32,
-0.003990101162344217</p>
<p>becomes-
4.165565013885498,
-2.3981268405914307,
-1.1919032335281372</p>
<p>The large weight values are completely throwing off my on-device inferences. Need help! :(</p>
|
<p>What command are you using to convert to tflite? For instance are you using toco, and if so what parameters are you using? While I haven't been looking at the filters, <a href="https://github.com/tensorflow/tensorflow/issues/22106#issuecomment-428409506" rel="nofollow noreferrer">here are my default instructions</a> for finetuning a MobileNetV2-SSD and SSDLite graphs and the model has been performing well.</p>
| 1,552
|
implement quantization
|
Php search keyword and output this line where keyword be
|
https://stackoverflow.com/questions/19393021/php-search-keyword-and-output-this-line-where-keyword-be
|
<p><strong>I would like to search for a keyword in a large block of text and output the line containing it.
My example text is as follows:</strong></p>
<p>HSPICE simulation methods for the nestlist of the proposed RTD-based nanoarchitecture in order to verify
a candidate of image functions by using the afore-mentioned representation methods.</p>
<p>Categories and Subject Descriptors: C.5.4 [Computer System Implementation]: VSLI Systems</p>
<p>General Terms: Design
Additional Key Words and Phrases: VLSI, quantization, color extraction, color image processing, resonant-</p>
<p>tunneling diode(s), cellular neural network</p>
<p>ACM Reference Format:</p>
<p><strong>And finally I want to output just "General Terms: Design Additional Key Words and Phrases: VLSI, quantization, color extraction, color image processing, resonant-" by searching for the keyword "general terms". How can I write PHP code to get this result? Let the whole text be $content, and $key="General Terms";</strong></p>
|
<p>You want to use regex - <a href="http://www.php.net/manual/en/function.preg-match.php" rel="nofollow">http://www.php.net/manual/en/function.preg-match.php</a></p>
<pre><code>$search = 'General Terms:';
$pattern = '/'.$search.'(.)+/';
$content = 'HSPICE simulation methods for the nestlist of the proposed RTD-based nanoarchitecture in order to verify a candidate of image functions by using the afore-mentioned representation methods.
Categories and Subject Descriptors: C.5.4 [Computer System Implementation]: VSLI Systems
General Terms: Design Additional Key Words and Phrases: VLSI, quantization, color extraction, color image processing, resonant-
tunneling diode(s), cellular neural network';
preg_match($pattern, $content, $out);
$out[0] = str_replace($search, '', $out[0]);
print_r($out);
// Array ( [0] => Design Additional Key Words and Phrases: VLSI, quantization, color extraction, color image processing, resonant- [1] => )
</code></pre>
| 1,553
|
implement quantization
|
VHDL - Designing a simple first order IIR filter
|
https://stackoverflow.com/questions/29853221/vhdl-designing-a-simple-first-order-iir-filter
|
<p>I'm designing a simple first order IIR filter for my Spartan-6 but I'm struggling with bus widths and coefficient quantization.</p>
<p>The input data is 16-bits wide comes from integrated ADCs and the quantization noise is the main noise contribution to the front end noise.</p>
<p>The input signal is filtered at roughly 300kHz and I want to implement a first order IIR filter at tunable frequencies of 1Hz, 10Hz, 100Hz, 1kHz, 10kHz: let's focus on the 1Hz filtering. In theory I should be able to gain N = log2(300k) = 18 bits of resolution.</p>
<p>I've computed the filter coefficients:</p>
<p>Gain: 3.1416e-6 </p>
<p>Numerator: [1 1]</p>
<p>Denominator: [1 -0.999993717]</p>
<p>How do I deal with fractional coefficients? I was thinking to multiply the coefficients times 2^N and then cut N LSBs, choosing N to have a reasonable approximation of the coefficients.</p>
<p>Let's say I use this structure:
<img src="https://i.sstatic.net/Pydos.gif" alt="First order IIR filter"></p>
<p>What should be the bus width of <strong>z-1 register and the y output</strong> using this multiplication method?</p>
<hr>
<p>Thanks to Jonathan for the help, I still need to understand some things so let's make this practical: first of all, which structure do you think is the best one for FPGA implementation?</p>
<p><img src="https://i.sstatic.net/lVKxQ.jpg" alt="Possible structures"></p>
<p>In any case let's say I multiply:</p>
<pre><code>b = 3.1416e-6 * 2^36 --> 110100101101001111
a = 0.999993716 * 2^17 --> 011111111111111111
</code></pre>
<p>Now what? :D</p>
|
<p>You deal with fractional coefficient by multiplying them by 2**N, just like you thought. This gives you a fixed point representation with N binary decimal places. You have to take care of keeping track of the fractional part width.</p>
<p>For example, if you multiply an input (16 bits integer, 0 bits fractional) with a coefficient (1 bit integer, N bits fractional), you end up with a 17+N-bit number with 17 integer bits and N fractional bits. When you add numbers, make sure to align the integer parts together...</p>
<p>As for how large N should be, it's up to you! Matlab's fdatool can help you visualize the impact of bit quantization on the filter. Matlab/Simulink is the best tool to analyze the impact of quantization wherever it happens in your filter, IMO.</p>
<p>In an FPGA though, I would make N as large as the multiplier allows me. For example, if you use 18x18 multipliers, just use 18 bits (must be signed) for the coefficient. If that's not enough, think about prescaling your input; larger multiplier inputs will cost a lot more, but maybe you have plenty of multipliers to spare. </p>
<p>Also take note that when truncating a fixed-point number, you can <em>round</em> to reduce your noise. Simply add 0.5 before truncation, which can usually be done somewhere on your pipeline with minimal cost.</p>
<h2>Update</h2>
<p>Xilinx has a nice <a href="http://www.xilinx.com/support/documentation/white_papers/wp330.pdf" rel="nofollow">whitepaper on IIR filtering</a> that may help you out better than I.</p>
<p>Otherwise, I just realized that your filtering requirements are quite drastic (1Hz cutoff out of 300kHz). I doubt you can achieve stability with 18-bit multipliers. You may want to look for a different design, for example one that decimates the input to a lower frequency as a first-stage operation.</p>
<p>If you need to keep your current requirements intact, you will have to use larger multipliers and adders.</p>
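<p>The coefficient-quantization step the question describes (multiply by 2^N, round to an integer) can be sketched in Python to see how far the feedback coefficient moves for a given width (a sketch, not FPGA code; the bit widths are illustrative):</p>

```python
# Quantize the feedback coefficient from the question to N fractional bits
# and look at the rounding error relative to (1 - a), which sets how close
# the 1 Hz filter's pole sits to the unit circle.
a = 0.999993717
for n_bits in (12, 18, 24, 30):
    a_q = round(a * 2**n_bits) / 2**n_bits   # fixed-point round-to-nearest
    print(n_bits, a_q, abs(a - a_q), 1 - a)
```

<p>At 18 bits the rounding error is on the same order as (1 - a) itself, which illustrates why a stable, accurate 1Hz cutoff is doubtful with 18-bit coefficients alone.</p>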
| 1,554
|
implement quantization
|
How to add a noise with uniform distribution to input data in Keras?
|
https://stackoverflow.com/questions/58484545/how-to-add-a-noise-with-uniform-distribution-to-input-data-in-keras
|
<p>I need to add quantization noise to my input data. I have often read that this kind of noise is modeled with a uniform distribution. </p>
<p>I have an encoding/decoding network implemented with Keras (input data is time series raw data), there is a layer implemented in Keras with which you can add Gaussian noise (GaussianNoise layer), can I use this layer to create uniform noise?</p>
<p>If not, are there other implemented layers that I can use?</p>
|
<p>You can create your own layer as such,</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
class noiseLayer(tf.keras.layers.Layer):
def __init__(self,mean,std):
super(noiseLayer, self).__init__()
self.mean = mean
self.std = std
def call(self, input):
mean = self.mean
std = self.std
        # tf.random.normal accepts a tensor shape directly; calling
        # .numpy() here would fail when the layer runs in graph mode.
        return input + tf.random.normal(tf.shape(input),
                                        mean=mean,
                                        stddev=std)
X = tf.ones([10,10,10]) * 100
Y = noiseLayer(mean = 0, std = 0.1)(X)
</code></pre>
<p>This code works in the latest Tensorflow 2.0.</p>
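<p>Since the question asks for uniform rather than Gaussian noise, here is a hedged variant of the same pattern using <code>tf.random.uniform</code> (the class name and <code>delta</code> parameter are illustrative, not a Keras API; quantization noise of step size <code>delta</code> is commonly modeled as U(-delta/2, +delta/2)):</p>

```python
import tensorflow as tf

# Uniform-noise layer for modelling quantization noise of step size `delta`.
class UniformNoiseLayer(tf.keras.layers.Layer):
    def __init__(self, delta):
        super().__init__()
        self.delta = delta

    def call(self, inputs):
        noise = tf.random.uniform(tf.shape(inputs),
                                  minval=-self.delta / 2,
                                  maxval=self.delta / 2)
        return inputs + noise

X = tf.ones([4, 8]) * 100.0
Y = UniformNoiseLayer(delta=0.1)(X)
```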
| 1,555
|
implement quantization
|
Is there a way to monitor the different subtensors of a custom Keras backend loss function?
|
https://stackoverflow.com/questions/72437087/is-there-a-way-to-monitor-the-different-subtensors-of-a-custom-keras-backend-los
|
<p>I'm currently implementing a custom loss function by modelling it as a Keras backend tensor. The loss function has different parts (such as a classification loss, a quantization loss or a pairwise distance loss)</p>
<p>The code looks something like this:</p>
<pre><code>...
different_class_loss = K.log(1 + K.exp(-1*dissimilarity + margin))
pair_loss = same_class * same_class_loss + (1-same_class) * different_class_loss
loss_value = lambda_pair * pair_loss + lambda_classif * classif_loss + lambda_quant_binary * quantization_loss
# Add loss to model
pairwise_model.add_loss(loss_value)
# Compile without specifying a loss
pairwise_model.compile(optimizer=optimizer_adam)
</code></pre>
<p>When I train the model using a batch_generator and pairwise_model.fit() the history contains exactly one loss argument for the combined loss_value. For debugging purposes I'd like to monitor every part of that loss function individually (i.e .quantization, classification and pairwise distance loss), but I can't figure out how.</p>
<p>I tried implementing a callback using K.eval() or K.print_tensor() to retrieve the values during training, but that didn't work. I also wasn't able to add multiple loss metrics using the add_loss function.</p>
<p>Is there a way to do this without writing a custom training loop? It feels like there should be. Any help is greatly appreciated.</p>
<p><strong>__________________________________________________</strong></p>
<p><strong>EDIT:</strong></p>
<p>Following the idea from Dr. Snoopy, here is the code that ended up working for me:</p>
<pre><code>...
different_class_loss = K.log(1 + K.exp(-1*dissimilarity + margin))
pair_loss = same_class * same_class_loss + (1-same_class) * different_class_loss
loss_value = lambda_pair * pair_loss + lambda_classif * classif_loss + lambda_quant_binary * quantization_loss
# Add loss to model
pairwise_model.add_loss(loss_value)
# Add additional losses as metrics
pairwise_model.add_metric(pair_loss, name = "pairwise loss")
pairwise_model.add_metric(quantization_loss, name = "quantization loss")
# Compile without specifying a loss or metrics
pairwise_model.compile(optimizer=optimizer_adam)
</code></pre>
|
<p>You can pass them as metrics, like this:</p>
<pre><code>def pl():
return pair_loss
pairwise_model.compile(optimizer=optimizer_adam, metrics=[pl])
</code></pre>
<p>And you can do similarly for your other loss components. The function might not be needed, you could also try passing <code>pair_loss</code> directly as a metric.</p>
| 1,556
|
implement quantization
|
Java: binary series representation
|
https://stackoverflow.com/questions/19139748/java-binary-series-representation
|
<p>I am doing some experiments on my own about quantization processes etc.</p>
<p>I am trying to implement a binarization process which produces a "binary string" that will afterwards get processed by XOR and some other operations.</p>
<p>Anyhow the binarization is the following, where d and u are some numbers that will get compared:</p>
<pre><code>String b = "";
for (int i = 0; i < u.length; u++) {
if(d[i] < u[i]) {
b[i] += '0';
} else {
b[i] += '1';
}
}
</code></pre>
<p>As described, I currently have a string where each character is 0 or 1.</p>
<p>Using a <code>BigInteger</code> gives me an Object where I can XOR two values against each other:</p>
<pre><code>BigInteger bi = new BigInteger(b, 2);
(...)
BigInteger result = bi.xor(other_bi);
</code></pre>
<p>Is there another way to achieve this? I haven't found anything, but maybe there is an approach I have missed.</p>
|
<p>The <a href="http://docs.oracle.com/javase/7/docs/api/java/util/BitSet.html" rel="nofollow"><code>BitSet</code></a> class is more appropriate for representing a sequence of bits. To set a bit you would use the <a href="http://docs.oracle.com/javase/7/docs/api/java/util/BitSet.html#set%28int%29" rel="nofollow"><code>BitSet.set</code></a> method.</p>
| 1,557
|
implement quantization
|
Generate the Dominant Colors for an RGB image with XMLHttpRequest
|
https://stackoverflow.com/questions/33312362/generate-the-dominant-colors-for-an-rgb-image-with-xmlhttprequest
|
<p><strong>A Note For Readers: This is a long question, but it needs a background to understand the question asked.</strong></p>
<p>The <a href="https://en.wikipedia.org/wiki/Color_quantization" rel="nofollow noreferrer">color quantization technique</a> is commonly used to get the <em>dominant colors</em> of an image.
One of the well-known libraries that do color quantization is <a href="http://www.leptonica.com/" rel="nofollow noreferrer">Leptonica</a> through the <a href="http://www.leptonica.com/color-quantization.html" rel="nofollow noreferrer">Modified Median Cut Quantization (MMCQ) and octree quantization (OQ)</a>
Github's <a href="https://github.com/lokesh/color-thief" rel="nofollow noreferrer">Color-thief</a> by @lokesh is a very simple implementation in JavaScript of the MMCQ algorithm:</p>
<pre><code>var colorThief = new ColorThief();
colorThief.getColor(sourceImage);
</code></pre>
<p>Technically, the image on a <code><img/></code> HTML element is backed on a <code><canvas/></code> element:</p>
<pre><code>var CanvasImage = function (image) {
this.canvas = document.createElement('canvas');
this.context = this.canvas.getContext('2d');
document.body.appendChild(this.canvas);
this.width = this.canvas.width = image.width;
this.height = this.canvas.height = image.height;
this.context.drawImage(image, 0, 0, this.width, this.height);
};
</code></pre>
<p>And that is the problem with <code>TVML</code>, as we will see later on.</p>
<p>Another implementation I recently came to know was linked on this article <a href="http://javier.io/blog/en/2015/09/30/using-imagemagick-and-kmeans-to-find-dominant-colors-in-images.html" rel="nofollow noreferrer">Using imagemagick, awk and kmeans to find dominant colors in images</a> that links to <a href="http://charlesleifer.com/blog/using-python-to-generate-awesome-linux-desktop-themes/" rel="nofollow noreferrer">Using python to generate awesome linux desktop themes</a>.
The author posted an article about <a href="http://charlesleifer.com/blog/using-python-and-k-means-to-find-the-dominant-colors-in-images/" rel="nofollow noreferrer">Using python and k-means to find the dominant colors in images</a> that was used there (sorry for all those links, but I'm following back my History...).</p>
<p>The author was super productive, and added a JavaScript version too that I'm posting here: <a href="https://gist.github.com/loretoparisi/c147ca437ab9d5e163b7" rel="nofollow noreferrer">Using JavaScript and k-means to find the dominant colors in images</a></p>
<p>In this case, we are generating the dominant colors of an image, not using the MMCQ (or OQ) algorithm, but K-Means.
The problem is that the image must be drawn on a <code>&lt;canvas/&gt;</code> as well:</p>
<pre><code><canvas id="canvas" style="display: none;" width="200" height="200"></canvas>
</code></pre>
<p>and then</p>
<pre><code>function analyze(img_elem) {
var ctx = document.getElementById('canvas').getContext('2d')
, img = new Image();
img.onload = function() {
var results = document.getElementById('results');
results.innerHTML = 'Waiting...';
var colors = process_image(img, ctx)
, p1 = document.getElementById('c1')
, p2 = document.getElementById('c2')
, p3 = document.getElementById('c3');
p1.style.backgroundColor = colors[0];
p2.style.backgroundColor = colors[1];
p3.style.backgroundColor = colors[2];
results.innerHTML = 'Done';
}
img.src = img_elem.src;
}
</code></pre>
<p>This is because the Canvas has a getContext() method, that expose 2D image drawing APIs - see <a href="http://html5doctor.com/an-introduction-to-the-canvas-2d-api/" rel="nofollow noreferrer">An introduction to the Canvas 2D API</a></p>
<p>This context ctx is passed to the image processing function</p>
<pre><code> function process_image(img, ctx) {
var points = [];
ctx.drawImage(img, 0, 0, 200, 200);
data = ctx.getImageData(0, 0, 200, 200).data;
for (var i = 0, l = data.length; i < l; i += 4) {
var r = data[i]
, g = data[i+1]
, b = data[i+2];
points.push([r, g, b]);
}
var results = kmeans(points, 3, 1)
, hex = [];
for (var i = 0; i < results.length; i++) {
hex.push(rgbToHex(results[i][0]));
}
return hex;
}
</code></pre>
<p>So you can draw an image on the Canvas through the Context and get image data:</p>
<pre><code>ctx.drawImage(img, 0, 0, 200, 200);
data = ctx.getImageData(0, 0, 200, 200).data;
</code></pre>
<p>Another nice solution is in CoffeeScript, <a href="https://github.com/dannvix/ColorTunes" rel="nofollow noreferrer">ColorTunes</a>, but this is using a <code>&lt;canvas/&gt;</code> as well:</p>
<pre><code>ColorTunes.getColorMap = function(canvas, sx, sy, w, h, nc) {
var index, indexBase, pdata, pixels, x, y, _i, _j, _ref, _ref1;
if (nc == null) {
nc = 8;
}
pdata = canvas.getContext("2d").getImageData(sx, sy, w, h).data;
pixels = [];
for (y = _i = sy, _ref = sy + h; _i < _ref; y = _i += 1) {
indexBase = y * w * 4;
for (x = _j = sx, _ref1 = sx + w; _j < _ref1; x = _j += 1) {
index = indexBase + (x * 4);
pixels.push([pdata[index], pdata[index + 1], pdata[index + 2]]);
}
}
return (new MMCQ).quantize(pixels, nc);
};
</code></pre>
<p>But, wait, we have no <code><canvas/></code> element in <code>TVML</code>!</p>
<p>Of course, there are native solutions like Objective-C <a href="https://github.com/pixelogik/ColorCube" rel="nofollow noreferrer">ColorCube</a>, <a href="https://github.com/indragiek/DominantColor" rel="nofollow noreferrer">DominantColor</a> - this is using K-means </p>
<p>and the very nice and reusable <a href="https://github.com/panicinc/ColorArt" rel="nofollow noreferrer">ColorArt</a> by @AaronBrethorst from CocoaControls.</p>
<p>Despite the fact that this could be used in a TVML application through a native to JavaScriptCore bridge - see <a href="https://stackoverflow.com/questions/33081565/tvos-tvml-and-objective-c-swift-putting-all-together">How to bridge TVML/JavaScriptCore to UIKit/Objective-C (Swift)?</a></p>
<p>my aim is to make this work completely in <code>TVJS</code> and <code>TVML</code>.</p>
<p>The simplest MMCQ JavaScript implementation does not need a Canvas: see <a href="https://gist.github.com/loretoparisi/fe6e5cf889bb2c8f2099" rel="nofollow noreferrer">Basic Javascript port of the MMCQ (modified median cut quantization)</a> by <a href="https://gist.github.com/nrabinowitz" rel="nofollow noreferrer">Nick Rabinowitz</a>, but needs the RGB array of the image:</p>
<pre><code>var cmap = MMCQ.quantize(pixelArray, colorCount);
</code></pre>
<p>which is normally obtained from the HTML <code>&lt;canvas/&gt;</code>, and that is exactly the problem!</p>
<pre><code>function createPalette(sourceImage, colorCount) {
// Create custom CanvasImage object
var image = new CanvasImage(sourceImage),
imageData = image.getImageData(),
pixels = imageData.data,
pixelCount = image.getPixelCount();
// Store the RGB values in an array format suitable for quantize function
var pixelArray = [];
for (var i = 0, offset, r, g, b, a; i < pixelCount; i++) {
offset = i * 4;
r = pixels[offset + 0];
g = pixels[offset + 1];
b = pixels[offset + 2];
a = pixels[offset + 3];
// If pixel is mostly opaque and not white
if (a >= 125) {
if (!(r > 250 && g > 250 && b > 250)) {
pixelArray.push([r, g, b]);
}
}
}
// Send array to quantize function which clusters values
// using median cut algorithm
var cmap = MMCQ.quantize(pixelArray, colorCount);
var palette = cmap.palette();
// Clean up
image.removeCanvas();
return palette;
}
</code></pre>
<p><strong>[QUESTION]</strong>
How to generate the dominant colors of an RGB image without using the HTML5 <code>&lt;canvas/&gt;</code>, but in pure JavaScript from an image's <code>ByteArray</code> fetched with <code>XMLHttpRequest</code>?</p>
<p><strong>[UPDATE]</strong>
I have posted this question to the <a href="https://github.com/lokesh/color-thief/issues/86" rel="nofollow noreferrer">Color-Thief</a> GitHub repo, adapting the RGB array calculations to the latest codebase.
The solution I tried was this:</p>
<pre><code>ColorThief.prototype.getPaletteNoCanvas = function(sourceImageURL, colorCount, quality, done) {
var xhr = new XMLHttpRequest();
xhr.open('GET', sourceImageURL, true);
xhr.responseType = 'arraybuffer';
xhr.onload = function(e) {
if (this.status == 200) {
var uInt8Array = new Uint8Array(this.response);
var i = uInt8Array.length;
var biStr = new Array(i);
while (i--)
{ biStr[i] = String.fromCharCode(uInt8Array[i]);
}
if (typeof colorCount === 'undefined') {
colorCount = 10;
}
if (typeof quality === 'undefined' || quality < 1) {
quality = 10;
}
var pixels = uInt8Array;
var pixelCount = 152 * 152 * 4 // this should be width*height*4
// Store the RGB values in an array format suitable for quantize function
var pixelArray = [];
for (var i = 0, offset, r, g, b, a; i < pixelCount; i = i + quality) {
offset = i * 4;
r = pixels[offset + 0];
g = pixels[offset + 1];
b = pixels[offset + 2];
a = pixels[offset + 3];
// If pixel is mostly opaque and not white
if (a >= 125) {
if (!(r > 250 && g > 250 && b > 250)) {
pixelArray.push([r, g, b]);
}
}
}
// Send array to quantize function which clusters values
// using median cut algorithm
var cmap = MMCQ.quantize(pixelArray, colorCount);
var palette = cmap? cmap.palette() : null;
done.apply(this,[ palette ])
} // 200
};
xhr.send();
}
</code></pre>
<p>but it does not give back the right RGB color array.</p>
<p><strong>[UPDATE]</strong>
Thanks to all the suggestions, I got it working. A full example is now available on <a href="https://github.com/loretoparisi/dominant-colors-xmlhttprequest-example" rel="nofollow noreferrer">Github</a>.</p>
|
<p>The canvas element is being used as a convenient way to decode the image into an RGBA array. You can also use pure JavaScript libraries to do the image decoding.</p>
<p><a href="https://github.com/notmasteryet/jpgjs" rel="nofollow">jpgjs</a> is a JPEG decoder and <a href="https://github.com/arian/pngjs" rel="nofollow">pngjs</a> is a PNG decoder. It looks like the JPEG decoder will work with TVJS as is. The PNG decoder, however, looks like it's made to work in a Node or web browser environment, so you might have to tweak that one a bit.</p>
| 1,558
|
implement quantization
|
Keras manual quantization
|
https://stackoverflow.com/questions/54525193/keras-manual-quantization
|
<p>I've recently inherited a Keras-based network from a colleague, and I want to quantize it down to 8-bit fixed point.</p>
<p>Unfortunately I'm not overly familiar with keras itself.</p>
<p>I've been looking around, and there don't seem to be any easy methods to do this without converting to something like tf.lite, and even that seems to have problems (please correct me if I'm missing any great solutions here).</p>
<p>So I'm wondering if I can do it manually. I understand the formula and don't think I'd have any major trouble implementing it, but I'm not sure how Keras handles weights under the hood.
If I were to just manually map a weight from 32 to 8 bits, would Keras be fine with that, or would it do something annoying like append 0s to make it some internally expected length for a weight?</p>
<p>Any help or pointers in this area would be greatly appreciated.</p>
|
<p>Perhaps you could use the <a href="https://github.com/transcranial/keras-js/blob/master/python/encoder.py" rel="nofollow noreferrer"><code>encoder.py</code></a> converter script with the <code>-q</code> quantization flag:</p>
<p><a href="https://transcranial.github.io/keras-js-docs/conversion/#quantization" rel="nofollow noreferrer">https://transcranial.github.io/keras-js-docs/conversion/#quantization</a></p>
| 1,559
|
implement quantization
|
Method to quantize a range of values to keep precision when significant outliers are present in the data
|
https://stackoverflow.com/questions/72894055/method-to-quantize-a-range-of-values-to-keep-precision-when-signficant-outliers
|
<p>Could you tell me please if there is a suitable quantizing method in the following case (preferrably implemented in python)?</p>
<p>There is an input range where the majority of values are within ±2 std of the mean, while some huge outliers are present.
E.g. [1, 2, 3, 4, 5, 1000]
Quantizing it to output range of e.g. 0-255 would result in loss of precision because of huge outlier 1000 (1, 2, 3, 4, 5 will all become 0).</p>
<p>However, it is important to keep precision for those values which are within several std from mean.</p>
<p>Throwing away the outliers or replacing them with NaN is not acceptable. They should be kept in some form. Roughly, using the example above, the output of quantization should be something like [1, 2, 3, 4, 5, 255].</p>
<p>Thank you very much for any input.</p>
|
<p>I can think of 2 answers to your question.</p>
<ol>
<li>You write "huge outlier". The term outlier suggest that this number does not really fit the data. If you really have evidence that this observation is not representative (say because the measurement device was broken temporarily), then I would omit this observation.</li>
<li>Alternatively, such high values might occur because this variable can truly span a large range of outcomes (e.g. an income variable with Elon Musk in the sample). In this situation I would consider a transformation of the input, say take the logarithm of the numbers first. This would transform your list [1,2,3,4,5,1000] to [0,0.69,1.10,1.39,1.61,6.91]. These values are already closer together.</li>
</ol>
<p>However, regardless of choices 1 or 2, it is probably best to anyways compare the outcomes with and without this outlier. You really want to avoid your conclusions being driven by this single observation.</p>
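<p>A minimal sketch of option 2 (log-transform before scaling), using the numbers from the question; <code>levels=256</code> gives the 0-255 output range asked for:</p>

```python
import numpy as np

def quantize_log(values, levels=256):
    """Log-transform, then linearly map onto 0..levels-1.

    The log compresses the huge outlier while preserving
    resolution among the small values near the bulk of the data.
    """
    x = np.log1p(np.asarray(values, dtype=float))  # log1p is safe at 0
    lo, hi = x.min(), x.max()
    if hi == lo:                                   # all values equal
        return np.zeros(len(x), dtype=int)
    return np.round((x - lo) / (hi - lo) * (levels - 1)).astype(int)

print(quantize_log([1, 2, 3, 4, 5, 1000]).tolist())  # [0, 17, 28, 38, 45, 255]
```

<p>The small values now occupy distinct codes instead of all collapsing to 0, and the outlier is kept as 255.</p>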
| 1,560
|
implement quantization
|
Quantized dct not yielding runs of 0
|
https://stackoverflow.com/questions/35514018/quantized-dct-not-yielding-runs-of-0
|
<p>For quick reference here is a github <a href="https://github.com/DimitryRakhlei/MediaCompression" rel="nofollow">link</a>.</p>
<p>I am attempting to implement a simple JPEG compression.
I will provide some of the more notable methods.</p>
<p>The issue is that I am not seeing any notable runs of 0 so my RLE encoding does nothing to compress the image.</p>
<p><strong>Code:</strong></p>
<p>RGB to YCbCr conversion:</p>
<pre><code>private static Ycbcr AlternativeRgbtoYCbCr ( Rgb rgb ) {
var y = 16f + (65.481f*rgb.R + 128.553f*rgb.G + 24.966f*rgb.B);
var cr = 128f + (-37.797f * rgb.R - 74.203f * rgb.G + 112.0f * rgb.B);
var cb = 128f + (112.0f*rgb.R - 93.786f*rgb.G - 18.214f*rgb.B);
return new Ycbcr(y, cb, cr);
}
</code></pre>
<p>Splitting into ColorSpaces:</p>
<pre><code>/// <summary>
/// Loops through all the pixels and converts them to ycbcr
/// </summary>
public void SplitBytesIntoColorSpaces() {
if (_imageIsSplit) return;
for (var x = 0; x < LeftImageBitmap.Width; x++) {
//var innerlist = new List<Ycbcr>();
var innerY = new List<float>();
var innerCr = new List<float>();
var innerCb = new List<float>();
for (var y = 0; y < LeftImageBitmap.Height; y++) {
var color = ToRgb(LeftImageBitmap.GetPixel(x, y));
//innerlist.Add(RgbtoYCbCr(color));
innerY.Add(AlternativeRgbtoYCbCr(color).Y);
innerCr.Add(AlternativeRgbtoYCbCr(color).Cr);
innerCb.Add(AlternativeRgbtoYCbCr(color).Cb);
}
//ChromeList.Add(innerlist);
LumList.Add(innerY);
CrList.Add(innerCb);
CbList.Add(innerCr);
}
_imageIsSplit = true;
}
</code></pre>
<p>Sub-Sampling:</p>
<pre><code>/// <summary>
/// 4:2:0 subsampling
/// </summary>
private void SubSample420() {
var tempCrArray = new List<List<float>>(CrList.Count/2);
var tempCbArray = new List<List<float>>(CbList.Count/2);
for (var x = 0; x < CrList.Count/2; x++) {
var rowCr = new List<float>();
var rowCb = new List<float>();
for (var y = 0; y < CrList.Count/2; y++) {
rowCb.Add(0);
rowCr.Add(0);
}
tempCrArray.Add(rowCr);
tempCbArray.Add(rowCb);
}
for (int x = 0, x2 = 0; x < CrList.Count; x += 2, x2 ++) {
if (x2 >= tempCrArray.Count) continue;
var crrow = tempCrArray[x2];
var cbrow = tempCbArray[x2];
for (int y = 0, y2 = 0; y < CrList[x].Count; y += 2, y2++) {
if (y2 >= crrow.Count) continue;
crrow[y2] = CrList[x][y];
cbrow[y2] = CbList[x][y];
}
}
CrList = new List<List<float>>(tempCbArray);
CbList = new List<List<float>>(tempCrArray);
_subSampled420 = true;
}
</code></pre>
<p>Now for dct processing:</p>
<p>Step function to step through arrays 8x8 at a time.</p>
<pre><code>/// <summary>
/// Steps through a 2d array 8 pixels at a time.
/// When x reaches the end, x is set to 0 and y
/// is incremented by 8.
/// </summary>
/// <param name="x">ref position to the loop's x</param>
/// <param name="y">ref position to the loop's y</param>
/// <param name="xl">maximum x position</param>
/// <param name="yl">maximum y position</param>
private static void Step(ref int x, ref int y, int xl, int yl) {
if (x + 8 < xl) {
x += 8;
}
else if (x + 8 >= xl) {
x = 0;
y += 8;
}
}
</code></pre>
<p>DCT:</p>
<pre><code>//DCT function that creates a task and runs dct on it
public double[,] Go(double[,] d) {
_data = d;
var task = Task<double[,]>.Factory.StartNew(() => {
var output = new double[8, 8];
for (var x = 0; x < _data.GetLength(0); x++)
for (var y = 0; y < _data.GetLength(1); y++) {
output[x, y] = GetValueForward(x, y);
}
return output;
});
return task.Result;
}
//gets a singular value for dct
private double GetValueForward(int u, int v) {
double freq = 0;
for (var i = 0; i < 8; i++) {
for (var j = 0; j < 8; j++) {
freq +=
Math.Cos((2*i + 1)*u*Pi/16)*
Math.Cos((2*j + 1)*v*Pi/16)*
_data[i, j];
}
}
freq *= 2*C(u)*C(v)/Math.Sqrt(8*8);
return freq;
}
</code></pre>
<p>Function that does all of the processing:</p>
<pre><code>/// <summary>
/// DctImageProcessor's main method.
/// This method will run on 8x8 chunks
/// and process them into double arrays by channel.
/// </summary>
public void Process() {
//get data manager's ycbcr chromelist
_luminanceDatalist = Manager.LumList;
_crDatalist = Manager.CrList;
_cbDatalist = Manager.CbList;
_lumOutList = new List<double[,]>();
_cbOutList = new List<double[,]>();
_crOutList = new List<double[,]>();
var lumx = _luminanceDatalist[0].Count;
var lumy = _luminanceDatalist.Count;
var channelx = _crDatalist[0].Count;
var channely = _crDatalist.Count;
//SetArray(_luminanceDatalist);
//dct
var dct = new Dct();
//quantizer
var q = new Quantizer();
//loop and Step() + store values into _lumOutList
for (int i = 0, j = 0; i < lumx && j < lumy; Step(ref i, ref j, lumx, lumy)) {
var lumarr = ForDctLum(i, j);
var lumdct = dct.Go(lumarr);
q.QuantizeLuminance(ref lumdct);
_lumOutList.Add(lumdct);
}
for ( int i = 0, j = 0; i < channelx && j < channely; Step(ref i, ref j, channelx, channely) ) {
var crArr = ForDctCr(i, j);
var cbArr = ForDctCb(i, j);
var crdct = dct.Go(crArr);
var cbdct = dct.Go(cbArr);
q.QuantizeChrominance(ref crdct);
q.QuantizeChrominance(ref cbdct);
_crOutList.Add(crdct);
_cbOutList.Add(cbdct);
}
//rle
var rleOutputs = _lumOutList.Select(Rle.ZigZag).ToList();
rleOutputs.AddRange(_crOutList.Select(Rle.ZigZag));
rleOutputs.AddRange(_cbOutList.Select(Rle.ZigZag));
var encoded = Rle.Encode(rleOutputs);
File.WriteAllBytes("./output.dct", encoded);
}
</code></pre>
<p>RLE:
Zigzag reading:</p>
<pre><code>public static byte[] ZigZag(double[,] input) {
var result = new double[8, 8];
var output = new byte[64];
int i = 0, j = 0;
var d = -1; // -1 for top-right move, +1 for bottom-left move
int start = 0, end = 8*8 - 1;
do {
output[start++] = (byte) input[i, j];
output[end--] = (byte) input[8 - i - 1, 8 - j - 1];
i += d;
j -= d;
if (i < 0) {
i++;
d = -d; // top reached, reverse
}
else if (j < 0) {
j++;
d = -d; // left reached, reverse
}
} while (start < end);
if (start == end)
result[i, j] = start;
return output;
}
</code></pre>
<p>Run Length Encoding.</p>
<pre><code>public static byte[] Encode(List<byte[]> list) {
var ret = new List<byte>();
var prev = list[0][0];
byte count = 0;
const byte delim = 255;
foreach (var val in list.SelectMany(arr => arr)) {
if (val != prev) {
if (count > 1) {
ret.Add(delim);
ret.Add(count);
ret.Add(prev);
}
else {
ret.Add(prev);
}
prev = val;
count = 1;
}
else {
count ++;
prev = val;
}
}
return ret.ToArray();
}
</code></pre>
<p>I know this is a pretty long post but I have not been able to get this resolved on my own. The book we are using isn't of much help so I am left to just randomly writing code until something works.</p>
<p>Right now I am able to work with images that can be split evenly into 8x8 blocks like the lena.tif image on github.</p>
<p>The issue comes when I run the pipeline RGB > YCbCr > SubSample > DCT > Quantize.
At this point my values do not have the runs of 0s they should.</p>
<p>All suggestions appreciated.</p>
<p>Edit:</p>
<p>Quantization Tables:</p>
<pre><code>private double[,] ChrominanceQuantizationMatrix { get; } = new double[8, 8] {
{17, 18, 24, 47, 99, 99, 99, 99},
{18, 21, 26, 66, 99, 99, 99, 99},
{24, 26, 56, 99, 99, 99, 99, 99},
{47, 66, 99, 99, 99, 99, 99, 99},
{99, 99, 99, 99, 99, 99, 99, 99},
{99, 99, 99, 99, 99, 99, 99, 99},
{99, 99, 99, 99, 99, 99, 99, 99},
{99, 99, 99, 99, 99, 99, 99, 99}
};
private double[,] LuminanceQuantizationMatrix { get; } = new double[8, 8] {
{16, 11, 10, 16, 24, 40, 51, 61},
{12, 12, 14, 19, 26, 58, 60, 55},
{14, 13, 16, 24, 40, 57, 69, 56},
{14, 17, 22, 29, 51, 87, 80, 62},
{18, 22, 37, 56, 68, 109, 103, 77},
{24, 35, 55, 64, 81, 104, 113, 92},
{49, 64, 78, 87, 103, 121, 120, 101},
{72, 92, 95, 98, 112, 100, 103, 99}
};
</code></pre>
<p>Edit 2: </p>
<p>Method to quantize:</p>
<pre><code>public void QuantizeLuminance(ref double[,] data) {
for (var i = 0; i < 8; i++) {
for (var j = 0; j < 8; j++) {
data[i, j] /= LuminanceQuantizationMatrix[i, j];
}
}
}
</code></pre>
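<p>For reference, standard JPEG quantization rounds the quotient to the nearest integer rather than keeping the fractional result; the rounding is what collapses small high-frequency DCT coefficients to exactly 0 and produces the zero runs that RLE then compresses, which may be related to the missing runs here. A minimal sketch of that step (illustrative Python, not the asker's C#):</p>

```python
import numpy as np

def quantize_block(dct_block, q_table):
    """Standard JPEG quantization: divide by the table, then round.

    Coefficients smaller than about half the quantizer value
    become exactly 0, which is where the long zero runs come from.
    """
    return np.round(dct_block / q_table).astype(int)

# A coefficient of 10 against a quantizer of 99 rounds to 0:
print(quantize_block(np.array([[150.0, 10.0]]), np.array([[16.0, 99.0]])))  # [[9 0]]
```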
| 1,561
|
|
implement quantization
|
TensorRT Post Training Quantization (INT8) generates incorrect results
|
https://stackoverflow.com/questions/76715132/tensorrt-post-training-quantization-int8-generates-incorrect-results
|
<p>This should really be a question for the NVIDIA team, but they are notoriously bad at providing support, so I am hoping instead that someone intimately familiar with the TensorRT C++ API can help me out.</p>
<p>I am unable to put a full minimum reproducible example in this code here, as there is a lot of boilerplate code involved, especially since I want to demonstrate both the FP16 and INT8 calibration cases. <strong>But wait</strong>, before you close the question, I do have a minimal reproducible example on GitHub <a href="https://github.com/cyrusbehr/tensorrt-cpp-api" rel="nofollow noreferrer">here</a>. It's a project (150 stars and counting) which has the intention of teaching and helping others to use the TensorRT API (so by helping me solve this, you will actually help countless others too - win win!). The project has a very thorough readme file for how to get started, and the code is well documented (and it's only 3 code files in the <code>/src</code> dir).</p>
<p>Anyways, I am successfully able to run FP32 and FP16 inference (in the repo I use the arcface face recognition model). However, when I try doing INT8 quantization, that's where things fall apart.</p>
<p>I define a class which extends <code>nvinfer1::IInt8EntropyCalibrator2</code> called <code>Int8EntropyCalibrator2</code>. The class is used for reading calibration data into GPU memory and providing it to TensorRT via the <code>getBatch</code> method:</p>
<pre class="lang-cpp prettyprint-override"><code>// Class used for int8 calibration
class Int8EntropyCalibrator2 : public nvinfer1::IInt8EntropyCalibrator2 {
public:
Int8EntropyCalibrator2(int32_t batchSize, int32_t inputW, int32_t inputH, const std::string& calibDataDirPath, const std::string& calibTableName, const std::string& inputBlobName,
const std::array<float, 3>& subVals = {0.f, 0.f, 0.f},const std::array<float, 3>& divVals = {1.f, 1.f, 1.f}, bool normalize = true, bool readCache = true);
virtual ~Int8EntropyCalibrator2();
// Abstract base class methods which must be implemented
int32_t getBatchSize () const noexcept override;
bool getBatch (void *bindings[], char const *names[], int32_t nbBindings) noexcept override;
void const * readCalibrationCache (std::size_t &length) noexcept override;
void writeCalibrationCache (void const *ptr, std::size_t length) noexcept override;
private:
const int32_t m_batchSize;
const int32_t m_inputW;
const int32_t m_inputH;
int32_t m_imgIdx;
std::vector<std::string> m_imgPaths;
size_t m_inputCount;
const std::string m_calibTableName;
const std::string m_inputBlobName;
const std::array<float, 3> m_subVals;
const std::array<float, 3> m_divVals;
const bool m_normalize;
const bool m_readCache;
void* m_deviceInput;
std::vector<char> m_calibCache;
};
</code></pre>
<p>The implementation for said class is as follows:</p>
<pre class="lang-cpp prettyprint-override"><code>Int8EntropyCalibrator2::Int8EntropyCalibrator2(int32_t batchSize, int32_t inputW, int32_t inputH,
const std::string &calibDataDirPath,
const std::string &calibTableName,
const std::string &inputBlobName,
const std::array<float, 3>& subVals,
const std::array<float, 3>& divVals,
bool normalize,
bool readCache)
: m_batchSize(batchSize)
, m_inputW(inputW)
, m_inputH(inputH)
, m_imgIdx(0)
, m_calibTableName(calibTableName)
, m_inputBlobName(inputBlobName)
, m_subVals(subVals)
, m_divVals(divVals)
, m_normalize(normalize)
, m_readCache(readCache) {
// Allocate GPU memory to hold the entire batch
m_inputCount = 3 * inputW * inputH * batchSize;
checkCudaErrorCode(cudaMalloc(&m_deviceInput, m_inputCount * sizeof(float)));
// Read the name of all the files in the specified directory.
if (!doesFileExist(calibDataDirPath)) {
throw std::runtime_error("Error, directory at provided path does not exist: " + calibDataDirPath);
}
m_imgPaths = getFilesInDirectory(calibDataDirPath);
if (m_imgPaths.size() < static_cast<size_t>(batchSize)) {
throw std::runtime_error("There are fewer calibration images than the specified batch size!");
}
// Randomize the calibration data
auto rd = std::random_device {};
auto rng = std::default_random_engine { rd() };
std::shuffle(std::begin(m_imgPaths), std::end(m_imgPaths), rng);
}
int32_t Int8EntropyCalibrator2::getBatchSize() const noexcept {
// Return the batch size
return m_batchSize;
}
bool Int8EntropyCalibrator2::getBatch(void **bindings, const char **names, int32_t nbBindings) noexcept {
// This method will read a batch of images into GPU memory, and place the pointer to the GPU memory in the bindings variable.
if (m_imgIdx + m_batchSize > static_cast<int>(m_imgPaths.size())) {
// There are not enough images left to satisfy an entire batch
return false;
}
// Read the calibration images into memory for the current batch
std::vector<cv::cuda::GpuMat> inputImgs;
for (int i = m_imgIdx; i < m_imgIdx + m_batchSize; i++) {
std::cout << "Reading image " << i << ": " << m_imgPaths[i] << std::endl;
auto cpuImg = cv::imread(m_imgPaths[i]);
if (cpuImg.empty()){
std::cout << "Fatal error: Unable to read image at path: " << m_imgPaths[i] << std::endl;
return false;
}
cv::cuda::GpuMat gpuImg;
gpuImg.upload(cpuImg);
cv::cuda::cvtColor(gpuImg, gpuImg, cv::COLOR_BGR2RGB);
// TODO: Define any preprocessing code here, such as resizing
// In this example, we will assume the calibration images are already of the correct size
inputImgs.emplace_back(std::move(gpuImg));
}
// Convert the batch from NHWC to NCHW
    // Also apply normalization, scaling, and mean subtraction
auto mfloat = Engine::blobFromGpuMats(inputImgs, m_subVals, m_divVals, m_normalize);
auto *dataPointer = mfloat.ptr<void>();
// Copy the GPU buffer to member variable so that it persists
checkCudaErrorCode(cudaMemcpyAsync(m_deviceInput, dataPointer, m_inputCount * sizeof(float), cudaMemcpyDeviceToDevice));
m_imgIdx+= m_batchSize;
if (std::string(names[0]) != m_inputBlobName) {
std::cout << "Error: Incorrect input name provided!" << std::endl;
return false;
}
bindings[0] = m_deviceInput;
return true;
}
void const *Int8EntropyCalibrator2::readCalibrationCache(size_t &length) noexcept {
std::cout << "Searching for calibration cache: " << m_calibTableName << std::endl;
m_calibCache.clear();
std::ifstream input(m_calibTableName, std::ios::binary);
input >> std::noskipws;
if (m_readCache && input.good()) {
std::cout << "Reading calibration cache: " << m_calibTableName << std::endl;
std::copy(std::istream_iterator<char>(input), std::istream_iterator<char>(), std::back_inserter(m_calibCache));
}
length = m_calibCache.size();
return length ? m_calibCache.data() : nullptr;
}
void Int8EntropyCalibrator2::writeCalibrationCache(const void *ptr, std::size_t length) noexcept {
std::cout << "Writing calib cache: " << m_calibTableName << " Size: " << length << " bytes" << std::endl;
std::ofstream output(m_calibTableName, std::ios::binary);
output.write(reinterpret_cast<const char*>(ptr), length);
}
Int8EntropyCalibrator2::~Int8EntropyCalibrator2() {
checkCudaErrorCode(cudaFree(m_deviceInput));
};
</code></pre>
<p>Once again, I understand the example above is not complete (ex. the definition of <code>checkCudaErrorCode</code> or <code>blobFromGpuMats</code> are not shown) but again please I plead you to look at the GitHub repo before dismissing this question. The implementation above is <a href="https://github.com/cyrusbehr/tensorrt-cpp-api/blob/561f30676bba94b519c135ca42d31f9f49523e86/src/engine.cpp#L512-L623" rel="nofollow noreferrer">here</a> (Note you will need to checkout the <code>int8</code> branch.</p>
<p>What I find is that the feature vector produced when running <code>int8</code> inference is very different from that generated using <code>FP16</code>.</p>
<p>Here are the steps to reproduce for yourself:</p>
<ol>
<li><p>Navigate to the <a href="https://github.com/cyrusbehr/tensorrt-cpp-api/tree/main" rel="nofollow noreferrer">GitHub repo</a>, clone recursively, checkout <code>int8</code> branch , install dependencies listed in readme, compile.</p>
</li>
<li><p>Follow the readme file <a href="https://github.com/cyrusbehr/tensorrt-cpp-api/tree/main#sanity-check" rel="nofollow noreferrer">Sanity check</a> section to obtain the arcface model.</p>
</li>
<li><p>Run the executable and provide path to the arcface model. It should generate the following feature vector. This is the FP16 feature vector.</p>
</li>
</ol>
<pre><code>-0.050293 -0.0993042 0.181152 0.144531 0.222656 0.217529 -0.290283 -0.0638428 0.234375 -0.176636 ...
</code></pre>
<ol start="4">
<li><p>Navigate to <a href="https://github.com/cyrusbehr/tensorrt-cpp-api/blob/561f30676bba94b519c135ca42d31f9f49523e86/src/main.cpp#L51" rel="nofollow noreferrer">this</a> line, change it from <code>Precision::FP16</code> to <code>Precision::INT8</code>.</p>
</li>
<li><p>Download and extract calibration data, available <a href="https://drive.google.com/file/d/1y0hIQW_iUQEQ2JOmhvkX625UP3JObgV0/view?usp=sharing" rel="nofollow noreferrer">here</a>.</p>
</li>
<li><p>Provide path to calibration data to the <code>Engine::build</code> method <a href="https://github.com/cyrusbehr/tensorrt-cpp-api/blob/561f30676bba94b519c135ca42d31f9f49523e86/src/main.cpp#L70" rel="nofollow noreferrer">here</a>.</p>
</li>
</ol>
<ol start="7"><li><p>Recompile and run. The resulting feature vector this time is:</p></li></ol>
<pre><code>-0.175003 -0.00527599 -0.128431 -0.147636 0.278055 0.0584708 -0.083089 -0.0100119 -0.185134 0.0172769 ...
</code></pre>
<p>As can be seen, the int8 feature vector is quite different from the FP16 feature vector.
Any thoughts on where I'm going wrong? There doesn't seem to be much documentation or sample code on int8 calibration.</p>
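<p>To make "quite different" concrete (this comparison is not part of the original post), the cosine similarity between the two truncated vectors can be computed directly; the values below are copied from the outputs above:</p>

```python
import math

# first ten components of the FP16 and INT8 feature vectors shown above
fp16 = [-0.050293, -0.0993042, 0.181152, 0.144531, 0.222656,
        0.217529, -0.290283, -0.0638428, 0.234375, -0.176636]
int8 = [-0.175003, -0.00527599, -0.128431, -0.147636, 0.278055,
        0.0584708, -0.083089, -0.0100119, -0.185134, 0.0172769]

def norm(v):
    return math.sqrt(sum(a * a for a in v))

dot = sum(a * b for a, b in zip(fp16, int8))
cos = dot / (norm(fp16) * norm(int8))
# well-calibrated int8 embeddings usually stay close to cos = 1.0;
# here the similarity is far lower, confirming the calibration went wrong
print(f"cosine similarity: {cos:.3f}")
```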
| 1,562
|
|
implement quantization
|
Tensorflow Quantization - Failed to parse the model: pybind11::init(): factory function returned nullptr
|
https://stackoverflow.com/questions/66731194/tensorflow-quantization-failed-to-parse-the-model-pybind11init-factory-f
|
<p>I'm working on a TensorFlow model to be deployed on an embedded system. For this purpose, I need to quantize the model to int8.
The model is composed of three distinct models:</p>
<ol>
<li>CNN as a feature extractor</li>
<li>TCN for temporal prediction</li>
<li>FC/Dense as the final classifier.</li>
</ol>
<p>I implemented the TCN starting from <a href="https://medium.com/the-artificial-impostor/notes-understanding-tensorflow-part-3-7f6633fcc7c7" rel="nofollow noreferrer">this post</a> with some modifications. In essence, the TCN is just a set of 1D convolutions (with some 0-padding) plus an add operation.</p>
<pre class="lang-py prettyprint-override"><code>## Define TCN newer
tcn_input = tf.keras.Input(shape=tf.keras.backend.int_shape(glue)[1:])
# first causal conv for channel adaptation
k=1; d=1; padding = (k - 1) * d
# tcn_input_p = tf.pad(tcn_input, tf.constant([(0,0), (1,0), (0,0)]) * padding)
temp_block_input = tf.keras.layers.Conv1D(32,k, padding='valid', data_format='channels_last', name='adapt_conv')(tcn_input)
# TEMPORAL BLOCK 1
k=2; d=1; padding = (k - 1) * d
# temp_block_input_p = tf.pad(temp_block_input, tf.constant([(0,0), (1,0), (0,0)]) * padding)
temp_block_input_p = tf.keras.layers.ZeroPadding1D((padding, 0))(temp_block_input)
x = tf.keras.layers.Conv1D(32,k, padding='valid', data_format='channels_last', dilation_rate=d, activation='relu', name='conv1')(temp_block_input_p)
temp_block_input = tf.keras.layers.Add()([temp_block_input, x])
# TEMPORAL BLOCK 2
k=2; d=2; padding = (k - 1) * d
# temp_block_input_p = tf.pad(temp_block_input, tf.constant([(0,0), (1,0), (0,0)]) * padding)
temp_block_input_p = tf.keras.layers.ZeroPadding1D((padding, 0))(temp_block_input)
x = tf.keras.layers.Conv1D(32,k, padding='valid', data_format='channels_last', dilation_rate=d, activation='relu', name='conv2')(temp_block_input_p)
temp_block_input = tf.keras.layers.Add()([temp_block_input, x])
# TEMPORAL BLOCK 3
k=2; d=4; padding = (k - 1) * d
# temp_block_input_p = tf.pad(temp_block_input, tf.constant([(0,0), (1,0), (0,0)]) * padding)
temp_block_input_p = tf.keras.layers.ZeroPadding1D((padding, 0))(temp_block_input)
x = tf.keras.layers.Conv1D(32,k, padding='valid', data_format='channels_last', dilation_rate=d, activation='relu', name='conv3')(temp_block_input_p)
x = tf.keras.layers.Add()([temp_block_input, x])
tcn = tf.keras.Model(tcn_input, x, name='tcn')
tcn.summary()
</code></pre>
<p>I try to quantize the TCN with the following code (which works for other models, e.g. the CNN):</p>
<pre class="lang-py prettyprint-override"><code>converter = tf.lite.TFLiteConverter.from_keras_model(tcn)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
def representative_dataset(): # generate the inputs
for sample in x_train:
yield [cnn(i) for i in sample]
converter.representative_dataset = representative_dataset
quant_model = converter.convert()
with open(os.path.join('models','tcn_q.bin'), 'wb') as f:
f.write(quant_model)
</code></pre>
<p>And I get the error below. I also unsuccessfully tried the following:</p>
<ul>
<li>Use the format saved_model and then <code>tf.lite.TFLiteConverter.from_saved_model(path)</code></li>
<li>use <code>tf.Add</code> and <code>tf.pad</code> instead of the keras API</li>
<li>Remove the Add operation to make the model sequential</li>
</ul>
<pre><code>Failed to parse the model: pybind11::init(): factory function returned nullptr.
</code></pre>
<p>I have not found a solution so far, but I believe it should be possible to quantize this network, as the operations I use are basic and should be supported.
I can also use a workaround if anything comes to mind, but I'd like to understand which part is creating the issue.</p>
<p>As a side note, I also inspected the network with <a href="https://netron.app" rel="nofollow noreferrer">netron.app</a>, and it seems the 1D convolutions are transformed into a 2D convolution using some additional Reshape, ExpandDims and BatchToSpace layers. I'm not sure if this might be an issue though.</p>
<p><a href="https://i.sstatic.net/cWNW1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cWNW1.png" alt="TCN without the add layers" /></a></p>
|
<p>As suggested by <a href="https://stackoverflow.com/users/11843861/jae-sung-chung">@JaesungChung</a>, the problem seems to be solved using tf-nightly (I tested on 2.5.0-dev20210325).</p>
<p>It's possible to obtain the same effect in 2.4.0 using a workaround and transforming the Conv1D into Conv2D with a width of 1 and using a flat kernel (1, kernel_size).</p>
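<p>To illustrate why that workaround is equivalent (a numpy sketch, not part of the original answer): a Conv1D over a <code>(T, C_in)</code> input produces the same values as a Conv2D over a height-1 input with a flat <code>(1, k)</code> kernel.</p>

```python
import numpy as np

def conv1d(x, w):
    # x: (T, Cin), w: (k, Cin, Cout), 'valid' padding
    T, Cin = x.shape
    k, _, Cout = w.shape
    out = np.empty((T - k + 1, Cout))
    for t in range(T - k + 1):
        out[t] = np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1]))
    return out

def conv2d(x, w):
    # x: (H, W, Cin), w: (kh, kw, Cin, Cout), 'valid' padding
    H, W, Cin = x.shape
    kh, kw, _, Cout = w.shape
    out = np.empty((H - kh + 1, W - kw + 1, Cout))
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            out[i, j] = np.tensordot(x[i:i + kh, j:j + kw], w,
                                     axes=([0, 1, 2], [0, 1, 2]))
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((10, 4))     # (T, Cin)
w = rng.standard_normal((2, 4, 8))   # (k, Cin, Cout)

y1 = conv1d(x, w)                                   # plain Conv1D
y2 = conv2d(x[None, :, :], w[None, :, :, :])[0]     # Conv2D, height 1, (1, k) kernel
assert np.allclose(y1, y2)
```

<p>In Keras terms, this amounts to reshaping the input to <code>(1, T, C)</code> and replacing each <code>Conv1D(f, k)</code> with <code>Conv2D(f, (1, k))</code>.</p>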
| 1,563
|
implement quantization
|
Closest point algorithm | How to improve it?
|
https://stackoverflow.com/questions/53264167/closest-point-algorithm-how-to-improve-it
|
<p>I wrote a k-means clustering algorithm and a color quantization algorithm. They work as expected in terms of results but I want to make them faster. In both implementations I need to solve a problem: there are two arrays of points in a 3D space, then for each point of the first array, you need to find the closest point from the second array. I do it like this:</p>
<pre><code>size_t closest_cluster_index;
double x_dif, y_dif, z_dif;
double old_distance;
double new_distance;
for (auto point = points.begin(); point != points.end(); point++)
{
//FIX
//as suggested by juvian
//K = 1
if (point != points.begin())
{
auto cluster = &(clusters[closest_cluster_index]);
x_dif = cluster->x - point->x;
y_dif = cluster->y - point->y;
z_dif = cluster->z - point->z;
new_distance = x_dif * x_dif + y_dif * y_dif + z_dif * z_dif;
// compare actual (non-squared) distances, since differenceRGB returns one
if (std::sqrt(new_distance) <= std::sqrt(old_distance) - ColorU8::differenceRGB(*(point - 1), *point))
{
old_distance = new_distance;
//do sth with closest_cluster_index;
continue;
}
}
//END OF FIX
old_distance = std::numeric_limits<double>::infinity();
for (auto cluster = clusters.begin(); cluster != clusters.end(); cluster++)
{
x_dif = cluster->x - point->x;
y_dif = cluster->y - point->y;
z_dif = cluster->z - point->z;
new_distance = x_dif * x_dif + y_dif * y_dif + z_dif * z_dif;
if (new_distance < old_distance)
{
old_distance = new_distance;
closest_cluster_index = cluster - clusters.begin();
}
}
//do sth with: closest_cluster_index
}
</code></pre>
<p>How can I improve it?
(I don't want to make it multithreaded or compute it on the GPU.)</p>
|
<p>There are multiple data structures for efficient nearest neighbour queries. For 3d, a <a href="https://en.wikipedia.org/wiki/K-d_tree" rel="nofollow noreferrer">kdtree</a> works really well, and has a complexity of O(log n) for each query on average which would improve your current O(n). </p>
<p>So with this structure you can add all your points from clusters to it, and then for each point in points, you can use the structure to query the closest point. For your particular case, a static kdtree is enough, as you don't need to update points.</p>
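<p>A minimal pure-Python sketch of a static k-d tree nearest-neighbour query (illustrative only, not the answer's code; a tuned library implementation will be much faster):</p>

```python
import random

def build_kdtree(points, depth=0):
    # points: list of 3-tuples; a node is (point, left subtree, right subtree)
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest(node, target, depth=0, best=None):
    # returns (closest point, squared distance)
    if node is None:
        return best
    point, left, right = node
    d = sq_dist(point, target)
    if best is None or d < best[1]:
        best = (point, d)
    axis = depth % 3
    diff = target[axis] - point[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    best = nearest(near, target, depth + 1, best)
    if diff * diff < best[1]:  # search sphere crosses the splitting plane
        best = nearest(far, target, depth + 1, best)
    return best

random.seed(0)
clusters = [tuple(random.uniform(0, 255) for _ in range(3)) for _ in range(64)]
tree = build_kdtree(clusters)
```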
<p><strong>Another approach</strong>: </p>
<p>We can try to risk doing extra computations on some points in exchange for fewer on others. This method should work well with the following assumptions:</p>
<ul>
<li>The distance between a cluster with another is far</li>
<li>The distance between a point and the adjacent point is low</li>
</ul>
<p>I think these apply to your case because your clusters are few colors and your points come from a real image, which tends to have similar colors between adjacent pixels.</p>
<p>For each point, create a heap. Instead of storing the closest cluster, store in the <a href="https://stackoverflow.com/questions/5380568/algorithm-to-find-k-smallest-numbers-in-array-of-n-items">max heap</a> the closest k clusters. When you move to the next point, we can use this information. Let's call this point P and its kth closest cluster C.</p>
<p>Now for a new point P2, before comparing to all clusters we will check if the closest cluster to P2 is in our heap. This can only be true if the distance between any cluster from the heap and P2 is <= distance(P, C) - distance(P, P2). When this holds true, we can check only in our heap instead of all clusters. When it is not true, we compare against all and rebuild our heap and P will be P2.</p>
<p>You will need to try out different values of k to see if it improves. For the case of K = 2, might be worth avoiding the added complexity of a heap and just use variables.</p>
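<p>The per-point heap of the k closest clusters described above can be sketched like this (a Python illustration, not the answer's code):</p>

```python
import heapq

def k_closest_clusters(point, clusters, k):
    # (squared distance, cluster index) pairs for the k nearest clusters,
    # sorted from closest to farthest
    dists = ((sum((a - b) ** 2 for a, b in zip(point, c)), i)
             for i, c in enumerate(clusters))
    return heapq.nsmallest(k, dists)

clusters = [(0, 0, 0), (10, 0, 0), (0, 10, 0)]
print(k_closest_clusters((1, 0, 0), clusters, 2))   # -> [(1, 0), (81, 1)]
```

<p>For the next point, the last (largest) entry of this list plays the role of distance(P, C) in the shortcut test.</p>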
| 1,564
|
implement quantization
|
Rounded division by power of 2
|
https://stackoverflow.com/questions/6135157/rounded-division-by-power-of-2
|
<p>I'm implementing a quantization algorithm from a textbook. I'm at a point where things pretty much work, except I get off-by-one errors when rounding. This is what the textbook has to say about that:</p>
<blockquote>
<p>Rounded division by <code>2^p</code> may be carried out by adding an offset and right-shifting by p bit positions</p>
</blockquote>
<p>Now, I get the bit about the right shift, but what offset are they talking about?</p>
<p>Here's my sample code:</p>
<pre><code>def scale(x, power2=16):
if x < 0:
return -((-x) >> power2)
else:
return x >> power2
def main():
inp = [ 12595827, -330706, 196605, -387168, -274244, 377496, -241980,
-545272, -196605, 24198, 196605, 193584, 104858, 424683,
-40330, 41944 ]
expect = [ 192, -5, 3, -6, -4, 5, -3, -8, -3, 0, 3, 3, 1, 6, 0, 0 ]
actual = map(scale, inp)
for i in range(len(expect)):
if actual[i] == expect[i]:
continue
print 'inp: % 8d expected: % 3d actual: % 3d err: %d' % (inp[i],
expect[i], actual[i], expect[i] - actual[i])
if __name__ == '__main__':
main()
</code></pre>
<p>I'm checking for negative input as bit shifting a negative integer appears to be implementation-dependent.</p>
<p>My output:</p>
<pre><code>inp: 196605 expected: 3 actual: 2 err: 1
inp: -387168 expected: -6 actual: -5 err: -1
inp: -196605 expected: -3 actual: -2 err: -1
inp: 196605 expected: 3 actual: 2 err: 1
inp: 193584 expected: 3 actual: 2 err: 1
</code></pre>
<p>What is the offset that is mentioned in the textbook, and how can I use it to get rid of this error?</p>
|
<p>The shift will truncate: the shift operator simply discards the low-order bits, rounding toward zero. I'm using square brackets to denote the base here:</p>
<pre><code>196605[10] = 101111111111111111[2]
101111111111111111[2] >> 16[10] = 10[2] = 2[10]
</code></pre>
<p>To perform correct rounding you need to add half of your divisor before doing the shift.</p>
<pre><code>101111111111111111[2] + 1000000000000000[2] >> 16[10] = 110111111111111111[2] >> 16[10] = 11[2] = 3[10]
</code></pre>
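<p>In code, the offset is half the divisor, i.e. <code>1 &lt;&lt; (p - 1)</code>. A sketch of the corrected <code>scale</code> function from the question (Python 3, mirroring the question's sign handling):</p>

```python
def rounded_shift(x, p=16):
    # rounded division by 2**p: add half the divisor to the magnitude,
    # then shift right by p
    offset = 1 << (p - 1)
    if x < 0:
        return -((-x + offset) >> p)
    return (x + offset) >> p

# the inputs that were previously off by one now round correctly
for x, want in [(196605, 3), (-387168, -6), (-196605, -3), (193584, 3)]:
    assert rounded_shift(x) == want
```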
| 1,565
|
implement quantization
|
Algorithm Complexity vs Running Time
|
https://stackoverflow.com/questions/37438314/algorithm-complexity-vs-running-time
|
<p>I have an algorithm used for signal quantization. For the algorithm I have an equation to calculate its complexity with different values of parameters. This algorithm is implemented in C. Sometimes, according to the equation, the complexity is lower, yet the running time is higher. I'm not 100% sure about the equation.</p>
<p>My question is: do running time and algorithmic complexity always have a direct relationship? That is, does higher complexity always mean a higher running time, or does it differ from one algorithm to another?</p>
|
<p>Time complexity is more a measure of how time varies with input size than an absolute measure.<br>
(This is an extreme simplification, but it will do for explaining the phenomenon you're seeing.)</p>
<p>If <code>n</code> is your problem size and your actual running time is <code>1000000000 * n</code>, it has linear complexity, while <code>0.000000001*n^2</code> would be quadratic. </p>
<p>If you plot them against each other, you'll see that <code>0.000000001*n^2</code> is smaller than <code>1000000000 * n</code> all the way up to around n = 1e18, despite its "greater complexity".</p>
<p>(<code>0.000000001*n^2 + 1000000000 * n</code> would also be quadratic, but always have worse execution time than both.)</p>
| 1,566
|
implement quantization
|
Matlab : Can SOM and kmeans be applied to binarize time series data?
|
https://stackoverflow.com/questions/40956532/matlab-can-som-and-kmeans-be-applied-to-binarize-time-series-data
|
<p>I found a similar question asked here <a href="https://stackoverflow.com/questions/19128859/determining-cluster-membership-in-som-self-organizing-map-for-time-series-data?rq=1">Determining cluster membership in SOM (Self Organizing Map) for time series data</a></p>
<p>and I want to learn how to apply self organizing map in binarizing or assigning more than 2 kinds of symbols to data.</p>
<p>For example, let <code>data = rand(100,1)</code>. In general, I would be doing <code>data_quantized = 2*(data>=0.5)-1</code> to get a binary-valued transformed series, where the threshold 0.5 is assumed and fixed. It may have been possible to quantize data using more than 2 symbols. Can kmeans or SOM be applied to do this task? What should be the input and output if I were to use SOM in quantizing the data?</p>
<p><code>X = {x_i(t)}</code> for i = 1:N and t = 1:T, where <code>T</code> is the number of time samples and <code>N</code> represents the number of components/variables. The quantized value for any vector x_i is the value of the nearest BMU. The quantization error is the Euclidean norm of the difference between the input vector and the best-matching model. A new time series is then compared/matched using the symbol representation of the time series. Would the BMU be a scalar-valued number or a vector of floating-point numbers? It is very hard to picture what the SOM is doing. </p>
<p>Matlab implementation <a href="https://www.mathworks.com/matlabcentral/fileexchange/39930-self-organizing-map-simple-demonstration" rel="nofollow noreferrer">https://www.mathworks.com/matlabcentral/fileexchange/39930-self-organizing-map-simple-demonstration</a></p>
<p>I cannot understand how to work for time series in quantization. Assuming <code>N = 1</code>, a 1 dimensional array/ vector of elements obtained from a white noise process, how can I quantize / partition this data using self organizing map?</p>
<p><a href="http://www.mathworks.com/help/nnet/ug/cluster-with-self-organizing-map-neural-network.html" rel="nofollow noreferrer">http://www.mathworks.com/help/nnet/ug/cluster-with-self-organizing-map-neural-network.html</a></p>
<p>is provided by Matlab, but it works for N-dimensional data, whereas I have 1-dimensional data containing 1000 data points (t = 1,...,1000).</p>
<p>It would be of immense help if a toy example were provided which explains how a time series can be quantized into multiple levels. Let trainingData = x_i;</p>
<pre><code>T = 1000;
N = 1;
x_i = rand(T,N) ;
</code></pre>
<p>How can I apply the code below of SOM so that the numerical valued data can be represented by symbols such as 1,2,3 i.e clustered using 3 symbols? A data point (scalar valued) can be either represented by symbol 1 or 2 or 3.</p>
<pre><code>function som = SOMSimple(nfeatures, ndim, nepochs, ntrainingvectors, eta0, etadecay, sgm0, sgmdecay, showMode)
%SOMSimple Simple demonstration of a Self-Organizing Map that was proposed by Kohonen.
% sommap = SOMSimple(nfeatures, ndim, nepochs, ntrainingvectors, eta0, neta, sgm0, nsgm, showMode)
% trains a self-organizing map with the following parameters
% nfeatures - dimension size of the training feature vectors
% ndim - width of a square SOM map
% nepochs - number of epochs used for training
% ntrainingvectors - number of training vectors that are randomly generated
% eta0 - initial learning rate
% etadecay - exponential decay rate of the learning rate
% sgm0 - initial variance of a Gaussian function that
% is used to determine the neighbours of the best
% matching unit (BMU)
% sgmdecay - exponential decay rate of the Gaussian variance
% showMode - 0: do not show output,
% 1: show the initially randomly generated SOM map
% and the trained SOM map,
% 2: show the trained SOM map after each update
%
% For example: A demonstration of an SOM map that is trained by RGB values
%
% som = SOMSimple(1,60,10,100,0.1,0.05,20,0.05,2);
% % It uses:
% % 1 : dimensions for training vectors
% % 60x60: neurons
% % 10 : epochs
% % 100 : training vectors
% % 0.1 : initial learning rate
% % 0.05 : exponential decay rate of the learning rate
% % 20 : initial Gaussian variance
% % 0.05 : exponential decay rate of the Gaussian variance
% % 2 : Display the som map after every update
nrows = ndim;
ncols = ndim;
nfeatures = 1;
som = rand(nrows,ncols,nfeatures);
% Generate random training data
x_i = trainingData;
% Generate coordinate system
[x y] = meshgrid(1:ncols,1:nrows);
for t = 1:nepochs
% Compute the learning rate for the current epoch
eta = eta0 * exp(-t*etadecay);
% Compute the variance of the Gaussian (Neighbourhood) function for the current epoch
sgm = sgm0 * exp(-t*sgmdecay);
% Consider the width of the Gaussian function as 3 sigma
width = ceil(sgm*3);
for ntraining = 1:ntrainingvectors
% Get current training vector
trainingVector = trainingData(ntraining,:);
% Compute the Euclidean distance between the training vector and
% each neuron in the SOM map
dist = getEuclideanDistance(trainingVector, som, nrows, ncols, nfeatures);
% Find the best matching unit (bmu)
[~, bmuindex] = min(dist);
% transform the bmu index into 2D
[bmurow bmucol] = ind2sub([nrows ncols],bmuindex);
% Generate a Gaussian function centered on the location of the bmu
g = exp(-(((x - bmucol).^2) + ((y - bmurow).^2)) / (2*sgm*sgm));
% Determine the boundary of the local neighbourhood
fromrow = max(1,bmurow - width);
torow = min(bmurow + width,nrows);
fromcol = max(1,bmucol - width);
tocol = min(bmucol + width,ncols);
% Get the neighbouring neurons and determine the size of the neighbourhood
neighbourNeurons = som(fromrow:torow,fromcol:tocol,:);
sz = size(neighbourNeurons);
% Transform the training vector and the Gaussian function into
% multi-dimensional to facilitate the computation of the neuron weights update
T = reshape(repmat(trainingVector,sz(1)*sz(2),1),sz(1),sz(2),nfeatures);
G = repmat(g(fromrow:torow,fromcol:tocol),[1 1 nfeatures]);
% Update the weights of the neurons that are in the neighbourhood of the bmu
neighbourNeurons = neighbourNeurons + eta .* G .* (T - neighbourNeurons);
% Put the new weights of the BMU neighbouring neurons back to the
% entire SOM map
som(fromrow:torow,fromcol:tocol,:) = neighbourNeurons;
end
end
function ed = getEuclideanDistance(trainingVector, sommap, nrows, ncols, nfeatures)
% Transform the 3D representation of neurons into 2D
neuronList = reshape(sommap,nrows*ncols,nfeatures);
% Initialize Euclidean Distance
ed = 0;
for n = 1:size(neuronList,2)
ed = ed + (trainingVector(n)-neuronList(:,n)).^2;
end
ed = sqrt(ed);
</code></pre>
|
<p>I may be misunderstanding your question, but from what I understand it is really quite straightforward, both with <code>kmeans</code> and with Matlab's own <code>selforgmap</code>. The implementation you have posted for SOMSimple I cannot really comment on.</p>
<p>Let's take your initial example:</p>
<pre><code>rng(1337);
T = 1000;
x_i = rand(1,T); %rowvector for convenience
</code></pre>
<p>Assuming you want to quantize to three symbols, your manual version could be:</p>
<pre><code>nsyms = 3;
symsthresh = [1:-1/nsyms:1/nsyms];
x_i_q = zeros(size(x_i));
for i=1:nsyms
x_i_q(x_i<=symsthresh(i)) = i;
end
</code></pre>
<p>Using Matlab's own <code>selforgmap</code> you can achieve a similar result:</p>
<pre><code>net = selforgmap(nsyms);
net.trainParam.showWindow = false;
net = train(net,x_i);
y = net(x_i);
classes = vec2ind(y);
</code></pre>
<p>Lastly, the same can be done straightforwardly with <code>kmeans</code>:</p>
<pre><code>clusters = kmeans(x_i',nsyms)';
</code></pre>
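<p>For reference, the same k-means quantization can be sketched in Python (a minimal Lloyd's-algorithm implementation, not Matlab's <code>kmeans</code>; the mechanics are the point):</p>

```python
import numpy as np

def kmeans_quantize(x, k, iters=50, seed=1337):
    # plain Lloyd's algorithm on a 1-D signal; returns (symbol per sample, levels)
    rng = np.random.default_rng(seed)
    levels = rng.choice(x, size=k, replace=False)   # init from the data itself
    for _ in range(iters):
        # assign each sample to its nearest level
        symbols = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
        # move each level to the mean of its assigned samples
        for j in range(k):
            if np.any(symbols == j):
                levels[j] = x[symbols == j].mean()
    return symbols, levels

x = np.random.default_rng(1337).random(1000)   # T = 1000 white-noise samples
symbols, levels = kmeans_quantize(x, 3)        # each sample gets symbol 0, 1 or 2
```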
| 1,567
|
implement quantization
|
Encoding ac and dc cofficients in jpeg compression after zigzag ordering
|
https://stackoverflow.com/questions/78916934/encoding-ac-and-dc-cofficients-in-jpeg-compression-after-zigzag-ordering
|
<p>I'm trying to implement JPEG compression in Python, and so far I have done the following steps:</p>
<ul>
<li>Color Space conversion</li>
<li>Downscaling chrominance channels</li>
<li>8x8 block splitting ( adds padding if size is not perfect )</li>
<li>DCT</li>
<li>Quantization</li>
<li>ZigZag Ordering</li>
<li>runlength encoding on ac coefficients, transformed into (RUNLENGTH, SIZE)(AMPLITUDE) pattern</li>
</ul>
<p>e.g <code>[(0, 2), -2, (4, 1), 1, (0, 0)]</code></p>
<ul>
<li>DPCM on dc cofficients</li>
</ul>
<p>I'm a little confused about two things:
how should I encode the DC coefficients before Huffman encoding? Is there a specific pattern, just like the AC coefficients?</p>
<p>After encoding, are the DC coefficients stored in their respective blocks?</p>
<p><code>[dc],[ac_value],....</code></p>
<p>or all dc values are stored separately and Huffman encoded separately</p>
<p><code>[dc value,...]</code></p>
<p><code>[ac values,...]</code></p>
<p>I read that fixed Huffman tables are used by most encoders. Where exactly can I find the standard fixed Huffman tables?
Also, am I missing any crucial steps so far?</p>
| 1,568
|
|
implement quantization
|
Hidden Markov Models with C++
|
https://stackoverflow.com/questions/8562545/hidden-markov-models-with-c
|
<p>I've been looking into implementations of Hidden Markov Models in C++ lately. I was wondering if I could use any of the existing HMM libraries written in C++ out there
for action recognition (with OpenCV)?</p>
<p>I'm trying to AVOID "re-inventing the wheel"!</p>
<p>Is it possible to use <a href="http://torch3vision.idiap.ch/" rel="noreferrer">Torch3Vision</a> even though (it looks like) it was designed to
work for speech recognition?</p>
<p>My idea is that, if we can convert the feature vectors into Symbols/Observations
(using Vector Quantization - Kmeans clustering), we can use those symbols for
decoding, inference, parameter learning (Baum–Welch algorithm). This way it
would work with Torch3Vision in OpenCV.</p>
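<p>The vector-quantization step described above, mapping feature vectors to discrete symbols via a codebook, can be sketched like this (numpy, with hypothetical codebook values):</p>

```python
import numpy as np

def vq_symbols(features, codebook):
    # index of the nearest codebook vector for every feature vector
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(d, axis=1)

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])   # e.g. k-means centroids
features = np.array([[0.1, -0.2], [4.8, 5.1], [0.9, 1.2]])  # observed feature vectors
print(vq_symbols(features, codebook).tolist())              # -> [0, 2, 1]
```

<p>The resulting symbol sequence is what a discrete HMM would consume for decoding or Baum-Welch training.</p>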
<p>Any help on this will be truly appreciated.</p>
|
<p>You can take a look at <a href="http://www.ece.ucsb.edu/Faculty/Rabiner/ece259/Reprints/tutorial%20on%20hmm%20and%20applications.pdf" rel="noreferrer">http://www.ece.ucsb.edu/Faculty/Rabiner/ece259/Reprints/tutorial%20on%20hmm%20and%20applications.pdf</a> for the theory behind HMMs. It's not hard to implement the algorithms yourself.</p>
<p>For a C-based version, you can take a look at my implementation, <a href="http://code.google.com/p/accelges/" rel="noreferrer">http://code.google.com/p/accelges/</a>, which I've done for a Google Summer of Code project.</p>
| 1,569
|
implement quantization
|
Color quantization with N out of M predefined colors
|
https://stackoverflow.com/questions/21472245/color-quantization-with-n-out-of-m-predefined-colors
|
<p>I am having a slightly odd problem trying to quantize and dither an RGB image. Ideally, I should be able to implement a suitable algorithm in Java or use a Java library, but references to implementations in other languages may be helpful as well.</p>
<p>The following is given as input:</p>
<ul>
<li><code>image</code>: 24-bit RGB bitmap</li>
<li><code>palette</code>: a list of colors defined with their RGB values</li>
<li><code>max_cols</code>: the maximum number of colours to be used in the output image</li>
</ul>
<p>It is perhaps important, that both the size of the palette as well as the maximum number of allowed colours is not necessarily a power of 2 and may be greater than 255.</p>
<p>So, the goal is to take the <code>image</code>, select up to <code>max_cols</code> colours from the provided <code>palette</code> and output an image using only the picked colours and rendered using some kind of error-diffusion dithering. Which dithering algorithm to use is not that important, but it should be an error-diffusion variant (e.g. Floyd-Steinberg) and not simple halftone or ordered dithering.</p>
<p>Performance is not particularly important and the size of the expected data input is relatively small. The images would rarely be larger than 500x500 pixel, the provided palette may contain some 3-400 colours and the number of colours will usually be limited to less than 100. It is also safe to assume that the palette contains a wide selection of colours, covering variations of both hue, saturation and brightness.</p>
<p>The palette selection and dithering used by <a href="http://www.cs.berkeley.edu/~dcoetzee/downloads/scolorq/">scolorq</a> would be ideal, but it does not seem easy to adapt the algorithm to select colours from an already defined palette instead of arbitrary colours.</p>
<p>To be more precise, the problem where I am stuck is the selection of suitable colours from the provided palette. Assume that I e.g. use scolorq to create a palette with N colours and later replace the colours defined by scolorq with the closest colours from the provided palette, and then use these colours combined with error-diffused dithering. This will produce a result at least similar to the input image, but due to the unpredictable hues of the selected colours, the output image may get a strong, undesired colour cast. E.g. when using a grey-scale input image and a palette with only few neutral gray tones, but a great range of brown tones (or more generally, many colours with the same hue, low saturation and a great variation in the brightness), my colour selection algorithm seem to prefer these colours above the neutral greys since the brown tones are at least mathematically closer to the desired colour than the greys. The same problem remains even if I convert the RGB values to HSB and use different weights for the H, S and B channels when trying to find the nearest available colour. </p>
<p>Any suggestions how to implement this properly, or even better a library I can use to perform the task?</p>
<p>Since Xabster asked, I can also explain the goal with this excercise, although it has nothing to do with how the actual problem can be solved. The target for the output image is an embroidery or tapestry pattern. In the most simplest case, each pixel in the output image corresponds to a stitch made on some kind of carrier fabric. The palette corresponds to the available yarns, which usually come in several hundred colours. For practical reasons, it is however necessary to limit the number of colours used in the actual work. Googling for gobelin embroideries will give several examples. </p>
<p>And to clarify where the problem exactly lies... The solution can indeed be split into two separate steps:</p>
<ul>
<li>selecting the optimal subset of the original palette</li>
<li>using the subset to render the output image</li>
</ul>
<p>Here, the first step is the actual problem. If the palette selection works properly, I could simply use the selected colours and e.g. Floyd-Steinberg dithering to produce a reasonable result (which is rather trivial to implement).</p>
<p>If I understand the implementation of scolorq correctly, scolorq however combines these two steps, using knowledge of the dithering algorithm in the palette selection to create an even better result. That would of course be a preferred solution, but the algorithms used in scolorq work slightly beyond my mathematical knowledge.</p>
|
<p><strong>OVERVIEW</strong></p>
<p>This is a possible approach to the problem:</p>
<p>1) Each color from the input pixels is mapped to the closest color from the input color palette.</p>
<p>2) If the resulting palette is greater than the allowed maximum number of colors, it is reduced to the maximum by removing the colors that are most similar to each other from the computed palette (I chose the closest pair for removal, so the resulting image remains high in contrast).</p>
<p>3) If the resulting palette is smaller than the allowed maximum number of colors, it gets filled with the most similar colors from the remaining colors of the input palette until the allowed number of colors is reached. This is done in the hope that the dithering algorithm can make use of these colors during dithering. Note though that I didn't see much difference between filling and not filling the palette for the Floyd-Steinberg algorithm...</p>
<p>4) As a last step the input pixels get dithered with the computed palette.</p>
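<p>Step 2 can be sketched like this (a Python illustration of the idea only; it arbitrarily drops the second color of the closest pair, whereas the actual implementation below chooses which of the two to drop based on the selected strategy):</p>

```python
import numpy as np

def reduce_palette(palette, max_cols):
    # repeatedly find the closest pair of colors and drop one of them
    palette = [np.asarray(c, dtype=float) for c in palette]
    while len(palette) > max_cols:
        best = None
        for i in range(len(palette)):
            for j in range(i + 1, len(palette)):
                d = np.sum((palette[i] - palette[j]) ** 2)
                if best is None or d < best[0]:
                    best = (d, j)
        palette.pop(best[1])           # drop one color of the closest pair
    return [tuple(int(v) for v in c) for c in palette]

palette = [(0, 0, 0), (1, 1, 1), (255, 255, 255), (250, 250, 250)]
print(reduce_palette(palette, 2))      # -> [(0, 0, 0), (255, 255, 255)]
```

<p>Note how the near-duplicates are discarded first, which is what keeps the reduced palette high in contrast.</p>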
<hr>
<p><strong>IMPLEMENTATION</strong></p>
<p>Below is an implementation of this approach.</p>
<p>If you want to run the source code, you will need this class: <a href="http://www.java2s.com/Code/Java/2D-Graphics-GUI/Aframethatdisplaysanimage.htm" rel="nofollow">ImageFrame.java</a>. You can set the input image as the only program argument, all other parameters must be set in the main method. The used Floyd-Steinberg algorithm is from <a href="http://en.literateprograms.org/Floyd-Steinberg_dithering_%28Java%29?oldid=12476" rel="nofollow">Floyd-Steinberg dithering</a>.</p>
<p>One can choose between 3 different reduction strategies for the palette reduction algorithm:</p>
<p>1) <code>ORIGINAL_COLORS</code>: This algorithm tries to stay as true to the input pixel colors as possible by searching for the two colors in the palette, that have the least distance. From these two colors it removes the one with the fewest mappings to pixels in the input map.</p>
<p>2) <code>BETTER_CONTRAST</code>: Works like <code>ORIGINAL_COLORS</code>, with the difference, that from the two colors it removes the one with the lowest average distance to the rest of the palette.</p>
<p>3) <code>AVERAGE_DISTANCE</code>: This algorithm always removes the colors with the lowest average distance from the pool. This setting can especially improve the quality of the resulting image for grayscale palettes.</p>
<p>Here is the complete code:</p>
<pre><code>import java.awt.Color;
import java.awt.Image;
import java.awt.image.PixelGrabber;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.Set;
public class Quantize {
public static class RGBTriple {
public final int[] channels;
public RGBTriple() { channels = new int[3]; }
public RGBTriple(int color) {
int r = (color >> 16) & 0xFF;
int g = (color >> 8) & 0xFF;
int b = (color >> 0) & 0xFF;
channels = new int[]{(int)r, (int)g, (int)b};
}
public RGBTriple(int R, int G, int B)
{ channels = new int[]{(int)R, (int)G, (int)B}; }
}
/* The authors of this work have released all rights to it and placed it
in the public domain under the Creative Commons CC0 1.0 waiver
(http://creativecommons.org/publicdomain/zero/1.0/).
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Retrieved from: http://en.literateprograms.org/Floyd-Steinberg_dithering_(Java)?oldid=12476
*/
public static class FloydSteinbergDither
{
private static int plus_truncate_uchar(int a, int b) {
if ((a & 0xff) + b < 0)
return 0;
else if ((a & 0xff) + b > 255)
return (int)255;
else
return (int)(a + b);
}
private static int findNearestColor(RGBTriple color, RGBTriple[] palette) {
int minDistanceSquared = 255*255 + 255*255 + 255*255 + 1;
int bestIndex = 0;
for (int i = 0; i < palette.length; i++) {
int Rdiff = (color.channels[0] & 0xff) - (palette[i].channels[0] & 0xff);
int Gdiff = (color.channels[1] & 0xff) - (palette[i].channels[1] & 0xff);
int Bdiff = (color.channels[2] & 0xff) - (palette[i].channels[2] & 0xff);
int distanceSquared = Rdiff*Rdiff + Gdiff*Gdiff + Bdiff*Bdiff;
if (distanceSquared < minDistanceSquared) {
minDistanceSquared = distanceSquared;
bestIndex = i;
}
}
return bestIndex;
}
public static int[][] floydSteinbergDither(RGBTriple[][] image, RGBTriple[] palette)
{
int[][] result = new int[image.length][image[0].length];
for (int y = 0; y < image.length; y++) {
for (int x = 0; x < image[y].length; x++) {
RGBTriple currentPixel = image[y][x];
int index = findNearestColor(currentPixel, palette);
result[y][x] = index;
for (int i = 0; i < 3; i++)
{
int error = (currentPixel.channels[i] & 0xff) - (palette[index].channels[i] & 0xff);
if (x + 1 < image[0].length) {
image[y+0][x+1].channels[i] =
plus_truncate_uchar(image[y+0][x+1].channels[i], (error*7) >> 4);
}
if (y + 1 < image.length) {
if (x - 1 >= 0) {
image[y+1][x-1].channels[i] =
plus_truncate_uchar(image[y+1][x-1].channels[i], (error*3) >> 4);
}
image[y+1][x+0].channels[i] =
plus_truncate_uchar(image[y+1][x+0].channels[i], (error*5) >> 4);
if (x + 1 < image[0].length) {
image[y+1][x+1].channels[i] =
plus_truncate_uchar(image[y+1][x+1].channels[i], (error*1) >> 4);
}
}
}
}
}
return result;
}
public static void generateDither(int[] pixels, int[] p, int w, int h){
RGBTriple[] palette = new RGBTriple[p.length];
for (int i = 0; i < palette.length; i++) {
int color = p[i];
palette[i] = new RGBTriple(color);
}
RGBTriple[][] image = new RGBTriple[w][h];
for (int x = w; x-- > 0; ) {
for (int y = h; y-- > 0; ) {
int index = y * w + x;
int color = pixels[index];
image[x][y] = new RGBTriple(color);
}
}
int[][] result = floydSteinbergDither(image, palette);
convert(result, pixels, p, w, h);
}
public static void convert(int[][] result, int[] pixels, int[] p, int w, int h){
for (int x = w; x-- > 0; ) {
for (int y = h; y-- > 0; ) {
int index = y * w + x;
int index2 = result[x][y];
pixels[index] = p[index2];
}
}
}
}
private static class PaletteColor{
final int color;
public PaletteColor(int color) {
super();
this.color = color;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + color;
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
PaletteColor other = (PaletteColor) obj;
if (color != other.color)
return false;
return true;
}
public List<Integer> indices = new ArrayList<>();
}
public static int[] getPixels(Image image) throws IOException {
int w = image.getWidth(null);
int h = image.getHeight(null);
int pix[] = new int[w * h];
PixelGrabber grabber = new PixelGrabber(image, 0, 0, w, h, pix, 0, w);
try {
if (grabber.grabPixels() != true) {
throw new IOException("Grabber returned false: " +
grabber.status());
}
} catch (InterruptedException e) {
e.printStackTrace();
}
return pix;
}
/**
* Returns the color distance between color1 and color2
*/
public static float getPixelDistance(PaletteColor color1, PaletteColor color2){
int c1 = color1.color;
int r1 = (c1 >> 16) & 0xFF;
int g1 = (c1 >> 8) & 0xFF;
int b1 = (c1 >> 0) & 0xFF;
int c2 = color2.color;
int r2 = (c2 >> 16) & 0xFF;
int g2 = (c2 >> 8) & 0xFF;
int b2 = (c2 >> 0) & 0xFF;
return (float) getPixelDistance(r1, g1, b1, r2, g2, b2);
}
public static double getPixelDistance(int r1, int g1, int b1, int r2, int g2, int b2){
return Math.sqrt(Math.pow(r2 - r1, 2) + Math.pow(g2 - g1, 2) + Math.pow(b2 - b1, 2));
}
/**
* Fills the given fillColors palette with the nearest colors from the given colors palette until
* it has the given max_cols size.
*/
public static void fillPalette(List<PaletteColor> fillColors, List<PaletteColor> colors, int max_cols){
while (fillColors.size() < max_cols) {
int index = -1;
float minDistance = -1;
for (int i = 0; i < fillColors.size(); i++) {
PaletteColor color1 = fillColors.get(i);
for (int j = 0; j < colors.size(); j++) {
PaletteColor color2 = colors.get(j);
if (color1 == color2) {
continue;
}
float distance = getPixelDistance(color1, color2);
if (index == -1 || distance < minDistance) {
index = j;
minDistance = distance;
}
}
}
PaletteColor color = colors.remove(index); // remove so the same color is not added twice
fillColors.add(color);
}
}
public static void reducePaletteByAverageDistance(List<PaletteColor> colors, int max_cols, ReductionStrategy reductionStrategy){
while (colors.size() > max_cols) {
int index = -1;
float minDistance = -1;
for (int i = 0; i < colors.size(); i++) {
PaletteColor color1 = colors.get(i);
float averageDistance = 0;
int count = 0;
for (int j = 0; j < colors.size(); j++) {
PaletteColor color2 = colors.get(j);
if (color1 == color2) {
continue;
}
averageDistance += getPixelDistance(color1, color2);
count++;
}
averageDistance/=count;
if (minDistance == -1 || averageDistance < minDistance) {
minDistance = averageDistance;
index = i;
}
}
PaletteColor removed = colors.remove(index);
// find the color with the least distance:
PaletteColor best = null;
minDistance = -1;
for (int i = 0; i < colors.size(); i++) {
PaletteColor c = colors.get(i);
float distance = getPixelDistance(c, removed);
if (best == null || distance < minDistance) {
best = c;
minDistance = distance;
}
}
best.indices.addAll(removed.indices);
}
}
/**
* Reduces the given color palette until it has the given max_cols size.
* The colors that are closest in distance to other colors in the palette
* get removed first.
*/
public static void reducePalette(List<PaletteColor> colors, int max_cols, ReductionStrategy reductionStrategy){
if (reductionStrategy == ReductionStrategy.AVERAGE_DISTANCE) {
reducePaletteByAverageDistance(colors, max_cols, reductionStrategy);
return;
}
while (colors.size() > max_cols) {
int index1 = -1;
int index2 = -1;
float minDistance = -1;
for (int i = 0; i < colors.size(); i++) {
PaletteColor color1 = colors.get(i);
for (int j = i+1; j < colors.size(); j++) {
PaletteColor color2 = colors.get(j);
if (color1 == color2) {
continue;
}
float distance = getPixelDistance(color1, color2);
if (index1 == -1 || distance < minDistance) {
index1 = i;
index2 = j;
minDistance = distance;
}
}
}
PaletteColor color1 = colors.get(index1);
PaletteColor color2 = colors.get(index2);
switch (reductionStrategy) {
case BETTER_CONTRAST:
// remove the color with the lower average distance to the other palette colors
int count = 0;
float distance1 = 0;
float distance2 = 0;
for (PaletteColor c : colors) {
if (c != color1 && c != color2) {
count++;
distance1 += getPixelDistance(color1, c);
distance2 += getPixelDistance(color2, c);
}
}
if (count != 0 && distance1 != distance2) {
distance1 /= (float)count;
distance2 /= (float)count;
if (distance1 < distance2) {
// remove color 1;
colors.remove(index1);
color2.indices.addAll(color1.indices);
} else{
// remove color 2;
colors.remove(index2);
color1.indices.addAll(color2.indices);
}
break;
}
//$FALL-THROUGH$
default:
// remove the color with fewer mappings to the input pixels
if (color1.indices.size() < color2.indices.size()) {
// remove color 1;
colors.remove(index1);
color2.indices.addAll(color1.indices);
} else{
// remove color 2;
colors.remove(index2);
color1.indices.addAll(color2.indices);
}
break;
}
}
}
/**
* Creates an initial color palette from the given pixels and the given palette by
* selecting the colors with the nearest distance to the given pixels.
* This method also stores the indices of the corresponding pixels inside the
* returned PaletteColor instances.
*/
public static List<PaletteColor> createInitialPalette(int pixels[], int[] palette){
Map<Integer, Integer> used = new HashMap<>();
ArrayList<PaletteColor> result = new ArrayList<>();
for (int i = 0, l = pixels.length; i < l; i++) {
double bestDistance = Double.MAX_VALUE;
int bestIndex = -1;
int pixel = pixels[i];
int r1 = (pixel >> 16) & 0xFF;
int g1 = (pixel >> 8) & 0xFF;
int b1 = (pixel >> 0) & 0xFF;
for (int k = 0; k < palette.length; k++) {
int pixel2 = palette[k];
int r2 = (pixel2 >> 16) & 0xFF;
int g2 = (pixel2 >> 8) & 0xFF;
int b2 = (pixel2 >> 0) & 0xFF;
double dist = getPixelDistance(r1, g1, b1, r2, g2, b2);
if (dist < bestDistance) {
bestDistance = dist;
bestIndex = k;
}
}
Integer index = used.get(bestIndex);
PaletteColor c;
if (index == null) {
index = result.size();
c = new PaletteColor(palette[bestIndex]);
result.add(c);
used.put(bestIndex, index);
} else{
c = result.get(index);
}
c.indices.add(i);
}
return result;
}
/**
* Creates a simple random color palette
*/
public static int[] createRandomColorPalette(int num_colors){
Random random = new Random(101);
int count = 0;
int[] result = new int[num_colors];
float add = 360f / (float)num_colors;
for(float i = 0; i < 360f && count < num_colors; i += add) {
float hue = i;
float saturation = 90 +random.nextFloat() * 10;
float brightness = 50 + random.nextFloat() * 10;
result[count++] = Color.HSBtoRGB(hue, saturation, brightness);
}
return result;
}
public static int[] createGrayScalePalette(int count){
float[] grays = new float[count];
float step = 1f/(float)count;
grays[0] = 0;
for (int i = 1; i < count-1; i++) {
grays[i]=i*step;
}
grays[count-1]=1;
return createGrayScalePalette(grays);
}
/**
* Returns a grayscale palette based on the given shades of gray
*/
public static int[] createGrayScalePalette(float[] grays){
int[] result = new int[grays.length];
for (int i = 0; i < result.length; i++) {
float f = grays[i];
result[i] = Color.HSBtoRGB(0, 0, f);
}
return result;
}
private static int[] createResultingImage(int[] pixels,List<PaletteColor> paletteColors, boolean dither, int w, int h) {
int[] palette = new int[paletteColors.size()];
for (int i = 0; i < palette.length; i++) {
palette[i] = paletteColors.get(i).color;
}
if (!dither) {
for (PaletteColor c : paletteColors) {
for (int i : c.indices) {
pixels[i] = c.color;
}
}
} else{
FloydSteinbergDither.generateDither(pixels, palette, w, h);
}
return palette;
}
public static int[] quantize(int[] pixels, int width, int height, int[] colorPalette, int max_cols, boolean dither, ReductionStrategy reductionStrategy) {
// create the initial palette by finding the best match colors from the given color palette
List<PaletteColor> paletteColors = createInitialPalette(pixels, colorPalette);
// reduce the palette size to the given number of maximum colors
reducePalette(paletteColors, max_cols, reductionStrategy);
assert paletteColors.size() <= max_cols;
if (paletteColors.size() < max_cols) {
// fill the palette with the nearest remaining colors
List<PaletteColor> remainingColors = new ArrayList<>();
Set<PaletteColor> used = new HashSet<>(paletteColors);
for (int i = 0; i < colorPalette.length; i++) {
int color = colorPalette[i];
PaletteColor c = new PaletteColor(color);
if (!used.contains(c)) {
remainingColors.add(c);
}
}
fillPalette(paletteColors, remainingColors, max_cols);
}
assert paletteColors.size() == max_cols;
// create the resulting image
return createResultingImage(pixels, paletteColors, dither, width, height);
}
static enum ReductionStrategy{
ORIGINAL_COLORS,
BETTER_CONTRAST,
AVERAGE_DISTANCE,
}
public static void main(String args[]) throws IOException {
// input parameters
String imageFileName = args[0];
File file = new File(imageFileName);
boolean dither = true;
int colorPaletteSize = 80;
int max_cols = 3;
max_cols = Math.min(max_cols, colorPaletteSize);
// create some random color palette
// int[] colorPalette = createRandomColorPalette(colorPaletteSize);
int[] colorPalette = createGrayScalePalette(20);
ReductionStrategy reductionStrategy = ReductionStrategy.AVERAGE_DISTANCE;
// show the original image inside a frame
ImageFrame original = new ImageFrame();
original.setImage(file);
original.setTitle("Original Image");
original.setLocation(0, 0);
Image image = original.getImage();
int width = image.getWidth(null);
int height = image.getHeight(null);
int pixels[] = getPixels(image);
int[] palette = quantize(pixels, width, height, colorPalette, max_cols, dither, reductionStrategy);
// show the reduced image in another frame
ImageFrame reduced = new ImageFrame();
reduced.setImage(width, height, pixels);
reduced.setTitle("Quantized Image (" + palette.length + " colors, dither: " + dither + ")");
reduced.setLocation(100, 100);
}
}
</code></pre>
<hr>
<p><strong>POSSIBLE IMPROVEMENTS</strong></p>
<p>1) The Floyd-Steinberg implementation used here currently only works for palettes with at most <strong>256</strong> colors. I guess this could be fixed easily, but since the FloydSteinbergDither class already requires quite a lot of conversions, it would certainly be better to implement the algorithm from scratch so that it fits the color model that is used in the end.</p>
<p>2) I believe using another dithering algorithm like <a href="http://www.cs.berkeley.edu/~dcoetzee/downloads/scolorq/" rel="nofollow">scolorq</a> would perhaps be better. On the "To Do List" at the end of their homepage they write:</p>
<blockquote>
<p>[TODO:] The ability to fix some colors to a predetermined set (supported by the algorithm but not the current implementation)</p>
</blockquote>
<p>So it seems using a fixed palette should be possible for the algorithm. The Photoshop/Gimp plugin <a href="http://www.ximagic.com/" rel="nofollow">Ximagic</a> seems to implement this functionality using scolorq. From their homepage:</p>
<blockquote>
<p>Ximagic Quantizer is a Photoshop plugin for image color quantization (color reduction) & dithering.
Provides: <strong>Predefined palette quantization</strong></p>
</blockquote>
<p>3) The algorithm to fill the palette could perhaps be improved - e.g. by filling the palette with colors depending on their average distance (like in the reduction algorithm). But this should be tested depending on the finally used dithering algorithm.</p>
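<p>One way to read that suggestion: instead of adding the candidates nearest to the existing palette, repeatedly add the candidate whose <em>average</em> distance to the palette built so far is largest, which spreads the filler colors out. Here is a minimal sketch in Python (illustrative only — the function names are mine, and this is not a drop-in replacement for the Java code above):</p>

```python
def dist2(c1, c2):
    """Squared Euclidean distance between two RGB triples."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def fill_palette_by_average_distance(palette, candidates, max_cols):
    """Grow palette to max_cols by repeatedly adding the candidate whose
    average distance to the palette built so far is largest."""
    palette = list(palette)
    candidates = [c for c in candidates if c not in palette]
    while len(palette) < max_cols and candidates:
        # pick the candidate farthest (on average) from the current palette
        best = max(candidates,
                   key=lambda c: sum(dist2(c, p) for p in palette) / len(palette))
        palette.append(best)
        candidates.remove(best)
    return palette
```

<p>Whether this actually beats the nearest-color fill would have to be tested against the dithering algorithm that is used in the end, as noted above.</p>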
| 1,570
|
implement quantization
|
Need a suggestion for a color palette data structure for iterative color quantization; in particular, any experiences with KD heaps?
|
https://stackoverflow.com/questions/57542028/need-a-suggestion-for-a-color-palette-data-structure-for-iterative-color-quantiz
|
<p>I am implementing color quantization that works in iterations. During each iteration, a new color palette is built up, and then that palette is searched through many times for the palette entry that best matches a given RGB triplet.</p>
<p>Also, I need to be able to access the palette in an array-like fashion so I can construct the final image later. My immediate thought was a KD tree that only contains references to array entries. But, rebuilding such a sparse data structure does not sound ideal, at least in the naive way, since it means (re)allocating space for KD nodes all the time.</p>
<p>I suppose a better approach would be to never actually free nodes, but instead just mark them as unused. This would allow for much faster rebuilding, since reallocations would only happen if more nodes are needed.</p>
<p>Still, something that intrinsically works within an array-like structure would be even better, since it would be more CPU cache friendly. So I stumbled upon KD heaps. <a href="https://en.wikipedia.org/wiki/K-D_heap" rel="nofollow noreferrer">Here is a brief Wikipedia article</a>, and <a href="https://link.springer.com/chapter/10.1007/3-540-57155-8_257" rel="nofollow noreferrer">here is the paper about it</a>. The basic idea seems to be an extension of the heap property, and this would make it work within the array. So, this sounds ideal, since heaps typically are implemented with an array. But I have never used KD heaps, so I am not sure if there's a catch.</p>
<p>So, would you use KD heaps for being able to find the closest matching color in color palettes? If not, what other data structure would you recommend that can be constructed quickly and efficiently?</p>
<p>("Constructing" means here that the entire palette data structure is constructed at once with all color values; they do not get added one by one.)</p>
|
<p>It turned out that I was overcomplicating things. Also, there was a bug elsewhere that drastically impacted performance. A K-D tree packed into an array works fine, and with that other bug fixed, everything is OK.</p>
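<p>For reference, the "K-D tree packed in an array" idea can be sketched like this (Python; all names are illustrative). The tree lives in one flat list in implicit heap layout — the children of node <code>i</code> are at <code>2*i+1</code> and <code>2*i+2</code> — so rebuilding the palette each iteration is just refilling an array, with no per-node allocations:</p>

```python
def build_kd(colors):
    """Pack RGB triples into an implicit-heap k-d tree stored in a flat list."""
    tree = [None] * (4 * len(colors) or 1)

    def build(points, node, depth):
        if not points:
            return
        axis = depth % 3                      # cycle through R, G, B
        points.sort(key=lambda c: c[axis])
        mid = len(points) // 2
        if node >= len(tree):                 # grow the array if needed
            tree.extend([None] * (node + 1 - len(tree)))
        tree[node] = points[mid]
        build(points[:mid], 2 * node + 1, depth + 1)
        build(points[mid + 1:], 2 * node + 2, depth + 1)

    build(list(colors), 0, 0)
    return tree

def nearest(tree, target):
    """Return the stored color with the smallest squared distance to target."""
    best = [None, float('inf')]               # [color, squared distance]

    def search(node, depth):
        if node >= len(tree) or tree[node] is None:
            return
        color = tree[node]
        d = sum((a - b) ** 2 for a, b in zip(color, target))
        if d < best[1]:
            best[0], best[1] = color, d
        axis = depth % 3
        diff = target[axis] - color[axis]
        near, far = ((2 * node + 1, 2 * node + 2) if diff < 0
                     else (2 * node + 2, 2 * node + 1))
        search(near, depth + 1)
        if diff * diff < best[1]:             # splitting plane may hide a closer color
            search(far, depth + 1)

    search(0, 0)
    return best[0]
```

<p>For example, <code>nearest(build_kd(palette), (250, 10, 5))</code> returns the palette entry closest to that pixel in squared Euclidean distance.</p>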
| 1,571
|
implement quantization
|
Question regarding color histogram based methods of generating color lookup-up tables
|
https://stackoverflow.com/questions/58399642/question-regarding-color-histogram-based-methods-of-generating-color-lookup-up-t
|
<p>I have a piece of code that needs to conform to a research paper's implementation of a color quantization algorithm for a 256-entry LUT whose 24-bit color entries are derived from a "population count" color histogram algorithm. The problem is that I don't know how the authors originally implemented their histogram algorithm -- the paper is a little ambiguous. Currently, I index a 2^24-entry array of integers by each pixel's raw 24-bit RGB triple and increment the indexed entry. I then sort the histogram and organize it into an effective 15-bit color space by putting blocks of 512 color counts into bins and taking the arithmetic mean of all the colors in each bin. Finally, I stuff 256 averaged color values, in decreasing order of color count, into a 256-entry 24-bit color LUT. The output, though, is very disappointing and low quality. I know that vector quantization or something like median cut would be better, but I'm constrained to use a histogram. I've searched extensively, using Google, for "population count" histogram algorithms, but none of the search results were very helpful.</p>
<p>For reference, I'll include the original 512x512 pixel 24-bit color image along with its histogram based color LUT counterpart :</p>
<p><a href="https://i.sstatic.net/KJILP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KJILP.png" alt="Original, uncompressed 24-bit color image"></a></p>
<p><a href="https://i.sstatic.net/7L7AF.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7L7AF.jpg" alt="Image after color quantization based on 256 entry LUT"></a></p>
<p>If anyone could provide some ideas or suggestions of where to look for the right algorithm, I'd be very appreciative.</p>
<p>Thanks,</p>
<p>jdb2</p>
|
<p>Try this: <a href="https://stackoverflow.com/a/30265253/2521214">Effective gif/image color quantization?</a> It is also based on histogram color quantization, very similar to your approach, but it creates the histogram from 15-bit colors directly to save space and does not use bins; instead it sorts colors by occurrence and uses a minimum-distance threshold against colors already in the palette to avoid near-duplicate colors... I developed it for my GIF encoder lib some years back...</p>
<p>If I take this as input (converted to jpg):</p>
<p><a href="https://i.sstatic.net/1wSgW.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1wSgW.jpg" alt="24bpp"></a></p>
<p>and run my algorithm on it without dithering, I get this result:</p>
<p><a href="https://i.sstatic.net/BOlPO.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BOlPO.gif" alt="8bpp no dither"></a></p>
<p>With dithering enabled I got this result:</p>
<p><a href="https://i.sstatic.net/teVoh.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/teVoh.gif" alt="8bpp with dithering"></a></p>
<p>As you can see on the cat's ear, the dithered version is much better, but even without dithering the result is far better than yours.</p>
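<p>The core idea (15-bit histogram, sort by occurrence, minimum-distance thresholding) can be sketched in a few lines of Python. This is a simplified illustration of the approach, not the actual encoder code — the names and the threshold value are mine:</p>

```python
from collections import Counter

def quantize_palette(pixels, max_colors=256, threshold=16):
    """pixels: iterable of 8-bit (r, g, b) triples."""
    # 15-bit histogram: 5 bits per channel
    hist = Counter(((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3)
                   for r, g, b in pixels)
    palette = []
    for c15, _count in hist.most_common():    # most frequent colors first
        rgb = (((c15 >> 10) & 31) << 3,       # expand back to 8 bits/channel
               ((c15 >> 5) & 31) << 3,
               (c15 & 31) << 3)
        # keep the color only if no already-chosen color is within the threshold
        if all(sum(abs(a - b) for a, b in zip(rgb, p)) > threshold
               for p in palette):
            palette.append(rgb)
        if len(palette) == max_colors:
            break
    return palette
```

<p>The thresholding is what prevents the palette from being wasted on many almost-identical shades of the most frequent colors.</p>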
<p>However, over the years the palette computation code has evolved a bit (from the one posted in the linked answer) into this (<strong>C++</strong>):</p>
<pre class="lang-cpp prettyprint-override"><code>void gif::compute_palette0()
{
int x,y,r0,g0,b0,r,g,b,a,aa,c,e,hists;
DWORD i,i0,cc;
union { DWORD dd; BYTE db[4]; } c0,c1;
DWORD his[32768];
DWORD idx[32768];
// 15bit histogram
for (x=0;x<32768;x++) { his[x]=0; idx[x]=x; }
for (y=0;y<ys;y++)
for (x=0;x<xs;x++)
{
cc=pic.pyx[y][x];
cc=((cc>>3)&0x1F)|((cc>>6)&0x3E0)|((cc>>9)&0x7C00);
if (his[cc]<0xFFFFFFFF) his[cc]++;
}
// add RGB shades combinations for dithering
if (_dither)
{
x=xs*ys; // max possible count to make as start colors in palette
for (r=0;r<32;r+=10)
for (g=0;g<32;g+=10)
for (b=0;b<32;b+=10,x++)
his[(r<<10)|(g<<5)|( b)]=x;
}
// set recolor as unused
for (r0=0;r0<32;r0++)
for (g0=0;g0<32;g0++)
for (b0=0;b0<32;b0++)
recolor[r0][g0][b0]=255;
// remove zeroes
for (x=0,y=0;y<32768;y++)
{
his[x]=his[y];
idx[x]=idx[y];
if (his[x]) x++;
} hists=x;
// sort by hist
for (i=1,e=hists;i;e--)
for (i=0,x=0,y=1;y<e;x++,y++)
if (his[x]<his[y])
{
i=his[x]; his[x]=his[y]; his[y]=i;
i=idx[x]; idx[x]=idx[y]; idx[y]=i; i=1;
}
// set lcolor color palette
for (i0=0,x=0;x<hists;x++) // main colors
{
cc=idx[x];
b= cc &31;
g=(cc>> 5)&31;
r=(cc>>10)&31;
c0.db[0]=b;
c0.db[1]=g;
c0.db[2]=r;
c0.dd=(c0.dd<<3)&0x00F8F8F8;
// skip if similar color already in lcolor[]
for (a=0,i=0;i<i0;i++)
{
c1.dd=lcolor[i];
aa=int(BYTE(c1.db[0]))-int(BYTE(c0.db[0])); if (aa<=0) aa=-aa; a =aa;
aa=int(BYTE(c1.db[1]))-int(BYTE(c0.db[1])); if (aa<=0) aa=-aa; a+=aa;
aa=int(BYTE(c1.db[2]))-int(BYTE(c0.db[2])); if (aa<=0) aa=-aa; a+=aa;
if (a<=16) { a=1; break; } a=0; // *** threshold ***
}
if (a) recolor[r][g][b]=i;
else{
recolor[r][g][b]=i0;
lcolor[i0]=c0.dd; i0++;
if (i0>=DWORD(lcolors)) { x++; break; }
}
} // i0 = new color table size
for (;x<hists;x++) // minor colors
{
cc=idx[x];
b= cc &31;
g=(cc>> 5)&31;
r=(cc>>10)&31;
c0.db[0]=b;
c0.db[1]=g;
c0.db[2]=r;
c0.dd=(c0.dd<<3)&0x00F8F8F8;
// find closest color
int dc=-1; DWORD ii=0;
for (a=0,i=0;i<i0;i++)
{
c1.dd=lcolor[i];
aa=int(BYTE(c1.db[0]))-int(BYTE(c0.db[0])); if (aa<=0) aa=-aa; a =aa;
aa=int(BYTE(c1.db[1]))-int(BYTE(c0.db[1])); if (aa<=0) aa=-aa; a+=aa;
aa=int(BYTE(c1.db[2]))-int(BYTE(c0.db[2])); if (aa<=0) aa=-aa; a+=aa;
if ((dc<0)||(dc>a)) { dc=a; ii=i; }
}
recolor[r][g][b]=ii;
}
encode_palette_compute(true);
if ((frame)&&(hists<lcolors))
for (lcolor_bits=1,lcolors=1<<lcolor_bits;lcolors<hists;lcolors<<=1,lcolor_bits++);
// compute recolor for 16 base colors for all yet unused colors
for (r0=0;r0<32;r0++)
for (g0=0;g0<32;g0++)
for (b0=0;b0<32;b0++)
if (recolor[r0][g0][b0]==255)
{
// find closest color
for (i=0,c=-1;i<16;i++)
{
c0.dd=lcolor[i];
b=WORD(c0.db[0])>>3;
g=WORD(c0.db[1])>>3;
r=WORD(c0.db[2])>>3;
a=(r-r0); aa =a*a;
a=(g-g0); aa+=a*a;
a=(b-b0); aa+=a*a;
if ((c<0)||(e>aa)) { e=aa; c=i; }
}
recolor[r0][g0][b0]=c;
}
}
</code></pre>
<p>Where my <code>gif</code> class looks like this (so you can extract config and used variables...):</p>
<pre class="lang-cpp prettyprint-override"><code>class gif
{
public:
// IO interface
file_cache<4<<20> fi,fo; // file cache
BYTE dat[256]; // internal buffer 256 Bytes needed
// Performance counter
double Tms,tms,tdec,tenc; // timer period [ms], measured time [ms], decode time [ms], encode time [ms]
void tbeg(); // start timing
void tend(); // stop timing
// image data
gif_frame32 pic,pic0; // actual and restore to 32bit frames
gif_frame8 pic1; // 8bit input conversion frame
int xs,ys; // resolution
int *py; // interlace table
// colors (colors are computed from color_bits)
DWORD gcolor[256]; //hdr
DWORD lcolor[256]; //img
BYTE recolor[32][32][32]; //encode reduce color table
int scolors,scolor_bits; //hdr screen color depth
int gcolors,gcolor_bits; //hdr global pallete
int lcolors,lcolor_bits; //img/hdr local palette
// info
bool _89a; //hdr extensions present?
bool _interlaced; //img interlaced frame?
bool _gcolor_table; //hdr
bool _gcolor_sorted; //hdr
bool _lcolor_table; //img local palette present?
bool _lcolor_sorted; //img local palette colors sorted?
int cpf,cpf_error; //clears per frame counter,clear_errors total
// debug
bool _draw_palette; //draw pallete?
// animation
int frame,disposal; // frame ix,disposal of frame
double t,tn; // animation time,next frame time
// encode config
int _force_disposal; // -1 or forced disposal
bool _precomputed_palette; // if true recolor and global palete is already set before encoding
bool _dither; // dither colors?
// inter thread comm
volatile bool _image_copied; // flag that source image is not needed anymore while encoding
// temp dictionary for dec/enc
gif_str dict[_gif_maxdecode];
DWORD dicts,code_clr,code_end,code_min;
// temp dictionary speed up tables (encoding)
WORD dtab[256][_gif_maxencode],dnum[256],dmask[256]; // dtab[i][dnum[i]] all dictionary codes (sorted by code) starting with i for encode speed up, 1<<dmask[i]<=dnum[i]
#pragma pack(1)
struct __hdr
{
// Header
BYTE Signature[3]; /* Header Signature (always "GIF") */
BYTE Version[3]; /* GIF format version("87a" or "89a") */
// Logical Screen Descriptor
WORD xs;
WORD ys;
BYTE Packed; /* Screen and Color Map Information */
BYTE BackgroundColor; /* Background Color Index */
BYTE AspectRatio; /* Pixel Aspect Ratio */
__hdr(){}; __hdr(__hdr& a){ *this=a; }; ~__hdr(){}; __hdr* operator = (const __hdr *a) { *this=*a; return this; }; /*__hdr* operator = (const __hdr &a) { ...copy... return this; };*/
};
struct _hdr:__hdr
{
DWORD adr,siz;
_hdr(){}; _hdr(_hdr& a){ *this=a; }; ~_hdr(){}; _hdr* operator = (const _hdr *a) { *this=*a; return this; }; /*_hdr* operator = (const _hdr &a) { ...copy... return this; };*/
} hdr;
struct __img
{
// Logical Image Descriptor
BYTE Separator; /* Image Descriptor identifier 0x2C */
WORD x0; /* X position of image on the display */
WORD y0; /* Y position of image on the display */
WORD xs; /* Width of the image in pixels */
WORD ys; /* Height of the image in pixels */
BYTE Packed; /* Image and Color Table Data Information */
__img(){}; __img(__img& a){ *this=a; }; ~__img(){}; __img* operator = (const __img *a) { *this=*a; return this; }; /*__img* operator = (const __img &a) { ...copy... return this; };*/
};
struct _img:__img
{
DWORD adr,siz;
_img(){}; _img(_img& a){ *this=a; }; ~_img(){}; _img* operator = (const _img *a) { *this=*a; return this; }; /*_img* operator = (const _img &a) { ...copy... return this; };*/
} img;
struct __gfxext
{
BYTE Introducer; /* Extension Introducer (always 21h) */
BYTE Label; /* Graphic Control Label (always F9h) */
BYTE BlockSize; /* Size of remaining fields (always 04h) */
BYTE Packed; /* Method of graphics disposal to use */
WORD DelayTime; /* Hundredths of seconds to wait */
BYTE ColorIndex; /* Transparent Color Index */
BYTE Terminator; /* Block Terminator (always 0) */
__gfxext(){}; __gfxext(__gfxext& a){ *this=a; }; ~__gfxext(){}; __gfxext* operator = (const __gfxext *a) { *this=*a; return this; }; /*__gfxext* operator = (const __gfxext &a) { ...copy... return this; };*/
};
struct _gfxext:__gfxext
{
DWORD adr,siz;
_gfxext(){}; _gfxext(_gfxext& a){ *this=a; }; ~_gfxext(){}; _gfxext* operator = (const _gfxext *a) { *this=*a; return this; }; /*_gfxext* operator = (const _gfxext &a) { ...copy... return this; };*/
} gfxext;
struct __txtext
{
BYTE Introducer; /* Extension Introducer (always 21h) */
BYTE Label; /* Extension Label (always 01h) */
BYTE BlockSize; /* Size of Extension Block (always 0Ch) */
WORD TextGridLeft; /* X position of text grid in pixels */
WORD TextGridTop; /* Y position of text grid in pixels */
WORD TextGridWidth; /* Width of the text grid in pixels */
WORD TextGridHeight; /* Height of the text grid in pixels */
BYTE CellWidth; /* Width of a grid cell in pixels */
BYTE CellHeight; /* Height of a grid cell in pixels */
BYTE TextFgColorIndex; /* Text foreground color index value */
BYTE TextBgColorIndex; /* Text background color index value */
// BYTE *PlainTextData; /* The Plain Text data */
// BYTE Terminator; /* Block Terminator (always 0) */
__txtext(){}; __txtext(__txtext& a){ *this=a; }; ~__txtext(){}; __txtext* operator = (const __txtext *a) { *this=*a; return this; }; /*__txtext* operator = (const __txtext &a) { ...copy... return this; };*/
};
struct _txtext:__txtext
{
DWORD adr,siz;
AnsiString dat;
_txtext(){}; _txtext(_txtext& a){ *this=a; }; ~_txtext(){}; _txtext* operator = (const _txtext *a) { *this=*a; return this; }; /*_txtext* operator = (const _txtext &a) { ...copy... return this; };*/
};
List<_txtext> txtext;
struct __remext
{
BYTE Introducer; /* Extension Introducer (always 21h) */
BYTE Label; /* Comment Label (always FEh) */
// BYTE *CommentData; /* Pointer to Comment Data sub-blocks */
// BYTE Terminator; /* Block Terminator (always 0) */
__remext(){}; __remext(__remext& a){ *this=a; }; ~__remext(){}; __remext* operator = (const __remext *a) { *this=*a; return this; }; /*__remext* operator = (const __remext &a) { ...copy... return this; };*/
};
struct _remext:__remext
{
DWORD adr,siz;
AnsiString dat;
_remext(){}; _remext(_remext& a){ *this=a; }; ~_remext(){}; _remext* operator = (const _remext *a) { *this=*a; return this; }; /*_remext* operator = (const _remext &a) { ...copy... return this; };*/
};
List<_remext> remext;
struct __appext
{
BYTE Introducer; /* Extension Introducer (always 21h) */
BYTE Label; /* Extension Label (always FFh) */
BYTE BlockSize; /* Size of Extension Block (always 0Bh) */
CHAR Identifier[8]; /* Application Identifier */
BYTE AuthentCode[3]; /* Application Authentication Code */
// BYTE *ApplicationData; /* Point to Application Data sub-blocks */
// BYTE Terminator; /* Block Terminator (always 0) */
__appext(){}; __appext(__appext& a){ *this=a; }; ~__appext(){}; __appext* operator = (const __appext *a) { *this=*a; return this; }; /*__appext* operator = (const __appext &a) { ...copy... return this; };*/
};
struct _appext:__appext
{
DWORD adr,siz;
AnsiString dat;
_appext(){}; _appext(_appext& a){ *this=a; }; ~_appext(){}; _appext* operator = (const _appext *a) { *this=*a; return this; }; /*_appext* operator = (const _appext &a) { ...copy... return this; };*/
};
List<_appext> appext;
#pragma pack()
gif();
gif(gif& a);
~gif();
gif* operator = (const gif *a);
//gif* operator = (const gif &a);
void _resize(int _xs,int _ys); // resize buffers
void load_beg(AnsiString filename); // open GIF file for decoding
void decode(int _ignore_delay=0); // decode frame from GIF, if _ignore_delay then ignore realtime
void load_end(); // close GIF file
void save_beg(AnsiString filename); // create new GIF file for encoding
void compute_palette0(); // compute palette from frame method 0
void compute_palette1(); // compute palette from frame method 1
void encode_palette_RGB256(); // set RGB combinations as 256 color palette as predefined global only palette
void encode_palette_VGA256(); // set default 256 color VGA palette as predefined global only palette
void encode_palette_compute(bool _local); // compute recolor[][][] from palette
void encode(const gif_frame32 &src,int dt=0); // encode frame to GIF , dt is delay in [ms] instead of realtime in range <10 .. 655350> [ms]
// void encode(int dst_xs,int dst_ys,TCanvas *src,int src_x0,int src_y0,int src_x1,int src_y1,int dt=0); // encode frame to GIF , dt is delay in [ms] instead of realtime in range <10 .. 655350> [ms]
void save_end(); // finalize and close GIF file
void draw_info(int x,int y,TCanvas *can);
void draw_info(int x,int y,Graphics::TBitmap *bmp);
void configure(gif &src); // set all encoding variables from src (for multithreaded encoding)
};
</code></pre>
| 1,572
|
implement quantization
|
Importing PIL images into FFMPY/FFMPEG to save as GIF/video
|
https://stackoverflow.com/questions/76305884/importing-pil-images-into-ffmpy-ffmpeg-to-save-as-gif-video
|
<p>I would like to know how I can pass PIL images to FFMPY to save them as a video or GIF, since the PIL library's quantization method suffers significant quality loss in certain cases. I first make some modifications with PIL, and then want to export and save the result.</p>
<p>I did not find any information on this topic online, besides one post about piping PIL to FFMPEG:
<a href="https://stackoverflow.com/questions/43650860/pipe-pil-images-to-ffmpeg-stdin-python">Pipe PIL images to ffmpeg stdin - Python</a>.
How could I implement something similar in FFMPY?</p>
<p>If I have for example this setup to begin with:</p>
<pre><code>import ffmpy
import PIL
from PIL import Image as Img
images = [Img.open('frame 1.png'),Img.open('frame 2.png')]#How do I convert them to FFMPEG?
#Here I modify the images using PIL
#Save with FFMPEG:
ff = ffmpy.FFmpeg(
inputs={images ?: None},#How do I insert PIL images here?
outputs={'output.gif': None},
executable='ffmpeg\\bin\\ffmpeg.exe')
ff.run()
</code></pre>
<p>How would I proceed to convert and save the images as a video using FFMPY?
Is it possible by adding some steps in between? I wouldn't want to save all the PIL images to disk first, and then import and save them with FFMPY a second time, since that would be very time-consuming with larger files.</p>
|
<p>According to <code>ffmpy</code> documentation, it seems like the most relevant option is using <a href="https://ffmpy.readthedocs.io/en/latest/examples.html#using-pipe-protocol" rel="nofollow noreferrer">using-pipe-protocol</a>.</p>
<ul>
<li><p>Instead of using PIL for reading the images, we may read the PNG images as binary data into <code>BytesIO</code> (reading all images to in-memory file-like object):</p>
<pre><code> # List of input image files (assume all images are in the same resolution, and the same "pixel format").
images = ['frame 1.png', 'frame 2.png', 'frame 3.png', 'frame 4.png', 'frame 5.png', 'frame 6.png', 'frame 7.png', 'frame 8.png']
# Read PNG images from files, and write to BytesIO object in memory (read images as binary data without decoding).
images_in_memory = io.BytesIO()
for png_file_name in images:
with open(png_file_name, 'rb') as f:
images_in_memory.write(f.read())
</code></pre>
</li>
<li><p>Run <code>ffmpy.FFmpeg</code> using <strong>pipe protocol</strong>.<br />
Pass <code>images_in_memory.getbuffer()</code> as <code>input_data</code> argument to <code>ff.run</code>:</p>
<pre><code> ff = ffmpy.FFmpeg(
inputs={'pipe:0': '-y -f image2pipe -r 1'},
outputs={'output.gif': None},
executable='\\ffmpeg\\bin\\ffmpeg.exe')
# Write the entire buffer of encoded PNG images to the "pipe".
ff.run(input_data=images_in_memory.getbuffer(), stdout=subprocess.PIPE)
</code></pre>
</li>
</ul>
<hr />
<p>The above solution seems a bit awkward, but it's the best solution I could find using <code>ffmpy</code>.<br />
There are other FFmpeg to Python binding like <a href="https://pypi.org/project/ffmpeg-python/" rel="nofollow noreferrer">ffmpeg-python</a>, that supports writing the images one by one in a loop.<br />
Using ffmpy, we have to read all the images into memory in advance.</p>
<p>The above solution keeps the PNG images in their encoded (binary) form.<br />
Instead of decoding the images with PIL (for example), FFmpeg is going to decode the PNG images.<br />
Letting FFmpeg decode the images is more efficient, and saves memory.<br />
The limitation is that all the images must have the same resolution.<br />
The images also must have the same "pixel format" (all RGB or all RGBA but not a mix).<br />
In case images have different resolution or pixels format, we have to decode the images (and maybe resize the images) using Python, and write images as "raw video".</p>
<hr />
<p>For testing we may create PNG images using FFmpeg CLI:</p>
<p><code>ffmpeg -f lavfi -i testsrc=size=192x108:rate=1:duration=8 "frame %d.png"</code>.</p>
<hr />
<p>Complete code sample:</p>
<pre><code>import ffmpy
import io
import subprocess
#Building sample images using FFmpeg CLI for testing: ffmpeg -f lavfi -i testsrc=size=192x108:rate=1:duration=8 "frame %d.png"
# List of input image files (assume all images are in the same resolution, and the same "pixel format").
images = ['frame 1.png', 'frame 2.png', 'frame 3.png', 'frame 4.png', 'frame 5.png', 'frame 6.png', 'frame 7.png', 'frame 8.png']
# Read PNG images from files, and write to BytesIO object in memory (read images as binary data without decoding).
images_in_memory = io.BytesIO()
for png_file_name in images:
with open(png_file_name, 'rb') as f:
images_in_memory.write(f.read())
# Use pipe protocol: https://ffmpy.readthedocs.io/en/latest/examples.html#using-pipe-protocol
ff = ffmpy.FFmpeg(
inputs={'pipe:0': '-y -f image2pipe -r 1'},
outputs={'output.gif': None},
    executable='\\ffmpeg\\bin\\ffmpeg.exe')  # Note: ffmpeg.exe is in the C:\ffmpeg\bin folder
ff.run(input_data=images_in_memory.getbuffer(), stdout=subprocess.PIPE)
</code></pre>
<hr />
<p>Sample output <code>output.gif</code>:<br />
<a href="https://i.sstatic.net/JYiVJ.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JYiVJ.gif" alt="enter image description here" /></a></p>
<hr />
<h2>Update:</h2>
<p>Same solution using images from Pillow:</p>
<p>The above solution also works if we save the images from Pillow to BytesIO in PNG format.</p>
<p>Example:</p>
<pre><code>import ffmpy
import io
import subprocess
from PIL import Image as Img
#Building sample images using FFmpeg CLI for testing: ffmpeg -f lavfi -i testsrc=size=192x108:rate=1:duration=8 "frame %d.png"
# List of input image files (assume all images are in the same resolution, and the same "pixel format").
images = ['frame 1.png', 'frame 2.png', 'frame 3.png', 'frame 4.png', 'frame 5.png', 'frame 6.png', 'frame 7.png', 'frame 8.png']
# Read PNG images from files, and write to BytesIO object in memory (read images as binary data without decoding).
images_in_memory = io.BytesIO()
for png_file_name in images:
img = Img.open(png_file_name)
# Modify the images using PIL...
img.save(images_in_memory, format="png")
# Use pipe protocol: https://ffmpy.readthedocs.io/en/latest/examples.html#using-pipe-protocol
ff = ffmpy.FFmpeg(
inputs={'pipe:0': '-y -f image2pipe -r 1'},
outputs={'output.gif': None},
executable='\\ffmpeg\\bin\\ffmpeg.exe')
ff.run(input_data=images_in_memory.getbuffer(), stdout=subprocess.PIPE)
</code></pre>
<p>Encoding the images as PNG in memory is not most efficient in terms of execution time, but it saves memory space.</p>
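<p>A minimal sketch of why this concatenation works: every PNG file starts with a fixed 8-byte signature, which is what lets FFmpeg's <code>image2pipe</code> demuxer split the single pipe stream back into frames. The snippet below is a stand-alone illustration with dummy payloads in place of real image data:</p>

```python
import io

# The 8-byte signature that begins every PNG file; FFmpeg's image2pipe
# demuxer uses it to find frame boundaries in the concatenated stream.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def concat_png_streams(png_blobs):
    """Concatenate already-encoded PNG images into one in-memory stream,
    the same shape of data the answer pipes to ffmpeg via 'pipe:0'."""
    buf = io.BytesIO()
    for blob in png_blobs:
        if not blob.startswith(PNG_SIGNATURE):
            raise ValueError("not a PNG stream")
        buf.write(blob)
    return buf

# Two stand-in "PNG files" (signature + dummy payload, illustration only).
fake_frames = [PNG_SIGNATURE + b"frame-1", PNG_SIGNATURE + b"frame-2"]
stream = concat_png_streams(fake_frames)
print(stream.getvalue().count(PNG_SIGNATURE))  # 2 frames in the pipe
```

The real code does exactly this with the bytes of each PNG file (or each <code>img.save(..., format="png")</code> result), no decoding involved.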
| 1,573
|
implement quantization
|
What parts of the image tracing process can be handled by classes in the JDK?
|
https://stackoverflow.com/questions/19874745/what-parts-of-the-image-tracing-process-can-be-handled-by-classes-in-the-jdk
|
<p>I have a study assignment where I need to write a Java program to trace (vectorize) images. </p>
<p>I may only use the JDK 1.5 and up; so, I'll have to implement some algorithms where required. </p>
<p>The program has to pass the following steps:</p>
<ol>
<li>Color reduction (color quantization); [for a <em>set of colors</em> or a maximum <em>number of colors</em>]</li>
<li>Removal of areas [with a given maximum size]</li>
<li>Edge detection</li>
<li>Simplify segments [minimum is the Douglas-Peucker algorithm]</li>
<li>Round segments to curves</li>
<li>Output to SVG</li>
</ol>
<p>I want to make sure that I use JDK APIs wherever possible. My previous research however didn't really turn up a lot of helpful resources. (Most helpful resource thus far is the Sun documentation of the JAI -- Java Advanced Imaging API -- at <a href="http://docs.oracle.com/cd/E19957-01/806-5413-10/806-5413-10.pdf" rel="nofollow">this location</a>)</p>
<p>My question is: which of these steps can be handled — or facilitated — by classes in the JDK?</p>
<hr>
<p>Since this is a rather comprehensive question, I'll put up a 250 point bounty once I can.</p>
|
<p>Classes written using the JDK can handle all aspects of the image tracing process (since Java is Turing complete and image tracers exist, they can be implemented in Java - QED). As for your specific areas of inquiry,</p>
<blockquote>
1) Yes! The JAI includes <a href="http://docs.oracle.com/cd/E17802_01/products/products/java-media/jai/forDevelopers/jai-apidocs/javax/media/jai/operator/ColorQuantizerDescriptor.html" rel="nofollow">quantization</a> methods.<br/>
6) Yes! Try the <a href="http://xmlgraphics.apache.org/batik/" rel="nofollow">Batik SVG Toolkit</a>, especially the <a href="http://xmlgraphics.apache.org/batik/using/svg-generator.html" rel="nofollow">SVG Generator</a>.<br/>
2-5) You're going to have to implement these yourself. 4) in particular looks like it has some implementations available, but I have not investigated in detail.
</blockquote>
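<p>Since step 4 has to be hand-rolled, here is a minimal, language-agnostic sketch of the Douglas-Peucker simplification (shown in Python for brevity; porting it to JDK-only Java is mechanical, as it needs nothing beyond <code>Math</code> and arrays):</p>

```python
import math

def perpendicular_distance(pt, a, b):
    """Distance from pt to the (infinite) line through a and b."""
    (x, y), (x1, y1), (x2, y2) = pt, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, epsilon):
    """Recursively drop points closer than epsilon to the end-to-end chord."""
    if len(points) < 3:
        return list(points)
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]  # everything fits within tolerance
    left = douglas_peucker(points[: index + 1], epsilon)
    right = douglas_peucker(points[index:], epsilon)
    return left[:-1] + right  # drop the duplicated split point

segment = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(segment, 1.0))  # [(0, 0), (2, -0.1), (3, 5), (7, 9)]
```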
| 1,574
|
implement quantization
|
C# PNG Image saving with selected filter
|
https://stackoverflow.com/questions/44764631/c-png-image-saving-with-selected-filter
|
<p>I couldn't help myself so once again I'm asking you for help. This time I will show the problem better than last time, I hope.</p>
<p>I'm writing a program to check if Quantization have any influence on image sizes. To do that I need to have implemented : </p>
<ol>
<li>Open PNG Image <em>(done)</em></li>
<li>"Quantize" pixel by pixel till the end of the image <em>(done)</em></li>
<li>Save <strong>(this is the problem)</strong></li>
</ol>
<blockquote>
<p>PNG filter method 0 defines five basic filter types: Type Name</p>
<p>0 - None, 1 - Sub, 2 - Up, 3 - Average, 4 - Paeth </p>
</blockquote>
<p>And now I'm standing with an image in memory that I want to save using one of those filters, but after checking multiple PNG libraries, none of them allows me to choose one. Can anyone help me with that, or at least with one filter?
Here is some code:</p>
<pre><code>private void btnSelectImg_Click(object sender, EventArgs e)
{
openFileDialog1.Filter = "PNG Image | *.png";
DialogResult result = openFileDialog1.ShowDialog();
if (result == DialogResult.OK)
{
string imgPath = openFileDialog1.FileName;
tbSourceImageFile.Text = imgPath;
string[] NameCutter = imgPath.Split('\\');
lblFileName.Text = NameCutter.Last();
ImageToWork = Image.FromFile(imgPath);
System.Drawing.Imaging.ImageFormat Format = ImageToWork.RawFormat;
tbInfo.Text += string.Format("Resolution : {0}x{1} | Bits : {2:n0} | Format : {3}", ImageToWork.Width, ImageToWork.Height, ImageToWork.Width * ImageToWork.Height, GetFilenameExtension(Format));
}
}
private void btnSave_Click(object sender, EventArgs e)
{
#region Check Image
if (tbSourceImageFile.Text == "")
{
MessageBox.Show("File not selected. Select file first.");
return;
}
#endregion
#region Operations on image
Bitmap Image111 = new Bitmap(tbSourceImageFile.Text, true);
#region Progress Bar Settings
ProgressBar.Visible = true;
ProgressBar.Value = 1;
ProgressBar.Maximum = Image111.Width;
ProgressBar.Step = 1;
#endregion
if (cboxEnableScale.Checked == true)
{
int red, green, blue, red2=0, blue2=0, green2=0;
int scale = int.Parse(cbSelectorScale.SelectedItem.ToString());
for (int w = 0; w < Image111.Width; w++)
{
for (int h = 0; h < Image111.Height; h++)
{
Color PixelColor = Image111.GetPixel(w, h);
#region Quantization
red = PixelColor.R;
green = PixelColor.G;
blue = PixelColor.B;
Color newColor = Color.FromArgb(Valuator_v3(red, scale), Valuator_v3(green, scale), Valuator_v3(blue, scale));
Image111.SetPixel(w, h, newColor);
#endregion
}
ProgressBar.PerformStep();
}
}
#endregion
#region Saving
string SaveDirectory = tbSaveDestination.Text + '\\' + tbSaveFileName.Text + ".bmp";
SaveDirectory = tbSaveDestination.Text + '\\' + tbSaveFileName.Text + ".jpeg";
Image111.Save(SaveDirectory, System.Drawing.Imaging.ImageFormat.Png);
ProgressBar.Visible = false;
MessageBox.Show("Saved successfully.");
#endregion
}
</code></pre>
<p>In region "Saving" I want to select which filter will be used and save it using it.</p>
|
<p>If the PNG libraries don't do what you want, just roll your own filters. It's not that difficult.</p>
<p>The filtering should take place inside the #region Quantization. As far as I understand it, the Valuator_v3() method converts the RGB channels separately, then you store the transformed pixel with Image111.SetPixel(). The PNG filter needs to be inserted between the two calls.</p>
<p>PNG filters work on the current pixel color and one, two, or three previously read neighboring pixels. They never look ahead. So you just use Image111.GetPixel() to retrieve previous pixels and use them to transform the current pixel. In the case of the filter type "None", there's nothing to do, and you just store the quantized pixel.</p>
<p>In the case of "Sub", you test if you're in the leftmost column (i.e., w == 0). If so, you leave the pixel as is. Otherwise, you call Image111.GetPixel (w-1, h) and subtract the resulting RGB values from the current pixel:</p>
<pre><code>Color pixelLeft = Image111.GetPixel (w-1, h);
newColor.R -= pixelLeft.R;
newColor.G -= pixelLeft.G;
newColor.B -= pixelLeft.B;
</code></pre>
<p>That's it. The "Up" transform is likewise trivial. You just check for h == 0 this time, and call Image111.GetPixel (w, h-1) if the current pixel is not in the top row. The "Average" filter requires both the left and upper pixels, and computes the arithmetic mean of the RGB channel values. Note that pixelLeft = 0 in case of w == 0, and pixelTop = 0 in case of h == 0:</p>
<pre><code>Color pixelLeft = Image111.GetPixel (w-1, h);
Color pixelTop = Image111.GetPixel (w, h-1);
newColor.R -= (Byte) (((UInt64) pixelLeft.R + (UInt64) pixelTop.R) >> 1);
newColor.G -= (Byte) (((UInt64) pixelLeft.G + (UInt64) pixelTop.G) >> 1);
newColor.B -= (Byte) (((UInt64) pixelLeft.B + (UInt64) pixelTop.B) >> 1);
</code></pre>
<p>The Paeth filter is more complex. It uses three neighboring pixels, pixelLeft, pixelTop, and pixelTopLeft. Once again you need to check the special border cases appropriately. The following predictor is computed separately for each channel, e.g. red:</p>
<pre><code>Int64 valueLeft = pixelLeft.R;
Int64 valueTop = pixelTop.R;
Int64 valueTopLeft = pixelTopLeft.R;
Int64 valueCombined = valueLeft + valueTop - valueTopLeft;
Int64 deltaLeft = Math.Abs (valueCombined - valueLeft);
Int64 deltaTop = Math.Abs (valueCombined - valueTop);
Int64 deltaTopLeft = Math.Abs (valueCombined - valueTopLeft);
newColor.R -= (deltaLeft <= deltaTop) && (deltaLeft <= deltaTopLeft)
    ? pixelLeft.R
    : (deltaTop <= deltaTopLeft ? pixelTop.R : pixelTopLeft.R);
</code></pre>
<p>Although the Paeth filter looks quite promising, my own tests have shown that the "Up" filter yields the best results in most cases. Don't know why. So by default I'm using the "Sub" filter for the first row, and the "Up" filter for all subsequent ones.</p>
<p>So now you've got the filtered image in memory. What you need now is a standard DEFLATE encoder, like ZLIB uses. The encoder input is the filtered RGB data. Note that PNG requires you to emit the filter type (0..4) as a literal code at the beginning of each row.
The compressed DEFLATE stream is packaged into an IDAT chunk of a PNG container, which is not a difficult task.</p>
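<p>For reference, all five filters described above can be sketched compactly. The following is a Python sketch (one byte per pixel for brevity; a real encoder compares against the byte <em>bpp</em> positions to the left) using the modulo-256 subtraction the PNG format requires:</p>

```python
def png_filter_row(row, prev_row, filter_type):
    """Apply one of PNG's five scanline filters (bpp = 1 byte for simplicity).
    row/prev_row are lists of channel bytes; prev_row is all zeros for row 0."""
    out = []
    for i, x in enumerate(row):
        a = row[i - 1] if i > 0 else 0        # left neighbor
        b = prev_row[i]                       # up neighbor
        c = prev_row[i - 1] if i > 0 else 0   # upper-left neighbor
        if filter_type == 0:                  # None
            pred = 0
        elif filter_type == 1:                # Sub
            pred = a
        elif filter_type == 2:                # Up
            pred = b
        elif filter_type == 3:                # Average
            pred = (a + b) // 2
        else:                                 # Paeth
            p = a + b - c
            pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
            pred = a if pa <= pb and pa <= pc else (b if pb <= pc else c)
        out.append((x - pred) % 256)          # modulo-256 subtraction
    return out

row, prev_row = [10, 12, 15, 15], [9, 11, 14, 16]
print(png_filter_row(row, prev_row, 2))  # Up filter -> [1, 1, 1, 255]
```

Note the modulo arithmetic: filtered values wrap around rather than going negative, which the C# version above would need to replicate with unchecked byte subtraction.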
| 1,575
|
implement quantization
|
In the CVPR16 paper "Deepbit" by Kevin Lin, et al, is it a typo in the loss function section?
|
https://stackoverflow.com/questions/51280164/in-the-cvpr16-paper-deepbit-by-kevin-lin-et-al-is-it-a-typo-in-the-loss-func
|
<p>Recently I'm researching through possible ways of encoding images into compact binary descriptors that allows for fast image matching in a large corpus and came across <a href="http://www.iis.sinica.edu.tw/~kevinlin311.tw/cvpr16-deepbit.pdf" rel="nofollow noreferrer">this paper</a> written by Kevin Lin and his colleagues.</p>
<p>In the article, they proposed an unsupervised learning approach to learn compact binary descriptors for images. Specifically, they proposed a loss function to penalize the descriptor that consists of 3 components: </p>
<ol>
<li>quantization loss</li>
<li>even distribution loss</li>
<li>bits correlation loss</li>
</ol>
<p>My question lies in the first component. In the paper, the loss function is defined as the sum of the squared error between the binary bit and the last layer's activation across all the training data in the mini-batch. However, when I implement this, the overall loss became so large that the other 2 components became somewhat irrelevant and therefore not evenly distributed at all.</p>
<p>So I'm wondering if this is a typo in the paper where it should be the mean of the squared error instead of the sum. </p>
<p>Cheers.</p>
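<p>A back-of-the-envelope sketch (with made-up numbers) of the scale problem described above: a summed squared error grows with batch size times code length, while the mean stays scale-free:</p>

```python
# Illustration (hypothetical numbers) of how summing the squared error over a
# mini-batch inflates the quantization term relative to the other two losses.
batch_size, n_bits = 256, 32

# Suppose each bit's activation deviates from its binarized value by ~0.3.
per_element_sq_err = 0.3 ** 2

sum_loss = per_element_sq_err * batch_size * n_bits   # grows with batch size
mean_loss = per_element_sq_err                        # scale-free

print(sum_loss, mean_loss)  # ~737 vs 0.09
```

With a sum, rebalancing the three terms would require shrinking the quantization weight by roughly batch_size × n_bits, which is effectively what taking the mean does.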
| 1,576
|
|
implement quantization
|
How to speed up color quantization via kmeans clustering
|
https://stackoverflow.com/questions/69062479/how-to-speed-up-color-quantization-via-kmeans-clustering
|
<p>I'm trying to speed up my implementation of "kmeans" clustering to minimize the number of colors in the image yet keep it pretty. It's extremely slow on a 1000x1000 image with k=32. But the function gives a perfect color/tone match (which is crucial) in comparison with the other tested approaches, so I could not find any equivalent replacement for it (if any?).</p>
<p>I had an idea to resize the image before the clustering process (e.g. make 300x300), then use the reduced color palette for the original image, but my python knowledge is not enough to make that within the function. Could you please help?</p>
<p>This is my original slow function (I commented out my experimental attempts):</p>
<pre><code>def color_quantize(image, K):
(h, w) = image.shape[:2]
img = cv2.cvtColor(image, cv2.COLOR_BGR2LAB)
Z = img.reshape((-1, 3))
# *** my attempt to resize to a smaller size for clustering
#thumbnail = cv2.resize(img, (300, 300), cv2.INTER_CUBIC)
#Z = thumbnail.reshape((-1, 3))
Z = np.float32(Z)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 15, 1.0)
ret, label, center = cv2.kmeans(Z, K, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
center = np.uint8(center)
res = center[label.flatten()]
# *** here should go some backward process to apply the reduced color palette to the original image...
quantized_img = res.reshape(img.shape)
quantized_img = cv2.cvtColor(quantized_img, cv2.COLOR_LAB2BGR)
return quantized_img
</code></pre>
<h2>UPDATE</h2>
<p>After lots of attempts to find a solution which gives good coloring and fast speed I ended up with the following:</p>
<pre><code>def color_quantize_fast(image, K):
img = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
im_pil = Image.fromarray(np.uint8(img))
im_pil = im_pil.quantize(K, None, 0, None)
return cv2.cvtColor(np.array(im_pil.convert("RGB")), cv2.COLOR_RGB2BGR)
</code></pre>
<p>It accepts an image in the format cv2.imread() returns, and it returns a quantized image in a similar format.
K is the number of colors (1 <= K <= 255)</p>
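<p>For the original "cluster a thumbnail, then apply the reduced palette to the full image" idea, the missing backward step is a nearest-center lookup. A toy, pure-Python sketch of that step (a real implementation would vectorize it with NumPy over the <code>cv2.kmeans</code> centers):</p>

```python
def nearest_center(pixel, centers):
    """Return the palette color closest (squared Euclidean) to pixel."""
    return min(centers, key=lambda c: sum((p - q) ** 2 for p, q in zip(pixel, c)))

def apply_palette(pixels, centers):
    """Map every pixel of the full-size image onto the reduced palette."""
    return [nearest_center(p, centers) for p in pixels]

# Palette as if it came from clustering a small thumbnail (hypothetical values).
palette = [(0, 0, 0), (255, 255, 255), (200, 30, 30)]
image = [(10, 5, 12), (240, 250, 248), (180, 40, 20), (130, 130, 130)]
print(apply_palette(image, palette))
```

Running kmeans on a 300x300 thumbnail cuts the clustering cost by an order of magnitude; this lookup then recolors the original at full resolution with the same palette.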
| 1,577
|
|
implement quantization
|
How to implement ImageMagick command for halftone dither into ruby script?
|
https://stackoverflow.com/questions/43953119/how-to-implement-imagemagick-command-for-halftone-dither-into-ruby-script
|
<p>I'm trying to make a script for creating halftone dither. Script should take an rgb image and convert it to four png files for all CMYK channels, each being a bitmap with according threshold pattern, as in this image:</p>
<p><a href="https://i.sstatic.net/gBIvo.png" rel="nofollow noreferrer">halftone</a></p>
<p>So far I made a script for converting image to cmyk, resizing it to wanted size and splitting it by channels. I also found this great resource on making it with ImageMagick - <a href="http://www.imagemagick.org/Usage/quantize/#halftone_offset" rel="nofollow noreferrer">http://www.imagemagick.org/Usage/quantize/#halftone_offset</a>.
It seems to be exactly what I need, but I'm stuck, having no idea how to implement this:</p>
<pre class="lang-sh prettyprint-override"><code>convert colorwheel.png -set option:distort:viewport '%wx%h+0+0' \
-colorspace CMYK -separate null: \
\( -size 2x2 xc: \( +clone -negate \) \
+append \( +clone -negate \) -append \) \
-virtual-pixel tile -filter gaussian \
\( +clone -distort SRT 60 \) +swap \
\( +clone -distort SRT 30 \) +swap \
\( +clone -distort SRT 45 \) +swap \
\( +clone -distort SRT 0 \) +swap +delete \
-compose Overlay -layers composite \
-set colorspace CMYK -combine -colorspace RGB \
offset_colorwheel.png
</code></pre>
<p>Into what I wrote so far:</p>
<pre class="lang-rb prettyprint-override"><code>require 'rmagick'
include Magick
width = 1181
height = 826
puts "loading"
img = Image.read("123.jpg").first()#.resize_to_fit!(width, height)
puts "converting to cmyk"
img.colorspace = Magick::CMYKColorspace
puts "resizing"
img = img.resize_to_fill(width,height)
puts "channel separation"
a = img.separate(AllChannels)
channels = ["c", "m", "y", "k"]
a.each_with_index do |channel, index|
puts channels[index]
result.write("#{channels[index]}.jpg")
channel.ordered_dither('h4x4a').write("#{channels[index]}.jpg")
end
</code></pre>
<p>I would appreciate any suggestions on how to translate the given ImageMagick command.</p>
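<p>Not a full RMagick translation, but the core of the command is an ordered-dither threshold map that gets tiled across each channel (and, per channel, rotated via <code>-distort SRT</code> to produce the classic screen angles). A minimal sketch of the thresholding step itself (Python for brevity, hypothetical threshold values):</p>

```python
# The 2x2 pattern ImageMagick builds with xc:/-negate/+append/-append is an
# ordered-dither threshold map; here is a minimal grayscale version of
# applying such a map (threshold values are illustrative).
threshold_2x2 = [[0.25, 0.75],   # thresholds in [0, 1], tiled over the image
                 [1.00, 0.50]]

def ordered_dither(channel):
    """Binarize one channel (values in [0, 1]) against the tiled threshold map."""
    h = len(threshold_2x2)
    w = len(threshold_2x2[0])
    return [[1 if px >= threshold_2x2[y % h][x % w] else 0
             for x, px in enumerate(row)]
            for y, row in enumerate(channel)]

gray = [[0.1, 0.6, 0.9],
        [0.4, 0.8, 0.3]]
print(ordered_dither(gray))  # [[0, 0, 1], [0, 1, 0]]
```

Doing this once per CMYK channel, each against a copy of the map rotated to its screen angle, is exactly what the quoted command performs in one pass.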
| 1,578
|
|
implement quantization
|
Wrong result after converting image from floating-point to unsigned in implementing JPEG
|
https://stackoverflow.com/questions/77307475/wrong-result-after-converting-image-from-floating-point-to-unsigned-in-implement
|
<p>I have a problem with my code implementing JPEG compression using OpenCV and C++. In the encoder, after the code performs the DCT, when I add a line of code that converts the planes from 32-bit floating-point to 8-bit unsigned, I get some weird output, as you can see below. After this conversion, besides the fact that the result is not acceptable (because I haven't even written the quantization part yet), even the original image is affected by this line of code. When changing this conversion to 16-bit unsigned, the original image is untouched and the decoded image becomes better, but the result is still not acceptable.</p>
<pre><code>constexpr int blocksize = 8;
struct block {
cv::Rect roi;
cv::Mat subimage;
};
int main()
{
//Reading the image and storing its necessary information
std::string addr("C:\\Users\\lena.jpg");
const cv::Mat img = cv::imread(addr, cv::IMREAD_COLOR);
if (img.empty()) {
std::cerr << "Could not open the image!" << std::endl;
return -1;
}
int height = img.size().height;
int width = img.size().width;
std::cout << "width * height: " << width << " * " << height << std::endl;
//Constructing 8 by 8 blocks
std::vector<block> blocks;
for (int y = 0; y < height; y += blocksize){
for (int x = 0; x < width; x += blocksize){
int block_width = std::min(blocksize, width - x);
int block_height = std::min(blocksize, height - y);
block temp;
cv::Rect newROI(x, y, block_width, block_height);
temp.roi = newROI;
temp.subimage = img(newROI);
blocks.push_back(temp);
}
}
//Discrete Cosine Transformation and Quantization
for (int bIdx = 0; bIdx < blocks.size(); ++bIdx) {
std::vector<cv::Mat> planes;
cv::split(blocks[bIdx].subimage, planes);
std::vector<cv::Mat> resultPlanes(planes.size());
for (int k = 0; k < planes.size(); ++k) {
planes[k].convertTo(planes[k], CV_32FC1);
cv::dct(planes[k], resultPlanes[k]);
//The line that I mentioned above
resultPlanes[k].convertTo(resultPlanes[k], CV_8UC1);
}
cv::merge(resultPlanes, blocks[bIdx].subimage);
}
//Inverse of DCT and Quantization
for (int bIdx = 0; bIdx < blocks.size(); ++bIdx) {
std::vector<cv::Mat> planes;
cv::split(blocks[bIdx].subimage, planes);
std::vector<cv::Mat> resultPlanes(planes.size());
for (int k = 0; k < planes.size(); ++k) {
planes[k].convertTo(planes[k], CV_32FC1);
cv::idct(planes[k], resultPlanes[k]);
resultPlanes[k].convertTo(resultPlanes[k], CV_8UC1);
}
cv::merge(resultPlanes, blocks[bIdx].subimage);
}
//Reconstructing the whole image
cv::Mat modifiedImg = img.clone();
for (int bIdx = 0; bIdx < blocks.size(); ++bIdx) {
blocks[bIdx].subimage.copyTo(modifiedImg(blocks[bIdx].roi));
}
//Show the result
cv::imshow("JPEG", img);
cv::imshow("My JPEG", modifiedImg);
cv::waitKey(0);
}
</code></pre>
<p><a href="https://i.sstatic.net/kCCyF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kCCyF.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/SH6gM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SH6gM.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/7DcOz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7DcOz.png" alt="enter image description here" /></a></p>
| 1,579
|
|
DPO model
|
Pretrained Model Weights Not Updating During DPO Training
|
https://stackoverflow.com/questions/78664372/pretrained-model-weights-not-updating-during-dpo-training
|
<p>I'm trying to apply DPO to a pre-trained model. However, during the training process, the scores given by the pre-trained model and the fine-tuned model are identical, and the loss remains the same across all batches, leading me to believe the weights are not being updated. My training method is given below.</p>
<pre><code>def train(model, optimizer, pref_set, dispref_set, epochs, beta, bs):
model.train()
#print(list(model.parameters())[0])
#print(list(model.parameters())[0].grad)
for epoch in range(epochs):
cur_pref=[]
cur_dispref=[]
for i in range(len(pref_set)):
cur_pref.append(pref_set[i])
cur_dispref.append(dispref_set[i]) #collects preferred and dispreferred responses
if (i+1) % bs == 0:
make_fastas(cur_pref, cur_dispref) #sets up necessary files
run_mpnn('model-DPO') #scores responses
optimizer.zero_grad()
b_ref, nb_ref, b_dpo, nb_dpo = collect_logps(cur_pref) #collects scores
loss = calc_loss(b_dpo, nb_dpo, b_ref, nb_ref, beta) #computes DPO loss
print(loss)
loss.backward()
optimizer.step()
print(optimizer)
torch.save({ #saves updated model for next round of scoring
'epoch': epoch+1,
'step': i,
'num_edges' : 48,
'noise_level': 0.2,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
}, "../ProteinMPNN/vanilla_model_weights/model-DPO.pt")
print(loss)
cur_pref=[]
cur_dispref=[]
</code></pre>
<p>In short, the scoring of my preferred and dispreferred responses must be done in a separate script, meaning I must save the updated model after each batch to be loaded for the following round of scoring. But as I mentioned, the model weights are not changing, and the scores returned by the reference and target models are always the same. Any help in resolving this issue would be greatly appreciated.</p>
<p>I've checked to make sure that the model parameters are initialized correctly, with requires_grad=True. They also have no gradient before training (list(model.parameters())[0].grad = None). I also checked to ensure that I'm not overwriting the updated model weights, or accidentally loading the vanilla weights during scoring. I double checked my loss function, and tried setting the loss and learning rates to arbitrarily high values to force the weights to update. However, no change in scoring occurred. The model parameter gradient after the backward call is still None, and I'm not sure why. As mentioned previously, all model parameters are initialized with requires_grad=True.</p>
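<p>One observation, sketched below: if <code>collect_logps</code> returns plain numbers read back from the external scoring script, they are detached from the autograd graph, so <code>loss.backward()</code> has nothing to differentiate, which is consistent with the parameter gradients staying <code>None</code>. The DPO loss arithmetic itself is just this (scalar Python sketch with made-up log-probabilities; in training these inputs must be graph-connected tensors):</p>

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta):
    """Scalar DPO loss: -log sigmoid(beta * (policy margin - reference margin)).
    For gradients to flow, the pi_* log-probs must be tensors produced inside
    the same autograd graph as the model, not numbers read back from a file."""
    margin = (pi_chosen - pi_rejected) - (ref_chosen - ref_rejected)
    return -math.log(1 / (1 + math.exp(-beta * margin)))

# Policy prefers the chosen response more than the reference does -> small loss.
print(round(dpo_loss(-10.0, -14.0, -12.0, -13.0, beta=0.1), 4))  # 0.5544
```

In other words, even a mathematically correct loss value computed from detached scores will backpropagate nothing into the model.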
| 0
|
|
DPO model
|
How should the DPO algorithm be executed when the SFT model is unavailable?
|
https://stackoverflow.com/questions/78922265/how-should-the-dpo-algorithm-be-executed-when-the-sft-model-is-unavailable
|
<p><a href="https://i.sstatic.net/657AD55B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/657AD55B.png" alt="enter image description here" /></a></p>
<p>The above image captures an excerpt from the original DPO paper.</p>
<p><strong>My current understanding of the DPO process is as follows:</strong></p>
<ol>
<li>First, initialize both the policy model and the reference model from the SFT model.</li>
<li>For each prompt, the reference model generates a pair of answers, which are then labeled by human annotators to create a human preference dataset in an offline manner.</li>
<li>Minimize the DPO loss to continuously optimize the policy model.</li>
</ol>
<p>However, there's one point in this passage that I don't quite understand. It mentions that, in practice, people often prefer to use publicly available preference datasets rather than generating their own privately. Since the preference dataset is obtained through sampling from the SFT model, we initialize the reference model from the SFT model when it is available. If the SFT model is unavailable, we need to initialize a reference model ourselves according to the corresponding objective function.</p>
<p><strong>The parts I don't understand are as follows:</strong></p>
<ol>
<li>The policy model is also initialized from the SFT model. If the SFT model is unavailable, where does the policy model come from?</li>
<li>When the SFT model is unavailable, the reference model is trained using the objective function provided in the paper. What is its initial state then? Is it randomly initialized and then trained according to the preference responses?</li>
<li>Is my understanding of the DPO process correct?</li>
</ol>
<p>Thank you in advance for your insights and assistance!</p>
| 1
|
|
DPO model
|
Standford NLP library - How to identify similar words (Dash, DashPro, Dash Pro, Dpo, dpo) and get one word (DashPro) to match against training model?
|
https://stackoverflow.com/questions/79546447/standford-nlp-library-how-to-identify-similar-words-dash-dashpro-dash-pro
|
<p>Is there a way to identify similar words and convert it into one word before match against training model using Stanford NLP library?</p>
<p>For example, user inputs could be:</p>
<ol>
<li>DashPro</li>
<li>Dash Pro</li>
<li>dpo</li>
<li>Dash</li>
</ol>
<p>For all the above inputs, the return result should be "DashPro" so that it can match with the training model which contains only "DashPro"</p>
<p>Which of NLP Stanford Library pipeline or tools can help to resolve above scenario. And if you can provide any example code or references using java?</p>
<p>Thanks,</p>
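<p>Library-agnostic, the usual approach is an alias table plus fuzzy matching applied before the text reaches the trained model (in CoreNLP this kind of canonicalization is typically expressed with rule files, e.g. TokensRegex). A Python stdlib sketch of the logic, with hypothetical alias values, that ports directly to Java:</p>

```python
import difflib

# Hypothetical alias table: explicit synonyms/abbreviations map to one
# canonical name before the text ever reaches the trained model.
ALIASES = {"dpo": "DashPro", "dash pro": "DashPro", "dash": "DashPro"}
CANONICAL = ["DashPro"]

def normalize(term):
    key = term.strip().lower()
    if key in ALIASES:
        return ALIASES[key]
    # Fall back to fuzzy matching for near-misses and typos.
    close = difflib.get_close_matches(term, CANONICAL, n=1, cutoff=0.6)
    return close[0] if close else term

for user_input in ["DashPro", "Dash Pro", "dpo", "Dash", "DasPro"]:
    print(user_input, "->", normalize(user_input))  # all map to DashPro
```

Abbreviations like "dpo" cannot be recovered by string similarity alone, which is why the explicit alias table comes first and fuzzy matching only handles spelling variants.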
| 2
|
|
DPO model
|
Optimizing an LLM Using DPO: nan Loss Values During Evaluation
|
https://stackoverflow.com/questions/78685861/optimizing-an-llm-using-dpo-nan-loss-values-during-evaluation
|
<p>I want to optimize an LLM based on DPO. I tried to train and evaluate the model, but there are nan values in the evaluation results.</p>
<pre><code>import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from datasets import Dataset
from trl import DPOTrainer, DPOConfig
from datasets import load_dataset
model_name = "EleutherAI/pythia-14m"
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
def preprocess_data(item):
return {
'prompt': 'Instruct: ' + item['prompt'] + '\n',
'chosen': 'Output: ' + item['chosen'],
'rejected': 'Output: ' + item['rejected']
}
dataset = load_dataset('jondurbin/truthy-dpo-v0.1', split="train")
dataset = dataset.map(preprocess_data)
split_dataset = dataset.train_test_split(test_size=0.1) # Adjust the test_size as needed
train_dataset = split_dataset['train']
val_dataset = split_dataset['test']
print(f"Length of train data: {len(train_dataset)}")
print(f"Length of validation data: {len(val_dataset)}")
# Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.unk_token
# Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
model_name,
low_cpu_mem_usage=True,
torch_dtype=torch.float16
).to(device)
model_ref = AutoModelForCausalLM.from_pretrained(
model_name,
low_cpu_mem_usage=True,
torch_dtype=torch.float16
).to(device)
# Config
training_args = DPOConfig(
output_dir="./output",
beta=0.1,
max_length=512,
max_prompt_length=128,
remove_unused_columns=False,
)
# Load trainer
dpo_trainer = DPOTrainer(
model,
model_ref,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
tokenizer=tokenizer,
)
# Train
dpo_trainer.train()
# Evaluate
evaluation_results = dpo_trainer.evaluate()
print("Evaluation Results:", evaluation_results)
</code></pre>
<p>This is the code used to train a simple 'pythia-14m' model. Below is the result.</p>
<pre><code>Evaluation Results: {'eval_loss': nan, 'eval_runtime': 0.5616, 'eval_samples_per_second': 181.61, 'eval_steps_per_second': 12.463, 'eval_rewards/chosen': nan, 'eval_rewards/rejected': nan, 'eval_rewards/accuracies': 0.0, 'eval_rewards/margins': nan, 'eval_logps/rejected': nan, 'eval_logps/chosen': nan, 'eval_logits/rejected': nan, 'eval_logits/chosen': nan, 'epoch': 3.0}
</code></pre>
<p>any idea why nan values during evaluation ? is there anything wrong in the code ?</p>
|
<p>I would first look for any NaN values in the data before training; if there are none, try gradient clipping to prevent exploding gradients.</p>
| 3
|
DPO model
|
TypeError: empty_like(): argument 'input' (position 1) must be Tensor, not NoneType
|
https://stackoverflow.com/questions/79244854/typeerror-empty-like-argument-input-position-1-must-be-tensor-not-nonet
|
<p>I'm trying to fine-tune the "unsloth/Llama-3.2-11B-Vision-Instruct" model using the DPOTrainer from trl. My dataset is trl-lib/rlaif-v, and I verified its format aligns with the requirements for DPO training. However, when I run the code on Kaggle, I encounter the following error during training:</p>
<pre><code>
TypeError: empty_like(): argument 'input' (position 1) must be Tensor, not NoneType
</code></pre>
<p><strong>Code</strong></p>
<p>Here is the relevant code snippet:</p>
<pre><code>model, tokenizer = FastVisionModel.from_pretrained(
"unsloth/Llama-3.2-11B-Vision-Instruct",
load_in_4bit = True, # Use 4bit to reduce memory use. False for 16bit LoRA.
use_gradient_checkpointing = "unsloth", # True or "unsloth" for long context
)
model = FastVisionModel.get_peft_model(
model,
finetune_vision_layers = True, # False if not finetuning vision layers
finetune_language_layers = True, # False if not finetuning language layers
finetune_attention_modules = True, # False if not finetuning attention layers
finetune_mlp_modules = True, # False if not finetuning MLP layers
r = 16, # The larger, the higher the accuracy, but might overfit
lora_alpha = 16, # Recommended alpha == r at least
lora_dropout = 0,
bias = "none",
random_state = 3407,
use_rslora = False, # We support rank stabilized LoRA
loftq_config = None, # And LoftQ
)
from datasets import load_dataset
dataset = load_dataset("trl-lib/rlaif-v", split="train[:1%]")
from trl import DPOConfig, DPOTrainer
training_args = DPOConfig(
output_dir="output",
fp16=True,
gradient_checkpointing=True,
per_device_train_batch_size=2,
gradient_accumulation_steps=32,
num_train_epochs=1,
    dataset_num_proc=4,  # tokenization will use 4 processes
    dataloader_num_workers=4,  # data loading will use 4 workers
logging_steps=1,
)
trainer = DPOTrainer(
model,
args=training_args,
train_dataset=dataset,
tokenizer=tokenizer,
)
trainer.train()
</code></pre>
<p><strong>Error details</strong></p>
<p>The error occurs at this point in the traceback:</p>
<pre><code>TypeError Traceback (most recent call last)
Cell In[11], line 20
2 training_args = DPOConfig(
3 output_dir="output",
4 fp16=True,
(...)
11 logging_steps=1,
12 )
13 trainer = DPOTrainer(
14 model,
15 args=training_args,
16 train_dataset=dataset,
17 tokenizer=tokenizer,
18 )
---> 20 trainer.train()
...
File /opt/conda/lib/python3.10/site-packages/unsloth_zoo/loss_utils.py:74, in patch_loss_functions.<locals>.UnslothForCausalLMLoss(logits, labels, vocab_size, num_items_in_batch, ignore_index, **kwargs)
70 def UnslothForCausalLMLoss(
71 logits, labels, vocab_size: int, num_items_in_batch: int = None, ignore_index: int = -100, **kwargs
72 ):
73 shift_logits = logits
---> 74 shift_labels = torch.empty_like(labels)
75 shift_labels[..., :-1] = labels[..., 1:]
76 shift_labels[..., -1] = ignore_index
TypeError: empty_like(): argument 'input' (position 1) must be Tensor, not NoneType
</code></pre>
<p>But I think the format of the DPO dataset is correct. How can I solve this?</p>
<p><strong>What I've tried</strong></p>
<ul>
<li>Verified the dataset format is correct for DPO training.</li>
<li>Tried using other datasets, but the error persists.</li>
<li>Checked that the dataset has valid input fields (logits, labels, etc.).</li>
</ul>
| 4
|
|
DPO model
|
Hugging Face Model import error to Jupyter Notebook :
|
https://stackoverflow.com/questions/77658327/hugging-face-model-import-error-to-jupyter-notebook
|
<p>When I try to import a pre-trained fine tuned model from hugging face to jupyter notebook it's shows that the Kernel Restarting: The kernel for .ipynb appears to have died. It will restart automatically.</p>
<pre><code>from transformers import pipeline
pipe = pipeline("text-generation", model="lvkaokao/mistral-7b-finetuned-orca-dpo-v2")
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("lvkaokao/mistral-7b-finetuned-orca-dpo-v2")
model = AutoModelForCausalLM.from_pretrained("lvkaokao/mistral-7b-finetuned-orca-dpo-v2")
</code></pre>
<p>This is the process by which I try to import the model in jupyter notebook!</p>
<p>How to solve this and how can I import the model properly?</p>
|
<p>As @Ro.oT has mentioned, it seems like you're running out of RAM when trying to load the model.</p>
<p>Check the points below to reduce RAM usage:</p>
<ul>
<li>Data pipeline: if the dataset takes too much memory before the model is loaded, discard part of it.</li>
<li>Model checkpoint: I'm not sure, but downloading and loading the pretrained checkpoint itself may take significant RAM. Try using a smaller version of the checkpoint instead.</li>
</ul>
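<p>To see why a 7B model can kill a notebook kernel, here is a back-of-envelope sketch (plain Python; the parameter count is approximate) of the RAM needed just to hold the weights:</p>

```python
def estimate_model_ram_gb(n_params, bytes_per_param=2):
    """Rough RAM needed just to hold the model weights.

    Ignores activations, optimizer state, and the KV cache, so the
    real footprint during inference/training is higher.
    """
    return n_params * bytes_per_param / 1024 ** 3

# A Mistral-7B-class model has roughly 7.2e9 parameters (approximate).
for dtype, nbytes in [("fp32", 4), ("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{dtype}: ~{estimate_model_ram_gb(7.2e9, nbytes):.1f} GB")
```

<p>Even in fp16 you need on the order of 13–14 GB free just for the weights, which is more than many notebook kernels have. Passing <code>torch_dtype=torch.float16</code> (or using a quantized checkpoint) to <code>from_pretrained</code> is the usual way to bring this down from the fp32 default.</p>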
| 5
|
DPO model
|
Fine-tune llama2 on cuda:1
|
https://stackoverflow.com/questions/76929997/fine-tune-llama2-on-cuda1
|
<p>When I load the model I use <code>device_map</code> to put it on cuda:1, but it still seems that the model and the training are on different devices. How should I properly do this?</p>
<p>Code running on a Tesla T4 below:</p>
<pre><code># load the base model in 4-bit quantization
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
base_model_name,
quantization_config=bnb_config,
device_map={"": 1},
trust_remote_code=True,
use_auth_token=True,
)
base_model.config.use_cache = False
tokenizer = AutoTokenizer.from_pretrained(base_model_name, use_auth_token=True)
# add LoRA layers on top of the quantized base model
peft_config = LoraConfig(
r=16,
lora_alpha=64,
lora_dropout=0.1,
target_modules=["q_proj", "v_proj"],
bias="none",
task_type="CAUSAL_LM",
)
trainer = SFTTrainer(
model=base_model,
train_dataset=dataset,
peft_config=peft_config,
packing=True,
max_seq_length=None,
dataset_text_field="text",
tokenizer=tokenizer,
args=training_args, # HF Trainer arguments
)
trainer.train()
</code></pre>
<p>Gives error:</p>
<blockquote>
<p>ValueError: You can't train a model that has been loaded in 8-bit precision on a different device than the one you're training on. Make sure you loaded the model on the correct device using for example device_map={'':torch.cuda.current_device()} you're training on.</p>
</blockquote>
<p>I following this guide: <a href="https://huggingface.co/blog/dpo-trl" rel="nofollow noreferrer">https://huggingface.co/blog/dpo-trl</a></p>
| 6
|
|
DPO model
|
Linear layers for LORA
|
https://stackoverflow.com/questions/79476319/linear-layers-for-lora
|
<p>I have been trying to do DPO on the Llava models (llava-hf/llava-v1.6-mistral-7b-hf) and came across the training script Llava folks provided and realized that all the multimodal linear layers are ignored when selecting LORA targets. Can someone please explain why?</p>
<p><a href="https://github.com/LLaVA-VL/LLaVA-NeXT/blob/09e5840d5589ad2d6a8656c0a60f21ae134b3309/llava/train/train_dpo.py#L226" rel="nofollow noreferrer">https://github.com/LLaVA-VL/LLaVA-NeXT/blob/09e5840d5589ad2d6a8656c0a60f21ae134b3309/llava/train/train_dpo.py#L226</a></p>
<p>Here is the function they have for selecting the layers:</p>
<pre class="lang-py prettyprint-override"><code>def find_all_linear_names(model):
cls = torch.nn.Linear
lora_module_names = set()
multimodal_keywords = ["mm_projector", "vision_tower", "vision_resampler"]
for name, module in model.named_modules():
if any(mm_keyword in name for mm_keyword in multimodal_keywords):
continue
if isinstance(module, cls):
names = name.split(".")
lora_module_names.add(names[0] if len(names) == 1 else names[-1])
if "lm_head" in lora_module_names: # needed for 16-bit
lora_module_names.remove("lm_head")
return list(lora_module_names)
</code></pre>
<p>I expected the vision_tower layers (linear) to also be included, primarily fc1 and fc2, but the LLaVA training script ignores them. I'm trying to understand why.</p>
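<p>The filtering logic itself can be exercised without loading a model — here is a plain-Python mirror of the function above, run on made-up module names (the names are hypothetical, chosen only to illustrate the behaviour):</p>

```python
def find_all_linear_names(named_linear_modules,
                          multimodal_keywords=("mm_projector", "vision_tower", "vision_resampler")):
    """Mirror of the selection logic: skip multimodal branches, keep the last name part."""
    lora_module_names = set()
    for name in named_linear_modules:
        if any(kw in name for kw in multimodal_keywords):
            continue  # multimodal modules are skipped before any other check
        parts = name.split(".")
        lora_module_names.add(parts[0] if len(parts) == 1 else parts[-1])
    lora_module_names.discard("lm_head")  # needed for 16-bit
    return sorted(lora_module_names)

names = [
    "model.layers.0.self_attn.q_proj",
    "model.layers.0.mlp.up_proj",
    "model.vision_tower.encoder.layers.0.mlp.fc1",  # skipped: contains vision_tower
    "model.mm_projector.linear_1",                   # skipped: contains mm_projector
    "lm_head",                                       # removed for 16-bit
]
print(find_all_linear_names(names))  # → ['q_proj', 'up_proj']
```

<p>This makes the behaviour explicit: any module whose path contains <code>vision_tower</code> (including <code>fc1</code>/<code>fc2</code>) is skipped before the <code>isinstance</code> check ever runs, so only language-side linears become LoRA targets — presumably a deliberate choice to keep the pretrained vision encoder frozen, though the script itself doesn't say why.</p>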
| 7
|
|
DPO model
|
Predicting data from gamlss model in handler function using tryCatch in R
|
https://stackoverflow.com/questions/64638070/predicting-data-from-gamlss-model-in-handler-function-using-trycatch-in-r
|
<p>I am having a problem using the <code>tryCatch()</code> function in R in a function I created.</p>
<p>What I want to do is this:</p>
<ol>
<li>simulate data based on model results</li>
<li>analyze simulated data using my <code>gamlss</code> model</li>
<li>use the <code>predict</code> function to extract model predictions over a new range of values</li>
<li>store these predictions in a data frame</li>
<li>do this many times</li>
</ol>
<p>My main problem is that my model is somewhat unstable and once in a while predictions are kind of wild, which in turn generates an error when I try to analyze it with <code>gamlss</code>. My objective is to write a <code>tryCatch</code> statement within my simulation function and to simply rerun the simulation/prediction code a second time in the event that an error occurs. (I know this is not optimal; I could also write it as a recursive statement using <code>repeat</code>, for example, and run it until I don't get an error, but I get few enough errors that the probability of getting two in a row is quite low, and I'm having enough trouble with this task as it is.)</p>
<p>So I simplified my code as much as I could and created a dummy dataframe for which the modelling still works.</p>
<p>I wrote in the code where I believe the error is (with the predict function which does not find the <code>mod_sim</code> object). It is likely there since the <code>cat</code> just above this line prints while the one just below doesn't print.</p>
<p>I think there are some things about how <code>tryCatch</code> works that I don't understand well enough, and I'm having a hard time understanding which objects are kept in which parts of the function and when they can be accessed or not...</p>
<p>Here is the code I have so far. The error occurs at l.84 (identified in the script). The data and code can be found <a href="https://github.com/LeTourneuxF/DHT" rel="nofollow noreferrer">here</a>.</p>
<pre><code>library(tidyverse)
library(gamlss)
library(gamlss.dist)
#Load data
load('DHT.RData')
#Run original model
mod_pred<-gamlss(harvest_total ~ ct,
data = DHT,
family = DPO)
#Function to compute predictions based on model
compute_CI_trad_gamlss<-function(n.sims=200, mod){#,
#DF for simulations
df_sims<-as.data.frame(DHT)
#Dateframe with new data to predict over
new.data.ct<<-expand.grid(ct=seq(from=5, to=32, length.out=50))
#matrix to store predictions
preds.sim.trad.ct <<- matrix(NA, nrow=nrow(new.data.ct), ncol=n.sims)
#Number of obs to simulate
n<-nrow(df_sims)
#Simulation loop (simulate, analyze, predict, write result)
for(i in 1:n.sims){
#Put in tryCatch to deal with potential error on first run
tryCatch({
#Create matrix to store results of simulation
y<-matrix(NA,n,1)
#in DF for simulations, create empty row to be filled by simulated data
df_sims$sim_harvest<-NA
#Loop to simulate observations
for(t in 1:n){
#Simulate data based on model parameters
y[t]<-rDPO(n=1, mu=mod$mu.fv[t], sigma = mod$sigma.fv[t])
}#end of simulation loop
#Here I want the result of the simulation loop to be pasted in the df_sims dataset
df_sims$sim_harvest<-y
#Analysis of simulated data
mod_sim<-gamlss(sim_harvest ~ ct,
data = df_sims,
family = DPO)
#Refit the model if convergence not attained
if(mod_sim$converged==T){
#If converged do nothing
} else {
#If not converged refit model
mod_sim<-refit(mod_sim)
}
cat('we make it to here!\n')
#Store results in object
ct <<-as.vector(predict(mod_sim, newdata = new.data.ct, type='response'))
cat('but not to here :( \n')
#If we made it down here, register err as '0' to be used in the if statement in the 'finally' code
err<<-0
},
#If error register the error and write it!
error = function(e) {
#If error occured, show it
cat('error at',i,'\n')
#Register err as 1 to be used in the if statement in the finally code below
err<<-1
},
finally = {
if(err==0){
#if no error, do nothing and keep going outside of tryCatch
}#End if err==0
else if (err==1){
#If error, re-simulate data and do the analysis again
y<-matrix(NA,n,1)
df_sims$sim_harvest<-NA
#Loop to simulate observations
for(t in 1:n){
#Simulate data based on model results
y[t]<-rDPO(n=1, mu=mod$mu.fv[t], sigma = mod$sigma.fv[t])
}#end of simulation loop
#Here I want the result of the simulation loop to be pasted in the df_sims dataset
df_sims$sim_harvest<-y
#Analysis of simulated data
mod_sim<-gamlss(sim_harvest ~ ct,
data = df_sims,
family = DPO)
cat('we also make it here \n')
#Store results in object
ct <<-as.vector(predict(mod_sim, newdata = new.data.ct, type='response'))
cat('but not here... \n')
}#End if err==1,
}#End finally
)#End tryCatch
#Write predictions for this iteration to the DF and start over
preds.sim.trad.ct[,i] <<-ct
#Show iteration number
cat(i,'\n')
}
#Do some more stuff here
#Return results
return(preds = list(ct= list(predictions=preds.sim.trad.ct)))
}
#Run simulation and store object
result<-compute_CI_trad_gamlss(n.sims=20, mod=mod_pred)
</code></pre>
<p>Anyway I hope someone can help!</p>
<p>Thanks a lot!</p>
|
<p>So after a bit of trial and error I managed to make it work. I believe the problem lies in the <code>mod_sim</code> object that is not saved to the global environment. <code>predict</code> (or <code>predict.gamlss</code> here) is probably not looking in the function environment for the <code>mod_sim</code> object although I don't understand why it wouldn't. Anyway using <code><<-</code> (i.e. assigning the object in the global environment from the function) for every object created in the function seemed to do the trick. If anyone has an explanation on why this happens though I'd be glad to understand what I'm doing wrong!</p>
| 8
|
DPO model
|
DDD / Presenter pattern VS Use case optimal query
|
https://stackoverflow.com/questions/20788646/ddd-presenter-pattern-vs-use-case-optimal-query
|
<p>In this great <a href="https://rads.stackoverflow.com/amzn/click/com/0321834577" rel="nofollow noreferrer" rel="nofollow noreferrer">book</a> about Domain-Driven Design, a chapter is dedicated to the user interface and its relationship to domain objects.</p>
<p>One point that confuses me is the comparison between Use case optimal queries and presenters.</p>
<p>The excerpt dealing with optimal queries (page 517) is:</p>
<blockquote>
<p>Rather than reading multiple whole Aggregate instances of various
types and then programmatically composing them into a single container
(DTO or DPO), you might instead use what is called a use case optimal
query.<br>
This is where you design your Repository with finder query
methods that compose a custom object as a superset of one or more
Aggregate instances.<br>
The query dynamically places the results into a
Value Object (6) specifically designed to address the needs of the use
case.<br>
You design a Value Object, not a DTO, because the query is
domain specific, not application specific (as are DTOs). The custom
use case optimal Value Object is then consumed directly by the view
renderer. </p>
</blockquote>
<p>Thus, the benefit of optimal queries is to directly provide a specific-to-view value object, acting as the real view model.</p>
<p>A page later, presenter pattern is described:</p>
<blockquote>
<p>The presentation model acts as an Adapter. It masks the details of the
domain model by providing properties and behaviours that are designed
in terms of the needs of the view.<br>
Rather than requiring the
domain model to specifically support the necessary view properties, it
is the responsibility of the Presentation Model to derive the
view-specific indicators and properties from the state of the domain
model.</p>
</blockquote>
<p>It sounds that both ways achieve the construction of a view model, specific to the use case.</p>
<p>Currently my call chain (using Play Framework) looks like:</p>
<p>For queries: Controllers (acting as Rest interface sending Json) -> Queries (returning specific value object through optimal queries)</p>
<p>For commands: Controllers (acting as Rest interface sending Json) -> Application services (Commands) -> domain services/repositories/Aggregates (application services returns void)</p>
<p><strong>My question is: if I already practice the use case optimal query, what would be the benefit of implementing the presenter pattern? Why bother with a presenter if one could always use optimal queries to satisfy the client needs directly?</strong> </p>
<p>I just think of one benefit of the presenter pattern: dealing with commands, not queries, thus providing to command some domain objects corresponding to the view models determined by the presenter. Controller would then be decoupled from domain object.
Indeed, another excerpt of Presenter description is:</p>
<blockquote>
<p>Additionally, edits performed by the user are tracked by the
Presentation Model.<br>
This is not the case of placing overloaded
responsibilities on the Presentation Model, since it's meant to adapt
in both directions, model to view and view to model.</p>
</blockquote>
<p>However, I prefer sending pure primitives to application services (commands), rather than dealing directly with domain object, so this benefit would not apply for me.<br>
Any explanation?</p>
|
<p>Just a guess :)</p>
<p>The presenter pattern could reuse your repository's aggregate finder methods as much as possible. For example, if we have two views, we need two adapters (one adapter per view), but we only need one repository finder method:</p>
<pre><code>class CommentBriefViewAdapter {
private Comment comment;
public String getTitle() {
return partOf(comment.getTitle());
//return first 10 characters of the title, hide the rest
}
.....//other fields to display
}
class CommentDetailViewAdapter {
private Comment comment;
public String getTitle() {
return comment.getTitle();//return full title
}
.....//other fields to display
}
//In controller:
model.addAttribute(new CommentBriefViewAdapter(commentRepo.findBy(commentId)));
// same repo method
model.addAttribute(new CommentDetailViewAdapter(commentRepo.findBy(commentId)));
</code></pre>
<p>But optimal queries are view oriented (one query per view). I think these two solutions are designed for a <strong>non-CQRS</strong> style DDD architecture. They're no longer needed in a CQRS-style architecture, since queries are not based on repositories but on a specific thin data layer.</p>
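<p>The same idea as a runnable sketch (Python purely for illustration; class and field names are hypothetical):</p>

```python
class Comment:
    """A domain aggregate returned by a single repository finder method."""
    def __init__(self, title):
        self.title = title

class CommentBriefViewAdapter:
    """Presents only the first 10 characters of the title, hiding the rest."""
    def __init__(self, comment):
        self._comment = comment

    @property
    def title(self):
        return self._comment.title[:10]

class CommentDetailViewAdapter:
    """Presents the full title."""
    def __init__(self, comment):
        self._comment = comment

    @property
    def title(self):
        return self._comment.title

comment = Comment("A fairly long comment title")  # one aggregate, two views
print(CommentBriefViewAdapter(comment).title)   # → A fairly l
print(CommentDetailViewAdapter(comment).title)  # → A fairly long comment title
```

<p>One aggregate, one finder method, two view-specific presentations — the adapters own the view logic instead of the domain model.</p>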
| 9
|
DPO model
|
How to show record int profile related to user using Laravel?
|
https://stackoverflow.com/questions/61463879/how-to-show-record-int-profile-related-to-user-using-laravel
|
<p>I want to show the records related to a user in that user's profile. I am trying to do this, but unfortunately it's not showing the records related to the user. How can I do this?</p>
<p><strong>Database</strong></p>
<pre><code> digitizing_orders table has user_id
</code></pre>
<p><strong>Digitizingorder</strong></p>
<pre><code> class Digitizingorder extends Model
{
protected $table="digitizing_orders";
public function user()
{
return $this->belongsTo('App\User');
}
}
</code></pre>
<p><strong>User Model</strong></p>
<pre><code> class User extends Authenticatable
{
public function digitizing()
{
return $this->hasMany('App\Digitizingorder','user_id');
}
}
</code></pre>
<p><strong>controller</strong></p>
<pre><code> public function index()
{
$data=
[
'digitizings'=>Digitizingorder::with('user')->where('id','=',Auth::id())->get()
];
return view('front_end.Customerprofile.digitizing_view_order',$data);
}
@foreach($digitizings as $digitizing)
<tr>
<td>1</td>
<td>DPO-{{$digitizing->id}}</td>
<td>{{$digitizing->order_name}}</td>
<td>{{$digitizing->created_at}}</td>
<td>-</td>
<td>$0.00</td>
</tr>
@endforeach
</code></pre>
|
<p>Since you have a <code>hasMany</code> relationship you can get digitizings like so:</p>
<pre><code> public function index()
{
$data=
[
'digitizings'=>Auth::user()->digitizing()->get()
];
return view('front_end.Customerprofile.digitizing_view_order',$data);
}
</code></pre>
<p>This will get the orders for the authenticated user. Note that your original query filtered on <code>id</code> (the order's primary key) instead of <code>user_id</code>, which is why it didn't return the records related to the logged-in user.</p>
| 10
|
DPO model
|
Trying to get property 'first_name' of non-object
|
https://stackoverflow.com/questions/61896917/trying-to-get-property-first-name-of-non-object
|
<p>I am trying to fetch the digitizing orders related to the user, but unfortunately I am facing an error.</p>
<p>Please see this error: <a href="https://flareapp.io/share/VmeWJ47Q" rel="nofollow noreferrer">https://flareapp.io/share/VmeWJ47Q</a></p>
<p><strong>Controller</strong></p>
<pre><code>public function index()
{
$data=
[
'digitizings'=>Digitizing::with('user')->paginate(8)
];
return view('front_end.profile.digitizing.digitizing_view_order',$data);
}
</code></pre>
<p><strong>User Model</strong></p>
<pre><code>class User extends Authenticatable
{
use Notifiable;
public function digitizing()
{
return $this->hasMany('App\Digitizing','user_id');
}
}
</code></pre>
<p><strong>Digitizing Model</strong></p>
<pre><code>class Digitizing extends Model
{
protected $fillable = ['id','order_name','height','width','urgent','image',
'order_placement','required_format','order_fabric','instruction','user_id'];
protected $table ="digitizing_orders";
public function user()
{
return $this->belongsTo('App\User');
}
}
</code></pre>
<p><strong>HTML view</strong></p>
<pre><code> @foreach($digitizings as $key =>$digitizing)
<tr>
<td>DPO# {{$digitizing->id}}</td>
<td>{{$digitizing->created_at}}</td>
<td>{{$digitizing->order_name}}</td>
<td>{{$digitizing->user->first_name}}</td>
<td>{{$digitizing->user->email}}</td>
<td>{{$digitizing->released_date ?? 'processing'}}</td>
<td><a href="">View</a>
</td>
</tr>
@endforeach
</code></pre>
|
<p>Does every Digitizing entry have a valid <code>user_id</code> set in the database? The error means <code>$digitizing-&gt;user</code> is <code>null</code> for at least one row. Check that the eager-loaded relation is set before accessing it, e.g. <code>{{ optional($digitizing-&gt;user)-&gt;first_name }}</code>.</p>
| 11
|
DPO model
|
Size mismatch for embed_out.weight: copying a param with shape torch.Size([0]) from checkpoint - Huggingface PyTorch
|
https://stackoverflow.com/questions/78712878/size-mismatch-for-embed-out-weight-copying-a-param-with-shape-torch-size0-f
|
<p>I want to finetune an LLM. I am able to finetune it successfully, but when I reload the model after saving, I get an error. Below is the code:</p>
<pre><code>import argparse
import numpy as np
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForCausalLM
from trl import DPOTrainer, DPOConfig
def preprocess_data(item):
return {
'prompt': 'Instruct: ' + item['prompt'] + '\n',
'chosen': 'Output: ' + item['chosen'],
'rejected': 'Output: ' + item['rejected']
}
def main():
parser = argparse.ArgumentParser()
parser.add_argument("--epochs", type=int, default=1)
parser.add_argument("--beta", type=float, default=0.1)
parser.add_argument("--batch_size", type=int, default=4)
parser.add_argument("--lr", type=float, default=1e-6)
parser.add_argument("--seed", type=int, default=2003)
parser.add_argument("--model_name", type=str, default="EleutherAI/pythia-14m")
parser.add_argument("--dataset_name", type=str, default="jondurbin/truthy-dpo-v0.1")
parser.add_argument("--local_rank", type=int, default=0)
args = parser.parse_args()
# Determine device based on local_rank
device = torch.device("cuda", args.local_rank) if torch.cuda.is_available() else torch.device("cpu")
tokenizer = AutoTokenizer.from_pretrained(args.model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(args.model_name).to(device)
ref_model = AutoModelForCausalLM.from_pretrained(args.model_name).to(device)
dataset = load_dataset(args.dataset_name, split="train")
dataset = dataset.map(preprocess_data)
# Split the dataset into training and validation sets
dataset = dataset.train_test_split(test_size=0.1, seed=args.seed)
train_dataset = dataset['train']
val_dataset = dataset['test']
training_args = DPOConfig(
learning_rate=args.lr,
num_train_epochs=args.epochs,
per_device_train_batch_size=args.batch_size,
logging_steps=10,
remove_unused_columns=False,
max_length=1024,
max_prompt_length=512,
fp16=True
)
# Verify and print embedding dimensions before finetuning
print("Base model embedding dimension:", model.config.hidden_size)
model.train()
ref_model.eval()
dpo_trainer = DPOTrainer(
model,
ref_model,
beta=args.beta,
train_dataset=train_dataset,
eval_dataset=val_dataset,
tokenizer=tokenizer,
args=training_args,
)
dpo_trainer.train()
# Evaluate
evaluation_results = dpo_trainer.evaluate()
print("Evaluation Results:", evaluation_results)
save_model_name = 'finetuned_model'
model.save_pretrained(save_model_name)
if __name__ == "__main__":
main()
</code></pre>
<p>The error I was getting is below:</p>
<pre><code> return model_class.from_pretrained(
File "/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3838, in from_pretrained
) = cls._load_pretrained_model(
File "/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4349, in _load_pretrained_model
raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}")
RuntimeError: Error(s) in loading state_dict for GPTNeoXForCausalLM:
size mismatch for gpt_neox.embed_in.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([50304, 128]).
size mismatch for embed_out.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([50304, 128]).
You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.
</code></pre>
<p>After finetuning, the model works perfectly, but after reloading the saved trained model it's not working. Any idea why I get this error when reloading the model?</p>
|
<p>Instead of</p>
<pre><code>model.save_pretrained(save_model_name)
</code></pre>
<p>try this</p>
<pre><code>dpo_trainer.save_model(save_model_name)
</code></pre>
<p>The trainer wraps the model for training (with fp16, and especially with DeepSpeed/ZeRO, the full weights may not live on the raw module), so calling <code>save_pretrained</code> on the raw model can write empty tensors — which would match the <code>torch.Size([0])</code> shapes in your error. <code>dpo_trainer.save_model</code> goes through the trainer, which makes sure the complete state dict is gathered and written.</p>
| 12
|
DPO model
|
cyclic mss for icu beds---help writing the code - CPLEX
|
https://stackoverflow.com/questions/63934728/cyclic-mss-for-icu-beds-help-writing-the-code-cplex
|
<p>The OPL model is in the code box.<br />
My problem concerns the scheduling of ICU beds, and I have a MIQP (mixed integer quadratic program) formulation.<br />
The goal is to level out the positive and negative deviations of the beds in intensive care,<br />
so that bed usage is balanced.<br />
For example, we want to avoid that in the first 3 days of the surgical cycle all beds are occupied and in the following 2 days all beds are empty.</p>
<p>The problem is that OPL does not return any solution; it tells me that<br />
-obj no value<br />
-dneg no value<br />
-dpos no value<br />
I cannot understand how to solve this problem.</p>
<pre><code> using CP;
int nspeciality=...; //Set of clinical specialties
int nOR=...; //Set of surgery rooms
int ndays=...; //Set of days in surgery cycle
int npazients=...; // Set of patient types (non-ICU or ICU, I={0,1})
int nsurgerylengthtypes=...; // Set of surgery length types (short or long)
int ndaysinICUcycle =...; //Set of days in ICU cycle
int nweekend=...; //Set of days without surgery (i.e. Saturday, Sunday)
range speciality=0..nspeciality;
range OR=1..nOR;
range days=0..ndays;
range pazients=0..npazients;
range surgerylengthtypes=0..nsurgerylengthtypes;
range daysinICUcycle=0..ndaysinICUcycle;
range weekend=0..nweekend;
;
int s[speciality][surgerylengthtypes]=...; //Surgery duration by specialty c and length type l
int w[OR]=...; // Opening hours for each room r
int p[c in speciality][i in pazients][l in surgerylengthtypes]=2*rand(10); //Number of scheduled surgeries in one surgery cycle by specialty c, patient type i, and length type l
int N=10000;
float randU[c in speciality][l in surgerylengthtypes][k in daysinICUcycle]=rand(N)/N; //Probability that a scheduled patient stays at least k days in the ICU after having surgery by specialty c and length type l
float b=8.99; //Bed utilization target level
int v[speciality][OR]=...;
int j[OR][days]=...;
int m[c in speciality][r in OR][t in days]= v[c][r]*j[r][t]; //1 if specialty c is assigned to room r on day t; 0 otherwise
dvar int dneg[days]; //Negative deviation from bed utilization target level for day t
dvar int dpos[days]; //Positive deviation from bed utilization target level for day t
dvar int x[speciality][OR][days][pazients][surgerylengthtypes]; //Number of assigned surgeries by specialty c, room r, day t, patient type i, length type l
int h[speciality]=...; //Maximum number of MSS blocks in one surgery cycle by specialty c
int g[speciality]=...; //Maximum number of daily MSS blocks by specialty c
float y=0.25; //Weight for positive and negative deviation from the bed utilization target level
dexpr float obj= sum(t in days) ((y*(dneg[t])^2) + (1-y)*(dpos[t])^2);
minimize obj;
subject to{
constraint_1:
forall (c in speciality:(c-4) in speciality, r in OR, t in days)
sum (i in pazients, l in surgerylengthtypes) s[c][l]*x[c][r][t][i][l]<= w[r]*m[c][t][r];
constraint_2:
forall(c in speciality, i in pazients, l in surgerylengthtypes)
sum(r in OR, t in days) x[c][r][t][i][l]==p[c][i][l];
constraint_3:
forall(t in days)
sum(c in speciality, r in OR, k in daysinICUcycle, l in surgerylengthtypes) (randU[c][l][k]*x[c][r][t][1][l]+dneg[t]-dpos[t])==b;
constraint_4:
forall (c in speciality)
sum (t in days, r in OR) m[c][r][t]<=h[c];
constraint_5:
forall (c in speciality, t in days)
sum (r in OR) m[c][r][t]<=g[c];
constraint_6:
forall (t in weekend)
sum (c in speciality, r in OR) m[c][r][t]<=0;
constraint_7:
forall (t in days, r in OR)
sum (c in speciality) m[c][r][t]<=1;
constraint_8:
forall (t in days)
dneg[t]>=0;
constraint_9:
forall (t in days)
dpos[t]>=0;
constraint_10:
forall (c in speciality, r in OR, t in days, i in pazients, l in surgerylengthtypes)
x[c][r][t][i][l]>0;
</code></pre>
<p>below my file .dat:</p>
<pre><code>nspeciality=3;
nOR=4;
ndays=6;
npazients=1;
nsurgerylengthtypes=1;
ndaysinICUcycle=27;
nweekend=1;
s=[[1,1],
[0,1],
[1,0],
[0,0]];
w=[10,8,15,9];
v=[[1,1,0,0],
[1,0,0,1],
[1,1,0,0],
[0,0,1,0]];
j=[[0,1,1,0,0,1,0],
[1,0,0,1,1,0,0],
[1,1,1,0,1,1,1],
[1,0,0,0,0,0,1]];
h=[15,18,9,20];
g=[7,8,2,10];
</code></pre>
|
<p>If you run the model in the IDE you'll see some conflicts in the conflicts tab.</p>
<p>Then, if you comment out the constraints mentioned in the conflicts:</p>
<pre><code>// constraint_6:
// forall (t in weekend)
// sum (c in speciality, r in OR) m[c][r][t]<=0;
// constraint_7:
// forall (t in days, r in OR)
// sum (c in speciality) m[c][r][t]<=1;
constraint_8:
forall (t in days)
dneg[t]>=0;
constraint_9:
forall (t in days)
dpos[t]>=0;
// constraint_10:
// forall (c in speciality, r in OR, t in days, i in pazients, l in surgerylengthtypes)
// x[c][r][t][i][l]>=1;
</code></pre>
<p>then you'll get a feasible solution.</p>
<p>NB:</p>
<p>Changing</p>
<pre><code>x[c][r][t][i][l]>0;
</code></pre>
<p>to</p>
<pre><code>x[c][r][t][i][l]>=1;
</code></pre>
<p>will make your model both work with MIP and CP.</p>
| 13
|
DPO model
|
Triplet Network, loss function and equal distances
|
https://stackoverflow.com/questions/50621897/triplet-network-loss-function-and-equal-distances
|
<p>I'm currently implementing a triplet network to recognise whether two images describe the same 3D model or not, but I have a problem with the results: the distance between anchor and positive is always equal to the distance between anchor and negative.</p>
<p>Here is the code of my loss function:</p>
<pre><code> def triplet_loss(self):
self.d_pos = tf.reduce_sum(tf.square(self.o1 - self.o2), axis=-1)
self.d_neg = tf.reduce_sum(tf.square(self.o1 - self.o3), axis=-1)
loss = tf.maximum(0.0, self.margin + (self.d_pos - self.d_neg))
loss = tf.reduce_mean(loss)
return loss
</code></pre>
<p>where o1, o2 and o3 are the outputs of convolutional networks with shared weights, and are batch normalized:</p>
<pre><code>output = tf.layers.batch_normalization(inputs=output, axis=-1, momentum=0.9, epsilon=0.0001, center=True, scale=True, name='batch_3_norm')
</code></pre>
<p>And the first results are the followings : </p>
<pre><code>epoch 0: batch:0 loss 0.0000199945 dneg : 0.079995 dpos; 0.079995
epoch 0: batch:1 loss 0.0000201295 dneg : 0.092946 dpos; 0.092946
epoch 0: batch:2 loss 0.0000205572 dneg : 0.110583 dpos; 0.110583
epoch 0: batch:3 loss 0.0000216728 dneg : 0.122692 dpos; 0.122693
epoch 0: batch:4 loss 0.0000202223 dneg : 0.111207 dpos; 0.111207
epoch 0: batch:5 loss 0.0000200346 dneg : 0.105684 dpos; 0.105684
############### Test set : batch:5 loss 0.000
epoch 1: batch:0 loss 0.0000207106 dneg : 0.105736 dpos; 0.105737
epoch 1: batch:1 loss 0.0000200992 dneg : 0.107299 dpos; 0.107299
epoch 1: batch:2 loss 0.0000207007 dneg : 0.111667 dpos; 0.111667
epoch 1: batch:3 loss 0.0000201932 dneg : 0.109080 dpos; 0.109081
epoch 1: batch:4 loss 0.0000206707 dneg : 0.111295 dpos; 0.111295
</code></pre>
<p>(dneg and dpos are the distances for positive and negative couples)</p>
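<p>Side note for readers: the reported symptom (loss equal to the margin whenever the two distances match) follows directly from the hinge definition. A minimal plain-Python sketch of the same loss, with hypothetical embedding values chosen so that d_pos and d_neg coincide:</p>
<pre><code>```python
def sq_dist(a, b):
    # squared Euclidean distance between two embedding vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=2e-5):
    # same hinge form as the TensorFlow loss in the question
    d_pos = sq_dist(anchor, positive)
    d_neg = sq_dist(anchor, negative)
    return max(0.0, margin + d_pos - d_neg)

# hypothetical embeddings where both distances are equal
anchor, positive, negative = [0.1, 0.2], [0.2, 0.3], [0.0, 0.1]
loss = triplet_loss(anchor, positive, negative)
# with d_pos == d_neg the hinge reduces to the margin (up to float noise)
print(loss)
```</code></pre>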
<p>This raises several questions:</p>
<ul>
<li><p>How should I tune the margin? The difference between the two distances is so small that it seems I need a very small margin.</p></li>
<li><p>Because the two distances are equal, the loss is equal to the margin. How can I avoid this issue?</p></li>
<li><p>How do I measure the accuracy of a triplet network? For example, with a batch of size 100, can we count the number of negative examples whose distance to the anchor is bigger than the anchor-positive distance plus the margin?</p></li>
</ul>
<p>Thanks a lot for your answers!</p>
| 14
|
|
DPO model
|
Deepspeed : AttributeError: 'DummyOptim' object has no attribute 'step'
|
https://stackoverflow.com/questions/78697835/deepspeed-attributeerror-dummyoptim-object-has-no-attribute-step
|
<p>I want to use DeepSpeed for training LLMs together with the Hugging Face Trainer, but when I use DeepSpeed with the trainer I get the error "AttributeError: 'DummyOptim' object has no attribute 'step'". Below is my code:</p>
<pre><code>import argparse
import numpy as np
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForCausalLM
from trl import DPOTrainer, DPOConfig
def preprocess_data(item):
return {
'prompt': 'Instruct: ' + item['prompt'] + '\n',
'chosen': 'Output: ' + item['chosen'],
'rejected': 'Output: ' + item['rejected']
}
def main():
parser = argparse.ArgumentParser()
parser.add_argument("--epochs", type=int, default=1)
parser.add_argument("--beta", type=float, default=0.1)
parser.add_argument("--batch_size", type=int, default=4)
parser.add_argument("--lr", type=float, default=1e-6)
parser.add_argument("--seed", type=int, default=2003)
parser.add_argument("--model_name", type=str, default="EleutherAI/pythia-14m")
parser.add_argument("--dataset_name", type=str, default="jondurbin/truthy-dpo-v0.1")
parser.add_argument("--local_rank", type=int, default=0)
args = parser.parse_args()
# Determine device based on local_rank
device = torch.device("cuda", args.local_rank) if torch.cuda.is_available() else torch.device("cpu")
tokenizer = AutoTokenizer.from_pretrained(args.model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(args.model_name).to(device)
ref_model = AutoModelForCausalLM.from_pretrained(args.model_name).to(device)
dataset = load_dataset(args.dataset_name, split="train")
dataset = dataset.map(preprocess_data)
# Split the dataset into training and validation sets
dataset = dataset.train_test_split(test_size=0.1, seed=args.seed)
train_dataset = dataset['train']
val_dataset = dataset['test']
training_args = DPOConfig(
learning_rate=args.lr,
num_train_epochs=args.epochs,
per_device_train_batch_size=args.batch_size,
logging_steps=10,
remove_unused_columns=False,
max_length=1024,
max_prompt_length=512,
deepspeed="ds_config.json"
)
# Verify and print embedding dimensions before finetuning
print("Base model embedding dimension:", model.config.hidden_size)
model.train()
ref_model.eval()
dpo_trainer = DPOTrainer(
model,
ref_model,
beta=args.beta,
train_dataset=train_dataset,
eval_dataset=val_dataset,
tokenizer=tokenizer,
args=training_args,
)
dpo_trainer.train()
# Evaluate
evaluation_results = dpo_trainer.evaluate()
print("Evaluation Results:", evaluation_results)
save_model_name = 'finetuned_model'
model.save_pretrained(save_model_name)
if __name__ == "__main__":
main()
</code></pre>
<p>The config file used is the below one</p>
<pre><code>{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"bf16": {
"enabled": "auto"
},
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"initial_scale_power": 32,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false,
"flops_profiler": {
"enabled": false,
"detailed": false
},
"optimizer": {
"type": "Lamb",
"params": {
"lr": "auto",
"betas": [0.9, 0.999],
"eps": "auto",
"weight_decay": "auto"
}
},
"zero_allow_untested_optimizer": true
}
</code></pre>
<p>The code works without DeepSpeed. I have torch 2.3.1, deepspeed 0.14.5, trl 0.9.4 and CUDA Version 12.5.</p>
<p>Appreciate any hint on this !</p>
|
<pre><code>from accelerate.utils import DistributedType
training_args.distributed_state.distributed_type = DistributedType.DEEPSPEED
</code></pre>
<p>Adding these two lines (after the training arguments are created) resolves the issue.</p>
| 15
|
DPO model
|
Adalm Pluto works on Ubuntu but NOT on Ubuntu Server 20.04 LTS
|
https://stackoverflow.com/questions/71949828/adalm-pluto-works-on-ubuntu-but-not-on-ubuntu-server-20-04-lts
|
<p>I'm running Ubuntu Server and have tried installing the libiio packages both from source and from the apt repositories. I can detect the ADALM-Pluto SDR device with iio_info -s (as root, because I have not installed the udev rules), but it does not get an IP address (e.g. 192.168.2.1) like it does on Ubuntu 20.04 LTS.</p>
<pre><code>>iio_info -s
Library version: 0.19 (git tag: v0.19)
Compiled with backends: local xml ip usb serial
Available contexts:
0: 0456:b673 (Analog Devices Inc. PlutoSDR (ADALM-PLUTO)), serial=104473b04a060006ffff1c00dd1f8473f8 [usb:3.2.5]
</code></pre>
<p>I've followed the instructions here: <a href="https://wiki.analog.com/university/tools/pluto/drivers/linux" rel="nofollow noreferrer">https://wiki.analog.com/university/tools/pluto/drivers/linux</a></p>
<p>The output of dmesg when the pluto is plugged in is this:</p>
<pre><code>[380299.366375] usb 3-1: new high-speed USB device number 2 using xhci_hcd
[380299.520117] usb 3-1: New USB device found, idVendor=0456, idProduct=b673, bcdDevice= 4.19
[380299.520120] usb 3-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[380299.520122] usb 3-1: Product: PlutoSDR (ADALM-PLUTO)
[380299.520123] usb 3-1: Manufacturer: Analog Devices Inc.
[380299.520124] usb 3-1: SerialNumber: 104473b04a060006ffff1c00dd1f8473f8
[380299.553556] usb-storage 3-1:1.2: USB Mass Storage device detected
[380299.555040] scsi host4: usb-storage 3-1:1.2
[380299.555206] usbcore: registered new interface driver usb-storage
[380299.555732] cdc_acm 3-1:1.3: ttyACM0: USB ACM device
[380299.558342] usbcore: registered new interface driver cdc_acm
[380299.558344] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
[380299.560691] usbcore: registered new interface driver uas
[380299.569781] cfg80211: Loading compiled-in X.509 certificates for regulatory database
[380299.570205] cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
[380299.574968] usbcore: registered new interface driver cdc_ether
[380299.578953] rndis_host 3-1:1.0 eth0: register 'rndis_host' at usb-0000:00:14.0-1, RNDIS device, 00:e0:22:81:0c:b6
[380299.579489] usbcore: registered new interface driver rndis_host
[380299.583238] usbcore: registered new interface driver rndis_wlan
[380299.595374] rndis_host 3-1:1.0 enx00e022810cb6: renamed from eth0
[380300.582838] scsi 4:0:0:0: Direct-Access Linux File-Stor Gadget 0419 PQ: 0 ANSI: 2
[380300.583349] sd 4:0:0:0: Attached scsi generic sg1 type 0
[380300.584030] sd 4:0:0:0: [sdb] 61441 512-byte logical blocks: (31.5 MB/30.0 MiB)
[380300.584266] sd 4:0:0:0: [sdb] Write Protect is off
[380300.584274] sd 4:0:0:0: [sdb] Mode Sense: 0f 00 00 00
[380300.584510] sd 4:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[380300.607978] sdb: sdb1
[380300.623364] sd 4:0:0:0: [sdb] Attached SCSI removable disk
</code></pre>
<p>This is not consistent with what the guide on Analog Devices' wiki shows.</p>
<p>I'm at a loss here as to what I'm doing wrong. The device shows up and I have the drivers and required kernel modules (I check with lsmod). Any ideas on what would make this work on Ubuntu but not on Ubuntu Server?</p>
| 16
|
|
DPO model
|
Error"AttributeError: 'product.pricelist' object has no attribute 'get_product_pricelist'
|
https://stackoverflow.com/questions/50290468/errorattributeerror-product-pricelist-object-has-no-attribute-get-product-p
|
<p>I'm using Odoo 9 and I want to install the module "product_print_zpl_barcode", which adds a wizard on the product variant that generates and prints a product barcode on a ZPL printer. When I press the "Print Barcode" button, an error appears saying "AttributeError: 'product.pricelist' object has no attribute 'get_product_pricelist'". Any help please?</p>
<p>Product.xml </p>
<pre><code> <?xml version="1.0" encoding="utf-8"?>
<odoo>
<record id="product_normal_form_view" model="ir.ui.view">
<field name="name">generate.weight.price.barcode.product.product.form</field>
<field name="model">product.product</field>
<field name="inherit_id" ref="product.product_normal_form_view" />
<field name="arch" type="xml">
<header position="inside">
<button name="%(product_print_zpl_barcode.product_print_zpl_barcode_action)d" type="action" string="Print Barcode"/>
</header>
</field>
</record>
</odoo>
</code></pre>
<p>product_print_zpl_barcode.py</p>
<pre><code># -*- coding: utf-8 -*-
from openerp import models, fields, api, _
from openerp.exceptions import UserError
from openerp.tools import float_compare, float_is_zero
import openerp.addons.decimal_precision as dp
import base64
import re
class ProductPrintZplBarcode(models.TransientModel):
_name = 'product.print.zpl.barcode'
_description = 'Generate and print product barcodes in ZPL'
@api.model
def default_get(self, fields_list):
res = super(ProductPrintZplBarcode, self).default_get(fields_list)
assert self._context.get('active_model') == 'product.product',\
'wrong active_model, should be product.product'
product_id = self._context.get('active_id')
product = self.env['product.product'].browse(product_id)
if not product:
raise UserError(_('Missing Product'))
if not product.barcode:
raise UserError(_(
"Product '%s' doesn't have a barcode") % product.display_name)
nomenclature = self.env.ref('barcodes.default_barcode_nomenclature')
company = self.env.user.company_id
posconfig = self.env['pos.config'].sudo().search(
[('company_id', '=', company.id)], limit=1)
if posconfig:
pricelist = posconfig.pricelist_id
else:
pricelist = self.env['product.pricelist'].search([
'|', ('company_id', '=', False),
('company_id', '=', company.id),
], limit=1)
if not pricelist:
raise UserError(_(
"There are no pricelist in company %s ?") % company.name)
printer = self.env['printing.printer'].get_default()
res.update({
'nomenclature_id': nomenclature.id,
'pricelist_id': pricelist.id,
'currency_id': pricelist.currency_id.id,
'barcode': product.barcode,
'product_name': product.name,
'product_id': product_id,
'zpl_printer_id': printer and printer.id or False,
})
return res
product_id = fields.Many2one(
'product.product', string='Product', required=True, readonly=True)
uom_id = fields.Many2one(
related='product_id.uom_id', readonly=True)
# 1 line = a little less than 30
product_name = fields.Char('Product Label', required=True, size=56)
nomenclature_id = fields.Many2one(
'barcode.nomenclature', 'Barcode Nomenclature', required=True)
rule_id = fields.Many2one(
'barcode.rule', string='Barcode Rule', readonly=True,
compute='_compute_rule_id')
barcode_type = fields.Selection(
related='rule_id.type', readonly=True, string="Barcode Type")
label_size = fields.Selection([
('38x25', '38x25 mm'),
], required=True, default='38x25', string='Label Size')
pricelist_id = fields.Many2one(
'product.pricelist', string='Pricelist', required=True)
currency_id = fields.Many2one(
related='pricelist_id.currency_id', readonly=True)
# TODO: for the moment, we only support weight, but...
quantity = fields.Float(digits=dp.get_precision('Stock Weight'))
price_uom = fields.Monetary(
readonly=True, string="Price per Unit of Measure",
compute='_compute_price') # given by pricelist
price = fields.Monetary(compute='_compute_price', readonly=True)
currency_id = fields.Many2one('res.currency', string='Currency')
state = fields.Selection([
('step1', 'Step1'),
('step2', 'Step2'),
], default='step1', readonly=True)
zpl_file = fields.Binary(string='ZPL File', readonly=True)
zpl_filename = fields.Char('ZPL Filename')
barcode = fields.Char(readonly=True)
copies = fields.Integer(
string='Number of Labels', default=1, required=True)
zpl_printer_id = fields.Many2one(
'printing.printer', string='ZPL Printer')
@api.depends('pricelist_id', 'quantity', 'product_id')
def _compute_price(self):
# for regular barcodes
for wiz in self:
if wiz.pricelist_id and wiz.product_id:
price_uom = wiz.pricelist_id.get_product_pricelist(
wiz.product_id, 1, False)
wiz.price_uom = price_uom
wiz.price = price_uom * wiz.quantity
return wiz.price
@api.one
@api.depends('nomenclature_id')
def _compute_rule_id(self):
match_rule = False
if self.nomenclature_id and self.barcode:
for rule in self.nomenclature_id.rule_ids:
match = self.nomenclature_id.match_pattern(
self.barcode, rule.pattern)
if match.get('match'):
match_rule = rule.id
break
self.rule_id = match_rule
def _prepare_price_weight_barcode_type(self):
dpo = self.env['decimal.precision']
bno = self.env['barcode.nomenclature']
prec = dpo.precision_get('Stock Weight')
value = self.quantity
pbarcode = self.barcode
if float_is_zero(value, precision_digits=prec):
raise UserError(_(
"The quantity (%s) must be positive !") % value)
# check prefix
pattern = self.rule_id.pattern
if '{' not in pattern:
raise UserError(_(
"The barcode rule '%s' has a pattern '%s' which doesn't "
"contain a integer and decimal part between '{}'.")
% (self.rule_id.name, pattern))
prefix = pattern.split('{')[0]
assert len(prefix) >= 1
if len(prefix) > len(pbarcode):
raise UserError(_(
"The barcode of the product (%s) has %d characters, "
"which is smaller than the %d characters of the prefix "
"of the barcode pattern (%s).")
% (pbarcode, len(pbarcode), len(prefix), prefix))
barcode = pbarcode[0:len(prefix)]
# print "barcode=", barcode
# print "pattern=", pattern
m = re.search('\{N+D+\}', pattern)
# print "m=", m
assert m
pattern_val = m.group(0)
pattern_val = pattern_val[1:-1]
# print "pattern_val=", pattern_val
max_value = 10**pattern_val.count('N')
if float_compare(value, max_value, precision_digits=prec) != -1:
raise UserError(_(
"The value to encode in the barcode (%s) is superior "
"to the maximum value allowed by the barcode pattern (%s).")
% (value, max_value))
value_u = unicode(value)
value_u_split = value_u.split('.')
assert len(value_u_split) == 2
value_n = value_u_split[0]
value_d = value_u_split[1]
assert len(value_n) <= pattern_val.count('N')
barcode += value_n.zfill(pattern_val.count('N'))
# end fill doesn't exists... so:
# 1) make sure we have enough 0 after
value_d_ext = value_d + '0' * pattern_val.count('D')
# 2) cut at the right size
barcode += value_d_ext[0:pattern_val.count('D')]
# print "barcode=", barcode
# Add checksum
if self.rule_id.encoding == 'ean13':
barcode = bno.sanitize_ean(barcode)
# print "barcode FINAL=", barcode
zpl_unicode = self._price_weight_barcode_type_zpl() % {
'product_name': self.product_name,
'ean13_no_checksum': barcode[:12],
'price_uom': self.price_uom,
'price': self.price,
'currency_symbol': self.currency_id.symbol,
'copies': self.copies,
'quantity': value,
'uom_name': self.uom_id.name,
}
zpl_encoded = zpl_unicode.encode('utf-8')
vals = {
'zpl_file': zpl_encoded.encode('base64'),
'barcode': barcode,
}
return vals
@api.model
def _price_weight_barcode_type_zpl(self):
label = u"""
^XA
^CI28
^PW304
^LL200
^LH0,20
^CF0,30
^FO15,0^FB270,1,0,C^FD%(price).2f %(currency_symbol)s^FS
^CF0,20
^FO15,30^FB270,3,0,C^FD%(product_name)s^FS
^CF0,25
^FO15,75^FB270,1,0,C^FD%(quantity).3f %(uom_name)s %(price_uom).2f %
(currency_symbol)s/%(uom_name)s^FS
^FO60,110^BEN,50^FD%(ean13_no_checksum)s^FS
^PQ%(copies)s
^XZ
"""
return label
@api.model
def _product_barcode_type_zpl(self):
label = u"""
^XA
^CI28
^PW304
^LL200
^LH0,20
^CF0,30
^FO15,0^FB270,1,0,C^FD%(price_uom).2f %(currency_symbol)s^FS
^CF0,20
^FO15,30^FB270,3,0,C^FD%(product_name)s^FS
^FO60,100^BEN,60^FD%(ean13_no_checksum)s^FS
^PQ%(copies)s
^XZ
"""
return label
def _prepare_product_barcode_type(self):
zpl_unicode = self._product_barcode_type_zpl() % {
'product_name': self.product_name,
'ean13_no_checksum': self.barcode[:12],
'price_uom': self.price_uom,
'currency_symbol': self.currency_id.symbol, # symbol is a required field
'copies': self.copies,
}
zpl_encoded = zpl_unicode.encode('utf-8')
vals = {
'zpl_file': zpl_encoded.encode('base64'),
'barcode': self.barcode, # unchanged
}
return vals
def generate(self):
assert self.barcode
if len(self.barcode) != 13:
raise UserError(_(
"This wizard only supports EAN13 for the moment. Barcode '%s' "
"has %d digits instead of 13") % (
self.barcode,
len(self.barcode)))
if not self.copies:
raise UserError(_("The number of copies cannot be 0"))
if self.barcode_type in ('price', 'weight'):
vals = self._prepare_price_weight_barcode_type()
elif self.barcode_type == 'product':
vals = self._prepare_product_barcode_type()
else:
raise UserError(_(
"Barcode Type %s is not supported for the moment")
% self.barcode_type)
vals.update({
'state': 'step2',
'zpl_filename': 'barcode_%s.zpl' % vals['barcode'],
})
self.write(vals)
action = self.env['ir.actions.act_window'].for_xml_id(
'product_print_zpl_barcode',
'product_print_zpl_barcode_action')
action.update({
'res_id': self.id,
'context': self._context,
'views': False})
return action
def print_zpl(self):
if not self.zpl_printer_id:
raise UserError(_(
"You must select a ZPL Printer."))
self.zpl_printer_id.print_document(
self.zpl_filename, base64.decodestring(self.zpl_file), 'raw')
action = True
if self._context.get('print_and_new'):
action = self.env['ir.actions.act_window'].for_xml_id(
'product_print_zpl_barcode',
'product_print_zpl_barcode_action')
action.update({
'views': False,
'context': self._context,
})
return action
</code></pre>
<p>Traceback </p>
<pre><code> Traceback (most recent call last):
File "D:\Projet_Odoo\Odoo 9.0-20180426\server\.\openerp\http.py", line
650, in _handle_exception
File "D:\Projet_Odoo\Odoo 9.0-20180426\server\.\openerp\http.py", line
687, in dispatch
File "D:\Projet_Odoo\Odoo 9.0-20180426\server\.\openerp\http.py", line
323, in _call_function
File "D:\Projet_Odoo\Odoo 9.0-
20180426\server\.\openerp\service\model.py", line 118, in wrapper
File "D:\Projet_Odoo\Odoo 9.0-20180426\server\.\openerp\http.py", line
316, in checked_call
File "D:\Projet_Odoo\Odoo 9.0-20180426\server\.\openerp\http.py", line
966, in __call__
File "D:\Projet_Odoo\Odoo 9.0-20180426\server\.\openerp\http.py", line
516, in response_wrap
File "D:\Projet_Odoo\Odoo 9.0-
20180426\server\openerp\addons\web\controllers\main.py", line 896, in
call_kw
File "D:\Projet_Odoo\Odoo 9.0-
20180426\server\openerp\addons\web\controllers\main.py", line 888, in
_call_kw
File "D:\Projet_Odoo\Odoo 9.0-20180426\server\.\openerp\api.py", line
250, in wrapper
File "D:\Projet_Odoo\Odoo 9.0-20180426\server\.\openerp\api.py", line
381, in old_api
File "D:\Projet_Odoo\Odoo 9.0-20180426\server\.\openerp\models.py", line
6067, in onchange
File "D:\Projet_Odoo\Odoo 9.0-20180426\server\.\openerp\models.py", line
5770, in __getitem__
File "D:\Projet_Odoo\Odoo 9.0-20180426\server\.\openerp\fields.py", line
834, in __get__
File "D:\Projet_Odoo\Odoo 9.0-20180426\server\.\openerp\fields.py", line
949, in determine_draft_value
File "D:\Projet_Odoo\Odoo 9.0-20180426\server\.\openerp\fields.py", line
895, in compute_value
File "D:\Projet_Odoo\Odoo 9.0-20180426\server\.\openerp\fields.py", line
885, in _compute_value
File "D:\Projet_Odoo\Odoo 9.0-20180426\server\openerp\addons\product_print_zpl_barcode\models\product_print_zpl_barcode.py", line 98, in _compute_price
AttributeError: 'product.pricelist' object has no attribute
'get_product_pricelist'
</code></pre>
|
<p>You said it yourself, but it looks like you just missed it.</p>
<blockquote>
<p>Yes it exists</p>
<pre><code>def _get_product_pricelist(...):
...
</code></pre>
</blockquote>
<p>However, <code>_get_product_pricelist</code> is not the same as what you're calling, which is <code>get_product_pricelist</code>. </p>
<p>You are missing the underscore prior to the method name.</p>
<pre><code>price_uom = wiz.pricelist_id._get_product_pricelist(
... ^
</code></pre>
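<p>For readers unfamiliar with the convention, the traceback is just Python's normal reaction to a misspelled attribute. A minimal standalone illustration (hypothetical <code>Pricelist</code> class, not actual Odoo code):</p>
<pre><code>```python
class Pricelist:
    # Odoo prefixes "private" model methods with an underscore
    def _get_product_pricelist(self, product, qty):
        return 9.99  # dummy price for the illustration

pl = Pricelist()
price = pl._get_product_pricelist("demo", 1)  # correct name works

try:
    pl.get_product_pricelist("demo", 1)  # underscore missing
except AttributeError as err:
    message = str(err)
print(message)  # names the missing attribute, as in the traceback
```</code></pre>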
| 17
|
DPO model
|
linuxkit getty getting stuck
|
https://stackoverflow.com/questions/46545259/linuxkit-getty-getting-stuck
|
<p>I am currently trying out the LinuxKit external disk support.
However, getty gets stuck whenever I add a binds definition: it will not go past the login prompt. </p>
<pre><code>kernel:
image: linuxkit/kernel:4.9.52
cmdline: "console=tty0 console=ttyS0 console=ttyAMA0"
init:
- linuxkit/init:7804129bd06218b72c298139a25698a748d253c6
- linuxkit/runc:a1b564248a0d0b118c11e61db9f84ecf41dd2d2a
- linuxkit/containerd:417f83f7b8dc1fa36acf90effe44f99c7397480a
- linuxkit/ca-certificates:e44b0a66df5a102c0e220f0066b0d904710dcb10
onboot:
- name: sysctl
image: linuxkit/sysctl:154913b72c6f1f33eb408609fca9963628e8c051
- name: dhcpcd
image: linuxkit/dhcpcd:d4408777ed6b6e6e562a5d4938fd09804324b33e
command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
- name: format
image: linuxkit/format:158d992b7bf7ab984100c697d7e72161ea7d7382
- name: mount
image: linuxkit/mount:96ac4d32d340ac6e4ddfbf506fa3a497d23649da
command: ["/usr/bin/mountie", "/tmp"]
services:
- name: getty
image: linuxkit/getty:bf6872ce0a9f3ab519b3e502cc41ba3958bda2a6
capabilities:
- all
binds:
- /tmp:/tmp
- name: rngd
image: linuxkit/rngd:558e86a36242bb74353bc9287b715ddb8567357e
files:
- path: etc/getty.shadow
# sample sets password for root to "abcdefgh" (without quotes)
contents: 'root:$6$6tPd2uhHrecCEKug$8mKfcgfwguP7f.BLdZsT1Wz7WIIJOBY1oUFHzIv9/O71M2J0EPdtFqFGTxB1UK5ejqQxRFQ.ZSG9YXR0SNsc11:17322:0:::::'
</code></pre>
<p>And my linuxkit run command is as follows:</p>
<pre><code>linuxkit -v run qemu -disk /home/tweakmy/slowdisk/linuxkit/getty/blank.img,size=3G,format=qcow2 getty2.iso
</code></pre>
<p>This is the screen where I am stuck:</p>
<pre><code>> [ 0.000000] Linux version 4.9.52-linuxkit (root@b81a1f7ba2ff) (gcc version 6.3.0 (Alpine 6.3.0) ) #1 SMP Thu Sep 28 15:02:54 UTC
> 2017
> [ 0.000000] Command line: BOOT_IMAGE=/boot/kernel console=tty0 console=ttyS0 console=ttyAMA0 root=/dev/sr0
> [ 0.000000] x86/fpu: Legacy x87 FPU detected.
> [ 0.000000] x86/fpu: Using 'eager' FPU context switches.
> [ 0.000000] e820: BIOS-provided physical RAM map:
> [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
> [ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
> [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
> [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000003ffdefff] usable
> [ 0.000000] BIOS-e820: [mem 0x000000003ffdf000-0x000000003fffffff] reserved
> [ 0.000000] BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
> [ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
> [ 0.000000] NX (Execute Disable) protection: active
> [ 0.000000] SMBIOS 2.8 present.
> [ 0.000000] e820: last_pfn = 0x3ffdf max_arch_pfn = 0x400000000
> [ 0.000000] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WC UC- WT
> [ 0.000000] found SMP MP-table at [mem 0x000f6630-0x000f663f] mapped at [ffff92a1000f6630]
> [ 0.000000] ACPI: Early table checksum verification disabled
> [ 0.000000] ACPI: RSDP 0x00000000000F6460 000014 (v00 BOCHS )
> [ 0.000000] ACPI: RSDT 0x000000003FFE2267 000038 (v01 BOCHS BXPCRSDT 00000001 BXPC 00000001)
> [ 0.000000] ACPI: FACP 0x000000003FFE1E32 000074 (v01 BOCHS BXPCFACP 00000001 BXPC 00000001)
> [ 0.000000] ACPI: DSDT 0x000000003FFE0040 001DF2 (v01 BOCHS BXPCDSDT 00000001 BXPC 00000001)
> [ 0.000000] ACPI: FACS 0x000000003FFE0000 000040
> [ 0.000000] ACPI: SSDT 0x000000003FFE1EA6 0002D5 (v01 BOCHS BXPCSSDT 00000001 BXPC 00000001)
> [ 0.000000] ACPI: APIC 0x000000003FFE217B 000078 (v01 BOCHS BXPCAPIC 00000001 BXPC 00000001)
> [ 0.000000] ACPI: HPET 0x000000003FFE21F3 000038 (v01 BOCHS BXPCHPET 00000001 BXPC 00000001)
> [ 0.000000] ACPI: MCFG 0x000000003FFE222B 00003C (v01 BOCHS BXPCMCFG 00000001 BXPC 00000001)
> [ 0.000000] Zone ranges:
> [ 0.000000] DMA [mem 0x0000000000001000-0x0000000000ffffff]
> [ 0.000000] DMA32 [mem 0x0000000001000000-0x000000003ffdefff]
> [ 0.000000] Normal empty
> [ 0.000000] Movable zone start for each node
> [ 0.000000] Early memory node ranges
> [ 0.000000] node 0: [mem 0x0000000000001000-0x000000000009efff]
> [ 0.000000] node 0: [mem 0x0000000000100000-0x000000003ffdefff]
> [ 0.000000] Initmem setup node 0 [mem 0x0000000000001000-0x000000003ffdefff]
> [ 0.000000] ACPI: PM-Timer IO Port: 0x608
> [ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
> [ 0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
> [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
> [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
> [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
> [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
> [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
> [ 0.000000] Using ACPI (MADT) for SMP configuration information
> [ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
> [ 0.000000] smpboot: Allowing 1 CPUs, 0 hotplug CPUs
> [ 0.000000] e820: [mem 0x40000000-0xafffffff] available for PCI devices
> [ 0.000000] Booting paravirtualized kernel on bare hardware
> [ 0.000000] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
> [ 0.000000] setup_percpu: NR_CPUS:128 nr_cpumask_bits:128 nr_cpu_ids:1 nr_node_ids:1
> [ 0.000000] percpu: Embedded 35 pages/cpu @ffff92a13fc00000 s105240 r8192 d29928 u2097152
> [ 0.000000] Built 1 zonelists in Zone order, mobility grouping on. Total pages: 257896
> [ 0.000000] Kernel command line: BOOT_IMAGE=/boot/kernel console=tty0 console=ttyS0 console=ttyAMA0 root=/dev/sr0
> [ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
> [ 0.000000] Dentry cache hash table entries: 131072 (order: 8, 1048576 bytes)
> [ 0.000000] Inode-cache hash table entries: 65536 (order: 7, 524288 bytes)
> [ 0.000000] Memory: 1014172K/1048052K available (8035K kernel code, 1367K rwdata, 2696K rodata, 1380K init, 568K bss, 33880K
> reserved, 0K cma-reserved)
> [ 0.000000] Hierarchical RCU implementation.
> [ 0.000000] Build-time adjustment of leaf fanout to 64.
> [ 0.000000] RCU restricting CPUs from NR_CPUS=128 to nr_cpu_ids=1.
> [ 0.000000] RCU: Adjusting geometry for rcu_fanout_leaf=64, nr_cpu_ids=1
> [ 0.000000] NR_IRQS:8448 nr_irqs:256 16
> [ 0.000000] Console: colour VGA+ 80x25
> [ 0.000000] console [tty0] enabled
> [ 0.000000] console [ttyS0] enabled
> [ 0.000000] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
> [ 0.000000] tsc: Fast TSC calibration using PIT
> [ 0.000000] tsc: Detected 3394.161 MHz processor
> [ 0.020805] Calibrating delay loop (skipped), value calculated using timer frequency.. 6788.32 BogoMIPS (lpj=33941610)
> [ 0.021301] pid_max: default: 32768 minimum: 301
> [ 0.021766] ACPI: Core revision 20160831
> [ 0.051169] ACPI: 2 ACPI AML tables successfully acquired and loaded
> [ 0.052932] Security Framework initialized
> [ 0.053092] Yama: becoming mindful.
> [ 0.054186] Mount-cache hash table entries: 2048 (order: 2, 16384 bytes)
> [ 0.054407] Mountpoint-cache hash table entries: 2048 (order: 2, 16384 bytes)
> [ 0.065666] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
> [ 0.065816] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
> [ 0.213551] Freeing SMP alternatives memory: 20K
> [ 0.218208] ftrace: allocating 35516 entries in 139 pages
> [ 0.316894] smpboot: Max logical packages: 1
> [ 0.323220] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
> [ 0.430000] smpboot: CPU0: AMD QEMU Virtual CPU version 2.5+ (family: 0x6, model: 0x6, stepping: 0x3)
> [ 0.430000] Performance Events: PMU not available due to virtualization, using software events only.
> [ 0.430000] x86: Booted up 1 node, 1 CPUs
> [ 0.430000] smpboot: Total of 1 processors activated (6788.32 BogoMIPS)
> [ 0.431509] NMI watchdog: disabled (cpu0): hardware events not enabled
> [ 0.431704] NMI watchdog: Shutting down hard lockup detector on all cpus
> [ 0.440776] devtmpfs: initialized
> [ 0.443633] x86/mm: Memory block size: 128MB
> [ 0.478379] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
> [ 0.478698] futex hash table entries: 256 (order: 2, 16384 bytes)
> [ 0.483445] NET: Registered protocol family 16
> [ 0.489199] cpuidle: using governor ladder
> [ 0.489359] cpuidle: using governor menu
> [ 0.490130] ACPI: bus type PCI registered
> [ 0.491828] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
> [ 0.492159] PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
> [ 0.492812] PCI: Using configuration type 1 for base access
> [ 0.531450] HugeTLB registered 2 MB page size, pre-allocated 0 pages
> [ 0.538790] ACPI: Added _OSI(Module Device)
> [ 0.538902] ACPI: Added _OSI(Processor Device)
> [ 0.539012] ACPI: Added _OSI(3.0 _SCP Extensions)
> [ 0.539147] ACPI: Added _OSI(Processor Aggregator Device)
> [ 0.585379] ACPI: Interpreter enabled
> [ 0.585758] ACPI: (supports S0 S5)
> [ 0.586102] ACPI: Using IOAPIC for interrupt routing
> [ 0.587008] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
> [ 0.619328] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
> [ 0.619839] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
> [ 0.623139] acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
> [ 0.625196] PCI host bridge to bus 0000:00
> [ 0.625377] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
> [ 0.625564] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
> [ 0.625743] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
> [ 0.625948] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
> [ 0.626226] pci_bus 0000:00: root bus resource [bus 00-ff]
> [ 0.651380] pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
> [ 0.671821] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
> [ 0.673270] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
> [ 0.674330] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
> [ 0.675368] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
> [ 0.676522] ACPI: PCI Interrupt Link [LNKE] (IRQs 5 *10 11)
> [ 0.677613] ACPI: PCI Interrupt Link [LNKF] (IRQs 5 *10 11)
> [ 0.678626] ACPI: PCI Interrupt Link [LNKG] (IRQs 5 10 *11)
> [ 0.679460] ACPI: PCI Interrupt Link [LNKH] (IRQs 5 10 *11)
> [ 0.679860] ACPI: PCI Interrupt Link [GSIA] (IRQs *16)
> [ 0.680257] ACPI: PCI Interrupt Link [GSIB] (IRQs *17)
> [ 0.680507] ACPI: PCI Interrupt Link [GSIC] (IRQs *18)
> [ 0.680763] ACPI: PCI Interrupt Link [GSID] (IRQs *19)
> [ 0.681018] ACPI: PCI Interrupt Link [GSIE] (IRQs *20)
> [ 0.681270] ACPI: PCI Interrupt Link [GSIF] (IRQs *21)
> [ 0.681531] ACPI: PCI Interrupt Link [GSIG] (IRQs *22)
> [ 0.681773] ACPI: PCI Interrupt Link [GSIH] (IRQs *23)
> [ 0.684151] ACPI: Enabled 16 GPEs in block 00 to 3F
> [ 0.687071] SCSI subsystem initialized
> [ 0.688025] ACPI: bus type USB registered
> [ 0.688723] usbcore: registered new interface driver usbfs
> [ 0.689160] usbcore: registered new interface driver hub
> [ 0.690058] usbcore: registered new device driver usb
> [ 0.690710] pps_core: LinuxPPS API ver. 1 registered
> [ 0.690836] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
> [ 0.691124] PTP clock support registered
> [ 0.691748] wmi: Mapper loaded
> [ 0.692310] PCI: Using ACPI for IRQ routing
> [ 0.702615] NetLabel: Initializing
> [ 0.702735] NetLabel: domain hash size = 128
> [ 0.702860] NetLabel: protocols = UNLABELED CIPSOv4
> [ 0.703897] NetLabel: unlabeled traffic allowed by default
> [ 0.704460] HPET: 3 timers in total, 0 timers will be used for per-cpu timer
> [ 0.705076] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
> [ 0.705363] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
> [ 0.710792] clocksource: Switched to clocksource hpet
> [ 0.853794] VFS: Disk quotas dquot_6.6.0
> [ 0.854214] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
> [ 0.855696] FS-Cache: Loaded
> [ 0.858617] CacheFiles: Loaded
> [ 0.859503] pnp: PnP ACPI init
> [ 0.870398] pnp: PnP ACPI: found 5 devices
> [ 0.902634] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
> [ 0.904149] NET: Registered protocol family 2
> [ 0.908186] TCP established hash table entries: 8192 (order: 4, 65536 bytes)
> [ 0.908544] TCP bind hash table entries: 8192 (order: 5, 131072 bytes)
> [ 0.908888] TCP: Hash tables configured (established 8192 bind 8192)
> [ 0.910261] UDP hash table entries: 512 (order: 2, 16384 bytes)
> [ 0.910503] UDP-Lite hash table entries: 512 (order: 2, 16384 bytes)
> [ 0.911805] NET: Registered protocol family 1
> [ 0.912306] pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
> [ 0.929224] PCLMULQDQ-NI instructions are not detected.
> [ 0.929727] AVX or AES-NI instructions are not detected.
> [ 0.929918] CPU feature 'AVX registers' is not supported.
> [ 0.930322] CPU feature 'AVX registers' is not supported.
> [ 0.930516] CPU feature 'AVX registers' is not supported.
> [ 0.930679] CPU feature 'AVX registers' is not supported.
> [ 0.930853] AVX2 or AES-NI instructions are not detected.
> [ 0.931025] AVX2 instructions are not detected.
> [ 0.934248] audit: initializing netlink subsys (disabled)
> [ 0.935302] audit: type=2000 audit(1507033386.930:1): initialized
> [ 0.939467] workingset: timestamp_bits=46 max_order=18 bucket_order=0
> [ 0.943051] FS-Cache: Netfs 'cifs' registered for caching
> [ 0.943453] fuse init (API version 7.26)
> [ 0.944703] SGI XFS with ACLs, security attributes, no debug enabled
> [ 0.946675] 9p: Installing v9fs 9p2000 file system support
> [ 0.946881] FS-Cache: Netfs '9p' registered for caching
> [ 0.963743] NET: Registered protocol family 38
> [ 0.964009] Key type asymmetric registered
> [ 0.964162] Asymmetric key parser 'x509' registered
> [ 0.964553] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
> [ 0.965177] io scheduler noop registered
> [ 0.965335] io scheduler deadline registered (default)
> [ 0.965680] io scheduler cfq registered
> [ 0.967207] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
> [ 0.967409] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
> [ 0.967786] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
> [ 0.968171] hv_vmbus: registering driver hyperv_fb
> [ 0.970941] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
> [ 0.971483] ACPI: Power Button [PWRF]
> [ 0.974145] GHES: HEST is not enabled!
> [ 0.979701] ACPI: PCI Interrupt Link [GSIG] enabled at IRQ 22
> [ 0.981277] virtio-pci 0000:00:02.0: virtio_pci: leaving for legacy driver
> [ 0.986222] ACPI: PCI Interrupt Link [GSIH] enabled at IRQ 23
> [ 0.986554] virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
> [ 0.987788] xenfs: not registering filesystem on non-xen platform
> [ 0.989043] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
> [ 1.011940] 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
> [ 1.018118] Non-volatile memory driver v1.3
> [ 1.021651] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
> [ 1.051511] loop: module loaded
> [ 1.058762] nbd: registered device at major 43
> [ 1.085048] lpc_ich 0000:00:1f.0: RCBA is disabled by hardware/BIOS, device disabled
> [ 1.085414] lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
> [ 1.085629] lpc_ich 0000:00:1f.0: No MFD cells added
> [ 1.086574] VMware PVSCSI driver - version 1.0.7.0-k
> [ 1.086921] hv_vmbus: registering driver hv_storvsc
> [ 1.092274] ACPI: PCI Interrupt Link [GSIA] enabled at IRQ 16
> [ 1.095960] ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
> [ 1.096204] ahci 0000:00:1f.2: flags: ncq only
> [ 1.111650] scsi host0: ahci
> [ 1.113866] scsi host1: ahci
> [ 1.115211] scsi host2: ahci
> [ 1.116497] scsi host3: ahci
> [ 1.117772] scsi host4: ahci
> [ 1.119029] scsi host5: ahci
> [ 1.120130] ata1: SATA max UDMA/133 abar m4096@0xfebd2000 port 0xfebd2100 irq 24
> [ 1.120448] ata2: SATA max UDMA/133 abar m4096@0xfebd2000 port 0xfebd2180 irq 24
> [ 1.120645] ata3: SATA max UDMA/133 abar m4096@0xfebd2000 port 0xfebd2200 irq 24
> [ 1.120847] ata4: SATA max UDMA/133 abar m4096@0xfebd2000 port 0xfebd2280 irq 24
> [ 1.121067] ata5: SATA max UDMA/133 abar m4096@0xfebd2000 port 0xfebd2300 irq 24
> [ 1.121290] ata6: SATA max UDMA/133 abar m4096@0xfebd2000 port 0xfebd2380 irq 24
> [ 1.127988] tun: Universal TUN/TAP device driver, 1.6
> [ 1.128131] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
> [ 1.135619] VMware vmxnet3 virtual NIC driver - version 1.4.a.0-k-NAPI
> [ 1.136192] hv_vmbus: registering driver hv_netvsc
> [ 1.136435] Fusion MPT base driver 3.04.20
> [ 1.136628] Copyright (c) 1999-2008 LSI Corporation
> [ 1.136939] Fusion MPT SPI Host driver 3.04.20
> [ 1.138154] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
> [ 1.141004] serio: i8042 KBD port at 0x60,0x64 irq 1
> [ 1.141291] serio: i8042 AUX port at 0x60,0x64 irq 12
> [ 1.143263] hv_vmbus: registering driver hyperv_keyboard
> [ 1.144518] mousedev: PS/2 mouse device common for all mice
> [ 1.146480] random: fast init done
> [ 1.147649] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
> [ 1.149387] input: PC Speaker as /devices/platform/pcspkr/input/input2
> [ 1.152614] rtc_cmos 00:00: RTC can wake from S4
> [ 1.156152] rtc_cmos 00:00: rtc core: registered rtc_cmos as rtc0
> [ 1.157273] rtc_cmos 00:00: alarms up to one day, 114 bytes nvram, hpet irqs
> [ 1.157913] i2c /dev entries driver
> [ 1.159306] device-mapper: ioctl: 4.35.0-ioctl (2016-06-23) initialised: dm-devel@redhat.com
> [ 1.163046] usbcore: registered new interface driver usbhid
> [ 1.163345] usbhid: USB HID core driver
> [ 1.163761] hv_utils: Registering HyperV Utility Driver
> [ 1.163992] hv_vmbus: registering driver hv_util
> [ 1.164204] hv_vmbus: registering driver hv_balloon
> [ 1.164536] oprofile: using NMI interrupt.
> [ 1.165219] GACT probability on
> [ 1.165513] Mirror/redirect action on
> [ 1.166086] Simple TC action Loaded
> [ 1.166446] u32 classifier
> [ 1.166583] Performance counters on
> [ 1.166761] input device check on
> [ 1.166934] Actions configured
> [ 1.167295] Netfilter messages via NETLINK v0.30.
> [ 1.167631] nfnl_acct: registering with nfnetlink.
> [ 1.168502] nf_conntrack version 0.5.0 (8192 buckets, 32768 max)
> [ 1.170713] ctnetlink v0.93: registering with nfnetlink.
> [ 1.173036] nf_tables: (c) 2007-2009 Patrick McHardy <kaber@trash.net>
> [ 1.173555] nf_tables_compat: (c) 2012 Pablo Neira Ayuso <pablo@netfilter.org>
> [ 1.174999] xt_time: kernel timezone is -0000
> [ 1.175250] ip_set: protocol 6
> [ 1.175601] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP)
> [ 1.175912] IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
> [ 1.176885] IPVS: Creating netns size=2104 id=0
> [ 1.178270] IPVS: ipvs loaded.
> [ 1.178429] IPVS: [rr] scheduler registered.
> [ 1.178613] IPVS: [wrr] scheduler registered.
> [ 1.178781] IPVS: [lc] scheduler registered.
> [ 1.178943] IPVS: [wlc] scheduler registered.
> [ 1.179104] IPVS: [fo] scheduler registered.
> [ 1.179258] IPVS: [ovf] scheduler registered.
> [ 1.179479] IPVS: [lblc] scheduler registered.
> [ 1.179701] IPVS: [lblcr] scheduler registered.
> [ 1.179882] IPVS: [dh] scheduler registered.
> [ 1.180044] IPVS: [sh] scheduler registered.
> [ 1.180598] IPVS: [sed] scheduler registered.
> [ 1.180857] IPVS: [nq] scheduler registered.
> [ 1.181507] IPVS: ftp: loaded support on port[0] = 21
> [ 1.182279] ipip: IPv4 and MPLS over IPv4 tunneling driver
> [ 1.186673] gre: GRE over IPv4 demultiplexor driver
> [ 1.188779] ip_tables: (C) 2000-2006 Netfilter Core Team
> [ 1.191375] ipt_CLUSTERIP: ClusterIP Version 0.8 loaded successfully
> [ 1.191803] arp_tables: arp_tables: (C) 2002 David S. Miller
> [ 1.193550] NET: Registered protocol family 10
> [ 1.199911] ip6_tables: (C) 2000-2006 Netfilter Core Team
> [ 1.204865] NET: Registered protocol family 17
> [ 1.205437] Bridge firewalling registered
> [ 1.205702] Ebtables v2.0 registered
> [ 1.206749] 8021q: 802.1Q VLAN Support v1.8
> [ 1.207607] 9pnet: Installing 9P2000 support
> [ 1.208453] Key type dns_resolver registered
> [ 1.209055] microcode: AMD CPU family 0x6 not supported
> [ 1.211556] registered taskstats version 1
> [ 1.221238] Key type big_key registered
> [ 1.222689] Key type encrypted registered
> [ 1.224875] rtc_cmos 00:00: setting system clock to 2017-10-03 12:23:07 UTC (1507033387)
> [ 1.452864] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
> [ 1.456756] ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
> [ 1.457129] ata3.00: applying bridge limits
> [ 1.458326] ata2: SATA link down (SStatus 0 SControl 300)
> [ 1.459404] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
> [ 1.460621] ata1.00: ATA-7: QEMU HARDDISK, 2.5+, max UDMA/100
> [ 1.460936] ata1.00: 6291456 sectors, multi 16: LBA48 NCQ (depth 31/32)
> [ 1.461326] ata1.00: applying bridge limits
> [ 1.462129] ata1.00: configured for UDMA/100
> [ 1.463843] ata6: SATA link down (SStatus 0 SControl 300)
> [ 1.464096] ata5: SATA link down (SStatus 0 SControl 300)
> [ 1.464642] ata4: SATA link down (SStatus 0 SControl 300)
> [ 1.465121] ata3.00: configured for UDMA/100
> [ 1.476104] scsi 0:0:0:0: Direct-Access ATA QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
> [ 1.481672] sd 0:0:0:0: [sda] 6291456 512-byte logical blocks: (3.22 GB/3.00 GiB)
> [ 1.483328] sd 0:0:0:0: [sda] Write Protect is off
> [ 1.483818] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
> [ 1.486493] sd 0:0:0:0: Attached scsi generic sg0 type 0
> [ 1.490757] scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
> [ 1.492993] sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
> [ 1.493262] cdrom: Uniform CD-ROM driver Revision: 3.20
> [ 1.501226] sr 2:0:0:0: Attached scsi generic sg1 type 5
> [ 1.502777] sda: sda1
> [ 1.509588] sd 0:0:0:0: [sda] Attached SCSI disk
> [ 1.545030] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
> [ 1.547194] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
> [ 1.551881] VFS: Mounted root (iso9660 filesystem) readonly on device 11:0.
> [ 1.632754] Freeing unused kernel memory: 1380K
> [ 1.632905] Write protecting the kernel read-only data: 12288k
> [ 1.636369] Freeing unused kernel memory: 140K
> [ 1.703301] Freeing unused kernel memory: 1400K
> [ 1.920501] tsc: Refined TSC clocksource calibration: 3394.155 MHz
> [ 1.920948] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x30ecbcc6c3d, max_idle_ns: 440795207542 ns
> [ 2.940581] clocksource: Switched to clocksource tsc
>
> Welcome to LinuxKit
>
> ## .
> ## ## ## ==
> ## ## ## ## ## ===
> /"""""""""""""""""\___/ ===
> { / ===-
> \______ O __/
> \ \ __/
> \____\_______/
>
> [ 6.609377] 8021q: adding VLAN 0 to HW filter on device eth0
> [ 7.123456] IPVS: Creating netns size=2104 id=1
> [ 7.123918] IPVS: ftp: loaded support on port[0] = 21
> [ 8.354013] IPVS: Creating netns size=2104 id=2
> [ 8.354294] IPVS: ftp: loaded support on port[0] = 21
> [ 9.536923] random: crng init done
> [ 9.604634] EXT4-fs (sda1): couldn't mount as ext3 due to feature incompatibilities
> [ 9.609077] EXT4-fs (sda1): couldn't mount as ext2 due to feature incompatibilities
> [ 9.661331] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
> [ 14.967631] IPVS: Creating netns size=2104 id=3
> [ 14.968941] IPVS: ftp: loaded support on port[0] = 21
</code></pre>
> <p>I can see the mounts in the log, but I am not sure whether the mount actually happened on the LinuxKit guest. Did it?</p>
|
<p>The documentation is a little unclear on this: <code>binds</code> doesn't <strong>add</strong> mount points, it <strong>replaces</strong> them. A documentation update is pending; in the meantime, to solve your particular problem, list all the existing binds in the <code>.yml</code> as well (the example below shows how to add custom aliases to <code>getty</code>):</p>
<pre><code>services:
- name: getty
image: linuxkit/getty:bf6872ce0a9f3ab519b3e502cc41ba3958bda2a6
env:
- INSECURE=true
binds:
- /etc/resolv.conf:/etc/resolv.conf
- /run:/run
- /tmp:/tmp
- /etc:/hostroot/etc
- /usr/bin/ctr:/usr/bin/ctr
- /usr/bin/runc:/usr/bin/runc
- /containers:/containers
- /var/log:/var/log
- /dev:/dev
- /sys:/sys
- /etc/profile.d/aliases.sh:/etc/profile.d/aliases.sh
files:
- path: etc/profile.d/aliases.sh
contents: |
alias c='clear'
</code></pre>
| 18
|
MCQA
|
Should i use threads when executing action method through AJAX?
|
https://stackoverflow.com/questions/7131500/should-i-use-threads-when-executing-action-method-through-ajax
|
<p>I am building a questionnaire. When a user clicks on an answer possibility for a multiple-choice question (a radio button), I call an action method to save this answer.</p>
<p>The code:</p>
<pre><code><script language="javascript">
$(document).ready(function () {
$('.MCQRadio').click(function () {
var question_id = $(this).attr('question-id');
var mcq_id = $(this).attr('mcq-id');
$.ajax({
url: '/SaveSurveyAnswers/SaveMCQAnswer',
data: { "mcq_id": mcq_id, "question_id": question_id },
success: function (data) {
}
});
});
});
</code></pre>
<p>The code to save the answer:</p>
<pre><code>public EmptyResult SaveMCQAnswer(int mcq_id, int question_id)
{
    // SingleOrDefault returns null when no row matches, so the empty
    // catch block that swallowed InvalidOperationException is not needed.
    MCQ_Answers mcqa = db.MCQ_Answers.SingleOrDefault(x => x.question_ID == question_id);
    if (mcqa != null)
    {
        mcqa.mcq_id = mcq_id;
    }
    else
    {
        MCQ_Answers mcq_answer = new MCQ_Answers()
        {
            question_ID = question_id,
            mcq_id = mcq_id,        // store the chosen answer on insert as well
            respondent_id = 1
        };
        db.MCQ_Answers.AddObject(mcq_answer);
    }
    db.SaveChanges();
    return new EmptyResult();
}
</code></pre>
<p>If a question has 5 answer possibilities and I click on them randomly and fast, then go back to the previous page and return, the correct answer won't be saved. Should I use threading to make sure the correct answer is saved? And how?</p>
<p>Thanks </p>
|
<p>Rather than posting your answer to the server every time, you can keep the answers in a JSON object and post all of the completed answers in one go at the end.</p>
<p>Take a look at this: <a href="http://msdn.microsoft.com/en-us/scriptjunkie/ff962533" rel="nofollow">http://msdn.microsoft.com/en-us/scriptjunkie/ff962533</a></p>
<p>Basically this allows you to store session data as JSON in the browser; you then just need an add and a delete function and away you go.</p>
<p>I use this to a huge extent in an application that would otherwise require the server to be updated with the location of objects on a canvas; with sessvars I just keep all the X and Y locations there and do a final push of the JSON when I am done.</p>
<p>If you change pages, you can then get your values from the JSON object without a server call.</p>
<p>As a note, you may also be better off with tabs or hidden sections of the form, thereby reducing the need to re-populate page 1, page 2, etc., as they will already be there, just hidden.</p>
| 19
|
Fourier transform
|
Understanding where the constant $2/N$ comes from in Fourier transformation
|
https://dsp.stackexchange.com/questions/48049/understanding-where-the-constant-2-n-comes-from-in-fourier-transformation
|
<p>I'm implementing Fourier transformation in my analysis and I wanted dig a bit deeper on the reasons why the absolute value of Fourier transformation is usually multiplied by the constant <span class="math-container">$2/N$</span> to get the peak amplitude value of a sinewave with certain frequency.</p>
<p>In the book <a href="https://rads.stackoverflow.com/amzn/click/com/0137027419" rel="nofollow noreferrer">Understanding Digital Signal Processing</a> by Lyons, the author states that the relationship between the peak amplitude <span class="math-container">$A$</span> of a sinewave and the output magnitude <span class="math-container">$M_r$</span> of the discrete Fourier transform (DFT) for that particular sinewave is:</p>
<p><span class="math-container">$$M_r=AN/2,\;\;\;\;\;\; \tag{1}$$</span></p>
<p>where the <span class="math-container">$r$</span> stands for real input values to DFT and <span class="math-container">$N$</span> is the number of input values to DFT. From this relationship, I trivially get the amplitude I want to as <span class="math-container">$A=2M_r/N$</span>, which I see is many times done in many <code>fft</code>-examples in Matlab found throughout the web.</p>
<p>Now my big question was, why is the relationship in <span class="math-container">$(1)$</span> true? I started to read more from the book <a href="https://rads.stackoverflow.com/amzn/click/com/0821847902" rel="nofollow noreferrer">Fourier Analysis and Its Applications</a> by Folland and I found the following (in the section about the DFT):</p>
<p><span class="math-container">$$\widehat{f}\left(\frac{2\pi m}{\Omega}\right)\approx \frac{\Omega}{N}\widehat{a}_m,\;\;\;m=0,1,...,N-1\;\;\;\;\;\;\tag{2}$$</span></p>
<p>where <span class="math-container">$\widehat{f}$</span> is the amplitude function, <span class="math-container">$\widehat{a}_m$</span> is the <span class="math-container">$m^{th}$</span> output of DFT, <span class="math-container">$N$</span> is again the number of inputs and <span class="math-container">$\Omega$</span> is the length of the time interval <span class="math-container">$[0, \Omega]:$</span></p>
<p><span class="math-container">$$\widehat{f}\left(\frac{2\pi m}{\Omega}\right)=\int_0^\Omega e^{-2\pi i m t/\Omega}\;f(t)\;dt,$$</span></p>
<p>where <span class="math-container">$f$</span> is the wave function. Now when I look at <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span>, there seems to be a connection between them:</p>
<p><span class="math-container">$${\color{red}{\widehat{a}_m}} \approx{\color{blue}{\frac{N}{\Omega}}}{\color{green}{\widehat{f}\left(\frac{2\pi m}{\Omega}\right)}},\;\;\;\;\;\;{\color{red}{M_r}}={\color{blue}{\frac{N}{2}}}{\color{green}{A}}.$$</span></p>
<p>These two results are almost satisfying but I wondered why it seems to be the case that <span class="math-container">$\Omega=2$</span>?</p>
<p><strong>My questions: Where does this <span class="math-container">$2$</span> come from? Why in Lyons's book there is <span class="math-container">$2$</span> instead of <span class="math-container">$\Omega$</span>?</strong></p>
<p>I thought could it be somehow related to the symmetry of the DFT output? One time unit to left and right: <span class="math-container">$[-1,1]$</span> so the length of the interval would be <span class="math-container">$\Omega=2$</span>? A bit vague this last part but could I be onto something here?</p>
<p><strong>UPDATE</strong>:</p>
<p>The definition for DFT in book Understanding Digital Signal Processing is given as:</p>
<p><span class="math-container">$$X(m)=\sum_{n=0}^{N-1} x(n) e^{-2\pi i nm/N},$$</span></p>
<p>where <span class="math-container">$x(n)$</span> is some continuous time-domain signal. In the book Fourier Analysis and Its Applications the corresponding definition is:</p>
<p><span class="math-container">$$\widehat{a}_m = \sum_{n=0}^{N-1}a_n e^{-2\pi i mn/N}\;\;\;(0\leq m<N),$$</span></p>
<p>where <span class="math-container">$a_n = f\left(\frac{n\Omega}{N}\right)$</span>.</p>
|
<p>The two comes from the fact that a real-valued sinusoid is really the average of two complex ones:</p>
<p>$$ \cos( \theta ) = \frac{ e^{i \theta} + e^{-i \theta} }{2} $$</p>
<p>There is the "2" in the denominator. In the DFT, the other half is located at bin $N-k$, if $k$ is the bin number.</p>
<p>That definition of cosine comes straight from Euler's equation:</p>
<p>$$ e^{i \theta} = \cos( \theta ) + i \sin( \theta ) $$</p>
<p>The $N$ comes straight from the definition of the DFT. The relationship between the amplitude, $N$, and the magnitude of the DFT bin is why I like to use the $1/N$ normalized form of the DFT over the more conventional unnormalized form.</p>
<p>I recommend that you read my first blog article <a href="https://www.dsprelated.com/showarticle/754.php" rel="nofollow noreferrer">The Exponential Nature of the Complex Unit Circle</a> for an understanding of Euler's equation and my blog article <a href="https://www.dsprelated.com/showarticle/771.php" rel="nofollow noreferrer">DFT Bin Value Formulas for Pure Real Tones</a> for understanding the bin values of a pure tone and what "leakage" really means. Equation (19) is the answer to your "big question".</p>
<p>The $\Omega$ comes from the continuous FT. The DFT is the Discrete Fourier Transform. Too often, they are conflated leaving confusion like yours. It takes a lot of heavy math to understand the relationship between the two. The premise of my blog articles is that the DFT can be learned and understood straight from the summation definition without any reference to the continuous case.</p>
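<p>As a quick numerical check of the "big question" (my own sketch, assuming NumPy; not part of the original answer), a bin-centred real tone of peak amplitude $A$ produces an unnormalized DFT magnitude of exactly $AN/2$ at its bin, with the other half of the energy at bin $N-k$:</p>

```python
import numpy as np

# Sketch: verify M_r = A*N/2 for a real tone centred on an integer bin.
N = 64          # number of samples
k = 5           # integer bin number -> no spectral leakage
A = 3.0         # peak amplitude
n = np.arange(N)
x = A * np.cos(2 * np.pi * k * n / N)

X = np.fft.fft(x)               # unnormalized DFT
M_r = np.abs(X[k])              # magnitude at bin k

assert np.isclose(M_r, A * N / 2)          # M_r = 96 = 3 * 64 / 2
assert np.isclose(np.abs(X[N - k]), M_r)   # the "other half" at bin N-k
```

<p>Scaling back with $2M_r/N$ recovers the peak amplitude $A$, which is exactly the $2/N$ factor from the question.</p>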
<p>Hope this helps.</p>
<p>Ced</p>
| 0
|
Fourier transform
|
How to determine the sine Fourier coefficients of discrete data?
|
https://dsp.stackexchange.com/questions/70037/how-to-determine-the-sine-fourier-coefficients-of-discrete-data
|
<p>The following relation gives me the measurements of interest <span class="math-container">$w$</span> at equally distanced locations <span class="math-container">$x_j$</span> in space:</p>
<p><span class="math-container">$$w_j=\sum_{m=1}^{11}A_m\sin\left(\frac{mπx_j}{L}\right)$$</span></p>
<p>where <span class="math-container">$A_m$</span> are the Fourier coefficients of sine series and <span class="math-container">$L$</span> the total length is physical space. I also assume <span class="math-container">$m = 1,2,...,11$</span>.</p>
<p>Now, how can I obtain the coefficients <span class="math-container">$A_m$</span> given the data <span class="math-container">$w_j$</span> ?</p>
|
<p>With the correction, it can now be answered.</p>
<p>I will assume that you also have 11 readings. With fewer you are underdetermined; with more you are overdetermined.</p>
<p>Express your problem in matrix form.</p>
<p><span class="math-container">$$
\begin{bmatrix}
w_1 \\
w_2 \\
w_3 \\
: \\
w_{11} \\
\end{bmatrix}
=
\begin{bmatrix}
\sin\left(\frac{ \pi x_1}{L}\right) & \sin\left(\frac{2 \pi x_1}{L}\right) & \sin\left(\frac{3 \pi x_1}{L}\right) & \dots & \sin\left(\frac{11 \pi x_1}{L}\right) \\
\sin\left(\frac{ \pi x_2}{L}\right) & \sin\left(\frac{2 \pi x_2}{L}\right) & \sin\left(\frac{3 \pi x_2}{L}\right) & \dots & \sin\left(\frac{11 \pi x_2}{L}\right) \\
\sin\left(\frac{ \pi x_3}{L}\right) & \sin\left(\frac{2 \pi x_3}{L}\right) & \sin\left(\frac{3 \pi x_3}{L}\right) & \dots & \sin\left(\frac{11 \pi x_3}{L}\right) \\
: & : & : & ::: & : \\
\sin\left(\frac{ \pi x_{11}}{L}\right) & \sin\left(\frac{2 \pi x_{11}}{L}\right) & \sin\left(\frac{3 \pi x_{11}}{L}\right) & \dots & \sin\left(\frac{11 \pi x_{11}}{L}\right) \\
\end{bmatrix}
\begin{bmatrix}
A_1 \\
A_2 \\
A_3 \\
: \\
A_{11} \\
\end{bmatrix}
$$</span></p>
<p>This can be seen as:</p>
<p><span class="math-container">$$ W = S A $$</span></p>
<p>The solution is:</p>
<p><span class="math-container">$$ A = S^{-1} W $$</span></p>
<p>This is very similar to my two answers here:</p>
<p><a href="https://dsp.stackexchange.com/questions/69761/reconstructing-a-sine-wave-from-an-interval-shorter-than-half-its-wavelength">Reconstructing a sine wave from an interval shorter than half its wavelength</a></p>
<hr />
<p>In the overdetermined case:</p>
<p><span class="math-container">$$ W = S A $$</span></p>
<p><span class="math-container">$$ S^T W = S^T S A $$</span></p>
<p><span class="math-container">$$ A = (S^T S)^{-1} S^T W $$</span></p>
<p>In practice you don't form the normal equations yourself: <code>np.linalg.lstsq</code> solves the least-squares problem directly (use <code>np.linalg.solve</code> for the square case), so you just need to use that or your platform's equivalent.</p>
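<p>A small numerical sketch of the square case (my own illustration; the sample locations $x_j$ and the length $L$ are assumptions, not given in the question):</p>

```python
import numpy as np

# Build S[j, m] = sin(m*pi*x_j / L) for 11 equally spaced interior points
# and recover the coefficients A_m from the measurements w_j.
L = 1.0
M = 11
j = np.arange(1, M + 1)
x = j * L / (M + 1)                     # assumed sample locations
m = np.arange(1, M + 1)
S = np.sin(np.pi * np.outer(x, m) / L)

A_true = np.random.default_rng(0).normal(size=M)
w = S @ A_true                          # the "measurements" w_j

A_est = np.linalg.solve(S, w)           # A = S^{-1} W
assert np.allclose(A_est, A_true)
```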
| 1
|
Fourier transform
|
Fourier transform diagonalizes time-invariant convolution operators
|
https://dsp.stackexchange.com/questions/71261/fourier-transform-diagonalizes-time-invariant-convolution-operators
|
<p>I got the following paragraph from the book "A wavelet tour of signal processing" chapter one, page 2.</p>
<blockquote>
<p>The Fourier transform is everywhere in physics and mathematics because
<strong>it diagonalizes time-invariant convolution operators</strong>. It rules over linear time-invariant signal processing, the building blocks of
which are frequency filtering operators.</p>
</blockquote>
<p>How is it (illustrated) formulated mathematically?</p>
|
<p>For linear time invariant systems, complex exponentials are eigenfunctions - see <a href="https://ptolemy.berkeley.edu/eecs20/week9/lti.html" rel="nofollow noreferrer">here</a> and <a href="https://cnx.org/contents/d2CEAGW5@15.4:zRGnlxUF@2/Eigenfunctions-of-Continuous-Time-LTI-Systems" rel="nofollow noreferrer">here</a>. The complex exponentials are the basis functions used in the Fourier transform i.e. the FT is a linear combination of complex exponentials.</p>
<p>The following is a slightly modified version of <a href="https://www.science20.com/jon_lederman/fourier_transform_diagonalizing_convolution_operator" rel="nofollow noreferrer">here</a>:
The convolution operator acting on <span class="math-container">$f(y)$</span> is given by:</p>
<p><span class="math-container">$$g(y)=\int_{-\infty}^{\infty}f(y-x)h(x)dx$$</span>
Applying the convolution operator to <span class="math-container">$f(y)=e^{iky}$</span> gives
<span class="math-container">$$
\begin{eqnarray*}
g(y)&=&\int_{-\infty}^{\infty}e^{ik(y-x)}h(x)dx \\
g(y)&=&e^{iky}\int_{-\infty}^{\infty}e^{-ikx}h(x)dx \\
g(y)&=&f(y)\lambda,
\end{eqnarray*}
$$</span></p>
<p>where <span class="math-container">$\lambda = H(k)=\int_{-\infty}^{\infty}e^{-ikx}h(x)dx$</span> is the Fourier transform of <span class="math-container">$h(x)$</span>.</p>
<p>To see the diagonalization effect of the eigenvectors, we have by definition
<span class="math-container">$$Ax_n=\lambda_n x_n.$$</span> Thus the complete set of eigenvectors/eigenvalues can be written as
<span class="math-container">$$AX=X\Lambda,$$</span>
where <span class="math-container">$\Lambda =\operatorname{diag}(\lambda_1, \dots, \lambda_N)$</span> and each column of <span class="math-container">$X$</span> is the corresponding eigenvector. Finally, because the eigenvectors are orthonormal, <span class="math-container">$X^T = X^{-1}$</span>, so we have
<span class="math-container">$$X^TAX=\Lambda. $$</span></p>
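<p>A discrete illustration of the same fact (my own addition, assuming NumPy): a circulant matrix, i.e. circular convolution, has the DFT basis vectors as its eigenvectors, with eigenvalues given by the DFT of the impulse response:</p>

```python
import numpy as np

# The DFT diagonalizes circular convolution (discrete analogue of the above).
rng = np.random.default_rng(1)
N = 8
h = rng.normal(size=N)

# Circulant matrix implementing y[i] = sum_j h[(i - j) mod N] * x[j]
C = np.array([[h[(i - j) % N] for j in range(N)] for i in range(N)])

H = np.fft.fft(h)                       # eigenvalues, one per DFT bin
for k in range(N):
    e = np.exp(2j * np.pi * k * np.arange(N) / N)   # DFT basis vector
    assert np.allclose(C @ e, H[k] * e)             # e is an eigenvector
```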
| 2
|
Fourier transform
|
The Fourier transform of sinusoids' products with possible other components
|
https://dsp.stackexchange.com/questions/72820/the-fourier-transform-of-sinusoids-products-with-possible-other-components
|
<p>I know that in general it transforms the signal from the time to the frequency field but these specific cases seem pretty demanding. Do I calculate each part separately and then just leave them with convolution between them? Or do I have to calculate any integrals?</p>
<p><span class="math-container">\begin{align}
&\frac{1}{t^2}\cdot\cos(2πt)\cdot\cos(2πt)\\
&8\cos(20πt)\cdot\cos(40\pi{t})\cdot\cos(80\pi{t})
\end{align}</span></p>
<p>For example for the second one will the result be</p>
<p><span class="math-container">$$\bigg(4\big[\delta(f-10)+\delta(f+10)\big]\bigg)\star \bigg(\frac 12\big[\delta(f-20)+\delta(f+20)\big]\bigg)\star\bigg(\frac 12\big[\delta(f-40)+\delta(f+40)\big]\bigg)$$</span></p>
<p>Is that correct?</p>
|
<p><strong>HINT:</strong></p>
<p>It is easy to see that things can be simplified using the trigonometric product-to-sum identity in Equation <span class="math-container">$(1)$</span> below:
<span class="math-container">$$
\cos(\alpha)\cos(\beta) = \frac 12\big[\cos(\alpha+\beta) + \cos(\alpha-\beta)\big]\tag{1}
$$</span></p>
<ul>
<li><p>In the first example
<span class="math-container">$$
\frac{1}{t^2}\cdot\cos(2πt)\cdot\cos(2πt) = \frac 1{2t^2}\big(\cos(4\pi t) + 1\big)
$$</span>
From there you would want to visit the cosine modulation frequency-shift and differentiation properties of the Fourier transform, here in <span class="math-container">$(2)$</span> and <span class="math-container">$(3)$</span> respectively.
<span class="math-container">\begin{align}
\mathcal F\big\{x(t)\cos(2\pi f_0 t)\big\} &= \frac 12\big[X(f - f_0) + X(f + f_0)\big]\tag{2}\\
\mathcal F\left\{\frac{d^n x(t)}{dt^n}\right\}&= \left(j2\pi f\right)^nX(f)\tag{3}
\end{align}</span></p>
</li>
<li><p>In the second example, using Equation <span class="math-container">$(1)$</span> you then have
<span class="math-container">\begin{align}
8\cos(20\pi t)\cos(40\pi t)\cos(80\pi t) & = 8\bigg(\frac 12\big(\cos(60\pi t) + \cos(20\pi t)\big)\cos(80\pi t)\bigg)\\
& = 4\big(\cos(60\pi t) + \cos(20\pi t)\big)\cos(80\pi t)\\
& = 2\big(\cos(140\pi t) + \cos(20\pi t)\big)\\
&\quad + 2\big(\cos(100\pi t) + \cos(60\pi t)\big)\\
\end{align}</span>
With this you simply have sums and you don't have to think of convolutions, you have individual sinusoids at frequencies <span class="math-container">$10\ \rm Hz$</span>, <span class="math-container">$30\ \rm Hz$</span>, <span class="math-container">$50\ \rm Hz$</span>, and <span class="math-container">$70\ \rm Hz$</span>.</p>
</li>
</ul>
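<p>The second result is easy to confirm numerically (my own check, assuming NumPy; not part of the hint). With one second of data the FFT bins are 1 Hz apart, and exactly four tones of amplitude 2 appear at 10, 30, 50, and 70 Hz:</p>

```python
import numpy as np

fs = 1000
t = np.arange(0, 1, 1 / fs)              # 1 s -> 1 Hz bins, all tones bin-centred
x = 8 * np.cos(20 * np.pi * t) * np.cos(40 * np.pi * t) * np.cos(80 * np.pi * t)

amps = 2 * np.abs(np.fft.rfft(x)) / len(t)   # 2/N peak-amplitude scaling
for f in (10, 30, 50, 70):
    assert np.isclose(amps[f], 2.0)          # each tone has amplitude 2
assert (amps > 1e-6).sum() == 4              # and nothing else
```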
| 3
|
Fourier transform
|
Is it possible for a signal to be represented by *both* sinusoidal *and* rectangular/triangular Fourier transforms?
|
https://dsp.stackexchange.com/questions/158/is-it-possible-for-a-signal-to-be-represented-by-both-sinusoidal-and-rectang
|
<p>A signal might have both continuous and discrete parts (where the "discrete" parts are regions where a sinusoidal Fourier transform would be subject to unnecessary Gibbs Noise). So I would think that it could be useful, even if it would require an entirely different implementation strategy.</p>
<p>If it is possible, what are some concrete examples?</p>
|
<p>Yes, but it would be</p>
<ol>
<li><p>Computationally costly</p></li>

<li><p>the coefficients would depend on the number of harmonics $N$</p></li>

<li><p>and on the error norm</p></li>

<li><p>Most probably not worth the effort; you would probably be better off with wavelets instead</p></li>
</ol>
<p>You can fix $N$ and calculate the least-squares error (or another error norm), in the discrete case, as</p>

<p>$$
\sum_{k = 0}^{K-1} \Big| f(k) - \sum_{n=0}^{N-1} \big[ A_n \cos(2\pi nk/K) + B_n \sin(2\pi nk/K) + C_n \,\mathrm{sq_0}(2\pi nk/K) + D_n \,\mathrm{sq_1}(2\pi nk/K) \big] \Big|^2
$$</p>

<p>and find the minimum of the sum as a function of the $4N$ variables $A_n$, $B_n$, $C_n$, $D_n$.</p>

<p>The $A_n, B_n, C_n, D_n$ are your representation.</p>
| 4
|
Fourier transform
|
Basic question about trigonometric series and transforms thereof
|
https://dsp.stackexchange.com/questions/1303/basic-question-about-trigonometric-series-and-transforms-thereof
|
<p>I would like to know the relation between the parameters $\{\omega_k,A_k\;|\;k\in\mathbb{Z}\}$ of a series $\sum_k A_k\sin(\omega_k x)$ and a related series, for example, $\sum_k A_k^2\sin^2(\omega_k x)$.</p>
<p>I would also like to know why a multiplicity of peaks appears in the FT when the components of a series are raised to some power. $N$ mirrored sets of peaks with characteristic spacing on the frequency axis and scaling on the amplitude axis are manifest when taking the FT of $\sum_k A_k^N\sin^N(\omega_k x)$. I am interested to know the relation between the parameters of these $N$-fold multiplicities as well. For instance, the amplitudes and the frequencies appear to scale geometrically (e.g., $10$ Hz, $5$ Hz, $2.5$ Hz, $1.25$ Hz).</p>
|
<p>There is no general relationship between the Fourier transform of $f$ and that of $g(f)$ where $g$ is an arbitrary function. The Fourier transform does have the linearity property, so if $g$ is something simple like an affine transform, then the same linear relationship applies to their transforms $F$ and $G$.</p>
<p>With respect to your second question, where $h = \sum_k \left(A_k \sin(\omega_k x)\right)^N$, the presence of more than $k$ peaks in the Fourier transform of $h$ is easily explained using the <a href="http://en.wikipedia.org/wiki/List_of_trigonometric_identities#Power-reduction_formula" rel="nofollow">power-reduction trigonometric identity</a>:</p>
<p>$$
\sin^n(\theta) =
\begin{cases}
\frac{2}{2^n} \sum_{k=0}^{\frac{n-1}{2}} (-1)^{(\frac{n-1}{2}-k)} \binom{n}{k} \sin{((n-2k)\theta)}, & n \text{ is odd} \\
\frac{1}{2^n} \binom{n}{\frac{n}{2}} + \frac{2}{2^n} \sum_{k=0}^{\frac{n}{2}-1} (-1)^{(\frac{n}{2}-k)} \binom{n}{k} \cos{((n-2k)\theta)}, & n \text{ is even}
\end{cases}
$$</p>
<p>(the above is shamelessly borrowed from Wikipedia)</p>
<p>So, when you raise a sinusoid to a power, the result can be expressed as a weighted sum of sinusoids at different frequencies, where the number of individual terms is related to the power. That's why you see additional peaks in the spectrum of $h$.</p>
<p>You can come up with a more general relationship for some cases by taking advantage of the multiplication property of the Fourier transform. That is, if $g = f \cdot e$, then its Fourier transform is $G = F * E$ (where $*$ indicates convolution). You could apply this relationship repeatedly to the sinusoid raised to a power to derive the same result as above.</p>
| 5
|
Fourier transform
|
Signal Reconstruction after fourier transform
|
https://dsp.stackexchange.com/questions/3231/signal-reconstruction-after-fourier-transform
|
<p>I'm working from an example posted <a href="http://www.mathworks.com/help/techdoc/math/brentm1-1.html" rel="nofollow">here</a>. I understand the steps to acquire the fourier transform and can clearly see the spikes at normalized frequencies at 15 and 40 Hz from the 0-centered periodogram. Knowing this, I believe that I can reconstruct a smoother version of the signal as:</p>
<p>$x_{\text{reconstructed}}(t)=\alpha_1 \sin(30\pi t)+\alpha_2 \sin(80\pi t)$. </p>
<p>I have two questions related to this reconstruction: </p>
<ol>
<li>How do I obtain the coefficients $\alpha_1$ and $\alpha_2$ without an inverse fourier transform of the entire frequency domain data set? Is there a more efficient way? </li>
<li>How could I have obtained the 15 & 40 Hz frequencies from the transformed data? I know I can sort the transformed data to determine that these two frequencies had the highest two powers. But if the data set were very large, this might be unfeasible. Is there another way to determine the important frequencies?</li>
</ol>
|
<p>Suppose you want to perform an <code>N</code>-point FFT; then the evenly spaced frequency vector is given by</p>
<pre><code> f= (0:NumUniquePts-1)*Fs/N;
</code></pre>
<p>where <code>NumUniquePts = ceil((N+1)/2)</code> is the number of unique points in <code>f</code>, and <code>Fs</code> is the sampling rate.</p>
<p>So if <code>fftx=fft(x,N)</code>, then the 15 Hz and 40 Hz components live in bins <code>fftx(15/(Fs/N)+1)</code> and <code>fftx(40/(Fs/N)+1)</code> (provided 15 and 40 are divisible by <code>Fs/N</code>). Since each real sinusoid splits its energy over a positive- and a negative-frequency bin, the amplitudes are recovered as $\alpha_i = 2\,|\mathtt{fftx}(k_i)|/N$.</p>
<p>See <a href="http://www.mathworks.com/support/tech-notes/1700/1702.html" rel="nofollow">here</a> for more.</p>
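<p>A sketch of the same bin lookup in NumPy (the signal, <code>Fs</code> and <code>N</code> are made-up test values chosen so that 15 and 40 fall exactly on bins):</p>

```python
import numpy as np

# With Fs/N = 1 Hz resolution, the 15 Hz and 40 Hz components land on
# bins 15 and 40; amplitudes are recovered as 2*|X[k]|/N.
Fs, N = 1000, 1000
t = np.arange(N) / Fs
x = 0.7*np.sin(2*np.pi*15*t) + 1.3*np.sin(2*np.pi*40*t)
X = np.fft.fft(x)
a1 = 2*np.abs(X[15]) / N   # amplitude of the 15 Hz component
a2 = 2*np.abs(X[40]) / N   # amplitude of the 40 Hz component
```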
| 6
|
Fourier transform
|
What is the role of complex exponential?
|
https://dsp.stackexchange.com/questions/8482/what-is-the-role-of-complex-exponential
|
<p>What is the role of complex exponential $ e^{jθ} $ in Fourier Transform? Is it different in the continuous and in discrete time domain?</p>
|
<p>Euler's relationship says that $e^{j\Theta}$ is equal to $\cos(\Theta) + j\sin(\Theta)$. The Fourier Transform can then be seen as correlating the signal with sinusoids at various frequencies. The continuous Fourier Transform correlates with an infinite number of sinusoids, while the discrete transform uses $N$ sinusoids, where $N$ is the length of the transform.</p>
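<p>This correlation view is easy to verify: computing each DFT bin as an explicit correlation sum reproduces the FFT (a sketch with an arbitrary random test signal):</p>

```python
import numpy as np

# X[k] = sum_n x[n] * exp(-j*2*pi*k*n/N): correlation of x with a complex
# sinusoid of frequency k/N. This matches the FFT bin for bin.
N = 64
x = np.random.default_rng(0).standard_normal(N)
n = np.arange(N)
X_corr = np.array([np.sum(x * np.exp(-2j*np.pi*k*n/N)) for k in range(N)])
err = np.max(np.abs(X_corr - np.fft.fft(x)))
```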
| 7
|
Fourier transform
|
Spatial Aliasing - Wrap Around F-K Spectra
|
https://dsp.stackexchange.com/questions/10036/spatial-aliasing-wrap-around-f-k-spectra
|
<p>I've been using F-K Filter, for a while, but I guess I never had good basic understanding about it Math. Someone asked me what is the cause of frequency wrap around in F-K Spectra plot ? I know it's because of aliasing. But if somebody please elaborate more on the cause of this wrap around? Simple Math explanation perhaps.</p>
<p>Thanks </p>
<p>:)</p>
|
<p>Aliasing and frequency wrap-around are a consequence of violating the <strong><em>Nyquist-Shannon sampling theorem</em></strong>, which states that a continuous signal must be discretely sampled at at least twice the highest frequency in the signal. Hence we need to briefly go into the mathematics of the same.</p>
<p>Let <em>x(t)</em> be a continuous signal and <em>y(n)</em> a discrete signal where <em>y(n) = x(nT)</em>. Please note that <em>x(t)</em> will have a continuous-time Fourier transform (CTFT) while <em>y(n)</em> will have a discrete-time Fourier transform (DTFT).</p>
<p>We can construct a mathematical model of sampling using a train of continuous Dirac delta functions <em>p(t)</em>:
$$p(t)= \sum_{k=-\infty}^{\infty}\delta(t-kT), \quad t \in \mathbb{R}$$</p>
<p>Let <em>w(t) = x(t)p(t)</em>. We can show that the CTFT of the continuous function <em>w(t)</em> is the DTFT of <em>y(n)</em>. Writing the frequency domain representation of <em>w(t)</em>:</p>
<p>$$W(\omega) = \frac{1}{2\pi}\, X(\omega) * P(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\Omega)\, P(\omega - \Omega)\, d\Omega$$ where $*$ is convolution.</p>
<p>The CTFT of <em>p(t)</em> can be written as</p>
<p>$$P(\omega) = \frac{2\pi}{T} \sum_{k=-\infty}^{\infty} \delta\!\left(\omega - \frac{2\pi k}{T}\right)$$</p>
<p>thus $$W(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(\Omega)\, \frac{2\pi}{T} \sum_{k=-\infty}^{\infty} \delta\!\left(\omega - \Omega - \frac{2\pi k}{T}\right) d\Omega = \frac{1}{T} \sum_{k=-\infty}^{\infty} X\!\left(\omega - \frac{2\pi k}{T}\right)$$ using the sifting property.</p>
<p>Thus we can say that $$Y(\omega) = \frac{1}{T} \sum_{k=-\infty}^{\infty} X\!\left(\frac{\omega - 2\pi k}{T}\right)$$</p>
<p>So the DTFT of <em>y(n)</em>, $Y(\omega)$, is a shifted and repeated version of the CTFT of <em>x(t)</em>, i.e. $X(\omega)$. The DTFT is the sum of the CTFT and its copies shifted by multiples of $2\pi/T$. This is shown in the figure below. The frequency axis is normalized to $-\pi/T < \omega < \pi/T$.</p>
<p><a href="https://i.sstatic.net/Ersht.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ersht.png" alt="enter image description here"></a></p>
<p>If $X(\omega) = 0$ outside the range $-\pi/T < \omega < \pi/T$, i.e. <em>x(t)</em> has no frequency greater than Nyquist, then the copies will not overlap that range and there is no problem, as seen in the figure above.</p>
<p>However if X has non-zero frequency components higher than π/T (fs/2 or nyquist). There will be overlap causing a wrap around in f-k domain. Refer below:</p>
<p><a href="https://i.sstatic.net/xPbtb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xPbtb.png" alt="enter image description here"></a></p>
<p>Notice that in the sampled signal, the frequencies in the vicinity of $\pi/T$ are distorted by the overlapping of frequency components above and below $\pi/T$ in the original signal, causing the wrap in the F-K domain. Often, spatial sampling is low, as in the case of seismic or MRI acquisition, due to constraints on the number of detectors, hence there is aliasing in wavenumber. This spatial aliasing appears as wrap-around in the frequency-wavenumber spectrum.</p>
<p>Source of Images :R. G. Lyons: Understanding Digital Signal Processing (2nd Edition)</p>
<p>Reference : EECS, University of California Berkeley</p>
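<p>The overlap can be demonstrated numerically; a sketch (sampling rate and frequency are made-up values):</p>

```python
import numpy as np

# A 9 Hz sine sampled at Fs = 10 Hz (Nyquist = 5 Hz) produces exactly the
# same samples as a -1 Hz sine: a spectral copy shifted by 2*pi/T lands on it.
Fs = 10.0
n = np.arange(50)
samples_9hz = np.sin(2*np.pi*9.0*n/Fs)
samples_alias = np.sin(2*np.pi*(-1.0)*n/Fs)
err = np.max(np.abs(samples_9hz - samples_alias))
```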
| 8
|
Fourier transform
|
Can I apply Fourier Transform to a non-time-indexed signal?
|
https://dsp.stackexchange.com/questions/10783/can-i-apply-fourier-transform-to-a-non-time-indexed-signal
|
<p>Say I have a signal that is not time-indexed. That is, the x-axis of the signal is the distance traversed by a car and the y-axis is the heading direction of the car at the corresponding distance.</p>
<p>Can I apply the Fourier Transform to this signal?
If so, what is the physical meaning of this transformation? I believe that the horizontal axis is no longer frequency any more. What is it in this case?</p>
|
<p>Yes you can. The unit of the "frequency" axis after the transform will be $m^{-1}$, and is known as <a href="http://en.wikipedia.org/wiki/Spatial_frequency" rel="noreferrer">spatial frequency</a>.</p>
<p>For example, if there is a strong peak in the Fourier transform at $3\times 10^{-4}\ \mathrm{m}^{-1}$, it means that your original curve exhibits a strong pattern that repeats at a scale of every $3.3\ \mathrm{km}$, and from that you could infer that maybe the signal was recorded from a vehicle doing laps at the Monaco Grand Prix. The harmonics of this spatial frequency would contain a "signature" of the shape of the circuit.</p>
<p>A practical application of this is handwriting recognition - looking at shapes in the Fourier domain yields representations invariant to scaling, rotations, or more robust to deformations than the original data.</p>
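<p>A small sketch of the idea (track length and pattern period are made-up values):</p>

```python
import numpy as np

# Heading sampled every metre over 1000 m, with a pattern repeating every
# 50 m: the spectral peak appears at 0.02 cycles per metre.
d = np.arange(1000.0)                       # distance axis, metres
heading = np.cos(2*np.pi*d/50.0)
spectrum = np.abs(np.fft.rfft(heading))
freqs = np.fft.rfftfreq(len(d), d=1.0)      # spatial frequency, 1/m
peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
```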
| 9
|
Fourier transform
|
fourier transform to the power of two
|
https://dsp.stackexchange.com/questions/10933/fourier-transform-to-the-power-of-two
|
<p>Could someone explain to me why the computation speed for the fast Fourier transform increases by padding the series with zeros to the point that its length is close to a power of 2? This is common in the matlab environment, for example: </p>
<p><a href="http://www.mathworks.co.uk/help/matlab/ref/fft.html" rel="nofollow">http://www.mathworks.co.uk/help/matlab/ref/fft.html</a></p>
|
<p>First, the terminology: the thing that you are trying to compute is the Discrete Fourier Series (DFS). This is the only flavor of Fourier transform that is discrete in both time and frequency and can thus be represented numerically inside a computer. The Fast Fourier Transform is a specific algorithm to compute the DFS (see for example <a href="http://en.wikipedia.org/wiki/Cooley%E2%80%93Tukey_FFT_algorithm" rel="nofollow">http://en.wikipedia.org/wiki/Cooley%E2%80%93Tukey_FFT_algorithm</a>). The term FFT is often used to refer to the DFS, but that's actually somewhat wrong and often confusing.</p>
<p>You can always calculate the DFS directly from its definition, but this requires $N^{2}$ complex multiplies. The FFT speeds this up by splitting the vector of length $N$ into smaller sub-vectors and then using the symmetry properties of the transform coefficients (often referred to as "twiddle factors"). The vector needs to be split into equal pieces, though, and that works better the more prime factors $N$ has. A power of two is best since it has the most prime factors and the factors themselves are the smallest possible. The worst cases are values of $N$ that are themselves prime.</p>
<p>Let's assume you want to calculate the DFS of a sequence of 1021 points. Since that's a prime number, you need to use the direct formula, which requires 1021·1021 complex multiplies, roughly a million. If you zero-pad to 1024, you can use the most efficient FFT version, and the number of complex multiplies, about $2N\log_2 N$, is only about 20000. Hence it's a lot faster.</p>
<p>The FFT is efficient if your DFS length can be broken down into lots of small prime factors; large prime factors are slower. In practice you can typically find a length made out of 2s and 3s that's fairly close to where you want to be, so there is no need to always go to a power of 2.</p>
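<p>The operation counts from the example above can be sketched directly:</p>

```python
import math

# Direct DFS of a prime length N = 1021 versus a radix-2 FFT after
# zero-padding to 1024, using the rough counts from the text.
N_prime, N_padded = 1021, 1024
direct_cost = N_prime ** 2                          # ~1.04 million multiplies
fft_cost = int(2 * N_padded * math.log2(N_padded))  # 2*N*log2(N) = 20480
speedup = direct_cost / fft_cost                    # roughly 50x
```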
| 10
|
Fourier transform
|
$\sin (t \omega)$ is not an Energy Signal, then how come its Fourier transform do exist?
|
https://dsp.stackexchange.com/questions/14990/sin-t-omega-is-not-an-energy-signal-then-how-come-its-fourier-transform-d
|
<p>The following integral (perhaps fourier tranform of $\sin (t \omega)$ ) is not convergent:</p>
<p>$\int_{-\infty }^{\infty } e^{-i t \omega } \sin (t \omega ) \, dt$</p>
<p>As, $\sin (t \omega)$ is NOT an Energy Signal (but a Power Signal), then how come we get successful in finding the fourier transform of $\sin (t \omega)$ ?</p>
|
<p>You are right that such integrals are meaningless unless they are interpreted as distributions. And this is what we need to do, because - as you know - the Fourier transform of a sine function involves delta impulses. Let me try to make this a bit more intuitive:</p>
<p>The inverse Fourier transform of the delta function $\delta(\omega)$ (in the frequency domain) is given by</p>
<p>$$\frac{1}{2\pi}\int_{-\infty}^{\infty}\delta(\omega)e^{i\omega t}d\omega= \frac{1}{2\pi}e^{i0\cdot t}=\frac{1}{2\pi}$$</p>
<p>So we have the Fourier transform relation (time domain $\Longleftrightarrow$ frequency domain)</p>
<p>$$1\Longleftrightarrow 2\pi\delta(\omega)$$</p>
<p>Using the shifting property we obtain</p>
<p>$$e^{i\omega_0 t}\Longleftrightarrow 2\pi\delta(\omega-\omega_0)$$</p>
<p>And since</p>
<p>$$\sin(\omega_0t)=\frac{1}{2i}[e^{i\omega_0 t}-e^{-i\omega_0 t}]$$</p>
<p>we get for its Fourier transform</p>
<p>$$\sin(\omega_0t)\Longleftrightarrow \frac{\pi}{i}[\delta(\omega-\omega_0)-\delta(\omega+\omega_0)]$$</p>
| 11
|
Fourier transform
|
Fourier Transform Form: two sin components & a phase shift & a magnitude for only one term
|
https://dsp.stackexchange.com/questions/16582/fourier-transform-form-two-sin-components-a-phase-shift-a-magnitude-for-onl
|
<p>This is an example from my text book of a continuous signal:
$$x_{in}(t)=\sin \left( 2\pi \cdot 1000 \cdot t\right) + 0.5\sin \left( 2\pi \cdot 2000 \cdot t + \dfrac{3\pi}{4} \right) $$
How do I perform a Fourier transform on this signal? It seems a bit odd, since it has two sine components. Shouldn't complex numbers have a sine term and a cosine term? And it has a scalar applied to only one component; don't those usually apply across both terms of a complex number? And it's phase shifted; what should I do about that?</p>
|
<p>Fourier Transform is a linear one, so you can make use of superposition principle:</p>
<p>$$ \mathscr{F} [\alpha x(t) + \beta y(t)] = \alpha \mathscr{F}[x(t)] + \beta \mathscr{F}[y(t)] $$</p>
<p>So for the <strong>first component</strong> $$x(t) = \sin \left( 2\pi \cdot 1000 \cdot t\right)$$</p>
<p>by <a href="http://www.mechmat.ethz.ch/Lectures/tables.pdf" rel="nofollow"><strong>definition</strong></a>:</p>
<p>$$\mathscr{F}\left[\sin(2\pi f_0 t + \phi) \right] = \dfrac{i}{2} \left[ e^{-i \phi}\delta(f+f_0) - e^{i \phi}\delta(f-f_0) \right] $$</p>
<p>you get:</p>
<p>$$ \mathscr{F}[x(t)]=\dfrac{i}{2} \left[ \delta(f+1000) - \delta(f-1000) \right] $$</p>
<p><strong>Second component</strong> is a sinusoid with shifted phase, so the complex exponent represents that:</p>
<p>$$y(t) = \dfrac{1}{2} \sin \left( 2\pi \cdot 2000 \cdot t + \dfrac{3\pi}{4} \right)$$</p>
<p>has following Fourier Transform:</p>
<p>$$\mathscr{F}[y(t)] = \dfrac{1}{2}\dfrac{i}{2} \left[ e^{\dfrac{-3\pi i}{4}}\delta(f+2000) - e^{\dfrac{3\pi i}{4}}\delta(f-2000) \right] $$</p>
<p>By summing both results you get the Fourier Transform of your signal.</p>
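<p>The result can be cross-checked with an FFT; the sampling rate below is an assumption chosen so both frequencies land exactly on bins:</p>

```python
import numpy as np

# With Fs = N = 8000 the 1000 Hz and 2000 Hz components occupy bins 1000
# and 2000; their weights follow the delta terms derived above.
Fs = N = 8000
t = np.arange(N) / Fs
x = np.sin(2*np.pi*1000*t) + 0.5*np.sin(2*np.pi*2000*t + 3*np.pi/4)
X = np.fft.fft(x)
mag1, ph1 = np.abs(X[1000])/N, np.angle(X[1000])   # expect 1/2 and -pi/2
mag2, ph2 = np.abs(X[2000])/N, np.angle(X[2000])   # expect 1/4 and 3pi/4 - pi/2 = pi/4
```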
| 12
|
Fourier transform
|
What part of complex number of inverse discrete Fourier transform?
|
https://dsp.stackexchange.com/questions/18251/what-part-of-complex-number-of-inverse-discrete-fourier-transform
|
<p>Ok, so we have an image that is a Fourier inverse of the original picture. We want to get the original picture back. We use Matlab to get that job done. We import the image and then we invert it with the help of ifft(), this gives us a matrix with complex numbers. But to get the original picture we need to do some operation on the complex numbers to get it. But what is that operations. I tried the magnitude, real and imaginary part but this doesn't create the picture we want.</p>
|
<p>To apply the <code>IFFT</code> you need to turn the signal back into complex numbers; you need to use the magnitude and phase information to rebuild it correctly.</p>
<p>The real part is <code>magnitude * cos(phase)</code></p>
<p>The imaginary part is <code>magnitude * sin(phase)</code></p>
<p>You can use the square root of −1 (<code>sqrt(-1)</code>) to get the imaginary unit.</p>
<p>Now multiply the imaginary unit with the imaginary part and add the real part; now you are ready to apply the <code>IFFT</code>!</p>
<p>At the end I apply the <code>mat2gray</code> function to convert the matrix to intensities.</p>
<p>Here is how it is done in MATLAB:</p>
<pre><code>x=imread('C:\Users\Eder\Pictures\download.jpg');
figure(1);imshow(x);
%Make FFT
y=fft(x);
%Amplitude of the FFT
mx=abs(y);
%get Phase Information
ma=angle(y);
%back the signal to complex
y2= mx .* ( cos(ma) + sqrt(-1) * sin(ma) );
%Apply Inverse FFT
x2=real(ifft(y2));
result=mat2gray(x2);
figure(2);imshow(result);
</code></pre>
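<p>For reference, the same magnitude/phase round trip in NumPy on a 1-D signal (a sketch; the input is just random test data):</p>

```python
import numpy as np

# Split the spectrum into magnitude and phase, rebuild the complex values,
# and invert: the original signal comes back to machine precision.
x = np.random.default_rng(1).standard_normal(128)
y = np.fft.fft(x)
mx, ma = np.abs(y), np.angle(y)
y2 = mx * (np.cos(ma) + 1j*np.sin(ma))   # magnitude * e^{j*phase}
x2 = np.real(np.fft.ifft(y2))
err = np.max(np.abs(x - x2))
```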
| 13
|
Fourier transform
|
Fourier Transform Problem - absolute value, time-saving tricks, etc
|
https://dsp.stackexchange.com/questions/18967/fourier-transform-problem-absolute-value-time-saving-tricks-etc
|
<p>I am given the following signal:</p>
<p>$$[e^{-at}cos(w_{o}t)]u(t),\ a>0$$</p>
<p>Then I am told to find the Fourier Transform, which tells me I need an answer of the form:
$$X(jw)=\int_{-\infty}^\infty \! x(t)e^{-jwt} \, \mathrm{d}t.$$</p>
<p>I know I can reset the bounds of my integral with the unit step function, so my equation becomes
$$X(jw)=\int_{0}^\infty \! e^{-at}cos(w_ot)e^{-jwt} \, \mathrm{d}t.$$</p>
<p>From here, can I essentially solve this out and get a correct answer, keeping $w$ and $w_o$ as separate variables?</p>
<p>I also know I can solve it by using the relation
$$x(t)=a(t)b(t)\xrightarrow{\mathscr{F}} X(jw)=\frac{1}{2\pi}A(jw)*B(jw)$$</p>
<p>So, essentially, I can figure out a transform for each part and convolve to find my answer?</p>
<p>From the book examples, it seems $cos(w_ot)$ can be broken down into $\pi\delta(w-w_o)+\pi\delta(w+w_o).$ If we convolve this with the result from the transform of $e^{-at}u(t)$, a correct answer should be obtained. </p>
<p>It is known that $X(jw)$ when $x(t)=e^{-at}$ is $\frac{1}{(a+jw)}$</p>
<p>Therefore, since impulses sift through the other function in a convolution to get the nonzero values, is the following a correct result? $$\frac{1}{2\pi}(\pi)(\frac{1}{(a+j(w-w_o))}+\frac{1}{(a+j(w+w_o))})$$</p>
<p>$$=\frac{1}{2}(\frac{1}{(a+j(w-w_o))}+\frac{1}{(a+j(w+w_o))})$$</p>
<p>Thank you, sorry for the long question!</p>
|
<p>As you suggested, you could simply solve the integral using $\cos(\omega_0t)=(e^{j\omega_0t}+e^{-j\omega_0t})/2$. But as you've also noted, with this type of problems there is usually some smart way using known transform pairs. And what you suggested appears to me a very sane approach: convolve the known transform of $\cos(\omega_0t)$ with the known transform of $e^{-at}u(t)$ and you're done. I would recommend to you to cross-check your result by solving the integral, which in this case is also quite straight-forward. But I can assure you that your result looks good.</p>
| 14
|
Fourier transform
|
why is the DFS of a delta function equal to 1
|
https://dsp.stackexchange.com/questions/19509/why-is-the-dfs-of-a-delta-function-equal-to-1
|
<p>I have $x[n] = \delta[n]$.
By the formula it should be</p>
<p>$$X[k]= \sum_{n=0}^{N-1} \delta[n]W_N^{kn}$$</p>
<p>$$X[k]= \sum_{n=0}^{N-1} e^{-j2\pi kn/N}$$</p>
<p>The formulae isn't showing for some reason. I took a screenshot of what I got here: <a href="https://imgur.com/6j0Ibgu" rel="nofollow noreferrer">http://imgur.com/6j0Ibgu</a></p>
<p>Basically, why does the summation of an exponential term go to 1? $W_N^{kn}=e^{-j2\pi kn/N}$ in this case. I tried to prove it via $\sum\alpha^k=\frac{1-\alpha^N}{1-\alpha}$ but it doesn't work for me.</p>
|
<p>$X[k]= \sum_{n=0}^{N-1} \delta[n]W_N^{kn} \quad$, where $\>W_N^{kn}=e^{−j\,2\pi k\,n/N}$</p>
<p><strong>HINT:</strong></p>
<p>What is the value of $\delta[n]$ when $n \neq 0 \>$ ?</p>
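<p>The hint can be confirmed in a few lines of NumPy: since $\delta[n]$ removes every term except $n=0$, where $W_N^{0}=1$, the whole spectrum is flat:</p>

```python
import numpy as np

# DFS of a unit impulse: every bin equals 1.
N = 16
x = np.zeros(N)
x[0] = 1.0
X = np.fft.fft(x)
flat_err = np.max(np.abs(X - 1.0))
```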
| 15
|
Fourier transform
|
Can convolution of one signal with different signals give the same answer?
|
https://dsp.stackexchange.com/questions/23085/can-convolution-of-one-signal-with-different-signals-give-the-same-answer
|
<p>Let us consider $x_1(t)$, $x_2(t)$, $x_3(t)$, all the same within some duration 0 to $T$ but all different outside this interval. Now let us multiply each of these signals with $w(t)$, a window function - nonzero from 0 to $T$ but 0 outside this interval. So the multiplication of this $w(t)$ with each of the $x_i(t)$ will give the same signal. This should be the same as the inverse Fourier transform of $W(\omega)$ convolved with $X_i(\omega)$. Does this mean convolution of the same signal with different signals can give the same result? Any comments? Is there a pitfall in my interpretation?</p>
<p>Also, I see mathematical equations neatly entered in questions and answers on this site. Where should I start to learn how to write such equations?</p>
|
<p>Convolution is filtering. </p>
<p>Consider creating a filter with a rectangular response in the frequency domain. Then any signal with the same content within the filter's passband, no matter what the signal content outside the passband of the filter, should result in only the content within the passband after convolution with the impulse response of the filter. Thus, multiple inputs (signals with different stuff outside of the passband) to a convolution with this filter should result in the same output.</p>
<p>(Your opening statement merely swaps the frequency and time domains for the rectangular function, making this relationship harder to recognize.)</p>
<p>This should work even for a non-rectangular and finite-length FIR filter with at least one zero in its transform (maybe at a nice integer submultiple of the sample rate). Add any magnitude of sinusoidal input at the frequency of the zero, and the convolution result or filter output remains unchanged (except perhaps for numerical issues/clipping/quantization/etc.).</p>
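<p>A sketch of that last point (the filter and tone frequency are made-up values): the FIR filter <code>h = [1, 0, 1]</code> has a zero at a quarter of the sample rate, so adding a sinusoid there does not change the fully overlapped part of the convolution output.</p>

```python
import numpy as np

# h = [1, 0, 1] nulls frequency fs/4: tone[n] + tone[n-2] = 0 for that tone.
rng = np.random.default_rng(2)
h = np.array([1.0, 0.0, 1.0])
s = rng.standard_normal(200)
tone = 3.0 * np.cos(np.pi/2 * np.arange(200) + 0.7)   # frequency fs/4
y1 = np.convolve(s, h)[2:-2]          # keep only fully overlapped samples
y2 = np.convolve(s + tone, h)[2:-2]
err = np.max(np.abs(y1 - y2))
```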
| 16
|
Fourier transform
|
What is Fourier transform in terms of area under the curve?
|
https://dsp.stackexchange.com/questions/23099/what-is-fourier-transform-in-terms-of-area-under-the-curve
|
<p>We know that integration gives area under the curve. And that FT is also an integration. How do we interpret FT in terms of the area under the curve especially because e^(jwt) is a complex term? In general, while dealing with complex terms in integration can we relate to some area?</p>
|
<p>Since</p>
<p>$$X(\omega)=\int_{-\infty}^{\infty}x(t)e^{-j\omega t}dt$$</p>
<p>you get for the real part of $X(\omega)$ (assuming that $x(t)$ is real)</p>
<p>$$X_R(\omega)=\int_{-\infty}^{\infty}x(t)\cos(\omega t)dt\tag{1}$$</p>
<p>and for the imaginary part</p>
<p>$$X_I(\omega)=-\int_{-\infty}^{\infty}x(t)\sin(\omega t)dt\tag{2}$$</p>
<p>So if you like, from (1) and (2) you can interpret the real part of the Fourier transform as the area under the curve $x(t)\cos(\omega t)$, and the imaginary part would then be the area under the curve $-x(t)\sin(\omega t)$.</p>
<p>Note that interpreting an integral as the area under a curve does not always help intuition. In the case of the Fourier transform it is more natural to interpret the integrals (1) and (2) as <a href="http://en.wikipedia.org/wiki/Dot_product#Functions" rel="nofollow">inner products</a>.</p>
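<p>A numerical sketch of the "area" reading, using a Gaussian whose transform is known in closed form ($x(t)=e^{-t^2}$, $X(\omega)=\sqrt{\pi}\,e^{-\omega^2/4}$, purely real):</p>

```python
import numpy as np

# Approximate the areas under x(t)cos(wt) and -x(t)sin(wt) by Riemann sums.
w = 2.0
dt = 1e-3
t = np.arange(-10.0, 10.0, dt)
x = np.exp(-t**2)
X_R = np.sum(x * np.cos(w*t)) * dt    # area under x(t)cos(wt)
X_I = -np.sum(x * np.sin(w*t)) * dt   # area under -x(t)sin(wt), ~0 by symmetry
X_exact = np.sqrt(np.pi) * np.exp(-w**2/4)
```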
| 17
|
Fourier transform
|
Fourier transform's sine and cosine count as N grows
|
https://dsp.stackexchange.com/questions/23323/fourier-transforms-sine-and-cosine-count-as-n-grows
|
<p>I remember reading that in (discrete) Fourier transform for signals with even numbered N for length, the sine and cosine count is equal. Is this correct?</p>
<p>A bit of analysis:</p>
<p>N=1, there is only DC offset, which is a cosine wave of unlimited length.</p>
<p>N=2, Now, in addition to the DC offset term there is a other wave. But it's also a cosine wave, since trying to represent a sine with 2 points becomes impossible as only the 0 values of it can be sampled.</p>
<p>N=3, Sine can be represented as well as cos so there are 1 cos, 1 sine of same frequency and the DC offset cosine.</p>
<p>N=4, 3 cos, 1 sine, 3rd bin added with only cos wave possible.
...</p>
<p>So to me it seems that there are always going to be more cosine waves than sines. Did I make a mistake?</p>
<p>The interpretation that DC offset is a cosine could perhaps be debated though, has there ever been debate over this term? The fact that it surpasses the boundaries of N is a bit mind boggling.</p>
|
<p>As far as the rfft is concerned (real-valued time domain), your analysis is not exact, even though you are not far off. First you have to remember that <code>rfft(x) -> X</code> transforms the discrete time-domain signal <code>x</code> to a frequency spectrum <code>X</code> composed of N/2+1 real values and N/2+1 imaginary values (for <code>N=len(x)</code>).</p>
<p>The only question here is how it is possible to create information from nothing (<code>N/2+1 + N/2+1 = N+2</code>, not <code>N</code>). The answer is that you don't, and you already discussed the reason:</p>
<ul>
<li><p>N=1: only the DC offset, as you said, so it is defined for cosine but not for sine (since <code>ci1/N * sin(k*0) = 0</code>). So ci1, the corresponding coefficient in Fourier space, is set to 0. In other words, there is one value here conveying no information.</p></li>
<li><p>N=2: you are actually wrong on this one. Just try it. The 0 values are not the only ones that can be sampled, so ci2 conveys information.</p></li>
<li><p>N=N/2+1: this is the missing piece of the fake information creation. Because of symmetry, if N is even, the Fourier coefficient must be real ("giving" a "pure" cosine), removing another piece of meaningful information so that none is ultimately created. If it is odd, the coefficient is complex without any further restriction.</p></li>
</ul>
<p>So, yes, in some case there are more cosine than sine. But as stated otherwise, this is not conceptually important nor unsettling. On the contrary...</p>
<p>You were talking about odd functions, how is the DC component only odd or even? And in the end, the choice of an extra odd/even function depends only on the number of original points. :) </p>
<hr>
<p><strong>Edit:</strong> for me <code>N</code> was a number of components, <em>not</em> the number of samples!</p>
<p>Then yes: for <code>N=2</code>, you've got a real-valued <code>rfft</code> because of what you explained. However, this is not actually a property of the second bin of your <code>rfft</code>, this is a property of the <em>last</em> bin (if <code>N</code> is even).</p>
<p>So for N=2, two real numbers; for N=3, two real numbers and one imaginary; for N=4, three real numbers and one imaginary; for N=5, three real numbers and two imaginary; and so on.</p>
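<p>This bookkeeping is easy to confirm with NumPy's <code>rfft</code> (a sketch on random test data):</p>

```python
import numpy as np

# rfft returns N//2 + 1 bins; the DC bin is always real, and the last bin
# is real only when N is even.
x8 = np.random.default_rng(3).standard_normal(8)
x9 = np.random.default_rng(4).standard_normal(9)
R8, R9 = np.fft.rfft(x8), np.fft.rfft(x9)
len8, len9 = len(R8), len(R9)     # both 5
dc_imag = abs(R8[0].imag)         # 0: DC bin is real
nyq_imag = abs(R8[-1].imag)       # 0: Nyquist bin is real for even N
odd_last_imag = abs(R9[-1].imag)  # generally nonzero for odd N
```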
| 18
|
Fourier transform
|
Numerical Fourier transform for exact frequency
|
https://dsp.stackexchange.com/questions/27022/numerical-fourier-transform-for-exact-frequency
|
<p>Mathematically, suppose I have a function $f(t)=\sum_k c_k e^{-i \omega_kt}$, where $\omega_k$ may not fall in $[0,2\pi]$. With an analytical Fourier transform, I can get a sum of delta functions centered at those frequencies. Now, I only have $N$ evenly-spaced points of $f(t_j)$ in the time domain $[0,T]$, albeit $N$ and $T$ can be freely chosen. How do I use a numerical Fourier transform to get the frequencies $\omega_k$ (e.g. for $\omega_1=1,\omega_2=10,\omega_3=-30$), not confined in $[0,2\pi]$ and with no prior knowledge of the frequency upper/lower limit?</p>
| 19
|
|
Fourier transform
|
Is the following equation established?
|
https://dsp.stackexchange.com/questions/27315/is-the-following-equation-established
|
<p>Is the following equation established?</p>
<blockquote>
<p>$$\int_{-\infty}^{\infty}s(t)r^*(t)dt=\int_W S(f)R^*(f)df $$</p>
<p>$$s(t)\xrightarrow{\text{Fourier Transform}}S(f) $$</p>
<p>$$r(t)\xrightarrow{\text{Fourier Transform}}R(f) $$</p>
</blockquote>
|
<p>It's a version of <a href="https://en.wikipedia.org/wiki/Parseval's_theorem" rel="nofollow">Parseval's Theorem</a>, which can be easily proved by noting that</p>
<p>$$\int_{-\infty}^{\infty}s(t)r^*(t)dt=\mathcal{F}\{s(t)r^*(t)\}\big|_{f=0}\tag{1}$$</p>
<p>where $\mathcal{F}$ denotes the Fourier transform. Note that in general the integration limits should be $-\infty$ and $\infty$ for both integrals. I assume your notation refers to band-limited functions.</p>
<p>From $(1)$, the proof is quite short:</p>
<p>$$\mathcal{F}\{s(t)r^*(t)\}=S(f)\star R^*(-f)=\int_{-\infty}^{\infty}S(\nu)R^*(\nu -f)d\nu\tag{2}$$</p>
<p>where $\star$ denotes convolution. So</p>
<p>$$\mathcal{F}\{s(t)r^*(t)\}\big|_{f=0}=\int_{-\infty}^{\infty}S(\nu)R^*(\nu)d\nu\tag{3}$$</p>
<p>which establishes the equation in your question.</p>
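<p>The discrete analogue can be sketched with a DFT (random complex test vectors; the $1/N$ factor comes from the unnormalized DFT convention):</p>

```python
import numpy as np

# Parseval for the DFT: sum_n s[n] r*[n] = (1/N) sum_k S[k] R*[k].
rng = np.random.default_rng(5)
N = 64
s = rng.standard_normal(N) + 1j*rng.standard_normal(N)
r = rng.standard_normal(N) + 1j*rng.standard_normal(N)
lhs = np.sum(s * np.conj(r))
rhs = np.sum(np.fft.fft(s) * np.conj(np.fft.fft(r))) / N
err = abs(lhs - rhs)
```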
| 20
|
Fourier transform
|
Frequency shift property of Fourier transform
|
https://dsp.stackexchange.com/questions/27383/frequency-shift-property-of-fourier-transform
|
<p>Which one of the following is actually a definition of frequency shift property
$$e^{j\omega_0 t}x(t)\leftrightarrow X(j\omega - \omega_0) \tag{1}$$
$$e^{j\omega_0 t}x(t)\leftrightarrow X(j(\omega - \omega_0)) \tag{2}$$<hr>
<strong>Is the frequency shift property applicable in all situations?</strong><hr></p>
<p>Case I<br>
$$1 \leftrightarrow 2\pi \delta(\omega)$$
if we apply frequency shift property we may obtain $$e^{j\omega_0 t} \leftrightarrow 2\pi \delta(\omega -\omega_0)$$
which works according to result <strong>2</strong><hr>
Case II
$$u(t) \leftrightarrow \frac{1}{j\omega} +\pi \delta(\omega)$$
$$e^{-at} u(t)\leftrightarrow \frac{1}{a+j\omega}$$
which exactly isn't <strong>1</strong> or <strong>2</strong></p>
|
<p>Your Eq. $(2)$ is the frequency shift property; Eq. $(1)$ is wrong. This is easy to show:</p>
<p>$$\begin{align}\mathcal{F}\left\{e^{j\omega_0t}x(t)\right\}&=\int_{-\infty}^{\infty}x(t)e^{j\omega_0t}e^{-j\omega t}dt\\&=\int_{-\infty}^{\infty}x(t)e^{-j(\omega-\omega_0)t}dt\\&=X(j(\omega-\omega_0))
\end{align}$$</p>
<p>Your 'Case II' has nothing to do with the frequency shift property, because it is no frequency shift. For a frequency shift you need to multiply by a <em>complex</em> exponential $e^{j\omega_0t}$, and not by a real-valued exponential $e^{at}$.</p>
<p>You <em>can</em> come up with a rule for the Fourier transform of $x(t)e^{-at}$, $a\in\mathbb{R}$, but this is more tricky than the frequency shift property. For the frequency shift property, if you know that $X(j\omega)$ exists, then you know for sure that also the Fourier transform of $x(t)e^{j\omega_0t}$ exists. On the other hand, if $X(j\omega)$ exists, it is not certain that also the Fourier transform of $x(t)e^{-at}$ exists. This has everything to do with the Laplace transform and its region of convergence:</p>
<p>$$\begin{align}\mathcal{F}\left\{x(t)e^{-at}\right\}&=\int_{-\infty}^{\infty}x(t)e^{-at}e^{-j\omega t}dt\\&=\int_{-\infty}^{\infty}x(t)e^{-(a+j\omega)t}dt\stackrel{?}{=}X(a+j\omega)\end{align}$$</p>
<p>The last equality only holds if the integral converges, i.e. if $s=a$ is inside the region of convergence of the Laplace transform of $x(t)$, and if the Laplace transform of $x(t)$ has no singularities on the imaginary axis, i.e. if its Fourier transform has no Dirac delta impulses. The latter explains why the above 'rule' doesn't work for $x(t)=u(t)$ (as in your example).</p>
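<p>For the discrete case the frequency shift property is easy to check: modulating by $e^{j2\pi k_0 n/N}$ circularly shifts the DFT bins by $k_0$ (a sketch on random test data):</p>

```python
import numpy as np

# fft(x[n] * exp(j*2*pi*k0*n/N)) equals the DFT of x rotated by k0 bins.
N, k0 = 32, 5
x = np.random.default_rng(6).standard_normal(N)
n = np.arange(N)
X_mod = np.fft.fft(x * np.exp(2j*np.pi*k0*n/N))
X_shift = np.roll(np.fft.fft(x), k0)
err = np.max(np.abs(X_mod - X_shift))
```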
| 21
|
Fourier transform
|
What's a "Fourier filter"?
|
https://dsp.stackexchange.com/questions/27566/whats-a-fourier-filter
|
<p>E.g. the constant Q-transform is built by adding so called "Fourier filters".</p>
<p>What's a "Fourier filter"?</p>
|
<p>People (usually from fields outside signal processing) sometimes use the term <a href="http://terpconnect.umd.edu/~toh/spectrum/FourierFilter.html" rel="nofollow noreferrer">Fourier filter</a> for a filtering operation in the FFT domain, which simply works by multiplying the FFT bins of a signal with a given filter function (often just ones and zeros, corresponding to pass bands and stop bands, respectively). Why this is generally not such a good idea is explained <a href="https://dsp.stackexchange.com/questions/6220/why-is-it-a-bad-idea-to-filter-by-zeroing-out-fft-bins">here</a>.</p>
<p>Also in Computer Vision, the term <a href="https://books.google.nl/books?id=ZCu8BAAAQBAJ&lpg=PA422&ots=sPE4rKyiWs&dq=%22fourier%20filter%22%20formula&pg=PA15#v=onepage&q&f=false" rel="nofollow noreferrer">Fourier filter</a> is used as explained above.</p>
<p>In the <a href="http://doc.ml.tu-berlin.de/bbci/material/publications/Bla_constQ.pdf" rel="nofollow noreferrer">document</a> you linked to in a comment, the term is used to describe the computation of the Discrete-Time Fourier Transform (DTFT) at a given frequency from a finite length portion of a signal. This computation can be interpreted as a filtering operation, because it is a sum of products. The corresponding filter is a band pass filter with center frequency equal to the given DTFT frequency. For more information on the filter interpretation of the D(T)FT have a look at <a href="http://www.dsprelated.com/freebooks/sasp/DFT_Filter_Bank.html" rel="nofollow noreferrer">this page</a>.</p>
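<p>For illustration, a minimal numpy sketch of the first kind of "Fourier filter" (FFT-bin masking, with assumed tone frequencies); it only works this cleanly because both tones fall exactly on DFT bins of the 1 s window:</p>

```python
import numpy as np

fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 200 * t)

X = np.fft.rfft(x)
f = np.fft.rfftfreq(len(x), 1 / fs)

# brick-wall "Fourier filter": ones below 100 Hz, zeros above
X[f >= 100] = 0.0
y = np.fft.irfft(X, n=len(x))

# only the 50 Hz tone survives
```

<p>Zeroing bins of a component that does <em>not</em> fall exactly on a bin causes the ringing artifacts discussed in the linked answer.</p>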
| 22
|
Fourier transform
|
Fourier transform of ${\Pi}_{a}(t)\cos(2{\pi}f_{0}t)$
|
https://dsp.stackexchange.com/questions/28317/fourier-transform-of-pi-at-cos2-pif-0t
|
<p>I want to find the following: $\mathfrak{F}[{\Pi}_{a}(t)\cos(2{\pi}f_{0}t)]$ <br>
<br>
I first did: $\mathfrak{F}[{\Pi}_{a}(t)] = 2a\text{ sinc}(2af)$ <br>
<br>then: $\mathfrak{F}[\cos(2{\pi}f_{0}t)] = \frac{1}{2}[\delta(f-f_{0})+\delta(f+f_{0})]$ <br></p>
<p>So: $$\begin{align}\mathfrak{F}[{\Pi}_{a}(t)\cos(2{\pi}f_{0}t)] &= \mathfrak{F}[{\Pi}_{a}(t)]\ast \mathfrak{F}[\cos(2{\pi}f_{0}t)]\\&=2a\text{ sinc}(2af)\ast\frac{1}{2}[\delta(f-f_{0})+\delta(f+f_{0})]\\&=a\text{ sinc}(2af)\ast\delta(f-f_{0})+a\text{ sinc}(2af)\ast\delta(f+f_{0})\\&=a\text{ sinc}(2af)+a\text{ sinc}(2af)=2a\text{ sinc}(2af)\end{align}$$</p>
<p><br/><br/>I would like to know if it is correct and if there is an alternative way to find the result easier.</p>
|
<p>Your last line is wrong. Note that you have</p>
<p>$$H(f)\star\delta(f-f_0)=H(f-f_0)\tag{1}$$</p>
<p>for any $H(f)$.</p>
<p>So your last line should be</p>
<p>$$\begin{align*}
&a\,\text{sinc}(2af)\star\delta(f-f_{0})+a\text{ sinc}(2af)\star\delta(f+f_{0})=\\&a\,\text{sinc}(2a(f-f_0))+a\,\text{sinc}(2a(f+f_0))\tag{2}
\end{align*}$$</p>
<p>You basically get copies of your original spectrum centered at $f_0$ and $-f_0$.</p>
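<p>A numpy sketch with assumed values $a=0.1$ s and $f_0=100$ Hz confirms the two sinc copies:</p>

```python
import numpy as np

fs = 1000
t = np.arange(-1, 1, 1 / fs)
a, f0 = 0.1, 100.0
x = (np.abs(t) <= a) * np.cos(2 * np.pi * f0 * t)   # Pi_a(t) * cos(2*pi*f0*t)

X = np.fft.fftshift(np.fft.fft(x))
f = np.fft.fftshift(np.fft.fftfreq(len(t), 1 / fs))

# magnitude peaks: sinc main lobes centered at +-f0
f_peak = abs(f[np.argmax(np.abs(X))])
```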
| 23
|
Fourier transform
|
Is it possible exctract sinusoids from non periodic signal?
|
https://dsp.stackexchange.com/questions/28599/is-it-possible-exctract-sinusoids-from-non-periodic-signal
|
<p>Digital signal <a href="http://hpiers.obspm.fr/eop-pc/index.php?index=C04&lang=en" rel="nofollow">UT1-UTC</a> is not periodic but includes many sinusoids (periodic elements in IERS nomenclature) that are not multiples of some fundamental. For example, the tidal sinusoids are not multiples of the yearly seasonal sinusoid, because the lunar month is not a submultiple of the year. Is it then possible for these sinusoids to be extracted from UT1-UTC by the Discrete Fourier Transform? </p>
|
<p>Fourier's theorem says that almost any (non-pathological) waveform, periodic or not, can be decomposed into sinusoids (or complex exponentials). Whether, or how well, those sinusoids correspond to any underlying pseudo-periodic phenomena is another issue.</p>
<p>Note that with a DFT, you may need to interpolate between the DFT result bins for a component whose frequency is not an exact bin frequency (i.e. not integer periodic in the analysis window).</p>
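<p>A small numpy experiment (made-up amplitudes, loosely mimicking a seasonal and a lunar term) shows that the DFT still locates sinusoids whose periods are incommensurate:</p>

```python
import numpy as np

# daily samples over 20 "years" of two incommensurate periodicities
n = np.arange(20 * 365)
x = 1.0 * np.sin(2 * np.pi * n / 365.24) + 0.3 * np.sin(2 * np.pi * n / 27.32)

X = np.abs(np.fft.rfft(x))
f = np.fft.rfftfreq(len(n), d=1.0)      # cycles per day

# the two largest peaks land near 1/365.24 and 1/27.32 cycles/day,
# even though neither frequency is an exact DFT bin
peaks = f[np.argsort(X)[-2:]]
```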
| 24
|
Fourier transform
|
What information does fourier transform carry?
|
https://dsp.stackexchange.com/questions/28856/what-information-does-fourier-transform-carry
|
<p>As one starts learning signal processing, then comes inevitably the topic of Fourier Transforms. Unfortunately I have difficulties not in computing but in interpreting the results of the Fourier Transforms, in particular the one being Continuous-Time Fourier Transform, CTFT, of the signal $x(t)$ which is: $$X(j\omega) = \int_{-\infty}^{\infty}{x(t)e^{-j\omega t}dt}$$ </p>
<p>Now I wonder what kind of information does this $X(j\omega)$ give about the signal $x(t)$? An example is highly appreciated, if possible.</p>
|
<p>There are a variety of Fourier Transforms (and Series) such as the Continuous-Time FT, Discrete-Time FT, and Discrete FT, all of which are generally attributed to the fundamental assertion made by J. B. Fourier at the beginning of the 19th century, which claims (without proof) that "if a continuous-time signal (function) $x(t)$ is periodic with period $T$, then it is possible to represent that signal $x(t)$ as an infinite sum of harmonically related trigonometric functions (sines and cosines) as $ x(t)= \sum{ a_k \sin { 2\pi kt \over T} + b_k \cos { 2\pi kt \over T}}$, in which the weights $a_k$ and $b_k$ (the Fourier coefficients) represent the amount of that particular harmonic in the signal $x(t)$ being analysed".</p>
<p>In fact the above argumentation is strictly for Continuous-Time Fourier Series. But the core idea is generalised into Fourier-Transforms of aperiodic and periodic (with the help of impulse $\delta (t)$ functions) signals. There are conditions on which signals can have such a representation.</p>
<p>In essence, computing a Fourier Transform means finding those coefficients $a_k$ and $b_k$ for which the method is suggested by the analysis equation of the Fourier transform, while the equality in the first paragraph is noted as the synthesis equation.</p>
<p>Even though it is quite intuitive to understand the meaning of those sinusoids inside a periodic signal, when it comes to non-periodic signals, for which we use the Fourier transform, the exact meaning of what a single sine wave represents inside such a signal is a little vague; instead we emphasize the transient character of the signal under concern and the necessity of the existence of a continuum of infinitely many sine waves.</p>
| 25
|
Fourier transform
|
Which transformation in frequency domain equals a x-axis shift of a signal in time domain?
|
https://dsp.stackexchange.com/questions/29035/which-transformation-in-frequency-domain-equals-a-x-axis-shift-of-a-signal-in-ti
|
<p>I have discrete Fourier transformation results from measurements. Looking at the signal from the time domain perspective, I want to shift the signal on the $x$-axis to the left or right.</p>
<p>Which transformation in the frequency-domain results in a left/right shift in the time-domain?</p>
|
<p>If $s(t)$ is your signal in the time domain, you want to perform the operation $s(t+\Delta t)$, which, according to the <a href="https://en.wikipedia.org/wiki/Fourier_transform" rel="nofollow">Fourier transform</a> properties, is equivalent to the operation:
$$
S(f)e^{j2\pi f\Delta t}
$$
with $S(f)$ being the Fourier transform of $s(t)$.</p>
<p>In the case of the <a href="https://en.wikipedia.org/wiki/Discrete_Fourier_transform" rel="nofollow">discrete Fourier transform</a>, which is defined as
$$
S[k] = \sum^{N-1}_{n=0}{s[n]e^{\frac{-i2\pi kn}{N}}}
$$
with $s$ the signal and $N$ the number of points of both the signal and the DFT.</p>
<p>The operation to be applied in order to shift $s[n]$ by $c$ samples (circularly) and obtain $s[n+c]$ is:
$$
e^{\frac{i2\pi kc}{N}}S[k]
$$</p>
<p>which in MATLAB code would be done with:</p>
<pre><code>exp(1j*2*pi*(0:N-1)*c/N).*S_fdt;
</code></pre>
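<p>The same operation in Python (a numpy sketch; with the DFT sign convention $S[k]=\sum_n s[n]e^{-i2\pi kn/N}$, multiplying by $e^{+i2\pi kc/N}$ advances the signal circularly by $c$ samples):</p>

```python
import numpy as np

N = 64
c = 5                                   # shift in samples
k = np.arange(N)
s = np.random.default_rng(0).standard_normal(N)

S = np.fft.fft(s)
S_shifted = S * np.exp(1j * 2 * np.pi * k * c / N)
s_shifted = np.fft.ifft(S_shifted).real

# s_shifted[n] == s[(n + c) mod N], i.e. a circular advance by c samples
```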
| 26
|
Fourier transform
|
Bandwidth range for Fast Fourier vs principal component analysis?
|
https://dsp.stackexchange.com/questions/29893/bandwidth-range-for-fast-fourier-vs-principal-component-analysis
|
<p>I've read somewhere that the Fast Fourier transform is only applicable to processes exhibiting a certain bandwidth, whereas principal component analysis can be applied to a process exhibiting any finite bandwidth. Why is this?</p>
<p>The bandwidth is simply the difference between the upper and lower frequencies in a continuous set of frequencies? So, for the Fast Fourier transform only a set of basis functions of discrete frequencies (i.e. $\sin(2\pi f x)$) is considered, but why does the bandwidth ($f_\text{max}-f_\text{min}$) need to be infinitesimal? Why can it not be finite? Is this due to the fact that the duration of the sine wave is infinite and therefore the bandwidth infinitesimal? Or have I misunderstood? I cannot find this information anywhere.</p>
|
<p>From <a href="https://en.wikipedia.org/wiki/Karhunen%E2%80%93Lo%C3%A8ve_theorem" rel="nofollow">Karhunen–Loève theorem</a>, when talking about stochastic processes:</p>
<blockquote>
<p>In the theory of stochastic processes, the Karhunen–Loève theorem
(named after Kari Karhunen and Michel Loève), also known as the
Kosambi–Karhunen–Loève theorem is a representation of a
stochastic process as an infinite linear combination of orthogonal
functions, analogous to a Fourier series representation of a function
on a bounded interval.</p>
</blockquote>
<p>Basically, K.-L. yields an adaptive representation, while Fourier provides a fixed representation with sinusoidal functions. </p>
<p>There are many flavors of Fourier tools, so since you are talking about "fast" Fourier and PCA, I assume for now you are dealing with discrete data, thus the empirical version of the Karhunen–Loève transform (PCA) and the discrete Fourier transform (DFT). The idea of a continuous set of frequencies does not fit gracefully here. </p>
<p>Both PCA and the discrete Fourier transform can be cast as linear, orthogonal transforms, so they are applicable to any data, and may preserve all information. </p>
<p>From <a href="http://luthuli.cs.uiuc.edu/~daf/courses/CS-498-DAF-PS/Lecture%209%20-%20PCA.pdf" rel="nofollow">Principal Component Analysis</a> (p. 44 sq.), you can find that you can derive DFT bases from PCA, when you are studying a process that follows a "correlated" Markov model.</p>
<p>Finally, when you talk about infinite sines with infinitesimal bandwidth, you are more in the context of continuous functions (or distributions) and standard Fourier analysis, which might be a cause for confusion. A solid book on the topic could help, like <em>Mathematical Principles of Signal Processing: Fourier and Wavelet Analysis</em> by P. Brémaud.</p>
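<p>To make the PCA–DFT connection concrete, here is a numpy sketch under the (assumed) model of a circularly stationary process: its covariance matrix is circulant, and the unitary DFT matrix diagonalizes any circulant matrix, so in that case the Fourier vectors are exactly the principal components:</p>

```python
import numpy as np

# circulant covariance of a circularly stationary process (assumed model)
N = 32
rho = 0.7
lags = np.minimum(np.arange(N), N - np.arange(N))
first_row = rho ** lags
Sigma = np.array([[first_row[(j - i) % N] for j in range(N)] for i in range(N)])

# PCA would eigendecompose the covariance
eigvals = np.linalg.eigvalsh(Sigma)

# the unitary DFT matrix diagonalizes Sigma
F = np.fft.fft(np.eye(N)) / np.sqrt(N)
D = F @ Sigma @ F.conj().T
off_diag = D - np.diag(np.diag(D))
# off_diag is numerically zero, and diag(D) matches the PCA eigenvalues
```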
| 27
|
Fourier transform
|
How is Linear Canonical Transform a generalization of Fractional Fourier Transform?
|
https://dsp.stackexchange.com/questions/30523/how-is-linear-canonical-transform-a-generalization-of-fractional-fourier-transfo
|
<p>I have studied that Fourier transform changes the domain of a signal from time to frequency, and in that way it is a 90 degree shift. When it comes to Fractional Fourier Transform a generalization of Fourier Transform, the resulting transform can lie anywhere between time and frequency domain depending on parameter 'alpha'.</p>
<p>My question is how in the same way Linear Canonical Transform is a generalization of Fractional Fourier Transform.Its physical interpretation in terms of signal processing.
Can any body help?</p>
|
<p>The linear canonical transforms are all area preserving, orientation preserving, linear transforms on the time-frequency plane. The fractional Fourier transforms are a subset, namely the rotations. Both sets form (Lie-)groups under composition, and the rotations are a subgroup.</p>
| 28
|
Fourier transform
|
How to do addition of sound pressure RMS in a DFT
|
https://dsp.stackexchange.com/questions/34036/how-to-do-addition-of-sound-pressure-rms-in-a-dft
|
<p>Based on <a href="https://dsp.stackexchange.com/a/14935/23552">this</a> answer. Once you have your DFT in the unit sound pressure RMS, what is the correct method for getting the sound pressure RMS sum over multiple bins (if for example you're trying to get the total RMS over a frequency band)?</p>
|
<p>The power spectrum of the DFT is the square of the magnitudes and is related to the power spectrum of the discrete-time signal by Parseval's theorem:</p>
<p>$$E_x=\sum_{n=0}^{N-1}\lvert x[n]\rvert ^2=\frac{1}{N}\sum_{k=0}^{N-1}\lvert X[k]\rvert^2.$$</p>
<p>The RMS is just the square root of the power. (I assume you've already taken care of the 1/N since you mentioned your DFT is already in the correct units).</p>
<p>$$\textrm{RMS}_{x}=\sqrt{\sum_{k=0}^{N-1}\lvert X[k]\rvert^2}.$$</p>
<p>You can compute the RMS of a range of bins $[i,j]$ by constraining $k$.</p>
<p>$$\textrm{RMS}_{x_{ij}}=\sqrt{\sum_{k=i}^{j}\lvert X[k]\rvert^2}.$$</p>
| 29
|
Fourier transform
|
Time-dependent Fourier Transform (TDFT)?
|
https://dsp.stackexchange.com/questions/34967/time-dependent-fourier-transform-tdft
|
<p>In my DSP course we are learning right now about the Time-dependent Fourier Transform (TDFT), but I can't find any information about it online. </p>
<p>My professor said that it is similar to the Short-time Fourier Transform which I can find information about, except that whereas the STFT has a fixed signal and moving window, the TDFT has a moving signal and fixed window. </p>
<p>So does this TDFT usually go by a different name?</p>
| 30
|
|
Fourier transform
|
What is (Fourier frame length/2 + 1)?
|
https://dsp.stackexchange.com/questions/35528/what-is-fourier-frame-length-2-1
|
<p>If $N$ is the discrete Fourier transform's frame length, and $N/2$ is half the frame length, what would you call $\frac N2 + 1$? </p>
<pre><code>frame_length = N ;
frame_length_half = N/2 ;
? = N/2 + 1;
</code></pre>
|
<p>Using your notations: <code>number_frequency_bin</code>, i.e. the number of "non-redundant" frequency bins; the first (corresponding to DC) and the last (corresponding to Nyquist) are always real for a real signal. The first is a (normalized) sum of the samples, the last a sum of the samples with alternating signs (since $e^{i\pi n }=(-1)^n$).</p>
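<p>In numpy, for example, this is exactly the length of the one-sided real FFT output (a sketch):</p>

```python
import numpy as np

N = 8
x = np.random.default_rng(0).standard_normal(N)
X = np.fft.rfft(x)

print(len(X))  # N/2 + 1 = 5 non-redundant bins

# DC and Nyquist bins are (numerically) real for a real input:
# X[0] is the sum of the samples, X[-1] the alternating-sign sum
```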
| 31
|
Fourier transform
|
Taking the FFT of a sinusoidal signal and going back
|
https://dsp.stackexchange.com/questions/36596/taking-the-fft-of-a-sinusoidal-signal-and-going-back
|
<p>I am a computer science student and want to do some stuff with audio data. I want to use the DFT to analyze and synthesize some sounds. Before going to the more complex stuff I experimented with basic tones and found some issues I do not understand - therefore I want to ask for help in this physics forum.</p>
<p>I created a single $440\textrm{ Hz}$ sine wave with an audio program. From this I take the DFT and want to recreate the original signal. While doing so, I first omit the negative frequencies, thus take only the first $n/2$ frequencies, where $n$ is my block size (I divide the signal into blocks of length $n$). The sampling rate of my audio signal is $44100\textrm{ Hz}$.
So when I do this, and take the IFT from the resulting data, I can perfectly reconstruct my signal.</p>
<p>Now to my issues and questions: </p>
<ol>
<li>When I set the phase to $0$ in the Fourier space (the imaginary part), with different $n$, I recover a sine signal with some periodic noise.
<ul>
<li>I assume this is the phase difference which can be heard between the blocks?</li>
</ul></li>
<li>When I set $n$ to $44100$ (the sampling frequency), I get complete noise.
<ul>
<li>Why is that?</li>
</ul></li>
<li>Now I want to take only the strongest frequency amplitude wise (which in my opinion should perfectly work) - thus I set all other frequencies in the Fourier space to $0$ and then do the IFT. For different $n$ this kind of works, I get a sine signal with some periodic noise.
<ul>
<li>Why the noise? </li>
<li>Moreover, the frequency of the resulting tone changes with $n$. Why is that? </li>
<li>With $n = 44100$ I get complete silence. Why?</li>
</ul></li>
<li>When I set the phases to $0$ again I get the same results.</li>
</ol>
<p>I hope it's somewhat interesting for you too. Could you explain to me these "phenomena"?</p>
|
<p>Part of the problem is that you are using sine waves. The imaginary components of an FFT result contain all the information about pure sine waves (e.g. that are equivalent to a sin() function that starts with phase of zero at the start of the FFT aperture, and that are exactly integer periodic in aperture). If you use only the real part of an FFT result, you can only see cosine waves (not sine waves, periodic in aperture), thus your silent result. </p>
<p>For shorter FFTs, the real component of the result might contain windowing artifacts if the frequency of your sine wave is between FFT result bins for the given FFT length (e.g. not integer periodic in aperture). So, in synthesis, you are hearing an artifact, quantized in frequency.</p>
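<p>A numpy sketch of the first point: an integer-periodic sine lives entirely in the imaginary part of the FFT, a cosine entirely in the real part:</p>

```python
import numpy as np

N = 64
k0 = 5                    # integer number of cycles in the aperture
n = np.arange(N)
S = np.fft.fft(np.sin(2 * np.pi * k0 * n / N))
C = np.fft.fft(np.cos(2 * np.pi * k0 * n / N))

# real part of S and imaginary part of C are numerically zero;
# the energy sits in bins k0 and N-k0 with magnitude N/2
```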
| 32
|
Fourier transform
|
Application of the time-shifting property in case of Fourier-Transform of cosine
|
https://dsp.stackexchange.com/questions/36675/application-of-the-time-shifting-property-in-case-of-fourier-transform-of-cosine
|
<ol>
<li>Time-shifting property: $x[n-n_d] \xrightarrow{\mathscr{F}} e^{-j\omega n_d} X(e^{j\omega}) $</li>
<li>Fourier-Transform of cosine-signal: $\cos(\omega_0n) \xrightarrow{\mathscr{F}} \frac{1}{2}(\delta(\omega - \omega_0) + \delta(\omega+\omega_0)) $</li>
</ol>
<p>Combining 1. & 2. together, I am getting:
$\cos(\omega_0n - \frac{\pi}{2}) \xrightarrow{\mathscr{F}} e^{-j\frac{\pi}{2}}\frac{1}{2}(\delta(\omega - \omega_0) + \delta(\omega+\omega_0)) $, but instead the Fourier-Transform of</p>
<p>$$\cos(\omega_0n - \tfrac{\pi}{2})$$</p>
<p>is </p>
<p>$$\cos(\omega_0n - \tfrac{\pi}{2}) \xrightarrow{\mathscr{F}} \tfrac{1}{2}\delta(\omega - \omega_0)e^{-j\frac{\pi}{2}} + \tfrac{1}{2}\delta(\omega+\omega_0)e^{j\frac{\pi}{2}}$$</p>
<p>Can anyone tell what I'm doing wrong here?</p>
|
<p>Your calculation is wrong. First, you need to write the cosine as</p>
<p>$$
\cos(\omega_0n-\pi/2)=\cos\left(\omega_0(n-\tfrac{\pi}{2\omega_0})\right)
$$</p>
<p>i.e. the time-shift needs to be performed on the non-scaled version of the time variable $n$. Then, you apply the Fourier Transform:</p>
<p>$$
\mathscr{F}\left\{\cos\left(\omega_0(n-\tfrac{\pi}{2\omega_0})\right)\right\}=\exp\left(-j\tfrac{\pi\omega}{2\omega_0}\right)\tfrac{1}{2}\big(\delta(\omega-\omega_0)+\delta(\omega+\omega_0)\big)
$$</p>
<p>And now, with the sifting property of the Dirac impulse you end up with the correct result</p>
<p>$$
\mathscr{F}\left\{\cos\left(\omega_0(n-\tfrac{\pi}{2\omega_0})\right)\right\}=\tfrac{1}{2}e^{-j\pi/2}\delta(\omega-\omega_0)+\tfrac{1}{2}e^{j\pi/2}\delta(\omega+\omega_0)
$$</p>
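<p>A quick DFT check of the result (a sketch; with $\omega_0=2\pi k_0/N$ the quarter-period shift is $N/(4k_0)$ samples):</p>

```python
import numpy as np

N = 64
k0 = 4
n = np.arange(N)
# cos(w0*n - pi/2) = cos(w0*(n - N/(4*k0)))  with  w0 = 2*pi*k0/N
x = np.cos(2 * np.pi * k0 * n / N - np.pi / 2)

X = np.fft.fft(x)

# phase e^{-j pi/2} at the positive-frequency bin k0,
# phase e^{+j pi/2} at the negative-frequency bin N-k0
```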
| 33
|