| Unnamed: 0 (int64) | id (int64) | title (string) | question (string) | answer (string) | tags (string) | score (int64) |
|---|---|---|---|---|---|---|
6,400
| 66,241,524
|
How can I scrape a table's header that contains an image?
|
<p>I'm trying to scrape a table from a wiki website using the pandas library. The header consists of 5 parts: name, stars, image, health, notes.<br />
I successfully scraped name, stars and notes, but the "Health" header has an image instead of a string name.<br />
(I would like to display "Health" as a string instead of an image.)</p>
<pre><code>import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.width', None)
pd.set_option('display.max_colwidth', None)
url = pd.read_html("https://azurlane.koumakan.jp/List_of_Cargo#Min_Stats")
print("Type eq name: ")
eq_name = input(str())
# list of cargo
if eq_name.casefold() == "Type94".casefold():
# 40cm Type 94
df = url[1]
df = df.drop([1, 2, 3], axis = 0)
name_notes = df[['Name','Stars','Health','Notes']]
print(name_notes)
</code></pre>
|
<pre><code>df = url[1]
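# pandas labels the header cell that held only an image "Unnamed: 3"; give it a real name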
df.rename(columns={'Unnamed: 3':'Health'}, inplace=True)
</code></pre>
<p>result:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">Name</th>
<th style="text-align: right;">Image</th>
<th style="text-align: left;">Stars</th>
<th style="text-align: right;">Health</th>
<th style="text-align: left;">Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">40cm Type 94 Naval Gun Parts (Cargo)</td>
<td style="text-align: right;">nan</td>
<td style="text-align: left;">β
β
β
β
β
</td>
<td style="text-align: right;">450</td>
<td style="text-align: left;">Gun Components: When equipped by Kashino: increases the FP of your Main Fleet and your CBs by 10%.</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">Aviation Materials (Cargo)</td>
<td style="text-align: right;">nan</td>
<td style="text-align: left;">β
β
β
β
</td>
<td style="text-align: right;">250</td>
<td style="text-align: left;">Aviation Materials: When equipped by a Munition Ship: increases the AVI of your fleet by 8%.</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">Small-Caliber Naval Gun Parts (Cargo)</td>
<td style="text-align: right;">nan</td>
<td style="text-align: left;">β
β
β
β
</td>
<td style="text-align: right;">250</td>
<td style="text-align: left;">Small-Caliber Gun Parts: When equipped by a Munition Ship: increases the FP of your Vanguard by 8%.</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: left;">Torpedo Materials (Cargo)</td>
<td style="text-align: right;">nan</td>
<td style="text-align: left;">β
β
β
β
</td>
<td style="text-align: right;">250</td>
<td style="text-align: left;">Torpedo Materials: When equipped by a Munition Ship: increases the TRP of your fleet by 8%.</td>
</tr>
</tbody>
</table>
</div>
|
html|python-3.x|pandas|dataframe|web-scraping
| 0
|
6,401
| 66,006,228
|
3D CNN using keras-tensorflow on pycharm ( Process finished with exit code 137 (interrupted by signal 9: SIGKILL) )
|
<p>I'm doing a 3D CNN to classify LUNA16 data set (CT scan data set), I'm using keras-tensorflow on pycharm.</p>
<p>I'm following this code
<a href="https://github.com/keras-team/keras-io/blob/master/examples/vision/3D_image_classification.py" rel="nofollow noreferrer">https://github.com/keras-team/keras-io/blob/master/examples/vision/3D_image_classification.py</a></p>
<p>and I just modified it to fit my (*.mhd) data (this is whats I'm running now) <a href="https://github.com/Mustafa-MS/3D-CNN-LUNA16/blob/main/3DCNN.py" rel="nofollow noreferrer">https://github.com/Mustafa-MS/3D-CNN-LUNA16/blob/main/3DCNN.py</a></p>
<p>Each time I run the code, a different error comes up and stops the process, but all the errors are about memory:</p>
<ul>
<li>Process finished with exit code 137 (interrupted by signal 9: SIGKILL)</li>
<li>Out of memory</li>
<li>W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 3355443200 exceeds 10% of free system memory.<br />
W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 369098752 exceeds 10% of free system memory.<br />
W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 3355443200 exceeds 10% of free system memory. <br /></li>
</ul>
<p>My model summary is</p>
<pre><code> _________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 128, 128, 64, 1)] 0
_________________________________________________________________
conv3d (Conv3D) (None, 126, 126, 62, 64) 1792
_________________________________________________________________
max_pooling3d (MaxPooling3D) (None, 63, 63, 31, 64) 0
_________________________________________________________________
batch_normalization (BatchNo (None, 63, 63, 31, 64) 256
_________________________________________________________________
conv3d_1 (Conv3D) (None, 61, 61, 29, 64) 110656
_________________________________________________________________
max_pooling3d_1 (MaxPooling3 (None, 30, 30, 14, 64) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 30, 30, 14, 64) 256
_________________________________________________________________
conv3d_2 (Conv3D) (None, 28, 28, 12, 128) 221312
_________________________________________________________________
max_pooling3d_2 (MaxPooling3 (None, 14, 14, 6, 128) 0
_________________________________________________________________
batch_normalization_2 (Batch (None, 14, 14, 6, 128) 512
_________________________________________________________________
conv3d_3 (Conv3D) (None, 12, 12, 4, 256) 884992
_________________________________________________________________
max_pooling3d_3 (MaxPooling3 (None, 6, 6, 2, 256) 0
_________________________________________________________________
batch_normalization_3 (Batch (None, 6, 6, 2, 256) 1024
_________________________________________________________________
global_average_pooling3d (Gl (None, 256) 0
_________________________________________________________________
dense (Dense) (None, 512) 131584
_________________________________________________________________
dropout (Dropout) (None, 512) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 513
=================================================================
Total params: 1,352,897
Trainable params: 1,351,873
Non-trainable params: 1,024
</code></pre>
<p>The dimension of each CT scan is
<code>(128, 128, 64, 1)</code></p>
<p>The shape of training and validating is <br />
<code>xtrain = (800, 128, 128, 64) / xval = (88, 128, 128, 64) / ytrain = (800,) / yval = (88,)</code><br /></p>
<p>The batch size = 2 <br /></p>
<p>I'm monitoring my model using wandb; you can check it out here: <a href="https://wandb.ai/mustafa-ms/monitor-gpu?workspace=user-mustafa-ms" rel="nofollow noreferrer">https://wandb.ai/mustafa-ms/monitor-gpu?workspace=user-mustafa-ms</a>
<br /> It shows that the model is consuming 100% of both system memory and GPU memory just before it stops working. <br />
<a href="https://i.stack.imgur.com/7bec5.png" rel="nofollow noreferrer">picture of gpu memory Allocated %</a> <br />
<a href="https://i.stack.imgur.com/aIS6v.png" rel="nofollow noreferrer">picture of system memory allocated %</a> <br />
I know there are tons of answers about the same problem, but none of them fixes mine. <br />
The CNN is not big, the batch size is only 2, and my data is only 888 CT scans!
My PC has 32 GB of memory and an RTX 2080 Ti GPU. <br />
The full log is here: <br /></p>
<pre><code>import sys; print('Python %s on %s' % (sys.version, sys.platform))
sys.path.extend(['/home/mustafa/home/mustafa/project/LUNAMASK', '/home/mustafa/home/mustafa/project/LUNAMASK'])
PyDev console: starting.
Python 3.8.7 (default, Dec 21 2020, 20:10:35)
[GCC 7.5.0] on linux
runfile('/home/mustafa/home/mustafa/project/LUNAMASK/3DCNN.py', wdir='/home/mustafa/home/mustafa/project/LUNAMASK')
2021-02-02 05:40:34.999468: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
wandb: Currently logged in as: mustafa-ms (use `wandb login --relogin` to force relogin)
2021-02-02 05:40:37.643336: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
wandb: Tracking run with wandb version 0.10.15
wandb: Syncing run clean-violet-3
wandb: βοΈ View project at https://wandb.ai/mustafa-ms/monitor-gpu
wandb: View run at https://wandb.ai/mustafa-ms/monitor-gpu/runs/4y03vu5s
wandb: Run data is saved locally in /home/mustafa/home/mustafa/project/LUNAMASK/wandb/run-20210202_054036-4y03vu5s
wandb: Run `wandb offline` to turn off syncing.
y train length 800
y test length 88
xtrain = (800, 128, 128, 64)
xval = (88, 128, 128, 64)
ytrain = (800,)
yval = 2021-02-02 08:27:05.599099: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
(88,)
2021-02-02 08:27:05.606801: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2021-02-02 08:27:05.657391: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-02-02 08:27:05.658293: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:09:00.0 name: GeForce RTX 2080 Ti computeCapability: 7.5
coreClock: 1.605GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2021-02-02 08:27:05.658325: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-02-02 08:27:05.667884: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-02-02 08:27:05.667982: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-02-02 08:27:05.674032: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-02-02 08:27:05.676356: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-02-02 08:27:05.684058: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-02-02 08:27:05.686346: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-02-02 08:27:05.687068: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-02-02 08:27:05.687204: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-02-02 08:27:05.688185: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-02-02 08:27:05.689043: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-02-02 08:27:05.690061: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-02 08:27:05.690188: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-02-02 08:27:05.691084: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:09:00.0 name: GeForce RTX 2080 Ti computeCapability: 7.5
coreClock: 1.605GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2021-02-02 08:27:05.691117: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-02-02 08:27:05.691137: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-02-02 08:27:05.691152: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-02-02 08:27:05.691165: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-02-02 08:27:05.691179: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-02-02 08:27:05.691192: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-02-02 08:27:05.691205: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-02-02 08:27:05.691218: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-02-02 08:27:05.691292: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-02-02 08:27:05.692206: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-02-02 08:27:05.693051: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-02-02 08:27:05.693086: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-02-02 08:27:06.001440: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-02-02 08:27:06.001467: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267] 0
2021-02-02 08:27:06.001473: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0: N
2021-02-02 08:27:06.001663: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-02-02 08:27:06.002169: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-02-02 08:27:06.002643: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-02-02 08:27:06.003097: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9508 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:09:00.0, compute capability: 7.5)
2021-02-02 08:27:06.004312: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 3355443200 exceeds 10% of free system memory.
2021-02-02 08:27:06.983079: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 369098752 exceeds 10% of free system memory.
2021-02-02 08:27:07.406900: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 3355443200 exceeds 10% of free system memory.
2021-02-02 08:27:09.210752: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
2021-02-02 08:27:09.229323: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 3699750000 Hz
Dimension of the CT scan is: (128, 128, 64, 1)
Model: "3dcnn"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 128, 128, 64, 1)] 0
_________________________________________________________________
conv3d (Conv3D) (None, 126, 126, 62, 64) 1792
_________________________________________________________________
max_pooling3d (MaxPooling3D) (None, 63, 63, 31, 64) 0
_________________________________________________________________
batch_normalization (BatchNo (None, 63, 63, 31, 64) 256
_________________________________________________________________
conv3d_1 (Conv3D) (None, 61, 61, 29, 64) 110656
_________________________________________________________________
max_pooling3d_1 (MaxPooling3 (None, 30, 30, 14, 64) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 30, 30, 14, 64) 256
_________________________________________________________________
conv3d_2 (Conv3D) (None, 28, 28, 12, 128) 221312
_________________________________________________________________
max_pooling3d_2 (MaxPooling3 (None, 14, 14, 6, 128) 0
_________________________________________________________________
batch_normalization_2 (Batch (None, 14, 14, 6, 128) 512
_________________________________________________________________
conv3d_3 (Conv3D) (None, 12, 12, 4, 256) 884992
_________________________________________________________________
max_pooling3d_3 (MaxPooling3 (None, 6, 6, 2, 256) 0
_________________________________________________________________
batch_normalization_3 (Batch (None, 6, 6, 2, 256) 1024
_________________________________________________________________
global_average_pooling3d (Gl (None, 256) 0
_________________________________________________________________
dense (Dense) (None, 512) 131584
_________________________________________________________________
dropout (Dropout) (None, 512) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 513
=================================================================
Total params: 1,352,897
Trainable params: 1,351,873
Non-trainable params: 1,024
_________________________________________________________________
2021-02-02 08:27:26.194010: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 369098752 exceeds 10% of free system memory.
2021-02-02 08:27:26.397041: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 3355443200 exceeds 10% of free system memory.
Epoch 1/5
2021-02-02 08:27:30.705650: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-02-02 08:27:31.247841: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-02-02 08:27:31.879674: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
400/400 - 105s - loss: 0.6529 - acc: 0.6325 - val_loss: 0.8511 - val_acc: 0.6705
Process finished with exit code 137 (interrupted by signal 9: SIGKILL)
</code></pre>
|
<p>Right now the code is running flawlessly on Google Colab.</p>
<p>I think the limitation was with my GPU (an RTX 2080 Ti) versus Google Colab's Nvidia T4.</p>
<p>I preprocessed the data and saved it as numpy arrays, uploaded the arrays to Google Colab, and ran the code from the point after preprocessing.
Now everything is working fine!</p>
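<p>A minimal sketch of that preprocess-once, load-later workflow (the file names here are illustrative, not from the original project):</p>
<pre><code>import numpy as np

# locally, after preprocessing the .mhd scans into arrays
np.save('xtrain.npy', xtrain)  # shape (800, 128, 128, 64)
np.save('ytrain.npy', ytrain)  # shape (800,)

# in the Colab notebook, after uploading the .npy files
xtrain = np.load('xtrain.npy')
ytrain = np.load('ytrain.npy')
</code></pre>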
|
python|tensorflow|keras|deep-learning
| 0
|
6,402
| 66,115,893
|
Multiple instances of tensorflow lite with NNAPI delegate
|
<p>The NNAPI delegate in TensorFlow Lite uses shared memory for the input and output tensors of the graph. However, the names of the shared memory pools are hardcoded (<code>"input_pool"</code> and <code>"output_pool"</code>):</p>
<pre><code> // Create shared memory pool for inputs and outputs.
nn_input_memory_.reset(
new NNMemory(nnapi_, "input_pool", total_input_byte_size));
nn_output_memory_.reset(
new NNMemory(nnapi_, "output_pool", total_output_byte_size));
</code></pre>
<p>Now, what happens if multiple instances of TensorFlow Lite with the NNAPI delegate are executed? As I understand it, all of them will map and use the same shared memory pool. Doesn't this lead to a race condition?</p>
|
<p>The name given to the shared memory region is just used as a label. Using the same name when creating two different shared memory regions won't cause the same memory to be used. See for example <a href="https://cs.android.com/android/platform/superproject/+/master:system/core/libcutils/ashmem-dev.cpp;l=370" rel="nofollow noreferrer">the case where no name is provided</a>, where all regions are created with the name "none".</p>
|
tensorflow|tensorflow-lite|nnapi
| 1
|
6,403
| 46,251,557
|
Tensorflow : Is there way to feed the crop value (central fraction) to tf.image.central_crop?
|
<p>I want to central crop images, with different crop fraction for each image.</p>
|
<p>Let's say that you have the images saved in a list called <code>images</code>, where <code>images[0]</code> is the first one and so on. Let's assume that the central crop fractions are in a list called <code>central</code>, where <code>central[i]</code> is the fraction that you want for the <em>i-th</em> image. Now try this:</p>
<pre><code>cropped_images = [tf.image.central_crop(img, fraction) for img, fraction in zip(images, central)]
</code></pre>
<p>As a result one gets <code>tf.image.central_crop(images[0], central[0])</code>: the first image cropped with the first central fraction, and so on.</p>
<p><strong>UPDATE</strong>:</p>
<p>If you want to process your batch only in sub-batches of size <em>n</em>, then use a generator. This will spare your memory.</p>
<pre><code>def gen_central_cropped(images, central, n):
    # yield the central crops in sub-batches of n images to limit memory use
    for i in range(0, len(images), n):
        yield [tf.image.central_crop(img, fraction)
               for img, fraction in zip(images[i:i+n], central[i:i+n])]

for n_central_cropped in gen_central_cropped(images, central, n):
    # write or process the n central-cropped images here
    ...
</code></pre>
<p>So, say your program reads a batch of 256 images from your 70k-image training set. You want to central crop this batch with different fractions, but your memory cannot handle it. Then use the generator with an acceptable sub-batch size <em>n</em>, let's say 10. Now your memory will hold only 10 cropped images at a time. You can either write them to another dataset or process them further, then write/delete them.</p>
|
python|tensorflow
| 0
|
6,404
| 46,257,905
|
pandas_datareader not working in jupyter-notebook (Anaconda)
|
<pre><code>ModuleNotFoundError                       Traceback (most recent call last)
in ()
      3 from matplotlib import style
      4 import pandas as pd
----> 5 import pandas_datareader.data as web
      6
      7 style.use('ggplot')
</code></pre>
<p>ModuleNotFoundError: No module named 'pandas_datareader'</p>
|
<p>pandas-datareader has to be installed separately.</p>
<p>If using Anaconda, try:</p>
<pre><code>conda install -c https://conda.anaconda.org/anaconda pandas-datareader
</code></pre>
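<p>Outside of conda environments, installing with pip should work as well:</p>
<pre><code>pip install pandas-datareader
</code></pre>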
|
python-3.x|pandas|anaconda|jupyter-notebook
| 4
|
6,405
| 58,253,826
|
Eigen values and vectors calculation error in python
|
<p>I'm trying to obtain the eigenvectors and values of any matrix 'X' in a specific format. I used the <code>linalg</code> function to get the eigen pairs but the expected output format is different from my result. For example, <code>v</code> and <code>e</code> denote the eigenvalues and eigenvectors. <code>v1 = 1</code>, <code>e1 = [1,0,0]</code>, <code>v2 = 2</code>, <code>e2 = [0,1,0]</code>, <code>v3 = 3</code>, <code>e3 = [0,0,1]</code>.</p>
<p>So in this example, the eigen pairs of matrix X should be <code>Ep = [(1, [1,0,0]), (2, [0,1,0]), (3, [0,0,1])]</code>.
Here <code>Ep[0]</code> represents the first eigen pair <code>(1, [1,0,0])</code>, where the eigenvalue is 1 and the eigenvector is <code>[1,0,0]</code>.</p>
<p>Can you please help me code this part further?</p>
<pre><code>e,v = np.linalg.eigh(X)
</code></pre>
|
<p><strong>np.linalg.eigh</strong></p>
<p>First, one should note that <code>np.linalg.eigh</code> calculates the eigenvalues of a Hermitian matrix -- this will not apply for all matrices. If you want to calculate the eigenvalues of any matrix <code>X</code> you should probably switch to something like <code>np.linalg.eig</code>:</p>
<pre><code>import numpy as np
L = np.diag([1,2,3])
V = np.vstack(([1,0,0],[0,1,0],[0,0,1]))
# X = V@L@V.T (eigendecomposition)
X = V@L@V.T
w,v = np.linalg.eig(X)
assert (np.diag(w) == L).all()
assert (v == V).all()
</code></pre>
<p><strong>Eigenpairs</strong></p>
<p>To construct the eigenpairs, just use some list comprehension:</p>
<pre><code>import numpy as np
# X = V@L@V.T (eigendecomposition)
X = np.diag([1,2,3])
w, v = np.linalg.eig(X)
# the eigenvectors are the COLUMNS of v, so pair the values with v.T
Ep = [(val, vec.tolist()) for val, vec in zip(w, v.T)]
</code></pre>
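<p>Accessing the pairs then works as described in the question; for this diagonal example:</p>
<pre><code>print(Ep[0])  # (1.0, [1.0, 0.0, 0.0]) -> eigenvalue 1 with eigenvector [1, 0, 0]
</code></pre>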
<p>Enjoy!</p>
|
python|numpy|matrix|eigenvalue|eigenvector
| 0
|
6,406
| 58,550,023
|
Pandas apply not working inside Spark parallelized code
|
<p>I am trying to use Pandas "apply" inside the parallelized code but the "apply" is not working at all. Can we use "apply" inside the code which gets distributed to the executors while using Spark (parallelize on RDD)?</p>
<p>Code:</p>
<pre><code>def testApply(k):
    return pd.DataFrame({'col1': k, 'col2': [k*2]*5})

def testExec(x):
    df = pd.DataFrame({'col1': range(0, 10)})
    ddf = pd.DataFrame(columns=['col1', 'col2'])
    ## In my case the below line doesn't get executed at all
    res = df.apply(lambda row: testApply(row.pblkGroup) if row.pblkGroup % 2 == 0 else pd.DataFrame(), axis=1)

list1 = [1, 2, 3, 4]
sc = SparkContext.getOrCreate()
testRdd = sc.parallelize(list1)
output = testRdd.map(lambda x: testExec(x)).collect()
</code></pre>
|
<p>I am also getting the same error: <code>TypeError: an integer is required (got type bytes)</code>.</p>
<pre><code>from pyspark.context import SparkContext
TypeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_11288/3937779276.py in <module>
----> 1 from pyspark.context import SparkContext
~\miniconda3\lib\site-packages\pyspark\__init__.py in <module>
49
50 from pyspark.conf import SparkConf
---> 51 from pyspark.context import SparkContext
52 from pyspark.rdd import RDD, RDDBarrier
53 from pyspark.files import SparkFiles
~\miniconda3\lib\site-packages\pyspark\context.py in <module>
29 from py4j.protocol import Py4JError
30
---> 31 from pyspark import accumulators
32 from pyspark.accumulators import Accumulator
33 from pyspark.broadcast import Broadcast, BroadcastPickleRegistry
~\miniconda3\lib\site-packages\pyspark\accumulators.py in <module>
95 import socketserver as SocketServer
96 import threading
---> 97 from pyspark.serializers import read_int, PickleSerializer
98
99
~\miniconda3\lib\site-packages\pyspark\serializers.py in <module>
69 xrange = range
70
---> 71 from pyspark import cloudpickle
72 from pyspark.util import _exception_message
73
~\miniconda3\lib\site-packages\pyspark\cloudpickle.py in <module>
143
144
--> 145 _cell_set_template_code = _make_cell_set_template_code()
146
147
~\miniconda3\lib\site-packages\pyspark\cloudpickle.py in _make_cell_set_template_code()
124 )
125 else:
--> 126 return types.CodeType(
127 co.co_argcount,
128 co.co_kwonlyargcount,
TypeError: an integer is required (got type bytes)
</code></pre>
<p>For what it's worth, this particular <code>TypeError</code> in <code>cloudpickle._make_cell_set_template_code</code> is a known incompatibility between older PySpark releases and Python 3.8+; upgrading PySpark, or running under Python 3.7, typically resolves it.</p>
|
python|apache-spark|pyspark|apply|pandas-apply
| 0
|
6,407
| 69,286,904
|
How to get evenly-spaced data quickly with a MultiIndex in pandas
|
<p>I have a dataframe indexed by stock ticker and date that's pretty sparse, something like:</p>
<pre><code>df = pd.DataFrame({
'ticker': ['SPY', 'GOOGL', 'GOOGL', 'TSLA', 'TSLA'],
'date': ['2021-01-01', '2021-09-01', '2021-09-21', '2021-09-21', '2021-09-22'],
'price': [430.0, 2500.0, 2600.0, 700.0, 710.0],
}).astype({'date': 'datetime64[ns]'}).set_index(['ticker', 'date'])
price
ticker date
SPY 2021-01-01 430.0
GOOGL 2021-09-01 2500.0
2021-09-21 2600.0
TSLA 2021-09-21 700.0
2021-09-22 710.0
</code></pre>
<p>I want to end up with a dataframe that has the last three days of data, as best we know, e.g.,</p>
<pre><code>want = pd.DataFrame({
'ticker': ['SPY', 'SPY', 'SPY', 'GOOGL', 'GOOGL', 'GOOGL', 'TSLA', 'TSLA', 'TSLA'],
'date': ['2021-09-20', '2021-09-21', '2021-09-22', '2021-09-20', '2021-09-21', '2021-09-22', '2021-09-20', '2021-09-21', '2021-09-22'],
'price': [430.0, 430.0, 430.0, 2500.0, 2600.0, 2600.0, 0.0, 700.0, 710.0],
}).astype({'date': 'datetime64[ns]'}).set_index(['ticker', 'date'])
price
ticker date
SPY 2021-09-20 430.0
2021-09-21 430.0
2021-09-22 430.0
GOOGL 2021-09-20 2500.0
2021-09-21 2600.0
2021-09-22 2600.0
TSLA 2021-09-20 0.0
2021-09-21 700.0
2021-09-22 710.0
</code></pre>
<p>I've figured out a couple of ways to do this, so far I think the clearest is a groupby with a custom apply, i.e.,</p>
<pre><code>OUTPUT_DATES = pd.date_range(
    start=pd.Timestamp.today() - pd.DateOffset(days=2),
    end=pd.Timestamp.today(),
    freq='D')

def LastNDays(df):
    return (
        df
        .reset_index(level=0, drop=True)
        .reindex(OUTPUT_DATES, method='ffill')
        .rename_axis('date')
        .fillna(0))

df.groupby(level=0).apply(LastNDays)
</code></pre>
<p>And this works. However, it's also <em>really</em> slow for my actual dataset (several hundred thousand data points). I think it's all the reindexing? This seems like a pretty common task for pandas (take some weird stock data, make it conform to data points) so I feel like there's probably a better way to do this but I don't even know what to search for. Any ideas on how to make this faster?</p>
|
<p>You'll probably see a huge improvement in performance using an <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html" rel="nofollow noreferrer"><code>asof</code> merge</a>. First create all the rows you need from the cartesian product of unique ticker labels and the last three dates. Then perform the merge to bring over the closest value (same date or in the past), construct and sort the MultiIndex, and fill missing values with 0.</p>
<pre><code>import pandas as pd
from itertools import product

dates = pd.date_range(pd.to_datetime('today').normalize(), freq='-1D', periods=3)
# DatetimeIndex(['2021-09-22', '2021-09-21', '2021-09-20'], dtype='datetime64[ns]', freq='-1D')

df1 = pd.DataFrame(product(dates, df.index.get_level_values('ticker').unique()),
                   columns=['date', 'ticker'])

result = (pd.merge_asof(df1.sort_values('date'), df.reset_index().sort_values('date'),
                        by='ticker', on='date', direction='backward')
          .set_index(['ticker', 'date'])
          .sort_index()
          .fillna(0, downcast='infer')
          )
</code></pre>
<hr />
<pre><code>print(result)
price
ticker date
GOOGL 2021-09-20 2500
2021-09-21 2600
2021-09-22 2600
SPY 2021-09-20 430
2021-09-21 430
2021-09-22 430
TSLA 2021-09-20 0
2021-09-21 700
2021-09-22 710
</code></pre>
|
python|pandas
| 3
|
6,408
| 44,740,150
|
Measure correlation without counting some values
|
<p>I have an array:</p>
<pre><code>a = np.array([[1,2,3], [0,0,3], [1,2,0],[0,2,3]])
</code></pre>
<p>which looks like:</p>
<pre><code>array([[1, 2, 3],
[0, 0, 3],
[1, 2, 0],
[0, 2, 3]])
</code></pre>
<p>I need to calculate pairwise correlations, <strong>but without</strong> taking <code>0</code>s into consideration. So, for example, the correlation between "1" and "2" should be calculated between the arrays:</p>
<pre><code>array([[1, 2],
[1, 2]])
</code></pre>
<p><strong>Problem:</strong> NumPy and pandas methods will take the zeros into account, and I can't exclude them.
So, <strong>I need</strong> a faster, ideally built-in, method for this.</p>
<p>I wrote my own algorithm, but it works really slowly on large arrays.</p>
<pre><code>correlations = np.zeros((1000, 1000))
for i, column_i in enumerate(np.transpose(array_data)):
    for j, column_j in enumerate(np.transpose(array_data[:, i+1:])):
        if i != j:
            column_i = np.reshape(column_i, (column_i.shape[0], 1))
            column_j = np.reshape(column_j, (column_j.shape[0], 1))
            values = np.concatenate([column_i, column_j], axis=1)
            values = [row for row in values if (row[0] != 0) & (row[1] != 0)]
            values = np.array(values)
            correlation = np.corrcoef(values[:, 0], values[:, 1])[0][1]
            correlations[i, j] = correlation
</code></pre>
|
<p>Actually, I decided to change all zeros in the data to <code>np.nan</code>:</p>
<pre><code>for i, e_i in enumerate(array_data):
    for j, e_j in enumerate(e_i):
        if e_j == 0:
            array_data[i, j] = np.NaN
</code></pre>
<p>and then pandas' <code>DataFrame.corr()</code> worked fine...</p>
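<p>For reference, a vectorized version of the whole idea (a sketch using the small array from the question; <code>DataFrame.corr</code> excludes NaN values pairwise):</p>
<pre><code>import numpy as np
import pandas as pd

a = np.array([[1, 2, 3],
              [0, 0, 3],
              [1, 2, 0],
              [0, 2, 3]], dtype=float)
a[a == 0] = np.nan                      # vectorized replacement, no Python loops
correlations = pd.DataFrame(a).corr()   # pairwise correlations, NaNs dropped per pair
</code></pre>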
|
python|numpy|correlation
| 0
|
6,409
| 44,770,573
|
Python pandas large database using excel
|
<p>I am comfortable using Python/Excel/pandas for my DataFrames. I do not know SQL or database languages.</p>
<p>I am about to start on a new project that will include around 4,000 different Excel files I have. I will have each file opened and saved as a DataFrame, for all 4,000 files, and then do my math on them. This will include many computations such as sums, linear regression, and other normal stats.</p>
<p>My question is: I know how to do this with 5-10 files, no problem. Am I going to run into a problem with memory, or with the program taking hours to run? The files are around 300-600 kB. I don't use any functions in Excel; it only holds data. Would I be better off having 4,000 separate files or 4,000 tabs? Or is this something a computer can handle without a problem? Thanks for looking into this; I have not worked with a lot of data before and would like to know if I am really screwing up before I begin.</p>
|
<p>You definitely want to use a database. At nearly 2 GB of raw data, you won't be able to do much with it without choking your computer; even reading it in would take a while.</p>
<p>If you feel comfortable with Python and pandas, I guarantee you can learn SQL very quickly. The basic syntax can be learned in an hour, and you won't regret learning it for future jobs; it's a very useful skill.</p>
<p>I'd recommend you install <a href="https://www.postgresql.org/" rel="nofollow noreferrer">PostgreSQL</a> locally and then use <a href="https://www.sqlalchemy.org/" rel="nofollow noreferrer">SQLAlchemy</a> to create a database connection (or engine) to it. Then you'll be happy to hear that pandas actually has <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html" rel="nofollow noreferrer">df.to_sql</a> and <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql.html" rel="nofollow noreferrer">pd.read_sql</a>, making it really easy to push and pull data to and from it as you need it. Also, SQL can do any basic math you want, like summing, counting, etc.</p>
<p>Connecting and writing to a SQL database is as easy as:</p>
<pre><code>from sqlalchemy import create_engine
my_db = create_engine('postgresql+psycopg2://username:password@localhost:5432/database_name')
df.to_sql('table_name', my_db, if_exists='append')
</code></pre>
<p>I added <code>if_exists='append'</code> at the end because you'll most likely want to add all 4,000 files to one table.</p>
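<p>Reading the data back for analysis is just as simple:</p>
<pre><code>df = pd.read_sql('SELECT * FROM table_name', my_db)
</code></pre>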
|
python|pandas
| 2
|
6,410
| 60,760,573
|
Multiply 2 different dataframe with same dimension and repeating rows
|
<p>I am trying to multiply two data frames:</p>
<p>Df1</p>
<pre><code>Name|Key |100|101|102|103|104
Abb AB 2 6 10 5 1
Bcc BC 1 3 7 4 2
Abb AB 5 1 11 3 1
Bcc BC 7 1 4 5 0
</code></pre>
<p>Df2</p>
<pre><code>Key_1|100|101|102|103|104
AB 10 2 1 5 1
BC 1 10 2 2 4
</code></pre>
<p>Expected Output</p>
<pre><code>Name|Key |100|101|102|103|104
Abb AB 20 12 10 25 1
Bcc BC 1 30 14 8 8
Abb AB 50 2 11 15 1
Bcc BC 7 10 8 10 0
</code></pre>
<p>I have tried grouping Df1 and then multiplying with Df2, but it didn't work.
Please help me with how to approach this problem.</p>
|
<p>You can <code>rename</code> df2's <code>Key_1</code> to <code>Key</code> (matching df1), then set the index and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mul.html" rel="nofollow noreferrer"><code>mul</code></a> on <code>level=1</code>:</p>
<pre><code>df1.set_index(['Name','Key']).mul(df2.rename(columns={'Key_1':'Key'})
.set_index('Key'),level=1).reset_index()
</code></pre>
<p>Or similar:</p>
<pre><code>df1.set_index(['Name','Key']).mul(df2.set_index('Key_1')
.rename_axis('Key'),level=1).reset_index()
</code></pre>
<p>As correctly pointed out by @QuangHoang, you can do it without renaming too:</p>
<pre><code>df1.set_index(['Name','Key']).mul(df2.set_index('Key_1'),level=1).reset_index()
</code></pre>
<hr>
<pre><code> Name Key 100 101 102 103 104
0 Abb AB 20 12 10 25 1
1 Bcc BC 1 30 14 8 8
2 Abb AB 50 2 11 15 1
3 Bcc BC 7 10 8 10 0
</code></pre>
|
python-3.x|pandas|dataframe
| 4
|
6,411
| 61,051,911
|
How to ensure each worker use exactly one CPU?
|
<p>I'm implementing <a href="https://arxiv.org/abs/1910.06591" rel="nofollow noreferrer">SEED</a> using ray, and therefore, I define a <code>Worker</code> class as follows</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import gym
class Worker:
def __init__(self, worker_id, env_name, n):
import os
os.environ['OPENBLAS_NUM_THREADS'] = '1'
self._id = worker_id
self._n_envs = n
self._envs = [gym.make(env_name)
for _ in range(self._n_envs)]
def reset_env(self, env_id):
return self._envs[env_id].reset()
def env_step(self, env_id, action):
return self._envs[env_id].step(action)
</code></pre>
<p>Besides that, there is a loop in the <code>Learner</code> that invokes methods of <code>Worker</code> when necessary to interact with the environment.</p>
<p>As <a href="https://ray.readthedocs.io/en/latest/troubleshooting.html#no-speedup" rel="nofollow noreferrer">this document</a> suggests, I want to make sure each worker use exactly one CPU resource. Here's some of my attempts:</p>
<ol>
<li>When creating a <code>worker</code>, I set <code>num_cpus=1</code>: <code>worker=ray.remote(num_cpus=1)(Worker).remote(...)</code></li>
<li>I checked my numpy configuration using <code>np.__config__.show()</code> which gave me the following information</li>
</ol>
<pre><code>blas_mkl_info:
  NOT AVAILABLE
blis_info:
  NOT AVAILABLE
openblas_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_mkl_info:
  NOT AVAILABLE
openblas_lapack_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
</code></pre>
<p>I noticed that numpy is using OpenBLAS, so I set <code>os.environ['OPENBLAS_NUM_THREADS'] = '1'</code> in the <code>Worker</code> class as the above code does following <a href="https://ray.readthedocs.io/en/latest/auto_examples/plot_pong_example.html#parallelizing-gradients" rel="nofollow noreferrer">this instruction</a>.</p>
<p>After both are done, I opened <code>top</code> but still noticed that each Worker uses <code>130%-180%</code> CPU, exactly the same as before. I've also tried to set <code>os.environ['OPENBLAS_NUM_THREADS'] = '1'</code> at the beginning of the main Python script, or to use <code>export OPENBLAS_NUM_THREADS=1</code>, but nothing helps. What can I do now?</p>
|
<p>You can pin each worker to a core. For example, you can use something like <code>psutil.Process().cpu_affinity([i])</code> to pin the core with index <code>i</code> in each worker.</p>
<p>Also, before you pin the CPU, make sure you know which CPU has been assigned to the worker by this API: <a href="https://github.com/ray-project/ray/blob/203c077895ac422b80e31f062d33eadb89e66768/python/ray/worker.py#L457" rel="nofollow noreferrer">https://github.com/ray-project/ray/blob/203c077895ac422b80e31f062d33eadb89e66768/python/ray/worker.py#L457</a></p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>ray.init(num_cpus=4)
@ray.remote(num_cpus=1)
def f():
import numpy
resources = ray.ray.get_resource_ids()
cpus = [v[0] for v in resources['CPU']]
psutil.Process().cpu_affinity(cpus)
</code></pre>
|
python|numpy|ray
| 2
|
6,412
| 71,787,733
|
Pandas filter/subset columns based on conditions
|
<p>I have about 300 columns that are basically encodings of categorical variables. I'd like to drop the columns where the <code>sum</code> of the column's values is less than, say, 3.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'id': [0, 1, 2, 3, 4, 5],
'col1': [0, 0, 0, 0, 0, 1],
'col2': [0, 1, 0, 0, 1, 0],
'col3': [1, 1, 0, 1, 1, 0],
'col4': [0, 1, 1, 1, 1, 0]
})
df.sum(axis=0)
</code></pre>
<p>Expected output:</p>
<pre><code>id col3 col4
0 1 0
1 1 1
2 0 1
3 1 1
4 1 1
5 0 0
</code></pre>
|
<p>You can use <code>loc</code> to use a boolean indexing on the columns:</p>
<pre><code>N = 3
out = df.loc[:, df.sum(axis=0) > N]
</code></pre>
<p>If <code>id</code> is not actually numeric or if <code>N</code> can be a very large number, then maybe <code>set_index</code> with <code>id</code> first, then use boolean indexing and <code>reset_index</code> back to original:</p>
<pre><code>df = df.set_index('id')
df = df.loc[:, df.sum(axis=0) > N].reset_index()
</code></pre>
<p>Output:</p>
<pre><code> id col3 col4
0 0 1 0
1 1 1 1
2 2 0 1
3 3 1 1
4 4 1 1
5 5 0 0
</code></pre>
|
python|pandas
| 2
|
6,413
| 71,465,857
|
How to conditionally aggregate a Pandas dataframe
|
<p>I have a dataframe with some data that I'm going to run simulations on. Each row is a datetime and a value. Because of the nature of the problem, I need to keep the original frequency of 1 hour when the value is above a certain threshold. When it's not, I could resample the data and run that part of the simulation on lower frequency data, in order to speed up the simulation.</p>
<p>My idea is to somehow group the dataframe by day (since I've noticed there are many whole days where the value stays below the threshold), check the max value over each group, and if the max is below the threshold then aggregate the data in that group into a single mean value.</p>
<p>Here's a minimal working example:</p>
<pre><code>import pandas as pd
import numpy as np
threshold = 3
idx = pd.date_range("2018-01-01", periods=27, freq="H")
df = pd.Series(np.append(np.ones(26), 5), index=idx).to_frame("v")
print(df)
</code></pre>
<p>Output:</p>
<pre><code> v
2018-01-01 00:00:00 1.0
2018-01-01 01:00:00 1.0
2018-01-01 02:00:00 1.0
2018-01-01 03:00:00 1.0
2018-01-01 04:00:00 1.0
2018-01-01 05:00:00 1.0
2018-01-01 06:00:00 1.0
2018-01-01 07:00:00 1.0
2018-01-01 08:00:00 1.0
2018-01-01 09:00:00 1.0
2018-01-01 10:00:00 1.0
2018-01-01 11:00:00 1.0
2018-01-01 12:00:00 1.0
2018-01-01 13:00:00 1.0
2018-01-01 14:00:00 1.0
2018-01-01 15:00:00 1.0
2018-01-01 16:00:00 1.0
2018-01-01 17:00:00 1.0
2018-01-01 18:00:00 1.0
2018-01-01 19:00:00 1.0
2018-01-01 20:00:00 1.0
2018-01-01 21:00:00 1.0
2018-01-01 22:00:00 1.0
2018-01-01 23:00:00 1.0
2018-01-02 00:00:00 1.0
2018-01-02 01:00:00 1.0
2018-01-02 02:00:00 5.0
</code></pre>
<p>The desired output of the operation would be this dataframe:</p>
<pre><code> v
2018-01-01 00:00:00 1.0
2018-01-02 00:00:00 1.0
2018-01-02 01:00:00 1.0
2018-01-02 02:00:00 5.0
</code></pre>
<p>where the first value is the mean of the first day.</p>
<p>I think I'm getting close</p>
<pre><code>grouped = df.resample("1D")
for name, group in grouped:
    if group["v"].max() <= 3:
        group['v'].agg("mean")
</code></pre>
<p>but I'm unsure how to actually apply the aggregation to the desired groups, and get a dataframe back.</p>
<p>Any help is greatly appreciated.</p>
|
<p>So I found a solution.</p>
<pre><code>grouped = df.resample("1D")
def conditionalAggregation(x):
if x['v'].max() <= 3:
idx = [x.index[0].replace(hour=0, minute=0, second=0, microsecond=0)]
return pd.DataFrame(x['v'].max(), index=idx, columns=['v'])
else:
return x
conditionallyAggregated = grouped.apply(conditionalAggregation)
conditionallyAggregated = conditionallyAggregated.droplevel(level=0)
conditionallyAggregated
</code></pre>
<p>This gives the following df:</p>
<pre><code> v
2018-01-01 00:00:00 1.0
2018-01-02 00:00:00 1.0
2018-01-02 01:00:00 1.0
2018-01-02 02:00:00 5.0
</code></pre>
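<p>For larger frames, a sketch of an apply-free alternative (same threshold of 3; it collapses only days whose daily maximum stays at or below the threshold and keeps full resolution elsewhere):</p>
<pre><code>daily_max = df['v'].groupby(df.index.normalize()).transform('max')

low = df[daily_max <= 3]                        # days entirely at/below the threshold
low_daily = low.resample('1D').mean().dropna()  # one mean row per such day
high = df[daily_max > 3]                        # full-resolution rows elsewhere

result = pd.concat([low_daily, high]).sort_index()
</code></pre>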
|
python|pandas|dataframe
| 1
|
6,414
| 42,221,022
|
Pandas subtract all values from one value, move to next value and repeat
|
<p>I have a df with two columns 'a' and 'b' </p>
<pre><code>[a] [b]
11 100
2 100
10 100
</code></pre>
<p>What I need is an extra column 'c', which represents following calculation:</p>
<p>((11-2) + (11-10)) / 100</p>
<p>((2-11) + (2-10)) / 100</p>
<p>((10-11) + (10-2)) / 100</p>
<pre><code>[a] [b] [c]
11 100 0.1
2 100 -0.17
10 100 0.07
</code></pre>
<p>It should be highly dynamic, so the row count of [a] can differ. Speed is also a concern; that's why I want to avoid for loops.</p>
<p>I tried to use .apply() and .pivot() to get it in an easy format to just call sub(), but it didn't work out.</p>
|
<p>I'll give a numpy example. For</p>
<pre><code>>>> a = numpy.array([11, 2, 10])
>>> b = numpy.array([100, 100, 100])
</code></pre>
<p>you can do</p>
<pre><code>>>> c = (len(a) * a - sum(a)) / b
</code></pre>
<p>The same works for a pandas DataFrame, for example (a sketch assuming columns named <code>a</code> and <code>b</code> as in the question):</p>
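<pre><code>df['c'] = (len(df) * df['a'] - df['a'].sum()) / df['b']
</code></pre>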
|
python|pandas|dataframe|sum
| 2
|
6,415
| 69,782,130
|
Set alignment of columns in pandastable
|
<p>I am trying to set different alignment types for different columns in <a href="https://pandastable.readthedocs.io/en/latest/" rel="nofollow noreferrer">pandastable</a>.</p>
<p>Meanwhile I have found a way to set the alignment for the complete table using the global configuration, as shown here (in this example, the default alignment "w" can be overwritten by "e" by setting the "align" option):</p>
<pre><code>from tkinter import *
from pandastable import Table, TableModel, config

class TestApp(Frame):
    """Basic test frame for the table"""
    def __init__(self, parent=None):
        self.parent = parent
        Frame.__init__(self)
        self.main = self.master
        self.main.geometry('600x400+200+100')
        self.main.title('Table app')
        f = Frame(self.main)
        f.pack(fill=BOTH, expand=1)
        df = TableModel.getSampleData()
        self.table = pt = Table(f, dataframe=df,
                                showtoolbar=True, showstatusbar=True)
        options = config.load_options()
        options = {'align': 'e'}  # set alignment of complete table from "w" to "e"
        config.apply_options(options, pt)
        pt.show()
        return

app = TestApp()
# launch the app
app.mainloop()
</code></pre>
<p>However I have no hint of how to solve this for individual columns.</p>
|
<p>You can use <code>pt.columnformats['alignment'][colname]</code> to set the alignment of an individual column with name <code>colname</code>.</p>
<p>For example, to change the alignment for the <code>label</code> column (one of the columns in the data model returned by <code>.getSampleData()</code>):</p>
<pre class="lang-py prettyprint-override"><code>pt.columnformats['alignment']['label'] = 'w'
</code></pre>
|
python|tkinter|pandastable
| 2
|
6,416
| 69,673,717
|
Python: numpy.sum returns wrong output (numpy version 1.21.3)
|
<p>Here I have a 1D array:</p>
<pre><code>>>> import numpy as np
>>> a = np.array([75491328, 75491328, 75491328, 75491328, 75491328, 75491328, 75491328, 75491328,
75491328, 75491328, 75491328, 75491328, 75491328, 75491328, 75491328, 75491328,
75491328, 75491328, 75491328, 75491328, 75491328, 75491328, 75491328, 75491328,
75491328, 75491328, 75491328, 75491328, 75491328, 75491328, 75491328, 75491328,
75491328, 75491328, 75491328, 75491328, 75491328, 75491328, 75491328, 75491328,
75491328, 75491328, 75491328, 75491328, 75491328, 75491328, 75491328, 75491328,
75491328, 75491328, 75491328, 75491328, 75491328, 75491328, 75491328, 75491328,
75491328, 75491328, 75491328, 75491328, 75491328, 75491328, 75491328, 75491328])
</code></pre>
<p>And the sum of all elements in the array should be <code>75491328*8*8 = 4831444992</code>. However, when I use <code>np.sum</code>, I get a different output.</p>
<pre><code>>>> np.sum(a)
536477696
</code></pre>
<p>That's what happens in my Jupyter Notebook using the latest version of NumPy. But in Coursera's Jupyter Notebook, which uses the old NumPy version 1.18.4, everything is fine.</p>
<p>How can I fix this bug? Is it a bug or is it because of me?</p>
|
<p>The problem is caused by integer overflow: on platforms where NumPy's default integer type is 32-bit (e.g. Windows), the sum wraps around modulo 2<sup>32</sup>, and indeed 4831444992 mod 2<sup>32</sup> = 536477696, exactly the output you got. Change the datatype of the <code>np.array</code> to <code>int64</code>:</p>
<pre><code>import numpy as np
np.array([Your values here], dtype=np.int64)
</code></pre>
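<p>Alternatively, you can keep the array as-is and just accumulate in 64 bits via the <code>dtype</code> argument of <code>np.sum</code>:</p>
<pre><code>np.sum(a, dtype=np.int64)  # 4831444992
</code></pre>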
|
python|arrays|numpy
| 1
|
6,417
| 72,437,909
|
Playing with my Apple Health Data but the csv is harder to wrangle than I have encountered
|
<p>My pandas dataframe comes out as:</p>
<pre><code>         BURNED CALORIES
Date     00 - 01  01 - 02  02 - 03  ....  23 - 24
1/13/19     17.6    11.53     3.24          28.6
1/14/19      1.5     1.46     2.41         27.44
</code></pre>
<p>The top row is the hourly breakdown, but the only top-level column label is BURNED CALORIES, sitting over the 23-24 hour column. I feel as though I need to pivot it, but I can't pull the Date column out as an actual column. Any suggestions? Ideally, I would like to create a time series and play with hourly data.</p>
<p>This is what I would like to get to:</p>
<p>Date Hour Burned Calories</p>
<p>Thank you!!</p>
<p><a href="https://i.stack.imgur.com/NBrLx.png" rel="nofollow noreferrer">DataFrame</a></p>
|
<p>IIUC, is that what you're looking for?</p>
<pre><code>df2=df.melt(id_vars='Date', var_name='variable')
df2.rename(columns={'variable':'hours', 'value':'calories burned'}, inplace=True)
df2
</code></pre>
<pre><code> Date hours calories burned
0 1/13/19 00-01 17.6
1 1/14/19 00-01 1.5
2 1/13/19 01-02 11.53
3 1/14/19 01-02 1.46
4 1/13/19 02-03 3.24
5 1/14/19 02-03 2.41
6 1/13/19 23-24 28.6
7 1/14/19 23-24 27.44
</code></pre>
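<p>From there, to get an actual time series (a sketch; it assumes the hour labels keep the <code>00-01</code> form shown above and uses the start of each hour range):</p>
<pre><code>df2['hour'] = df2['hours'].str.split('-').str[0].astype(int)
df2['timestamp'] = pd.to_datetime(df2['Date']) + pd.to_timedelta(df2['hour'], unit='h')
ts = df2.set_index('timestamp')['calories burned'].sort_index()
</code></pre>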
|
pandas|time|time-series
| 1
|
6,418
| 50,317,538
|
calculate end time from start time and duration (minutes) using pandas. Error on standard approach
|
<p>I have a pandas dataframe:</p>
<pre><code>Start_time          | Duration(minutes)
2018-03-01 16:37:09 | 155
2018-03-01 07:02:10 | 5
2018-03-01 13:07:09 | 250
2018-03-01 20:46:34 | 180
2018-03-01 07:45:49 | 5
</code></pre>
<p>I want the output as:</p>
<pre><code>Start_time          | End_time
2018-03-01 16:37:09 | 2018-03-01 19:12:09
2018-03-01 07:02:10 | 2018-03-01 07:07:10
2018-03-01 13:07:09 | 2018-03-01 17:17:09
2018-03-01 20:46:34 | 2018-03-01 23:46:34
2018-03-01 07:45:49 | 2018-03-01 07:50:49
</code></pre>
<blockquote>
<p>I am using the following code and getting the required output for 5-10 rows, with a warning; but when I apply the same code to the full data set it raises <strong>TypeError: Cannot compare type 'Timestamp' with type 'int'</strong>:</p>
<p>time_temp['End_time'] = pd.DatetimeIndex(time_temp['Start_time']) + pd.to_timedelta(time_temp['Duration'], unit='m')</p>
</blockquote>
<p>Error: Cannot compare type 'Timestamp' with type 'int'</p>
<p>Warning:</p>
<pre><code>/usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py:1:
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""Entry point for launching an IPython kernel.
</code></pre>
|
<p>You need change <code>DatetimeIndex</code> to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a> for remove first error:</p>
<blockquote>
<p>Cannot compare type 'Timestamp' with type 'int' ** </p>
</blockquote>
<pre><code>time_temp['End_time'] = (pd.to_datetime(time_temp['Start_time']) +
pd.to_timedelta(time_temp['Duration'], unit='m'))
print (time_temp)
Start_time Duration End_time
0 2018-03-01 16:37:09 155 2018-03-01 19:12:09
1 2018-03-01 07:02:10 5 2018-03-01 07:07:10
2 2018-03-01 13:07:09 250 2018-03-01 17:17:09
3 2018-03-01 20:46:34 180 2018-03-01 23:46:34
4 2018-03-01 07:45:49 5 2018-03-01 07:50:49
</code></pre>
<p>To avoid the second <code>SettingWithCopyWarning</code>, you obviously need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.copy.html" rel="nofollow noreferrer"><code>copy</code></a> if <code>time_temp</code> was created by filtering, because otherwise, if you modify values in <code>time_temp</code> later, you will find that the modifications do not propagate back to the original data, and pandas warns about this:</p>
<pre><code>time_temp = df[some filtering].copy()
</code></pre>
<p>There should be another problems, check <a href="https://stackoverflow.com/questions/20625582/how-to-deal-with-settingwithcopywarning-in-pandas">how to deal with settingwithcopywarning in pandas</a></p>
|
python|pandas
| 3
|
6,419
| 45,584,907
|
Flatten layer of PyTorch build by sequential container
|
<p>I am trying to build a CNN with the sequential container of PyTorch; my problem is that I cannot figure out how to flatten the layer.</p>
<pre><code>main = nn.Sequential()
self._conv_block(main, 'conv_0', 3, 6, 5)
main.add_module('max_pool_0_2_2', nn.MaxPool2d(2,2))
self._conv_block(main, 'conv_1', 6, 16, 3)
main.add_module('max_pool_1_2_2', nn.MaxPool2d(2,2))
main.add_module('flatten', make_it_flatten)
</code></pre>
<p>What should I put in the "make_it_flatten"?
I tried to flatten the main but it do not work, main do not exist something call view</p>
<pre><code>main = main.view(-1, 16*3*3)
</code></pre>
|
<p>This might not be exactly what you are looking for, but you can simply create your own <code>nn.Module</code> that flattens any input, which you can then add to the <code>nn.Sequential()</code> object:</p>
<pre><code>class Flatten(nn.Module):
def forward(self, x):
return x.view(x.size()[0], -1)
</code></pre>
<p>The <code>x.size()[0]</code> will select the batch dim, and <code>-1</code> will compute all remaining dims to fit the number of elements, thereby flattening any tensor/Variable.</p>
<p>And using it in <code>nn.Sequential</code>:</p>
<pre><code>main = nn.Sequential()
self._conv_block(main, 'conv_0', 3, 6, 5)
main.add_module('max_pool_0_2_2', nn.MaxPool2d(2,2))
self._conv_block(main, 'conv_1', 6, 16, 3)
main.add_module('max_pool_1_2_2', nn.MaxPool2d(2,2))
main.add_module('flatten', Flatten())
</code></pre>
|
python|conv-neural-network|pytorch
| 18
|
6,420
| 45,713,159
|
print mismatch items in two array
|
<p>I want to compare two arrays (floating point, to 4 decimal places) and print the mismatched items.
I used this code:</p>
<pre><code>>>> from numpy.testing import assert_allclose as np_assert_allclose
>>> x=np.array([1,2,3])
>>> y=np.array([1,0,3])
>>> np_assert_allclose(x,y, rtol=1e-4)
AssertionError:
Not equal to tolerance rtol=0.0001, atol=0
(mismatch 33.33333333333333%)
x: array([1, 2, 3])
y: array([1, 0, 3])
</code></pre>
<p>The problem with this code shows up with big arrays:</p>
<pre><code>(mismatch 0.0015104228617559556%)
x: array([ 0.440088, 0.35994 , 0.308225, ..., 0.199546, 0.226758, 0.2312 ])
y: array([ 0.44009, 0.35994, 0.30822, ..., 0.19955, 0.22676, 0.2312 ])
</code></pre>
<p>I cannot find which values are mismatched. How can I see them?</p>
|
<p>Just use</p>
<pre><code>~np.isclose(x, y, rtol=1e-4) # array([False, True, False], dtype=bool)
</code></pre>
<p>e.g.</p>
<pre><code>d = ~np.isclose(x, y, rtol=1e-4)
print(x[d]) # [2]
print(y[d]) # [0]
</code></pre>
<p>or, to get the indices</p>
<pre><code>np.where(d) # (array([1]),)
</code></pre>
|
python-3.x|numpy
| 7
|
6,421
| 62,877,986
|
AttributeError: module 'pandas' has no attribute 'df'
|
<p>For a current project, I am planning to clean a pandas DataFrame of its null values. For this purpose, I want to use <code>Pandas.DataFrame.fillna</code>, which is apparently a solid solution for data cleanups.</p>
<p>When running the below code, I am however receiving the following error <code>AttributeError: module 'pandas' has no attribute 'df'</code>. I tried several options to rewrite the line <code>df = pd.df().fillna</code>, none of which changed the outcome.</p>
<p>Is there any smart tweak to get this running?</p>
<pre><code>import string
import json
import pandas as pd
# Loading and normalising the input file
file = open("sp500.json", "r")
data = json.load(file)
df = pd.json_normalize(data)
df = pd.df().fillna
</code></pre>
|
<p>After <code>pd.json_normalize(data)</code>, the <code>df</code> variable is already a <code>DataFrame</code> instance; there is no <code>pd.df</code>. Call <code>fillna</code> on the DataFrame itself (note that it also needs a fill value or method):</p>
<pre><code>df = pd.json_normalize(data)
df = df.fillna(0)  # fillna() with no arguments raises an error; 0 here is just an example value
</code></pre>
|
python|pandas|dataframe
| 2
|
6,422
| 71,231,466
|
Muliply only numeric values from object type column pandas
|
<pre><code>df = pd.DataFrame({'col1': [1, 2, 4, 5, 'object']})
df['col1'] * 5
</code></pre>
<p>This code multiplies the string 'object' by 5 and repeats the string 5 times, but I want to multiply only the numeric values; strings should be left as they are.
I have also tried converting the column to numeric using <code>to_numeric</code> with <code>errors='coerce'</code>, but it converts all strings to NaN.</p>
|
<p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.isnumeric.html#pandas.Series.str.isnumeric" rel="nofollow noreferrer"><code>isnumeric</code></a> to identify rows with numerics values.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'col1': [1, 2, 4, 5, 'object']})
mask_ = df['col1'].astype(str).str.isnumeric()
df.loc[mask_, 'col1'] = df.loc[mask_, 'col1'] * 5
</code></pre>
<hr />
<pre><code> col1
0 5
1 10
2 20
3 25
4 object
</code></pre>
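<p>One caveat: <code>str.isnumeric</code> is <code>False</code> for floats and negatives (e.g. <code>'1.5'</code>, <code>'-2'</code>), so for general numeric strings a <code>to_numeric</code>-based sketch may be safer; the coerced <code>NaN</code>s are simply filled back with the original values:</p>
<pre><code>num = pd.to_numeric(df['col1'], errors='coerce')  # non-numeric entries become NaN
df['col1'] = (num * 5).fillna(df['col1'])         # put the untouched strings back
</code></pre>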
|
python|pandas
| 0
|
6,423
| 52,273,275
|
regex replacing multiple codes with values in a column
|
<p>I have a dataframe(df) with different columns. One of the column (col1) is as follows:</p>
<pre><code> col1
----
0 1
1 2
2 1-2
3 1,2
4 1-3
5 3
</code></pre>
<p>I am using .replace method in python/pandas to replace the codes in col1 using the code:</p>
<pre><code> df.col1.replace(to_replace=({'1':'Normal','2':'1-2 more than normal','3':'3-4 more than normal'}), regex=True)
</code></pre>
<p>I am using <code>regex=True</code> because there are codes like 1-2 in cells where 1 and 2 have different meanings as mentioned in the dictionary.</p>
<p><strong>Output</strong></p>
<pre><code> col1
--------
0 Normal
1 1-2 more than normal
2 Normal-1-2 more than normal
3 Normal,1-2 more than normal
4 Normal-1-2 more than normal-3 more than normal
5 1-2 more than normal-3 more than normal
</code></pre>
<p><strong>Desired Output</strong></p>
<pre><code> col1
--------
0 Normal
1 1-2 more than normal
2 Normal-1-2 more than normal
3 Normal,1-2 more than normal
4 Normal-3-4 more than normal
5 3-4 more than normal
</code></pre>
<p><strong>The Problem:</strong></p>
<p>If I do not consider the fourth row (1-3) then all the codes are replaced correctly, except for the code 3. I further experimented with adding a row with only code 3 and there I found that regex first replaces the values for code 3 and then in those values replaces the codes with values from the dictionary. </p>
<p>It is strange as I am running the regex code/command only once. </p>
<p>One solution is that instead of using numbers in the dictionary values i could use English words, e.g. instead of writing <code>1-2 more than normal</code>, i can write <code>one-two more than normal</code> and then it works. But I want to keep the numbers as they are easy to interpret. </p>
<p>Any suggestions? </p>
|
<p>Repeating your work I don't seem to get the same error as you do with input </p>
<p><code>df = pd.DataFrame({'col1' : ['1', '2', '1-2', '1,2', '1-3', '3']})</code></p>
<p>and applying the same .replace method:</p>
<p><code>df.col1.replace(to_replace=({'1':'Normal','2':'1-2 more than normal','3':'3-4 more than normal'}), regex=True)</code></p>
<p>My output matches your Desired Output</p>
<p>Output:</p>
<pre><code> col1
---------
0 Normal
1 1-2 more than normal
2 Normal-1-2 more than normal
3 Normal,1-2 more than normal
4 Normal-3-4 more than normal
5 3-4 more than normal
</code></pre>
<p>So I can't really see any problem.</p>
<p>Beyond that though I would consider what transformation you are doing here, and how readable the output is. If you are evaluating each value against some pre-determined limits, why not create another column with a label for each row indicating which classification group it is a member of? Hope that helps!</p>
|
python|string|pandas|dictionary
| 0
|
6,424
| 52,046,971
|
sigmoid_cross_entropy loss function from tensorflow for image segmentation
|
<p>I'm trying to understand what the <code>sigmoid_cross_entropy</code> loss function does with regards to image segmentation neural networks:</p>
<p>Here is the relevant Tensorflow source <a href="https://github.com/tensorflow/tensorflow/blob/600caf99897e82cd0db8665acca5e7630ec1a292/tensorflow/python/ops/nn_impl.py#L107" rel="noreferrer">code</a>:</p>
<pre><code>zeros = array_ops.zeros_like(logits, dtype=logits.dtype)
cond = (logits >= zeros)
relu_logits = array_ops.where(cond, logits, zeros)
neg_abs_logits = array_ops.where(cond, -logits, logits)
return math_ops.add(
relu_logits - logits * labels,
math_ops.log1p(math_ops.exp(neg_abs_logits)), name=name)
</code></pre>
<p>My main question is why is there a <code>math_ops.add()</code> at the return? Is the add referring to the summation of the loss for every pixel in the image or is the summation doing something different? I'm not able to properly follow the dimensional changes to deduce what the summation is doing. </p>
|
<p><code>sigmoid_cross_entropy_with_logits</code> is used in multilabel classification.</p>
<p>The whole problem can be divided into a binary cross-entropy loss for each class prediction, since the classes are independent (e.g. 2 is both even and prime at the same time). Finally, collect all the per-prediction losses and average them.</p>
<p>Below is an example:</p>
<pre><code>import tensorflow as tf
logits = tf.constant([[0, 1],
[1, 1],
[2, -4]], dtype=tf.float32)
y_true = tf.constant([[1, 1],
[1, 0],
[1, 0]], dtype=tf.float32)
# tensorflow api
loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=y_true,
logits=logits)
# manual computing
probs = tf.nn.sigmoid(logits)
loss_t = tf.reduce_mean(y_true * (-tf.log(probs)) +
(1 - y_true) * (-tf.log(1 - probs)))
config = tf.ConfigProto()
config.gpu_options.allow_growth = True # pylint: disable=no-member
with tf.Session(config=config) as sess:
loss_ = loss.eval()
loss_t_ = loss_t.eval()
print('sigmoid_cross_entropy: {: .3f}\nmanual computing: {: .3f}'.format(
loss_, loss_t_))
------------------------------------------------------------------------------
#output:
sigmoid_cross_entropy: 0.463
manual computing: 0.463
</code></pre>
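<p>To address the original question about <code>math_ops.add</code> directly: that add is <em>element-wise</em>, not a summation over pixels. The snippet implements the numerically stable form <code>max(x, 0) - x * z + log(1 + exp(-|x|))</code> of the per-element loss <code>z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))</code>, and the add simply combines its two terms. The returned tensor has the same shape as <code>logits</code>; any summing or averaging over pixels happens afterwards via a reduction such as <code>reduce_mean</code>.</p>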
|
python|tensorflow|machine-learning|neural-network|deep-learning
| 7
|
6,425
| 52,422,265
|
Python MultiIndex Column Rename
|
<p>I am a <strong>NEWBIE</strong> to the pandas framework. I tried the various things shown below. I want to rename a column name in a pandas dataframe; can someone please guide me with this? The column is a multilevel pivot column.</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.read_excel(r'D:\cod_sheets\18_09_18.xlsx')
df_pivot = pd.pivot_table(df,index=["REGION NAME","Month"],values=["CNNO","AMOUNT COLLECTED"],
columns=["CDS_STATUS",] ,aggfunc={"CNNO":'count',"AMOUNT COLLECTED":'sum'}, margins=True)
df_pivot.rename(columns = {"CDS_STATUS":"CDS"})
</code></pre>
<p><a href="https://i.stack.imgur.com/gJEkS.png" rel="nofollow noreferrer">enter image description here</a></p>
<p><a href="https://i.stack.imgur.com/gJEkS.png" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>You just need to add the argument to <code>df_pivot.rename(columns = {"CDS_STATUS":"CDS"})</code> that is <code>df_pivot.rename(columns = {"CDS_STATUS":"CDS"},inplace=True)</code></p>
|
python|pandas|pivot-table|pandas-groupby
| 1
|
6,426
| 52,107,106
|
Visualizing cumsum python
|
<p>This is related to post <a href="https://stackoverflow.com/questions/52104500/rolling-difference-using-pandas">Rolling Difference using Pandas</a></p>
<p>Now that I have this dataframe below, i am trying to visualize this. </p>
<pre><code>Item Add Subtracts Month Net_Items Monthly_Available_Items
C 68 30 1 38 38
C 58 34 2 24 62
C 64 47 3 17 79
C 263 81 4 182 261
C 95 104 5 -9 252
C 38 63 6 -25 227
C 115 95 7 20 247
C 97 112 8 -15 232
</code></pre>
<p>The code and graph are below:</p>
<pre><code>plt.figure(figsize=(20,10))
fig, ax1 = plt.subplots(figsize = (20,10))
ax1 = sns.pointplot(x='Month', y='value', hue='variable',data=stack_df)
ax1.legend(loc = 'upper left')
ax2 = sns.barplot(x = 'Month', y = 'Monthly_Available_Items', data =
stack_df, color = 'purple')
ax1.set_ylabel("Count of Items")
</code></pre>
<p>Comparison of Add and subtracts vs available monthly inventory:
<img src="https://i.stack.imgur.com/kS8D2.png" alt="Comparison of Add and subtracts vs available monthly inventory"></p>
<p>Questions:</p>
<ol>
<li><p>How can i add the legend to the ax2 axis. This represent the Monthly Available items for each month. i tried </p>
<p>ax2.legend() but it not work</p></li>
<li><p>How can i create similar plots for each item(A,B,C,D,E)</p></li>
</ol>
|
<p>You can add a legend by specifying the <code>label</code></p>
<pre><code>ax2 = sns.barplot(x = 'Month', y = 'Monthly_Available_Items',
data = stack_df, color = 'purple',
label = "Monthly_Available_Items")
ax2.legend() # will show the legend for the barplot
</code></pre>
<p>If you want to plot multiple plots based on Item column you could use <code>groupby</code> to plot.</p>
<p>Assuming this is our dataframe <code>df:</code></p>
<pre><code> Item Add Subtracts Month Net_Items Monthly_Available_Items
0 C 68 30 1 38 38
1 D 58 34 2 24 62
2 C 64 47 3 17 79
3 C 263 81 4 182 261
4 D 95 104 5 -9 252
5 D 38 63 6 -25 227
6 D 115 95 7 20 247
7 C 97 112 8 -15 232
</code></pre>
<p>The easiest way to plot multiple plots for each unique value from <code>Item</code> column would be using pandas <code>plot</code> method. We'll first use <code>melt</code> and then <code>groupby</code>.</p>
<pre><code>melt = df.melt(id_vars=('Item', 'Month', 'Monthly_Available_Items'),
value_vars=['Add','Subtracts'])
# sort the melted df by item column
melt.sort_values("Item", inplace=True)
# using groupby to plot by item column.
ax = df.groupby("Item").plot(x='Month', y = "Monthly_Available_Items",
kind='bar', color='purple')
# list of axes generated by ax
axes = [i for i in ax]
# list of unique items from item column eg.,C,D
items = melt.Item.unique()
for i in range(len(items)):
    sns.pointplot(x='Month', y='value', hue='variable',
                  data=melt[melt.Item == items[i]], ax=axes[i])
# customize
axes[0].set_title("Plot for Item C")
axes[1].set_title("Plot for Item D")
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/p21Vw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p21Vw.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/kGMFf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kGMFf.png" alt="enter image description here"></a></p>
|
python|pandas|matplotlib|seaborn
| 0
|
6,427
| 60,478,009
|
Convert pandas DataFrame to 2-layer nested JSON using groupby
|
<p>Assume that I have a pandas dataframe called <code>df</code> similar to:</p>
<pre><code>source tables
src1 table1
src1 table2
src1 table3
src2 table1
src2 table2
</code></pre>
<p>I'm currently able to output a JSON file that iterates through the various sources, creating an object for each, with the code below:</p>
<pre><code>all_data = []
for src in df['source']:
source_data = {
src: {
}
}
all_data.append(source_data)
with open('data.json', 'w') as f:
json.dump(all_data, f, indent = 2)
</code></pre>
<p>This yields the following output:</p>
<pre><code>[
{
"src1": {}
},
{
"src2": {}
}
]
</code></pre>
<p>Essentially, what I want to do is also iterate through those list of sources and add the table objects corresponding to each source respectively. My desired output would look similar to as follows:</p>
<pre><code>[
{
"src1": {
"table1": {},
"table2": {},
"table3": {}
}
},
{
"src2": {
"table1": {},
"table2": {}
}
}
]
</code></pre>
<p>Any assistance on how I can modify my code to also iterate through the tables column and append that to the respective source values would be greatly appreciated. Thanks in advance.</p>
|
<p>Is this what you're looking for?</p>
<pre><code>data = [
{k: v}
for k, v in df.groupby('source')['tables'].agg(
lambda x: {v: {} for v in x}).items()
]
with open('data.json', 'w') as f:
json.dump(data, f, indent=2)
</code></pre>
<p>There are two layers to the answer here. To group the tables by source, use <code>groupby</code> first with an inner comprehension. You can use a list comprehension to assemble your data in this specific format overall.</p>
<pre><code>[
{
"src1": {
"table1": {},
"table2": {},
"table3": {}
}
},
{
"src2": {
"table1": {},
"table2": {}
}
}
]
</code></pre>
<hr>
<p><strong>Example using <code>.apply</code> with arbitrary data</strong></p>
<pre><code>df['tables2'] = 'abc'
def func(g):
return {x: y for x, y in zip(g['tables'], g['tables2'])}
data = [{k: v} for k, v in df.groupby('source').apply(func).items()]
data
# [{'src1': {'table1': 'abc', 'table2': 'abc', 'table3': 'abc'}},
# {'src2': {'table1': 'abc', 'table2': 'abc'}}]
</code></pre>
<p>Note that this will not work with pandas 1.0 (probably because of a bug)</p>
|
python|json|pandas|dataframe|object
| 1
|
6,428
| 72,769,631
|
How use two columns as a single condition to get results in pyspark
|
<p>I have:</p>
<pre><code>+-----------+------+
|ColA |ColB |
+-----------+------+
| A | B|
| A | D|
| C | U|
| B | B|
| A | B|
+-----------+------+
</code></pre>
<p>and I want to get:</p>
<pre><code>+-----------+------+
|ColA |ColB |
+-----------+------+
| A | D|
| C | U|
| B | B|
+-----------+------+
</code></pre>
<p>I want to "remove" all rows with the combination of "colA == A and colB == B".
When I tried this SQL Statement</p>
<p><code>SELECT * FROM table where (colA != 'A' and colB != 'B')</code></p>
<p>worked fine.</p>
<p>But when I try to translate to spark (or even to pandas) I got an error.</p>
<p>Py4JError: An error occurred while calling o109.and. Trace:...</p>
<pre><code>#spark
sparkDF.where((sparkDF['colA'] != 'A' & sparkDF['colB'] != 'B')).show()
#pandas
pandasDF[(pandasDF["colA"]!="A" & pandasDF["colB"]!="B")]
</code></pre>
<p>What am I doing wrong here?</p>
|
<p>You need to add parentheses around each comparison and use <code>|</code> for bitwise OR:</p>
<pre><code>pandasDF[(pandasDF["colA"]!="A") | (pandasDF["colB"]!="B")]
sparkDF.where((sparkDF['colA'] != 'A') | (sparkDF['colB'] != 'B')).show()
</code></pre>
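<p>For context: the parentheses are required because Python's bitwise operators <code>&</code> and <code>|</code> bind more tightly than comparisons, so <code>sparkDF['colA'] != 'A' & sparkDF['colB'] != 'B'</code> is evaluated as <code>sparkDF['colA'] != ('A' & sparkDF['colB']) != 'B'</code>, which is what triggers the Py4J error. Also note that removing the combination <code>colA == 'A' and colB == 'B'</code> means keeping <code>NOT (colA == 'A' AND colB == 'B')</code>, which by De Morgan's law is <code>(colA != 'A') | (colB != 'B')</code>, as used above.</p>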
|
pandas|apache-spark|pyspark
| 1
|
6,429
| 72,799,969
|
install tensorflow-decision-forests in windows
|
<p>I have to install tensorflow-decision-forests in windows. I tried:</p>
<pre><code>pip install tensorflow-decision-forests
pip3 install tensorflow-decision-forests
pip3 install tensorflow_decision_forests --upgrade
</code></pre>
<p>I get:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement tensorflow-decision-forests (from versions: none)
ERROR: No matching distribution found for tensorflow-decision-forests
</code></pre>
<p>I have (pip show tensorflow):</p>
<pre><code>Name: tensorflow
Version: 2.9.1
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: packages@tensorflow.org
License: Apache 2.0
</code></pre>
<p>AFAIK this is the latest version. Any ideas?</p>
|
<p><code>tensorflow-decision-forests</code> 0.2.6 <a href="https://pypi.org/project/tensorflow-decision-forests/#files" rel="nofollow noreferrer">provides</a> binary wheels only for Linux and no source code. So <code>pip</code> cannot install it on non-Linux platforms.</p>
<p>There're <a href="https://github.com/tensorflow/decision-forests/blob/main/documentation/installation.md" rel="nofollow noreferrer">instructions</a> on how to compile it on Linux and Mac OS X but that's all. Perhaps it's not possible to compile it anywhere else.</p>
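<p>If staying on the Windows machine is a requirement, a commonly suggested workaround is to run the package under WSL2 (Windows Subsystem for Linux), where the Linux wheels install normally.</p>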
|
python|tensorflow|pip|tensorflow-decision-forests
| 1
|
6,430
| 72,805,397
|
How to create a new dataframe that contains the value changes from multiple columns between two exisitng DFs
|
<p>The first two tables represent snippets from the first and second dataframes, respectively. I am trying to create a new dataframe that contains the numerical changes for each attribute</p>
<p><img src="https://i.stack.imgur.com/2o0vM.png" alt="" /></p>
<p>Please also see my other post for how I framed the same question in a different way: <a href="https://stackoverflow.com/questions/72803474/how-to-create-a-new-dataframe-that-contains-the-value-changes-from-multiple-colu/72803556#72803556">How to create a new dataframe that contains the value changes from multiple columns between two exisitng dataframes</a></p>
|
<p>You can try setting the <code>ID</code> and <code>Name</code> columns as the index and then doing the subtraction:</p>
<pre class="lang-py prettyprint-override"><code>out = (df2.set_index(['ID', 'Name']) - df1.set_index(['ID', 'Name'])).reset_index()
</code></pre>
<pre><code>print(df1)
ID Name A B
0 1 a 89 91
print(df2)
ID Name A B
0 1 a 95 98
print(out)
ID Name A B
0 1 a 6 7
</code></pre>
|
python|pandas|dataframe
| 0
|
6,431
| 61,663,005
|
Working with pandas to select/extract rows from dataframe
|
<p>I am trying to extract information from a number of countries from a dataset I downloaded. I was able to figure out how to pull one country, but have had syntax errors trying to pull more than one in the same line. Here it is with the output:</p>
<pre><code>ef=df1.loc[df1['countries'] == 'Hong Kong']
print(ef)
</code></pre>
<pre class="lang-none prettyprint-override"><code> year ISO_code countries ECONOMIC FREEDOM rank quartile \
2016 HKG Hong Kong 8.97 1.0 1.0
2015 HKG Hong Kong 8.97 1.0 1.0
2014 HKG Hong Kong 9.00 1.0 1.0
2013 HKG Hong Kong 8.96 1.0 1.0
2012 HKG Hong Kong 8.96 1.0 1.0
</code></pre>
<p>Can someone explain how I pull multiple countries in the same line of code? Also, am I able to output this information into a separate .csv file?</p>
<p>Thanks for your help.</p>
|
<p>If you want to filter the dataframe based on multiple countries you can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>Series.isin</code></a>.</p>
<pre><code>country_list = ['Hong Kong', 'US', 'Canada', 'India', 'Russia']
ef = df1[df1['countries'].isin(country_list)]
</code></pre>
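<p>And for the second part of the question: yes, you can write the filtered frame to a separate CSV file with <code>DataFrame.to_csv</code>:</p>
<pre><code>ef.to_csv('selected_countries.csv')  # add index=False if you don't want the index column
</code></pre>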
|
python|pandas|csv|dataframe
| 2
|
6,432
| 61,964,496
|
Pandas - json normalize inside dataframe
|
<p>I want to break down a column in a dataframe into multiple columns. </p>
<p>I have a dataframe with the following configuration:</p>
<pre><code>
GroupId,SubGroups,Type,Name
-4781505553015217258,"{'GroupId': -732592932641342965, 'SubGroups': [], 'Type': 'DefaultSite', 'Name': 'Default Site'}",OrganisationGroup,CompanyXYZ
-4781505553015217258,"{'GroupId': 8123255835936628631, 'SubGroups': [], 'Type': 'SiteGroup', 'Name': 'MERCEDES BENZ'}",OrganisationGroup,CompanyXYZ
-4781505553015217258,"{'GroupId': -1785570219922840611, 'SubGroups': [], 'Type': 'SiteGroup', 'Name': 'VOLVO'}",OrganisationGroup,CompanyXYZ
-4781505553015217258,"{'GroupId': -3670461095557699088, 'SubGroups': [], 'Type': 'SiteGroup', 'Name': 'SCANIA'}",OrganisationGroup,CompanyXYZ
-4781505553015217258,"{'GroupId': 8683757391859854416, 'SubGroups': [], 'Type': 'SiteGroup', 'Name': 'DRIVERS'}",OrganisationGroup,CompanyXYZ
-4781505553015217258,"{'GroupId': -8066654520755643389, 'SubGroups': [], 'Type': 'SiteGroup', 'Name': 'X - DECOMMISSION'}",OrganisationGroup,CompanyXYZ
-4781505553015217258,"{'GroupId': 4177323092254043025, 'SubGroups': [], 'Type': 'SiteGroup', 'Name': 'X-INSTALLATION'}",OrganisationGroup,CompanyXYZ
-4781505553015217258,"{'GroupId': -6088426161802844604, 'SubGroups': [], 'Type': 'SiteGroup', 'Name': 'FORD'}",OrganisationGroup,CompanyXYZ
-4781505553015217258,"{'GroupId': 8512440039365422841, 'SubGroups': [], 'Type': 'SiteGroup', 'Name': 'HEAVY VEHICLES'}",OrganisationGroup,CompanyXYZ
</code></pre>
<p>I want to create a new dataframe where the <code>SubGroups</code> column is broken into its components. Note that the names coming from the <code>SubGroups</code> column are prefixed with <code>SubGroup_</code>:</p>
<pre><code>GroupId, SubGroup_GroupId, SubGroup_SubGroups, SubGroup_Type, SubGroup_Name, Type, Name
-4781505553015217258, -732592932641342965, [], 'DefaultSite', 'Default Site', OrganisationGroup, CompanyXYZ
-4781505553015217258, 8123255835936628631, [], 'SiteGroup', 'MERCEDES BENZ', OrganisationGroup, CompanyXYZ
</code></pre>
<p>I have tried the following code:</p>
<pre><code>for row in AllSubGroupsDF.itertuples():
newDF= newDF.append((pd.io.json.json_normalize(row.SubGroups)))
</code></pre>
<p>But it returns</p>
<pre><code>GroupId,SubGroups,Type,Name
-732592932641342965,[],DefaultSite,Default Site
8123255835936628631,[],SiteGroup,MERCEDES BENZ
-1785570219922840611,[],SiteGroup,VOLVO
-3670461095557699088,[],SiteGroup,SCANIA
8683757391859854416,[],SiteGroup,DRIVERS
-8066654520755643389,[],SiteGroup,X - DECOMMISSION
4177323092254043025,[],SiteGroup,X-INSTALLATION
-6088426161802844604,[],SiteGroup,FORD
8512440039365422841,[],SiteGroup,HEAVY VEHICLES
</code></pre>
<p>I would like to have it all end up in one dataframe but I'm not sure how. Please help?</p>
|
<p>You can try using the <code>ast</code> package:</p>
<pre><code>import pandas as pd
import ast
data = [[-4781505553015217258,"{'GroupId': -732592932641342965, 'SubGroups': [], 'Type': 'DefaultSite', 'Name': 'Default Site'}","OrganisationGroup","CompanyXYZ"],
[-4781505553015217258,"{'GroupId': 8123255835936628631, 'SubGroups': [], 'Type': 'SiteGroup', 'Name': 'MERCEDES BENZ'}","OrganisationGroup","CompanyXYZ"],
[-4781505553015217258,"{'GroupId': -1785570219922840611, 'SubGroups': [], 'Type': 'SiteGroup', 'Name': 'VOLVO'}","OrganisationGroup","CompanyXYZ"],
[-4781505553015217258,"{'GroupId': -3670461095557699088, 'SubGroups': [], 'Type': 'SiteGroup', 'Name': 'SCANIA'}","OrganisationGroup","CompanyXYZ"],
[-4781505553015217258,"{'GroupId': 8683757391859854416, 'SubGroups': [], 'Type': 'SiteGroup', 'Name': 'DRIVERS'}","OrganisationGroup","CompanyXYZ"],
[-4781505553015217258,"{'GroupId': -8066654520755643389, 'SubGroups': [], 'Type': 'SiteGroup', 'Name': 'X - DECOMMISSION'}","OrganisationGroup","CompanyXYZ"],
[-4781505553015217258,"{'GroupId': 4177323092254043025, 'SubGroups': [], 'Type': 'SiteGroup', 'Name': 'X-INSTALLATION'}","OrganisationGroup","CompanyXYZ"],
[-4781505553015217258,"{'GroupId': -6088426161802844604, 'SubGroups': [], 'Type': 'SiteGroup', 'Name': 'FORD'}","OrganisationGroup","CompanyXYZ"],
[-4781505553015217258,"{'GroupId': 8512440039365422841, 'SubGroups': [], 'Type': 'SiteGroup', 'Name': 'HEAVY VEHICLES'}","OrganisationGroup","CompanyXYZ"]]
df = pd.DataFrame(data,columns=["GroupId","SubGroups","Type","Name"])
df["SubGroup_GroupId"] = df["SubGroups"].map(lambda x: ast.literal_eval(x)["GroupId"])
df["SubGroup_SubGroups"] = df["SubGroups"].map(lambda x: ast.literal_eval(x)["SubGroups"])
df["SubGroup_Type"] = df["SubGroups"].map(lambda x: ast.literal_eval(x)["Type"])
df["SubGroup_Name"] = df["SubGroups"].map(lambda x: ast.literal_eval(x)["Name"])
df
</code></pre>
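<p>As a side note, the four <code>map</code> calls above parse every row four times. A single-pass sketch (assuming pandas >= 1.0 for <code>pd.json_normalize</code>; use <code>pd.io.json.json_normalize</code> on older versions) parses once and expands the dicts in one go:</p>
<pre><code>parsed = df["SubGroups"].map(ast.literal_eval)                    # parse each dict string once
sub = pd.json_normalize(parsed.tolist()).add_prefix("SubGroup_")  # one column per key
df = pd.concat([df.drop(columns="SubGroups"), sub], axis=1)
</code></pre>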
<p>Hope this helps!!</p>
|
json|pandas|dataframe
| 0
|
6,433
| 62,020,666
|
Is there any way to insert image logo/ Text in before saving to_html in pandas
|
<p>I am saving pandas output as to_html()</p>
<p>Is there any way to integrate the logo/Text at the top of the html page before saving.</p>
|
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_html.html" rel="nofollow noreferrer"><code>to_html</code></a> returns a string with the html if the first parameter <code>buf</code> is <code>None</code>. You can than prepend your image or text html to this string and then write this result string to a file.</p>
<pre><code>output = '<img src="logo.jpg" alt="logo"><br><b>some text</b><br>' + df.to_html()
with open('output.html', 'w') as f:
f.write(output)
</code></pre>
|
pandas
| 1
|
6,434
| 61,657,329
|
How to resample a pandas dataframe to hourly mean, taking into account both a time and a column with a string value?
|
<p>I am trying to make an hourly mean of a dataframe in python, by taking into account the date info but also string info in a certain column. Please see the example below.</p>
<pre><code> station time temperature
0 EHAM 2020-01-01 13:30:00 2
1 EHAM 2020-01-01 13:50:00 5
2 EHAM 2020-01-02 13:30:00 7
3 EHAM 2020-01-02 13:50:00 1
4 EBBR 2020-01-01 13:30:00 6
5 EBBR 2020-01-01 13:55:00 1
6 EBBR 2020-01-02 14:30:00 2
7 EBBR 2020-01-02 14:40:00 3
</code></pre>
<p>From this example, ideally I would like to get the following new dataframe:</p>
<pre><code> station time temperature
0 EHAM 2020-01-01 13:00:00 3.5
1 EHAM 2020-01-02 13:00:00 4
2 EBBR 2020-01-01 13:00:00 3.5
3 EBBR 2020-01-02 14:00:00 2.5
</code></pre>
<p>The code for this dataframe is:</p>
<pre><code>import pandas as pd
from datetime import datetime
flights = {'station': ['EHAM','EHAM','EHAM','EHAM','EBBR','EBBR','EBBR','EBBR'],
'time': [datetime.strptime('1/1/2020 1:30 PM', '%d/%m/%Y %I:%M %p'),datetime.strptime('1/1/2020 1:50 PM', '%d/%m/%Y %I:%M %p'),
datetime.strptime('2/1/2020 1:30 PM', '%d/%m/%Y %I:%M %p'),datetime.strptime('2/1/2020 1:50 PM', '%d/%m/%Y %I:%M %p'),
datetime.strptime('1/1/2020 1:30 PM', '%d/%m/%Y %I:%M %p'),datetime.strptime('1/1/2020 1:55 PM', '%d/%m/%Y %I:%M %p'),
datetime.strptime('2/1/2020 2:30 PM', '%d/%m/%Y %I:%M %p'),datetime.strptime('2/1/2020 2:40 PM', '%d/%m/%Y %I:%M %p')],
'temperature': ['2', '5','7','1','6','1','2','3']}
df = pd.DataFrame(flights, columns = ['station', 'time','temperature'])
</code></pre>
<p>Any help would be appreciated!</p>
|
<p>Aggregate <code>mean</code> with convert <code>datetimes</code> to dates by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.date.html" rel="nofollow noreferrer"><code>Series.dt.date</code></a>:</p>
<pre><code>#convert sampel data to numeric
df['temperature'] = df['temperature'].astype(int)
df1 = (df.groupby(['station', df['time'].dt.date], sort=False)['temperature']
.mean()
.reset_index())
print (df1)
station time temperature
0 EHAM 2020-01-01 3.5
1 EHAM 2020-01-02 4.0
2 EBBR 2020-01-01 3.5
3 EBBR 2020-01-02 2.5
</code></pre>
<p>Solution with <code>Grouper</code>:</p>
<pre><code>df1 = (df.groupby(['station', pd.Grouper(key='time', freq='D')], sort=False)['temperature']
.mean()
.reset_index())
print (df1)
station time temperature
0 EHAM 2020-01-01 3.5
1 EHAM 2020-01-02 4.0
2 EBBR 2020-01-01 3.5
3 EBBR 2020-01-02 2.5
</code></pre>
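<p>If you need to keep the hour in the output, as in the desired result (<code>13:00:00</code>, <code>14:00:00</code>), floor the timestamps to the hour instead of converting to dates; a small variation on the same idea:</p>
<pre><code>df1 = (df.groupby(['station', df['time'].dt.floor('H')], sort=False)['temperature']
         .mean()
         .reset_index())
</code></pre>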
|
python|string|pandas|dataframe|datetime
| 1
|
6,435
| 61,806,786
|
How to create multiple columns when map the column data in pandas
|
<p>I am trying to create three additional columns from one column. I have datetime and categorical data, and I want to show, for each row, the count belonging to each category. For example:</p>
<p><strong>I have the date, categories and count. This is the dataframe</strong></p>
<p><a href="https://i.stack.imgur.com/k5l8d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k5l8d.png" alt="enter image description here"></a></p>
<p>I want to show the output with mapping like this</p>
<p><a href="https://i.stack.imgur.com/FDbv2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FDbv2.png" alt="enter image description here"></a></p>
<p>Here I have mapped the categories into separate columns, such that 1 is Fair, 2 is Good and 3 is Great.
After mapping, I want to put the count values into the newly created category columns.
But when I try to map, I get</p>
<p><strong>'Series' object has no attribute 'applymap'</strong></p>
<pre><code>columns = ['Fair', 'Good', 'Great']
categories = {1: 'Fair', 2: 'Good', 3: 'Great'}
SampleResult=SampleResult['Categories'].applymap(categories.get)
</code></pre>
|
<p>You can make use of pandas <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot.html" rel="nofollow noreferrer">pivot</a></p>
<p>Considering your original dataframe containing columns <code>CreatedDate</code>, <code>Categories</code> and <code>count</code></p>
<pre><code>categories = {1: 'Fair', 2: 'Good', 3: 'Great'}
out_dataframe = SampleResult.pivot(columns="Categories", values=["count"], index="CreatedDate").rename(columns=categories)
</code></pre>
<p>This will give you required output.</p>
|
python|python-3.x|pandas
| 0
|
6,436
| 58,019,367
|
Selectively update dataframe column with a dictionary map
|
<p>I want to use a dictionary to map / replace row values in a pandas column - but only for a subset of rows based on a criteria</p>
<pre><code>df['var'] = df['var'].map(mydict)
</code></pre>
<p>but only where </p>
<pre><code>df['anothervar'] is somevalue
</code></pre>
<p>Can I do this?</p>
|
<p>Check with <code>np.where</code> </p>
<pre><code>df['var'] = np.where(df['anothervar']=='somevalue',df['var'].map(mydict),df['var'])
</code></pre>
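<p>One caveat: <code>map</code> returns <code>NaN</code> for values missing from <code>mydict</code>, and the <code>np.where</code> version computes the mapping for every row. A <code>.loc</code>-based variant that only touches the matching rows:</p>
<pre><code>mask = df['anothervar'] == somevalue
df.loc[mask, 'var'] = df.loc[mask, 'var'].map(mydict)
</code></pre>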
|
python|pandas
| 0
|
6,437
| 55,017,220
|
Getting "key error" while plotting using Pandas
|
<p>I am trying to do a simple plot of data from a text file. Below is the file:</p>
<pre><code>Date,Open,High,Low,Close
03-10-16,774.25,776.065002,769.578768,772.559998
04-10-16,776.03,778.710022,772.890015,776.429993
05-10-16,779.30,782.070007,775.650024,776.469971
06-10-16,779.00,780.479989,775.539978,776.859985
07-10-16,779.65,779.659973,770.757867,775.080017
</code></pre>
<p>Below is the python code i m trying to execute:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
df = pd.read_csv('financial.txt', index_col=0)
df.plot(x=df.index, y=df.columns)
plt.show()
</code></pre>
<p>Error:</p>
<pre><code>KeyError: "Index(['03-10-16', '04-10-16', '05-10-16', '06-10-16', '07-10-16'], dtype='object', name='Date') not in index"
</code></pre>
<p>I am not sure why I am getting that error. I achieved what I wanted by using the CSV, but I still don't understand the error itself.
I checked for the same error online as well but didn't find much. I have looked at this:
<a href="https://stackoverflow.com/questions/45263070/key-error3-while-using-for-to-plot-using-matplotlib-pyplot-scatter">Key Error:3 while using For to plot using matplotlib.pyplot.scatter</a></p>
<p>Any light on the error is much appreciated.
Thanks.</p>
|
<p>This modification will work. The <code>x</code> and <code>y</code> arguments of <code>df.plot()</code> expect <em>column names</em>, not the index/column objects themselves; passing <code>x=df.index</code> makes pandas look up the date strings as columns, which raises the <code>KeyError</code>. Since the index is already set, a plain <code>df.plot()</code> plots every column against it. Please refer to this page: <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/visualization.html" rel="nofollow noreferrer">visualization</a>. The code below is just a basic visualization; you can switch to <strong>df.plot.box()</strong> or <strong>df.plot.area()</strong> for more advanced plots.</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
df = pd.read_csv('financial.txt', index_col=0)
df.plot()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/iOYrb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iOYrb.png" alt="result"></a></p>
|
python-3.x|pandas
| 0
|
6,438
| 54,769,135
|
assign new value to repeated (or multiple) objective element(s) to a pandas dataframe
|
<p>I have a pandas dataframe:</p>
<pre><code>df = pd.DataFrame({'AKey':[1, 9999, 1, 1, 9999, 2, 2, 2],\
'AnotherKey':[1, 1, 1, 1, 2, 2, 2, 2]})
</code></pre>
<p>I want to assign a new value to a specific column and for each element having a specific value in that column.</p>
<p>Let's say I want to assign the new value <code>8888</code> to the elements having the value <code>9999</code>.
I tried the following:</p>
<pre><code>df[df["AKey"]==9999]["AKey"]=8888
</code></pre>
<p>but it returns the following error:</p>
<p><code>A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead</code></p>
<p>So I tried to use <code>loc</code></p>
<pre><code>df.loc[df["AKey"]==9999]["AKey"]=8888
</code></pre>
<p>which returned the same error.</p>
<p>I would appreciate some help and some explanation on the error as I really can't wrap my head around it.</p>
|
<p>You can use loc in this way:</p>
<pre><code>df.loc[df["AKey"]==9999, "AKey"] = 8888
</code></pre>
<p>Producing the following output:</p>
<p><a href="https://i.stack.imgur.com/OtxfV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OtxfV.png" alt="enter image description here"></a></p>
<p>With your original code you are first slicing the dataframe with:</p>
<pre><code>df.loc[df["AKey"]==9999]
</code></pre>
<p>Then you assign a value to the sliced dataframe's <code>AKey</code> column:</p>
<pre><code>["AKey"]=8888
</code></pre>
<p>In other words, you were updating the slice, not the dataframe itself.</p>
<p>From Pandas <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer">documentatiom</a>:</p>
<blockquote>
<p>.loc[] is primarily label based, but may also be used with a boolean
array.</p>
</blockquote>
<p>Breaking down the code:</p>
<pre><code>df.loc[df["AKey"]==9999, "AKey"]
</code></pre>
<p>df["AKey"]==9999 will return a boolean array identifying the rows, and the string "Akey" will identify the column that will receive the new value, at once without slicing. </p>
|
python|pandas|pandas-loc
| 1
|
6,439
| 54,873,983
|
merge consecutive matching rows
|
<p>I want to merge all consecutive rows by matching all 'X' fields and concatenating the 'Y' field.</p>
<p>Below is sample data -</p>
<pre><code>[Y X1 X2 X3 X4 X5
A NaN -3810 TRUE None None
B NaN -3810 TRUE None None
C NaN -3810 TRUE None None
D NaN -3810 None None None
E NaN -3810 None None None
F NaN -3810 None None None
G NaN -3810 None None None
H NaN -3810 TRUE None None
I NaN 2540 TRUE None None
J NaN 2540 None True None]
</code></pre>
<p><a href="https://i.stack.imgur.com/wCMAk.png" rel="nofollow noreferrer">1</a></p>
<p>Expected output -</p>
<pre><code>[A B C NaN -3810 TRUE None None
D E F G NaN -3810 None None None
H NaN -3810 TRUE None None
I NaN 2540 TRUE None None
J NaN 2540 None True None]
</code></pre>
<p><a href="https://i.stack.imgur.com/qJPts.png" rel="nofollow noreferrer">2</a></p>
<p>As stated, if any of the X fields changes in the consecutive row, they shouldn't concatenate.
Thanks in advance.</p>
|
<p>A little bit tricky: compare each row with its <code>shift</code> to build a group key (a <code>cumsum</code> over the change points), then <code>agg</code>:</p>
<pre><code>df.fillna('NaN', inplace=True)  # NaN is never equal to NaN, so replace it with the string 'NaN' first
df.groupby((df.drop('Y',1)!=df.drop('Y',1).shift()).any(1).cumsum()).\
agg(lambda x : ','.join(x) if x.name=='Y' else x.iloc[0])
Out[19]:
Y X1 X2 X3 X4 X5
1 A,B,C NaN -3810 TRUE None None
2 D,E,F,G NaN -3810 None None None
3 H NaN -3810 TRUE None None
</code></pre>
|
python|pandas
| 3
|
6,440
| 55,120,541
|
How can I evaluate the accuracy loss of the converted ftlite model?
|
<p>I trained a model based on the ssd-moblienet algorithm.And use the eval.py script to evaluate the mAP of the model.</p>
<p>I need to use this model on iOS, so I converted it to a tflite model and it works now.</p>
<p>I want to analyze the precision loss when converting a model by the mAP value before and after the model conversion. Is there a script similar to eval.py that can calculate the mAP value of the tflite model?</p>
<p>Or is there any other better way?</p>
<p>I am a newcomer using tensorflow, thank you for your answer.</p>
|
<p>There isn't any official script. You'll have to write a custom script that uses TensorFlow's metrics API, found at <code>model/research/object_detection/metrics</code>, passing the detections and groundtruth as arguments.</p>
<p>An example</p>
<pre><code># Append to $PYTHONPATH path to models/research and cocoapi/PythonAPI
from object_detection.metrics import coco_evaluation
from object_detection.core import standard_fields
from object_detection.utils.label_map_util import create_categories_from_labelmap, get_label_map_dict
def evaluate_single_image(image_path, annotation_path, label_file):
""" Evaluate mAP on image
args:
image_path: path to image
annotation_path: path to groundtruth in Pascal VOC format .xml
label_file: path to label_map.pbtxt
"""
categories = create_categories_from_labelmap(label_file)
label_map_dict = get_label_map_dict(label_file)
coco_evaluator = coco_evaluation.CocoDetectionEvaluator(categories)
image_name = os.path.basename(image_path).split('.')[0]
    # Read groundtruth (here, an XML file in Pascal VOC format);
    # voc_parser is a helper defined in the linked inference script
    gt_boxes, gt_classes = voc_parser(annotation_path, label_map_dict)
    # Run the tflite model and post-process its raw output;
    # postprocess_output is likewise defined in the linked script
    dt_boxes, dt_classes, dt_scores, num_det = postprocess_output(image_path)
coco_evaluator.add_single_ground_truth_image_info(
image_id=image_name,
groundtruth_dict={
standard_fields.InputDataFields.groundtruth_boxes:
np.array(gt_boxes),
standard_fields.InputDataFields.groundtruth_classes:
np.array(gt_classes)
})
coco_evaluator.add_single_detected_image_info(
image_id=image_name,
detections_dict={
standard_fields.DetectionResultFields.detection_boxes:
dt_boxes,
standard_fields.DetectionResultFields.detection_scores:
dt_scores,
standard_fields.DetectionResultFields.detection_classes:
dt_classes
})
coco_evaluator.evaluate()
</code></pre>
<p>Here's a minimal <a href="https://github.com/ChrystleMyrnaLobo/tflite-object-detection/blob/master/inference.py" rel="nofollow noreferrer">inference script</a> with input preprocessing, output post-processing, ground-truth parsing and mAP evaluation for a single image.
For other metrics, refer <code>model/research/object_detection/metrics/*_test.py</code> for usage.</p>
|
tensorflow|object-detection|tensorflow-lite
| 3
|
6,441
| 49,607,116
|
can anyone explain how does this code work? pandas in python
|
<pre><code>for column in list(df.columns[df.isnull().sum() > 0]):
    mean = df[column].mean()
    df[column].fillna(mean,inplace=True)
df.info()
</code></pre>
<p>I don't understand the first line of code. Does <strong>list</strong>() mean that everything inside the parentheses will be returned as a <strong>list</strong>?
Should it be written like this:</p>
<pre><code>list = df.columns[df.isnull().sum() > 0]
for column in list:
....
</code></pre>
<p>And here, how does Python know what's inside of <em>df.columns[df.isnull().sum() > 0]</em>?</p>
<p>I would write it like:</p>
<pre><code>sum = 0
for each in df.columns:
    if each is null:
        sum += 1  # (which is wrong obviously, but you know what I mean)
</code></pre>
<p>I am a beginner in Python and I used to write in C, so this is confusing. I hope someone can explain how using brackets like this to filter and build the list works in Python.</p>
<p>Thanks very much!</p>
|
<p><code>list</code> here convert <code>Index</code> (columns names with at least one <code>NaN</code>) to <code>list</code>:</p>
<pre><code>df = pd.DataFrame({'A':list('abcdef'),
'B':[4,5,4,5,5,4],
'C':[7,8,9,4,2,3],
'D':[1,np.nan,5,7,1,0],
'E':[5,np.nan,np.nan,9,2,4],
'F':list('aaabbb')})
print (df)
A B C D E F
0 a 4 7 1.0 5.0 a
1 b 5 8 NaN NaN a
2 c 4 9 5.0 NaN a
3 d 5 4 7.0 9.0 b
4 e 5 2 1.0 2.0 b
5 f 4 3 0.0 4.0 b
</code></pre>
<hr>
<pre><code>a = list(df.columns[df.isnull().sum() > 0])
print (a)
['D', 'E']
print (df.isnull().sum())
A 0
B 0
C 0
D 1
E 2
F 0
dtype: int64
print (df.isnull().sum() > 0)
A False
B False
C False
D True
E True
F False
dtype: bool
print (df.columns[df.isnull().sum() > 0])
Index(['D', 'E'], dtype='object')
</code></pre>
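<p>Note that the <code>list()</code> call is only needed if you really want a Python list; for the loop itself it is optional, since a pandas <code>Index</code> is already iterable:</p>
<pre><code>for column in df.columns[df.isnull().sum() > 0]:
    df[column].fillna(df[column].mean(), inplace=True)
</code></pre>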
|
pandas
| 0
|
6,442
| 49,366,765
|
How can I read endlessly from a Tensorflow tf.data.Dataset?
|
<p>I'm switching my old datalayer (using Queues) to the "new" and recommended Dataset API. I'm using it for the first time, so I'm providing code examples in case I got something fundamentally wrong.</p>
<p>I create my Dataset from a generator (that will read a file, and provide n samples). It's a small dataset and n_iterations >> n_samples, so I simply want to read this dataset over and over again, ideally shuffled.</p>
<pre><code>sample_set = tf.data.Dataset.from_generator( data_generator(filename),
(tf.uint8, tf.uint8), (tf.TensorShape([256,256,4]), tf.TensorShape([256,256,1]))
)
</code></pre>
<p>with datagenerator:</p>
<pre><code>class data_generator:
def __init__(self, filename):
self.filename= filename
def __call__(self):
        with self.filename.open() as f:  # self.filename, not the bare name
for idx in f: yield img[idx], label[idx]
</code></pre>
<p>To actually use the data, I got that I need to define an <code>Iterator</code></p>
<pre><code>sample = sample_set.make_one_shot_iterator().get_next()
</code></pre>
<p>and then we are set to read data</p>
<pre><code>while True:
try: my_sample = sess.run(sample)
except tf.errors.OutOfRangeError: break # this happens after dset is read once
</code></pre>
<p>But all available Iterators seem to be "finite", in the way that they read a dataset only once.</p>
<p>Is there a simple way to make reading from the Dataset endless?</p>
|
<p>Datasets have <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset#repeat" rel="nofollow noreferrer"><code>repeat</code></a> and <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle" rel="nofollow noreferrer"><code>shuffle</code></a> methods.</p>
<pre class="lang-py prettyprint-override"><code>BUF_SIZE = 100 # choose it depending on your data
sample_set = tf.data.Dataset.from_generator( data_generator(filename),
(tf.uint8, tf.uint8), (tf.TensorShape([256,256,4]),
tf.TensorShape([256,256,1]))
).repeat().shuffle(BUF_SIZE)
</code></pre>
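<p>One detail worth knowing: the order of the two calls matters. <code>.shuffle(BUF_SIZE).repeat()</code> shuffles within each pass over the data (epoch boundaries preserved), while <code>.repeat().shuffle(BUF_SIZE)</code> shuffles across epoch boundaries. Either way you get an endless iterator that never raises <code>OutOfRangeError</code>.</p>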
|
tensorflow|tensorflow-datasets
| 3
|
6,443
| 49,344,432
|
ValueError: Cannot feed value of shape (64,) for Tensor 'x:0', which has shape '(?, 100, 100, 3)'
|
<p>My code:</p>
<pre><code>from skimage import io, transform
import os
import tensorflow as tf
import numpy as np
import time
import glob
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
path = 'D:/data/datasets/flower_photos/'
model_path = 'D:/data/model_save'
w = 100
h = 100
c = 3
def read_image(path):
cate=[path+x for x in os.listdir(path) if os.path.isdir(path+x)]
imgs=[]
labels = []
for idx,folder in enumerate(cate):
for im in glob.glob(folder+'/*.jpg'):
#print('reading the images:%s'%(im))
img = io.imread(im)
img = transform.resize(img,(w,h))
imgs.append(idx)
labels.append(idx)
return (np.asanyarray(imgs,np.float32),np.asanyarray(labels,np.int32))
data,label = read_image(path)
num_exaple = data.shape[0]
arr = np.array(num_exaple)
#np.random.shuffle(arr)
#label = label[arr]
ratio = 0.8
s = np.int(num_exaple*ratio)
x_train = data[:s]
y_train = data[:s]
x_val = data[s:]
y_val = data[s:]
x=tf.placeholder(tf.float32,shape=[None,w,h,c],name='x')
y_=tf.placeholder(tf.int32,shape=[None,],name='y_')
def inference(input_tensor, train, regularizer):
with tf.variable_scope('layer1-conv1'):
conv1_weights = tf.get_variable("weight",[5,5,3,32],initializer=tf.truncated_normal_initializer(stddev=0.1))
conv1_biases = tf.get_variable("bias", [32], initializer=tf.constant_initializer(0.0))
conv1 = tf.nn.conv2d(input_tensor, conv1_weights, strides=[1, 1, 1, 1], padding='SAME')
relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1_biases))
with tf.name_scope("layer2-pool1"):
pool1 = tf.nn.max_pool(relu1, ksize = [1,2,2,1],strides=[1,2,2,1],padding="VALID")
with tf.variable_scope("layer3-conv2"):
conv2_weights = tf.get_variable("weight",[5,5,32,64],initializer=tf.truncated_normal_initializer(stddev=0.1))
conv2_biases = tf.get_variable("bias", [64], initializer=tf.constant_initializer(0.0))
conv2 = tf.nn.conv2d(pool1, conv2_weights, strides=[1, 1, 1, 1], padding='SAME')
relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2_biases))
with tf.name_scope("layer4-pool2"):
pool2 = tf.nn.max_pool(relu2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
with tf.variable_scope("layer5-conv3"):
conv3_weights = tf.get_variable("weight",[3,3,64,128],initializer=tf.truncated_normal_initializer(stddev=0.1))
conv3_biases = tf.get_variable("bias", [128], initializer=tf.constant_initializer(0.0))
conv3 = tf.nn.conv2d(pool2, conv3_weights, strides=[1, 1, 1, 1], padding='SAME')
relu3 = tf.nn.relu(tf.nn.bias_add(conv3, conv3_biases))
with tf.name_scope("layer6-pool3"):
pool3 = tf.nn.max_pool(relu3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
with tf.variable_scope("layer7-conv4"):
conv4_weights = tf.get_variable("weight",[3,3,128,128],initializer=tf.truncated_normal_initializer(stddev=0.1))
conv4_biases = tf.get_variable("bias", [128], initializer=tf.constant_initializer(0.0))
conv4 = tf.nn.conv2d(pool3, conv4_weights, strides=[1, 1, 1, 1], padding='SAME')
relu4 = tf.nn.relu(tf.nn.bias_add(conv4, conv4_biases))
with tf.name_scope("layer8-pool4"):
pool4 = tf.nn.max_pool(relu4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
nodes = 6*6*128
reshaped = tf.reshape(pool4,[-1,nodes])
with tf.variable_scope('layer9-fc1'):
fc1_weights = tf.get_variable("weight", [nodes, 1024],
initializer=tf.truncated_normal_initializer(stddev=0.1))
if regularizer != None: tf.add_to_collection('losses', regularizer(fc1_weights))
fc1_biases = tf.get_variable("bias", [1024], initializer=tf.constant_initializer(0.1))
fc1 = tf.nn.relu(tf.matmul(reshaped, fc1_weights) + fc1_biases)
if train: fc1 = tf.nn.dropout(fc1, 0.5)
with tf.variable_scope('layer10-fc2'):
fc2_weights = tf.get_variable("weight", [1024, 512],
initializer=tf.truncated_normal_initializer(stddev=0.1))
if regularizer != None: tf.add_to_collection('losses', regularizer(fc2_weights))
fc2_biases = tf.get_variable("bias", [512], initializer=tf.constant_initializer(0.1))
fc2 = tf.nn.relu(tf.matmul(fc1, fc2_weights) + fc2_biases)
if train: fc2 = tf.nn.dropout(fc2, 0.5)
with tf.variable_scope('layer11-fc3'):
fc3_weights = tf.get_variable("weight", [512, 5],
initializer=tf.truncated_normal_initializer(stddev=0.1))
if regularizer != None: tf.add_to_collection('losses', regularizer(fc3_weights))
fc3_biases = tf.get_variable("bias", [5], initializer=tf.constant_initializer(0.1))
logit = tf.matmul(fc2, fc3_weights) + fc3_biases
return logit
regularizer = tf.contrib.layers.l2_regularizer(0.0001)
logits = inference(x,False,regularizer)
b = tf.constant(value=1,dtype=tf.float32)
logits_eval = tf.multiply(logits,b,name='logits_eval')
loss=tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits,
labels=y_)
train_op=tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
correct_prediction = tf.equal(tf.cast(tf.argmax(logits,1),tf.int32), y_)
acc= tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
def minibatches(inputs=None, targets=None, batch_size=None, shuffle=False):
assert len(inputs) == len(targets)
if shuffle:
indices = np.arange(len(inputs))
np.random.shuffle(indices)
for start_idx in range(0, len(inputs) - batch_size + 1, batch_size):
if shuffle:
excerpt = indices[start_idx:start_idx + batch_size]
else:
excerpt = slice(start_idx, start_idx + batch_size)
yield inputs[excerpt], targets[excerpt]
n_epoch=10
batch_size=64
saver=tf.train.Saver()
sess=tf.Session()
sess.run(tf.global_variables_initializer())
for epoch in range(n_epoch):
start_time = time.time()
train_loss, train_acc, n_batch = 0, 0, 0
for x_train_a, y_train_a in minibatches(x_train, y_train, batch_size=64 , shuffle=True):
_,err,ac=sess.run([train_op,loss,acc], feed_dict={x:x_train_a,y_:y_train_a})
train_loss += err; train_acc += ac; n_batch += 1
print(" train loss: %f" % (np.sum(train_loss)/ n_batch))
print(" train acc: %f" % (np.sum(train_acc)/ n_batch))
val_loss, val_acc, n_batch = 0, 0, 0
for x_val_a, y_val_a in minibatches(x_val, y_val, batch_size, shuffle=False):
err, ac = sess.run([loss,acc], feed_dict={x: x_val_a, y_: y_val_a})
val_loss += err; val_acc += ac; n_batch += 1
print(" validation loss: %f" % (np.sum(val_loss)/ n_batch))
print(" validation acc: %f" % (np.sum(val_acc)/ n_batch))
saver.save(sess,model_path)
sess.close()
</code></pre>
|
<p>Since <strong>w = 100, h = 100, c = 3</strong> and you have defined your input placeholder <code>x</code> as follows,</p>
<pre><code>x=tf.placeholder(tf.float32,shape=[None,w,h,c],name='x')
</code></pre>
<p><code>x</code> has the shape <strong>(?,100,100,3)</strong>. Here <strong>?</strong> refers to the batch size (it is only fixed when data is fed in, not when the graph is built).</p>
<p>The error occurs because you are feeding a one-dimensional array <code>x_train_a</code> of shape <strong>(64,)</strong> to the placeholder <code>x</code>. The two shapes don't match, <em>i.e.</em>,</p>
<p>shape of <code>x</code> <strong>(?,100,100,3) !=</strong> shape of <code>x_train_a</code> <strong>(64,)</strong>.</p>
<p>The batch is one-dimensional in the first place because of a bug in <code>read_image</code>: it appends the label index instead of the image,</p>
<pre><code>imgs.append(idx)   # should be imgs.append(img)
</code></pre>
<p>so <code>data</code> ends up as a flat array of indices. Appending <code>img</code> instead makes <code>data</code> an array of shape <strong>(n, 100, 100, 3)</strong>, and each mini-batch <code>x_train_a</code> then has shape <strong>(64, 100, 100, 3)</strong>, matching the placeholder. (Note that a <strong>(64,)</strong> array cannot simply be reshaped to <strong>(64,100,100,3)</strong>; it does not contain enough elements. Also, <code>y_train = data[:s]</code> in the question presumably should be <code>label[:s]</code>.)</p>
<p>Hope this helps.</p>
|
python|tensorflow
| 3
|
6,444
| 73,349,869
|
Not a gzipped file error when reading a gzipped file
|
<p>I saved a gzip parquet file:</p>
<pre><code>df.to_parquet("meth_450_clin_all_kipan.parquet.gz", compression="gzip")
</code></pre>
<p>And then I want to load it as matrix:</p>
<pre><code>matrix = pd.read_table('../input/meth-clin-kipan/meth_450_clin_all_kipan.parquet.gz')
</code></pre>
<p>Traceback:</p>
<pre><code>---------------------------------------------------------------------------
OSError Traceback (most recent call last)
/tmp/ipykernel_18/3894087199.py in <module>
----> 4 matrix = pd.read_table('../input/meth-clin-kipan/meth_450_clin_all_kipan.parquet.gz')
/opt/conda/lib/python3.7/site-packages/pandas/util/_decorators.py in wrapper(*args, **kwargs)
309 stacklevel=stacklevel,
310 )
--> 311 return func(*args, **kwargs)
312
313 return wrapper
/opt/conda/lib/python3.7/site-packages/pandas/io/parsers/readers.py in read_table(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, on_bad_lines, encoding_errors, delim_whitespace, low_memory, memory_map, float_precision)
681 kwds.update(kwds_defaults)
682
--> 683 return _read(filepath_or_buffer, kwds)
684
685
/opt/conda/lib/python3.7/site-packages/pandas/io/parsers/readers.py in _read(filepath_or_buffer, kwds)
480
481 # Create the parser.
--> 482 parser = TextFileReader(filepath_or_buffer, **kwds)
483
484 if chunksize or iterator:
/opt/conda/lib/python3.7/site-packages/pandas/io/parsers/readers.py in __init__(self, f, engine, **kwds)
809 self.options["has_index_names"] = kwds["has_index_names"]
810
--> 811 self._engine = self._make_engine(self.engine)
812
813 def close(self):
/opt/conda/lib/python3.7/site-packages/pandas/io/parsers/readers.py in _make_engine(self, engine)
1038 )
1039 # error: Too many arguments for "ParserBase"
-> 1040 return mapping[engine](self.f, **self.options) # type: ignore[call-arg]
1041
1042 def _failover_to_python(self):
/opt/conda/lib/python3.7/site-packages/pandas/io/parsers/c_parser_wrapper.py in __init__(self, src, **kwds)
67 kwds["dtype"] = ensure_dtype_objs(kwds.get("dtype", None))
68 try:
---> 69 self._reader = parsers.TextReader(self.handles.handle, **kwds)
70 except Exception:
71 self.handles.close()
/opt/conda/lib/python3.7/site-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__()
/opt/conda/lib/python3.7/site-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._get_header()
/opt/conda/lib/python3.7/site-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows()
/opt/conda/lib/python3.7/site-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.raise_parser_error()
/opt/conda/lib/python3.7/_compression.py in readinto(self, b)
66 def readinto(self, b):
67 with memoryview(b) as view, view.cast("B") as byte_view:
---> 68 data = self.read(len(byte_view))
69 byte_view[:len(data)] = data
70 return len(data)
/opt/conda/lib/python3.7/gzip.py in read(self, size)
472 # jump to the next member, if there is one.
473 self._init_read()
--> 474 if not self._read_gzip_header():
475 self._size = self._pos
476 return b""
/opt/conda/lib/python3.7/gzip.py in _read_gzip_header(self)
420
421 if magic != b'\037\213':
--> 422 raise OSError('Not a gzipped file (%r)' % magic)
423
424 (method, flag,
OSError: Not a gzipped file (b'PA')
</code></pre>
|
<p><code>read_table</code> infers gzip compression from the <code>.gz</code> extension, but <code>to_parquet(..., compression="gzip")</code> compresses the data pages <em>inside</em> the Parquet file rather than gzipping the file as a whole. The file therefore starts with the Parquet magic bytes <code>PAR1</code>, which is where the <code>b'PA'</code> in the error comes from. The solution is to read the file using <code>read_parquet</code> and then convert it into a numpy array.</p>
<pre><code>matrix = pd.read_parquet('../input/meth-clin-kipan/meth_450_clin_all_kipan.parquet.gz').to_numpy()
</code></pre>
|
python|pandas|gzip
| 1
|
6,445
| 67,334,116
|
Shift rows in array independently
|
<p>I want to shift each row to the right by its row number, padding with zeros to reach my desired output shape. An example:</p>
<pre><code>array([[0, 1, 2], array([[0, 1, 2], array([[0, 1, 2, 0, 0],
[1, 2, 3], -> [1, 2, 3], -> [0, 1, 2, 3, 0],
[2, 3, 4]]) [2, 3, 4]]) [0, 0, 2, 3, 4])
</code></pre>
<p>The array to furthest left is my input and the array to the furthest right is my desired output. This can be generalized to bigger arrays, for example a <code>10x10</code> array.</p>
<p>Is there a nice way of doing this?</p>
<hr />
<p>What I have is:</p>
<pre><code>A = np.array([[1, 2, 3],
[2, 3, 4],
[3, 4, 5]], dtype=np.float32)
out = np.zeros((A.shape[0], A.shape[1]*2-1))
out[[np.r_[:3]], [np.r_[:3] + np.r_[:3][:,None]]] = A
</code></pre>
|
<p>Here is a solution for square matrices of size n.</p>
<p><code>np.concatenate((A,np.zeros((n,n))),axis=1).flatten()[0:-n].reshape([n,2*n-1])</code></p>
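<p>To see why this works: padding each row with <code>n</code> zeros gives rows of length <code>2n</code>; after flattening, dropping the last <code>n</code> zeros leaves <code>n(2n-1)</code> elements, so in the reshape each row starts one column later than the previous one, which is exactly a shift by the row number. A commented sketch using the question's <code>A</code>:</p>
<pre><code>import numpy as np

n = A.shape[0]
padded = np.concatenate((A, np.zeros((n, n))), axis=1)  # shape (n, 2n)
out = padded.flatten()[:-n].reshape(n, 2 * n - 1)       # row i ends up shifted right by i
</code></pre>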
|
python|arrays|numpy
| 2
|
6,446
| 60,225,723
|
Ordering dataframe columns row by row
|
<p>I have a df that looks like this</p>
<pre><code>df = pd.DataFrame({'A' : ['1','2','3','7'],
'B' : [7,6,5,4],
'C' : [5,8,7,1],
'v' : [1,9,9,8]})
df=df.set_index('A')
df
B C v
A
1 7 5 1
2 6 8 9
3 5 7 9
7 4 1 8
</code></pre>
<p>I'd like to sort it so that column B and C respectively show the lower and higher value of the two, i.e.</p>
<pre><code> B C v
A
1 5 7 1
2 6 8 9
3 5 7 9
7 1 4 8
</code></pre>
<p>I can do this in a <code>for</code> loop that goes through each row, but I'm wondering if there is a better way to do it?</p>
|
<p>If performance is important use <a href="https://numpy.org/doc/1.18/reference/generated/numpy.sort.html" rel="nofollow noreferrer"><code>numpy.sort</code></a> with <code>axis=1</code>:</p>
<pre><code>df[['B','C']] = np.sort(df[['B','C']], axis=1)
print (df)
B C v
A
1 5 7 1
2 6 8 9
3 5 7 9
7 1 4 8
</code></pre>
|
python|pandas
| 4
|
6,447
| 60,196,260
|
How to use keras ImageDataGenerator flow_from_directory given a map from image name to class label?
|
<p>Folder's structure is like following:</p>
<pre><code>-dataset
|---train_set
| |-- img01.jpg
| |-- img02.jpg
| |-.....
|
|---val_set
</code></pre>
<p>I don't have subfolders in train_set named after the class labels. However, I have a dictionary from image name to class label, like <code>map = {'img01.jpg': 10, 'img02.jpg': 3, 'img03.jpg':12, ...}</code> </p>
<p>How can I use keras' <code>ImageDataGenerator.flow_from_directory</code> in this case, since it requires the images to be sorted into per-class subfolders? My training set is around 500 GB, so it's not convenient to build subfolders according to the class labels. If I cannot use <code>ImageDataGenerator.flow_from_directory</code>, is there an alternative way to achieve this functionality?</p>
|
<p>Keras ImageDataGenerator class also offers <a href="https://keras.io/preprocessing/image/#flow_from_dataframe" rel="nofollow noreferrer">flow_from_dataframe()</a> method, that should do the job. </p>
<p>Check out this <a href="https://medium.com/datadriveninvestor/keras-imagedatagenerator-methods-an-easy-guide-550ecd3c0a92" rel="nofollow noreferrer">tutorial</a> (second part) for your case example.</p>
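<p>A minimal sketch of how that could look for your layout (the directory path, target size and batch size here are assumptions, so adjust them to your setup; <code>label_map</code> stands in for the image-name-to-label dictionary from the question):</p>
<pre><code>import pandas as pd
from tensorflow.keras.preprocessing.image import ImageDataGenerator

label_map = {'img01.jpg': 10, 'img02.jpg': 3}  # your full mapping here
label_df = pd.DataFrame(label_map.items(), columns=['filename', 'class'])

datagen = ImageDataGenerator(rescale=1./255)
train_gen = datagen.flow_from_dataframe(
    dataframe=label_df,
    directory='dataset/train_set',  # folder holding the raw images
    x_col='filename',
    y_col='class',
    class_mode='raw',               # integer labels, no subfolders required
    target_size=(224, 224),
    batch_size=32)
</code></pre>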
|
image-processing|keras|tensorflow2.0
| 0
|
6,448
| 65,313,416
|
Getting pandas dataframe from json file
|
<p>I would like to flatten a JSON file to create a pandas DataFrame. The json output is:</p>
<pre><code>{
'info': {
'status': [
],
'weightcorp': {
'weight': 4.0
}
},
'results': [
{
'instrument': 'A',
'ts': [
{
'date': '2020-12-10',
'indicators': {
'Batch': 'Daily',
'Price': '313.23653',
'Date': '2020-12-10'
}
}
]
},
{
'instrument': 'B',
'ts': [
{
'date': '2020-12-10',
'indicators': {
'Batch': 'Weekly',
'Price': '29.21',
'Date': '2020-12-10'
}
}
]
}
]
}
</code></pre>
<p>The output DataFrame I am looking for is as follow:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Instrument</th>
<th>Date</th>
<th>Batch</th>
<th>Price</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>2020-12-10</td>
<td>Daily</td>
<td>313.23653</td>
</tr>
<tr>
<td>B</td>
<td>2020-12-10</td>
<td>weekly</td>
<td>29.21</td>
</tr>
</tbody>
</table>
</div>
<p>Can you please help me?</p>
|
<p>Let's try a custom function on your data:</p>
<pre><code>def flatten(d):
'''
    remove all intermediate dictionaries and lists
'''
ret = dict()
for k, v in d.items():
# case when value is a dictionary
if isinstance(v, dict):
sub = flatten(v)
for kk, vv in sub.items():
ret[kk] = vv
# case when value is a list
elif isinstance(v, list):
for vv in v:
ret.update(flatten(vv) )
# normal key:value pair
else: ret[k] = v
return ret
pd.DataFrame(flatten(v) for v in d['results'])  # d is the parsed JSON from the question
</code></pre>
<p>Output:</p>
<pre><code> instrument date Batch Price Date
0 A 2020-12-10 Daily 313.23653 2020-12-10
1 B 2020-12-10 Weekly 29.21 2020-12-10
</code></pre>
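<p>The result keeps both the lowercase <code>date</code> and the <code>Date</code> indicator. To match the desired table exactly, drop the duplicate and tidy the names:</p>
<pre><code>out = pd.DataFrame(flatten(v) for v in d['results'])
out = (out.drop(columns='date')
          .rename(columns={'instrument': 'Instrument'})
          [['Instrument', 'Date', 'Batch', 'Price']])
</code></pre>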
|
json|pandas|list|dataframe|dictionary
| 2
|
6,449
| 65,461,750
|
tensorflow.python.framework.errors_impl.UnknownError: Failed to rename; : Access is denied. ; Input/output error
|
<p>I am not able to download and load tensorflow dataset on my Windows 10 machine. It works okay on Google colab. Can someone please help me?</p>
<p><strong>Code:</strong></p>
<pre><code>import tensorflow_datasets as tfds
datasets, info = tfds.load("imdb_reviews", as_supervised=True, with_info=True)
</code></pre>
<p>When I run this on my local Windows 10 environment, here's the error I get:</p>
<pre><code>....[Output showing I/O progress]...Skipped for concision....
Writing...: 0%| | 0/2500 [00:00<?, ? examples/s] Shuffling...: 90%|βββββββββ | 18/20 [00:01<00:00, 14.15 shard/s] Reading...: 0 examples [00:00, ? examples/s]
Writing...: 0%| | 0/2500 [00:00<?, ? examples/s]
Reading...: 0 examples [00:00, ? examples/s]
Writing...: 0%| | 0/2500 [00:00<?, ? examples/s] Traceback (most recent call last): File "C:\Anaconda3\envs\ml_tf\lib\site-packages\IPython\core\interactiveshell.py", line 3418, in run_code
exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-3b586bfe81d7>", line 3, in <module>
datasets, info = tfds.load("imdb_reviews", as_supervised=True, with_info=True) File "C:\Anaconda3\envs\ml_tf\lib\site-packages\tensorflow_datasets\core\api_utils.py", line 52, in disallow_positional_args_dec
return fn(*args, **kwargs) File "C:\Anaconda3\envs\ml_tf\lib\site-packages\tensorflow_datasets\core\registered.py", line 300, in load
dbuilder.download_and_prepare(**download_and_prepare_kwargs) File "C:\Anaconda3\envs\ml_tf\lib\site-packages\tensorflow_datasets\core\api_utils.py", line 52, in disallow_positional_args_dec
return fn(*args, **kwargs) File "C:\Anaconda3\envs\ml_tf\lib\site-packages\tensorflow_datasets\core\dataset_builder.py", line 307, in download_and_prepare
self.info.write_to_directory(self._data_dir) File "C:\Anaconda3\envs\ml_tf\lib\contextlib.py", line 119, in __exit__
next(self.gen) File "C:\Anaconda3\envs\ml_tf\lib\site-packages\tensorflow_datasets\core\file_format_adapter.py", line 200, in incomplete_dir
tf.io.gfile.rename(tmp_dir, dirname) File "C:\Anaconda3\envs\ml_tf\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 546, in rename_v2
compat.as_bytes(src), compat.as_bytes(dst), overwrite) tensorflow.python.framework.errors_impl.UnknownError: Failed to rename: C:\Users\User\tensorflow_datasets\imdb_reviews\plain_text\0.1.0.incomplete5JQVCL to: C:\Users\User\tensorflow_datasets\imdb_reviews\plain_text\0.1.0 : Access is denied. ; Input/output error
</code></pre>
<p>Here's what I tried to fix this:</p>
<ol>
<li>Uninstalled Conda environment and reinstalled it. Nothing happened.</li>
<li>Ran PyCharm using Admin rights. Nothing happened.</li>
</ol>
<p>It seems people have raised similar issue, but there has been no response. <a href="https://stackoverflow.com/questions/54862413/tensorflow-python-framework-errors-impl-unknownerror-failed-to-rename-input-ou">tensorflow.python.framework.errors_impl.UnknownError: Failed to rename: Input/output error</a></p>
<p>Can someone please help?</p>
|
<p>I had this issue too, and upgrading tensorflow-datasets solved it:</p>
<pre><code>pip install -U tensorflow-datasets
</code></pre>
<p>This replaced tensorflow-datasets 1.2.0 with tensorflow-datasets 4.2.0.</p>
<p>Confirmed working on Windows 10.</p>
|
python|tensorflow|tensorflow2.x
| 3
|
6,450
| 65,271,478
|
Failed to save list of DataFrames to multisheet Excel spreadsheet
|
<p>I was trying to do the same thing as the question <a href="https://stackoverflow.com/questions/14225676/save-list-of-dataframes-to-multisheet-excel-spreadsheet">here</a>, and followed the method given by the answers. Here is my code:</p>
<pre><code>import pandas as pd
import xlsxwriter
mylist=[df_1,df_2,df_3]
for n,df in enumerate(mylist):
df=df[df['Name']=='Anna'].copy() # filter each dataframe in the list according to some condition
# print(df)
writer = pd.ExcelWriter('data.xlsx', engine='xlsxwriter')
df.to_excel(writer,'sheet%s' % n)
writer.save()
writer.close()
</code></pre>
<p>I was using <code>print(df)</code> in a Jupyter notebook to double-check that the loop works and each dataframe is successfully filtered, and it does. My problem is that in the output Excel file only the first filtered data frame, <code>df_1</code> in <code>sheet0</code>, has been saved; the other data frames in the list have not been saved.</p>
<p>What's the problem with my code?</p>
|
<p>You have some pieces inside the loop that should be outside. Reference the docs here: <a href="https://xlsxwriter.readthedocs.io/example_pandas_multiple.html" rel="nofollow noreferrer">https://xlsxwriter.readthedocs.io/example_pandas_multiple.html</a></p>
<pre><code>writer = pd.ExcelWriter('data.xlsx', engine='xlsxwriter')
mylist=[df_1,df_2,df_3]
for n,df in enumerate(mylist):
df=df[df['Name']=='Anna'].copy() # filter each dataframe in the list according to some condition
# print(df)
df.to_excel(writer,'sheet%s' % n)
writer.save()
</code></pre>
|
python|excel|pandas
| 1
|
6,451
| 49,969,539
|
Adding a name (string) column to an existing pandas DF
|
<p>I have an array (of float data type) from a FITS file of (1, 5000) dimension. I have created a pandas DF from it so that I can export it as a csv later. <a href="https://i.stack.imgur.com/CXCjB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CXCjB.png" alt="The data frame from the data"></a>. However, I am trying to add an extra column at the beginning (i.e. at [0,0]) with the name of the file, i.e. 'FSC0029m4226', a string, so that I can use the first column as a class and the remaining 5000 columns as features for ML applications. Also, when I add rows in future, the first column can then help in identifying the candidates. Is there any other method than using pd.DataFrame for this?</p>
<p>This is what I have tried : </p>
<pre><code>A = 'FSC0029m4226'
FSC0029m4226.insert(loc=0,column = 'Name',value = A)
</code></pre>
<p>But I keep getting errors, with the bottom line being:</p>
<pre><code>ValueError: Big-endian buffer not supported on little-endian compiler
</code></pre>
<p>However, if I try with artificial sample data, it works:</p>
<pre><code>xx = np.linspace(0,59.05,100)
print xx.dtype
xxx =np.reshape(xx,(1,100))
x4= pd.DataFrame(xxx)
x4.insert(loc=0,column = 'Name',value = A)
print x4
</code></pre>
<p><a href="https://i.stack.imgur.com/k2XSn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k2XSn.png" alt="enter image description here"></a></p>
|
<p>FITS data is stored big-endian, while pandas expects the platform's native (usually little-endian) byte order, hence the error. Swap the bytes and fix the dtype's byte order first:</p>
<pre><code>FSC0029m4226 = pd.DataFrame(np.array(FSC0029m4226).byteswap().newbyteorder())
</code></pre>
|
python|string|python-2.7|pandas|dataframe
| 0
|
6,452
| 63,906,723
|
Keras image_dataset_from_directory not finding images
|
<p>I want to load the data from the directory where I have around 5000 images (type 'png'). But it returns an error saying that there are no images, when obviously there are images.
This code:</p>
<pre><code>width=int(wb-wa)
height=int(hb-ha)
directory = '/content/drive/My Drive/Colab Notebooks/Hair/Images'
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
directory, labels=densitat, label_mode='int',
color_mode='rgb', batch_size=32, image_size=(width, height), shuffle=True, seed=1,
validation_split=0.2, subset='training', follow_links = False)
</code></pre>
<p>Returns:</p>
<pre><code>ValueError: Expected the lengths of `labels` to match the number of files in the target directory. len(labels) is 5588 while we found 0 files in /content/drive/My Drive/Colab Notebooks/Hair/Images.
</code></pre>
<p>I can see the images:
<a href="https://i.stack.imgur.com/STKao.png" rel="noreferrer">Colab view of the folder structure with the images</a></p>
<p>Where is the problem? I need to use this function to load data in batches, as I have a large dataset</p>
|
<p>I have found the answer so I am posting in case it might help someone.</p>
<p>The problem is the path: I was using the path to the folder with the images, whereas I should have used the directory one folder above.</p>
<pre><code>directory = '/content/drive/My Drive/Colab Notebooks/Hair'
</code></pre>
<p>Note that '/Hair' is the folder with my images.</p>
|
tensorflow|keras|tensorflow-datasets|image-preprocessing
| 14
|
6,453
| 46,914,292
|
How to do reinforcement learning with an LSTM in PyTorch?
|
<p>Due to observations not revealing the entire state, I need to do reinforcement learning with a recurrent neural network so that the network has some sort of memory of what has happened in the past. For simplicity let's assume that we use an LSTM. </p>
<p>Now the in-built PyTorch LSTM requires you to feed it an input of shape <code>Time x MiniBatch x Input D</code> and it outputs a tensor of shape <code>Time x MiniBatch x Output D</code>. </p>
<p>In reinforcement learning however, to know the input at time <code>t+1</code>, I need to know the output at time <code>t</code>, because I am doing actions in an environment. </p>
<p>So is it possible to use the in-built PyTorch LSTM to do BPTT in a reinforcement learning setting? And if it is, how could I do it? </p>
|
<p>Maybe you can feed your input sequence to your LSTM one timestep at a time, in a loop. Something like this:</p>
<pre><code>h = Variable(torch.zeros(1, batch_size, hidden_size))  # (num_layers, batch, hidden)
c = Variable(torch.zeros(1, batch_size, hidden_size))
for t in range(T):
    input = Variable(...)  # one timestep, shape (1, batch_size, input_size)
    _, (h, c) = lstm(input, (h, c))
</code></pre>
<p>At every timestep you can then use (h, c) and the input to evaluate an action, for instance. As long as you do not break the computational graph, you can backpropagate, as Variables keep all of the history.</p>
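<p>PyTorch also ships <code>nn.LSTMCell</code>, which is built for exactly this one-timestep-at-a-time pattern; a rough sketch (the sizes here are placeholders):</p>
<pre><code>import torch
import torch.nn as nn

cell = nn.LSTMCell(input_size=4, hidden_size=8)
h = torch.zeros(1, 8)  # (batch, hidden_size)
c = torch.zeros(1, 8)
for t in range(10):
    x_t = torch.randn(1, 4)   # stand-in for the observation at time t
    h, c = cell(x_t, (h, c))  # h can now drive action selection
</code></pre>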
|
recurrent-neural-network|backpropagation|reinforcement-learning|pytorch
| 1
|
6,454
| 46,859,478
|
Update columns in df2 based on df1 on index of date
|
<p>I want to have a data frame <strong>df2</strong> that will contain the values from <strong>df1</strong>.
Both data frames have an index of date.
Both data frames contain the same columns. I just want to update the columns of df2 if the index of df2 exists in df1. </p>
<p><strong>df1</strong> </p>
<pre><code>Symbol K1 K2 K3
Date
2011-01-10 0.0 0.0 0.0
2011-01-13 -1500.0 0.0 4000.0
2011-01-26 0.0 1000.0 0.0
</code></pre>
<p><strong>df2</strong></p>
<pre><code> K1 K2 K3
2011-01-10 0.0 0.0 0.0
2011-01-11 0.0 0.0 0.0
2011-01-26 0.0 0.0 0.0
</code></pre>
<p>Desired Output</p>
<pre><code> K1 K2 K3
2011-01-10 0.0 0.0 0.0
2011-01-11 0.0 0.0 0.0
2011-01-26 0.0 1000.0 0.0
</code></pre>
<p>I tried this;</p>
<pre><code>df2 = df2.join(df1, on=df1.index, how='left')
</code></pre>
<p>But received this error;</p>
<blockquote>
<p>raise KeyError('%s not in index' % objarr[mask]) KeyError:
"Index([u'2011-01-10', u'2011-01-13', u'2011-01-26', u'2011-02-02',\n</p>
</blockquote>
<p>Any help is more than welcome. </p>
<p>Thanks</p>
|
<p>You can try to merge on indices:</p>
<pre><code>df3 =df1.merge(df2, left_index=True, right_index=True, suffixes=("","_"), how='right')
df3= df3.drop(['K1_', 'K2_', 'K3_'], axis=1).fillna(0)
</code></pre>
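<p>Alternatively, since both frames share the same columns and you only want rows whose index exists in <code>df2</code>, pandas' in-place <code>DataFrame.update</code> does exactly this:</p>
<pre><code>df2.update(df1)  # overwrites df2's values with df1's where the indices align
</code></pre>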
|
python|pandas
| 1
|
6,455
| 63,314,009
|
fill NaN values of a df under condition
|
<p>I have a resampled df:</p>
<pre><code> Timestamp Loading Power Energy ID status
2020-04-09 06:45:00 1.0 1000 5000 1 on
2020-04-09 06:46:00 1.0 1000 5500 1 on
2020-04-09 06:47:00 NaN NaN NaN NaN NaN
2020-04-09 06:48:00 NaN NaN NaN NaN NaN
2020-04-09 06:49:00 1.0 5 0 1 off
2020-04-09 06:50:00 1.0 3000 200 2 on
...
</code></pre>
<p>The first thing: df['Loading'] was originally of the type 'boolean' and now it's a number (1 or 0) - how can I change this back?</p>
<p>The NaN values of the column df['status'] should simply be carried forward (if the last entry was on, the lines should be filled with on until an off comes).</p>
<p>Now the other lines of the other columns should be filled differently, depending on whether the status is on or off:</p>
<p>status == on: loading = 'true'; energy = last existing entry; power = last existing entry; id == last existing entry</p>
<p>status == off: loading = 'false'; energy = 0; power = 0; Id = 'no ID'.</p>
<p>i tried something like that:</p>
<pre><code>cond = (df2['Status'] != df2['Status'].shift(-1)) | (df2['Status'].notna())
df2.loc[cond] = df2.loc[cond].ffill()
</code></pre>
<p>without desired success...</p>
<p>Expected outcome:</p>
<pre><code> Timestamp Loading Power Energy ID status
2020-04-09 06:45:00 True 1000 5000 1 on
2020-04-09 06:46:00 True 1000 5500 1 on
2020-04-09 06:47:00 True 1000 5500 1 on
2020-04-09 06:48:00 True 1000 5500 1 on
2020-04-09 06:49:00 False 5 0 no Id off
2020-04-09 06:49:00 True 3000 200 2 on
...
</code></pre>
<p><strong>EDIT</strong>
the condition for filling the nan values is more complicated than expected: I have different cycles which are marked by different IDs. Within a cycle (ID appears both before and after the nan value) the power of the two "surrounding" lines should be averaged and in the column energy the last existing value of the column energy should be entered. Outside the cycle (ID before != next ID) the power as well as the energy should be set to 0.</p>
|
<p>You can fill the status column with a loop like this:</p>
<pre><code>df["status"] = [df["status"].values[i-1] if pd.isna(x) else x for i, x in enumerate(df["status"].values)]
</code></pre>
<p>Note that this reads from the original array, so runs of consecutive NaNs (as in your sample) stay NaN; pandas' built-in forward fill handles those as well: <code>df["status"] = df["status"].ffill()</code>.</p>
|
python|pandas
| 0
|
6,456
| 63,257,389
|
Pandas Boolean Indexing to Compare DataFrame and Results in List of Dicts
|
<p>I have the below dataframes</p>
<pre><code>import pandas as pd
import numpy as np
df1 = pd.DataFrame([[70, np.nan, "hello"], [89, 3, 4], [21, 5, 64], [11, 75, 8]], columns=["ID", "A", "B"], dtype='object')
df2 = pd.DataFrame([[70, np.nan, "world"], [89, 33, 44], [21, 5, 6], [11, 7, 8]], columns=["ID","A", "B"], dtype='object')
</code></pre>
<p>df1 output below</p>
<pre><code> ID A B
0 70 NaN hello
1 89 3 4
2 21 5 64
3 11 75 8
</code></pre>
<p>df2 output below</p>
<pre><code> ID A B
0 70 NaN world
1 89 33 44
2 21 5 6
3 11 7 8
</code></pre>
<p>boolean mask highlighting differences</p>
<pre><code>diff_mask = (df1 != df2) & ~(df1.isnull() & df2.isnull())
</code></pre>
<p>Yielding:</p>
<pre><code> ID A B
0 False False True
1 False True True
2 False False True
3 False True False
</code></pre>
<p>How do I get a result that creates a list of dicts of the ID's and the true values for each row? I could set the id's as an index as well if needed.</p>
<p>final output would look like this</p>
<pre><code>[{'ID': 70, 'B': 'world'}, {'ID': 89, 'A': 33, 'B': 44}, {'ID': 21, 'B': 6}, {'ID': 11, 'A': 7}]
</code></pre>
|
<p>Let us try <code>where</code>; also, I recommend returning a Series rather than a dict:</p>
<pre><code>s = df2.set_index('ID').where(diff_mask.drop(columns='ID').values).stack()
Out[74]:
ID
70 B world
89 A 33
B 44
21 B 6
11 A 7
dtype: object
</code></pre>
<p>to dict</p>
<pre><code>d = [y.unstack().reset_index().to_dict('records')[0] for x, y in s.groupby(level=0)]
Out[111]:
[{'ID': 11, 'A': 7},
{'ID': 21, 'B': 6},
{'ID': 70, 'B': 'world'},
{'ID': 89, 'A': 33, 'B': 44}]
</code></pre>
|
python|pandas|numpy|dataframe
| 4
|
6,457
| 63,089,129
|
Question on restoring training after loading model
|
<p>Having trained for 24 hours, the training process saved the model files via <code>torch.save</code>. A power-off or some other issue then caused the process to exit. Normally, we can load the model and continue training from the last step.</p>
<p>Should we also load the states of the optimizers (Adam, etc.) - is it necessary?</p>
|
<p>Yes, you can load the model from the last step and retrain it from that very step.</p>
<p>If you want to use it only for inference, save the <code>state_dict</code> of the model as</p>
<pre><code>torch.save(model.state_dict(), PATH)
</code></pre>
<p>And load it as</p>
<pre><code>model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
</code></pre>
<p>However, for your concern you need to save the optimizer state dict as well. For that purpose, you need to save it as</p>
<pre><code>torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': loss,
...
}, PATH)
</code></pre>
<p>and load the model for further training as:</p>
<pre><code>model = TheModelClass(*args, **kwargs)
optimizer = TheOptimizerClass(*args, **kwargs)
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']
model.eval()
# - or -
model.train()
</code></pre>
<p>It is necessary to save the optimizer state dictionary, since this contains buffers and parameters that are updated as the model trains.</p>
|
pytorch
| 2
|
6,458
| 62,968,276
|
Apply an Encoder-Decoder (Seq2Seq) inference model with Attention
|
<p>Hello, <strong>StackOverflow</strong> community!</p>
<p>I'm trying to create an inference model for a <strong>seq2seq</strong> (<em>Encoder-Decoder</em>) model with <strong>Attention</strong>. Here is the definition of the inference model.</p>
<pre><code>model = compile_model(tf.keras.models.load_model(constant.MODEL_PATH, compile=False))
encoder_input = model.input[0]
encoder_output, encoder_h, encoder_c = model.layers[1].output
encoder_state = [encoder_h, encoder_c]
encoder_model = tf.keras.Model(encoder_input, encoder_state)
decoder_input = model.input[1]
decoder = model.layers[3]
decoder_new_h = tf.keras.Input(shape=(n_units,), name='input_3')
decoder_new_c = tf.keras.Input(shape=(n_units,), name='input_4')
decoder_input_initial_state = [decoder_new_h, decoder_new_c]
decoder_output, decoder_h, decoder_c = decoder(decoder_input, initial_state=decoder_input_initial_state)
decoder_output_state = [decoder_h, decoder_c]
# These lines cause an error
context = model.layers[4]([encoder_output, decoder_output])
decoder_combined_context = model.layers[5]([context, decoder_output])
output = model.layers[6](decoder_combined_context)
output = model.layers[7](output)
# end
decoder_model = tf.keras.Model([decoder_input] + decoder_input_initial_state, [output] + decoder_output_state)
return encoder_model, decoder_model
</code></pre>
<p>When I run this code, the following error appears:</p>
<pre><code>ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_5:0", shape=(None, None, 20), dtype=float32) at layer "lstm_4". The following previous layers were accessed without issue: ['lstm_5']
</code></pre>
<p>If I exclude the attention block, the model is built without any errors at all.</p>
<pre><code>model = compile_model(tf.keras.models.load_model(constant.MODEL_PATH, compile=False))
encoder_input = model.input[0]
encoder_output, encoder_h, encoder_c = model.layers[1].output
encoder_state = [encoder_h, encoder_c]
encoder_model = tf.keras.Model(encoder_input, encoder_state)
decoder_input = model.input[1]
decoder = model.layers[3]
decoder_new_h = tf.keras.Input(shape=(n_units,), name='input_3')
decoder_new_c = tf.keras.Input(shape=(n_units,), name='input_4')
decoder_input_initial_state = [decoder_new_h, decoder_new_c]
decoder_output, decoder_h, decoder_c = decoder(decoder_input, initial_state=decoder_input_initial_state)
decoder_output_state = [decoder_h, decoder_c]
# These lines cause an error
# context = model.layers[4]([encoder_output, decoder_output])
# decoder_combined_context = model.layers[5]([context, decoder_output])
# output = model.layers[6](decoder_combined_context)
# output = model.layers[7](output)
# end
decoder_model = tf.keras.Model([decoder_input] + decoder_input_initial_state, [decoder_output] + decoder_output_state)
return encoder_model, decoder_model
</code></pre>
|
<p>I think you also need to take the encoder output as an output from the encoder model and then give it as an input to the decoder model, as the attention part requires it. Maybe these changes could help:</p>
<pre><code>model = compile_model(tf.keras.models.load_model(constant.MODEL_PATH, compile=False))
encoder_input = model.input[0]
encoder_output, encoder_h, encoder_c = model.layers[1].output
encoder_state = [encoder_h, encoder_c]
encoder_model = tf.keras.Model(inputs=[encoder_input],outputs=[encoder_state,encoder_output])
decoder_input = model.input[1]
decoder_input2 = tf.keras.Input(shape=x) #where x is the shape of encoder output
decoder = model.layers[3]
decoder_new_h = tf.keras.Input(shape=(n_units,), name='input_3')
decoder_new_c = tf.keras.Input(shape=(n_units,), name='input_4')
decoder_input_initial_state = [decoder_new_h, decoder_new_c]
decoder_output, decoder_h, decoder_c = decoder(decoder_input, initial_state=decoder_input_initial_state)
decoder_output_state = [decoder_h, decoder_c]
context = model.layers[4]([decoder_input2, decoder_output])
decoder_combined_context = model.layers[5]([context, decoder_output])
output = model.layers[6](decoder_combined_context)
output = model.layers[7](output)
decoder_model = tf.keras.Model([decoder_input, decoder_input2] + decoder_input_initial_state, [output] + decoder_output_state)
</code></pre>
|
python|tensorflow|keras|seq2seq|encoder-decoder
| 0
|
6,459
| 67,859,542
|
API to get mean and standard deviation for TFLite Models
|
<p>How can we get the values of mean and standard deviation for any TFLite model? Is there any API to fetch them? Where is this information stored?</p>
<p>How can TFLite users know which values they have to normalize the input with?
Can these values be obtained at run time?</p>
<p>And if we want to change the mean and standard deviation, where is the place to do it - training, conversion, or inference?</p>
|
<p>If you're talking about the quantization parameters (mean and std to convert float to int), see <a href="https://www.tensorflow.org/lite/api_docs/java/org/tensorflow/lite/Tensor.QuantizationParams" rel="nofollow noreferrer">https://www.tensorflow.org/lite/api_docs/java/org/tensorflow/lite/Tensor.QuantizationParams</a> (this is Java. similar API can be found for Python and C++)</p>
<p>If you're talking about mean / std to normalize input, TFLite core doesn't store that data. You can leverage TFLite Model Metadata (<a href="https://www.tensorflow.org/lite/inference_with_metadata/overview" rel="nofollow noreferrer">https://www.tensorflow.org/lite/inference_with_metadata/overview</a>). But please note that it's optional for a TFLite model. We do encourage you attach your metadata if you're a model author.</p>
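<p>For the quantization case in Python, a minimal sketch (the model path is a placeholder):</p>
<pre><code>import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
scale, zero_point = input_details['quantization']  # (0.0, 0) for float (non-quantized) models
print(scale, zero_point)
</code></pre>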
|
python|tensorflow|tensorflow2.0|tensorflow-lite
| 0
|
6,460
| 67,889,133
|
What is the sql equivalent function of the python function .size()?
|
<p>I am trying to solve a problem on BigQuery: listing customers with consistent transactions for 6 months. I already solved it with Python, but I don't know how to replicate the code in SQL. This is the code:</p>
<pre><code>df.groupby(['Month','accounttoken'])['transactionid'].value_counts()
a=df[df.groupby(['Month','accounttoken'])['transactionid'].transform('count')>=5]
df_grouped = a.groupby(['Month', 'accounttoken','Name']).size().reset_index(name='num_transactions')
a1 = df_grouped[df_grouped['num_transactions']>=5]
</code></pre>
<p>This is what I have done with SQL so far:</p>
<pre><code>select Month, Name,accounttoken,count(transactionid) no_of_trans from data
group by Month, accounttoken,Name
having count(transactionid)>=5
</code></pre>
<p>I think what I need is the equivalent of the .size() function</p>
|
<p><code>count(*)</code> counts the number of rows in each group, which is what <code>.size()</code> does in pandas.</p>
<pre><code>SELECT count(*) as num_transactions
FROM data
GROUP BY Month, accounttoken, name
HAVING count(*) >= 5
</code></pre>
<p>You can use this SQL query as a replacement for the last two lines of your Python code. The SQL query you already wrote should work as well.</p>
|
python|pandas|google-bigquery
| 0
|
6,461
| 67,684,718
|
How to display `.value_counts()` in interval in pandas dataframe
|
<p>I need to display <code>.value_counts()</code> in interval in pandas dataframe. Here's my code</p>
<pre><code>prob['bucket'] = pd.qcut(prob['prob good'], 20)
grouped = prob.groupby('bucket', as_index = False)
kstable = pd.DataFrame()
kstable['min_prob'] = grouped.min()['prob good']
kstable['max_prob'] = grouped.max()['prob good']
kstable['counts'] = prob['bucket'].value_counts()
</code></pre>
<p>My Output</p>
<pre><code>min_prob max_prob counts
0 0.26 0.48 NaN
1 0.49 0.52 NaN
2 0.53 0.54 NaN
3 0.55 0.56 NaN
4 0.57 0.58 NaN
</code></pre>
<p>I know that I have a problem in the <code>kstable['counts']</code> syntax, but how do I solve this?</p>
|
<p>Use named aggregation to simplify your code; for the counts, <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.size.html" rel="nofollow noreferrer"><code>GroupBy.size</code></a> is applied to the <code>bucket</code> column to build the new <code>counts</code> column:</p>
<pre><code>prob['bucket'] = pd.qcut(prob['prob good'], 20)
kstable = prob.groupby('bucket', as_index = False).agg(min_prob=('prob good','min'),
max_prob=('prob good','max'),
counts=('bucket','size'))
</code></pre>
<p>Your solution should work with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>DataFrame.assign</code></a>:</p>
<pre><code>kstable = kstable.assign(counts = prob['bucket'].value_counts())
</code></pre>
|
python|pandas|dataframe
| 1
|
6,462
| 61,522,733
|
Reshape model input LSTM
|
<p>Hello everyone, I'm trying to build a model to predict emotion in speech.
Since the audio clips have different lengths, the feature matrices also have different lengths, and therefore I have a variable timestep.
I read in other answers that I can leave the input shape of the LSTM as follows: <code>model.add(LSTM(1, input_shape=(None, 31)))</code> </p>
<p>Next I need to reshape my input training data, which is a list of 2D arrays of variable timestep length (Tx, Features), into a list of 3D arrays of shape (1, Tx, Features) for <code>Model.fit()</code> </p>
<p>I hope the logic behind what I'm doing is correct, but anyhow I get an error when reshaping. I made sure that the types were correct; I don't know what the problem could be. Here's the code:</p>
<pre><code>def rnn(x_train, x_test, y_train, y_test):
for e in x_train:
print(type(e))
print(type(e.shape[0]))
print(type(e.shape[1]))
x_train[e]=np.reshape(e,(1,e.shape[0],e.shape[1]))
model = Sequential()
model.add(LSTM(1, input_shape=(None, 31)))
model.add(Dropout(0.2))
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(5, activation='softmax'))
opt = tf.keras.optimizers.Adam(lr=1e-3, decay=1e-5)
model.compile(loss='sparse_categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
</code></pre>
<pre><code>Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.3.3\plugins\python-ce\helpers\pydev\pydevd.py", line 1434, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.3.3\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/mp95/PycharmProjects/Thesis/Main.py", line 16, in <module>
rnn(x_train, x_test, y_train, y_test)
File "C:\Users\mp95\PycharmProjects\Thesis\models.py", line 15, in rnn
x_train[e]=np.reshape(e,(1,e.shape[0],e.shape[1]))
TypeError: only integer scalar arrays can be converted to a scalar index
<class 'numpy.ndarray'>
<class 'int'>
<class 'int'>
</code></pre>
<p>Thank you very much. My dataset is small (450 examples); any other tips on what I should do with the model are very welcome, as I'm new to this stuff!</p>
|
<p>I am also new to this, but from what I have found, the input to an LSTM is 3-D: <code>[samples, timesteps, features]</code>.
For more information, please visit this link:
<a href="https://stats.stackexchange.com/questions/264546/difference-between-samples-time-steps-and-features-in-neural-network">https://stats.stackexchange.com/questions/264546/difference-between-samples-time-steps-and-features-in-neural-network</a></p>
<p>Also, I think your problem is in this line:</p>
<pre><code>x_train[e]=np.reshape(e,(1,e.shape[0],e.shape[1]))
</code></pre>
<p>Here <code>e</code> is the array itself, not an index, so using it to index the list <code>x_train</code> raises the <code>TypeError</code> you see; enumerate the list to get an integer index instead.</p>
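<p>A corrected version of that loop, assuming <code>x_train</code> is a Python list of 2D arrays:</p>
<pre><code>for i, e in enumerate(x_train):
    x_train[i] = np.reshape(e, (1, e.shape[0], e.shape[1]))
</code></pre>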
|
python|tensorflow|keras|lstm|recurrent-neural-network
| 0
|
6,463
| 61,581,042
|
Make a new column based on other columns id values - Pandas
|
<p>How can I make new columns based on another column's id values? </p>
<p>The data look like this.</p>
<pre><code>value id
551 54089
12 54089
99 54089
55 73516
123 73516
431 73516
742 74237
444 74237
234 74237
</code></pre>
<p>I want the dataset to look like this.</p>
<pre><code> v1 v2 v3
54089 551 12 99
73516 55 123 431
74237 742 444 234
</code></pre>
|
<p>Use <code>groupby</code> with <code>unstack</code>:</p>
<pre><code>df = df.groupby('id')['value'].apply(lambda x: pd.Series(x.tolist(),
index=['v1', 'v2', 'v3']))\
.unstack()
# or
df.groupby('id')['value'].apply(lambda x: pd.DataFrame(x.tolist(),
index=['v1', 'v2', 'v3']).T)
print(df)
v1 v2 v3
id
54089 551 12 99
73516 55 123 431
74237 742 444 234
</code></pre>
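<p>An alternative that avoids hard-coding the column names, assuming every id has the same number of rows, is to pivot on a per-group cumulative count:</p>
<pre><code>out = (df.assign(col='v' + (df.groupby('id').cumcount() + 1).astype(str))
         .pivot(index='id', columns='col', values='value'))
</code></pre>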
|
python|pandas|dataset
| 3
|
6,464
| 68,657,888
|
Pandas Taking the Nlargest ignoring 0
|
<p>Hi is there a way in Pandas to do nlargest but only take values excluding 0? For example if we have [5, 0, 0] and we want to take the 2 largest, then it would only return 5 because the other values are 0?</p>
|
<p>You can simply remove the value 0 before you do nlargest:</p>
<pre><code>l = pd.Series([5,0,0])
l[l != 0].nlargest(2)
0 5
dtype: int64
</code></pre>
<p>Similarly, if you have a list of numbers you want to exclude:</p>
<pre><code>l[~l.isin(list_of_excluded_numbers)].nlargest(2)
</code></pre>
|
pandas
| -1
|
6,465
| 53,299,543
|
Is there any method in tensorflow like get_output in lasagne
|
<p>I found that it is easy to use lasagne to make a graph like this.</p>
<pre class="lang-py prettyprint-override"><code>import lasagne.layers as L
class A:
def __init__(self):
self.x = L.InputLayer(shape=(None, 3), name='x')
        self.y = self.x + 1
def get_y_sym(self, x_var, **kwargs):
y = L.get_output(self.y, {self.x: x_var}, **kwargs)
return y
</code></pre>
<p>Through the method <code>get_y_sym</code>, we get a tensor, not a value; I can then use this tensor as the input of another graph.</p>
<p>But if I use tensorflow, how could I implement this?</p>
|
<p>I'm not familiar with lasagne but you should know that ALL of TensorFlow uses graph based computation (unless you use tf.Eager, but that's another story). So by default something like:</p>
<p><code>net = tf.nn.conv2d(...)</code> </p>
<p>returns a reference to a Tensor object. In other words, <code>net</code> is NOT a value, it is a reference to the output of the convolution node created by <code>tf.nn.conv2d(...)</code>. </p>
<p>These can then be chained:</p>
<p><code>net2 = tf.nn.conv2d(net, ...)</code> and so on.</p>
<p>To get "values" one has to open a <code>tf.Session</code>:</p>
<pre><code>with tf.Session() as sess:
net2_eval = sess.run(net2)
</code></pre>
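<p>A rough graph-mode sketch of the lasagne pattern above, assuming TF1-style placeholders:</p>
<pre><code>import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 3), name='x')
y = x + 1  # y is a symbolic tensor; it can feed further graph nodes

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # feed_dict plays the role of {self.x: x_var}
</code></pre>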
|
tensorflow|theano|lasagne
| 0
|
6,466
| 52,996,993
|
reshaping daily data on intraday values pandas
|
<p>I have a DF that looks like this:</p>
<pre><code> Last
1996-02-26 09:31:00 65.750000
1996-02-26 09:32:00 65.890625
1996-02-26 09:33:00 NaN
1996-03-27 09:31:00 266.710000
1996-03-27 09:32:00 266.760000
1996-03-27 09:33:00 266.780000
</code></pre>
<p>I want to reshape my data to look like such:</p>
<pre><code> 1996-02-26 1996-03-27
9:31:00 65.75 266.71
9:32:00 65.890625 266.76
9:33:00 NaN 266.78
</code></pre>
<p>How can I do this in pandas?</p>
|
<p>If your index is <code>str</code> dtype, create a MultiIndex and call <code>unstack</code>:</p>
<pre><code>idx = pd.MultiIndex.from_arrays(zip(*df.index.str.split()))
df = df.set_index(idx)['Last'].unstack(0)
print(df)
1996-02-26 1996-03-27
09:31:00 65.750000 266.71
09:32:00 65.890625 266.76
09:33:00 NaN 266.78
</code></pre>
<hr>
<p>An alternative solution if the index values are <code>datetimes</code>:</p>
<pre><code>idx = pd.MultiIndex.from_arrays([df.index.time, df.index.floor('D')])
df = df.set_index(idx)['Last'].unstack()
print(df)
1996-02-26 1996-03-27
09:31:00 65.750000 266.71
09:32:00 65.890625 266.76
09:33:00 NaN 266.78
</code></pre>
|
python|pandas
| 2
|
6,467
| 65,756,217
|
custom sorting values of dataframe
|
<p>I want to custom-sort my dataframe. This is a sample dataframe with the same structure as mine:</p>
<pre><code>data = {'name':['name1','name1','name1','name2','name2','name2','name3','name3','name3'],
'col1':[19, 38, 25, 10, 39, 28, 25, 20, 23],
'col2':[29, 28, 25, 20, 19, 18, 15, 10, 13],
'col3':[9, 8, 5, 0, 9, 8, 5, 0, 3]}
df = pd.DataFrame(data, index =['2020-12-31',
'2021-01-31',
'2021-02-28',
'2020-12-31',
'2021-01-31',
'2021-02-28',
'2020-12-31',
'2021-01-31',
'2021-02-28'])
df.index.name = 'date'
df.reset_index(inplace = True)
print(df)
</code></pre>
<p>The output:</p>
<pre><code> date name col1 col2 col3
0 2020-12-31 name1 19 29 9
1 2021-01-31 name1 38 28 8
2 2021-02-28 name1 25 25 5
3 2020-12-31 name2 10 20 0
4 2021-01-31 name2 39 19 9
5 2021-02-28 name2 28 18 8
6 2020-12-31 name3 25 15 5
7 2021-01-31 name3 20 10 0
8 2021-02-28 name3 23 13 3
</code></pre>
<p>Firstly, I sorted date column in this way:</p>
<pre><code>df.sort_values(by=['date'], inplace = True)
</code></pre>
<p>The second output:</p>
<pre><code> date name col1 col2 col3
0 2020-12-31 name1 19 29 9
3 2020-12-31 name2 10 20 0
6 2020-12-31 name3 25 15 5
1 2021-01-31 name1 38 28 8
4 2021-01-31 name2 39 19 9
7 2021-01-31 name3 20 10 0
2 2021-02-28 name1 25 25 5
5 2021-02-28 name2 28 18 8
8 2021-02-28 name3 23 13 3
</code></pre>
<p>Now I want to sort the name column. I tried categorical sorting, but I can't achieve my desired output. So I want to sort the name column without disturbing the date order. How can I do it?</p>
<p>Expected output:</p>
<pre><code> date name col1 col2 col3
0 2020-12-31 name3 25 15 5
3 2020-12-31 name2 10 20 0
6 2020-12-31 name1 19 29 9
1 2021-01-31 name3 20 10 0
4 2021-01-31 name2 39 19 9
7 2021-01-31 name1 38 28 8
2 2021-02-28 name3 23 13 3
5 2021-02-28 name2 28 18 8
8 2021-02-28 name1 25 25 5
</code></pre>
|
<p>If I understand, you want to sort groups of three rows:</p>
<pre><code>groups = df.reset_index().loc[:,"date":].groupby(lambda x: int(x/3))
df[:] = groups.apply(lambda x: x.sort_values("name", ascending = False)).values
print(df)
# date name col1 col2 col3
#0 2020-12-31 name3 25 15 5
#3 2020-12-31 name2 10 20 0
#6 2020-12-31 name1 19 29 9
#1 2021-01-31 name3 20 10 0
#4 2021-01-31 name2 39 19 9
#7 2021-01-31 name1 38 28 8
#2 2021-02-28 name3 23 13 3
#5 2021-02-28 name2 28 18 8
#8 2021-02-28 name1 25 25 5
</code></pre>
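<p>Since the goal is date ascending and name descending, a single <code>sort_values</code> call with per-column directions gives the same result:</p>
<pre><code>df = df.sort_values(['date', 'name'], ascending=[True, False])
</code></pre>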
|
pandas|dataframe|sorting
| 0
|
6,468
| 65,654,800
|
Plot from matplotlib and pandas is not working with data json file because ValueError
|
<p>I tried to generate a simple plot, but I get an error.
My code:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
real_estates = pd.read_json('/Users/pablo/Desktop/project/REscraper/real_estates.json')
# print(real_estates)
plt.plot(real_estates.size, real_estates.price)
plt.show()
</code></pre>
<p>real_estates.json</p>
<pre><code>[
{"price": 298000.0, "size": 47.45, "rooms": 3, "price_per_square_meter": 6280.0},
{"price": 599000.0, "size": 73.0, "rooms": 2, "price_per_square_meter": 8205.0},
[...]
]
</code></pre>
<p>Error:</p>
<pre><code>ValueError: x and y must have same first dimension, but have shapes (1,) and (2500,)
</code></pre>
<p>Can someone help?</p>
|
<p><a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.size.html" rel="nofollow noreferrer" title="Pandas docs">size</a> happens to be an attribute of a DataFrame (the total number of elements), so <code>real_estates.size</code> returns a scalar instead of your column. Use a <a href="https://docs.python.org/3/reference/expressions.html#subscriptions" rel="nofollow noreferrer" title="Python docs">subscription</a> or <code>.loc</code> to <em>select</em> the column:</p>
<pre><code>plt.plot(real_estates['size'], real_estates['price'])
# or
plt.plot(real_estates.loc[:,'size'], real_estates.loc[:,'price'])
</code></pre>
<hr />
<p><a href="https://pandas.pydata.org/docs/user_guide/indexing.html#different-choices-for-indexing" rel="nofollow noreferrer" title="Pandas docs">Indexing and selecting data</a></p>
|
python|json|pandas|matplotlib
| 0
|
6,469
| 65,751,866
|
pandas pivot table issue - assuming it is how i am structuring it?
|
<p>I have a dataset that contains video game platforms and the year that games were released for each of them.</p>
<p>What I'm trying to do is end up with a dataframe that has the count of titles released each year, by platform.</p>
<p>my initial dataframe looks like this:</p>
<pre><code>platform year
0 Wii 2006.0
1 NES 1985.0
2 Wii 2008.0
3 Wii 2009.0
4 GB 1996.0
5 GB 1989.0
6 DS 2006.0
7 Wii 2006.0
8 Wii 2009.0
9 NES 1984.0
10 DS 2005.0
11 DS 2005.0
12 GB 1999.0
13 Wii 2007.0
14 X360 2010.0
15 Wii 2009.0
16 PS3 2013.0
17 PS2 2004.0
18 SNES 1990.0
19 DS 2005.0
</code></pre>
<p>I'm using a groupby to get them together:</p>
<pre><code>df = df.sort_values(['year']).groupby(['year'])['platform'].value_counts()
</code></pre>
<p>which gets me close:</p>
<pre><code>year platform
1980.0 2600 9
1981.0 2600 46
1982.0 2600 36
1983.0 2600 11
NES 6
1984.0 NES 13
2600 1
1985.0 NES 11
2600 1
DS 1
</code></pre>
<p>but this is a series, and with the year being the index I can't stick this into something like a heatmap.</p>
<p>here is an example of the desired output:</p>
<pre><code> year platform #_titles
1980 2600 9
1981 2600 46
1982 2600 36
1983 2600 11
1983 NES 6
1984 NES 13
1984 2600 1
1985 NES 11
1985 2600 1
1985 DS 1
1985 PC 1
1986 NES 19
1986 2600 2
1987 NES 10
1987 2600 6
1988 NES 11
1988 2600 2
1988 GB 1
1988 PC 1
1989 GB 10
</code></pre>
<p>I was thinking I might need to use a pivot_table(), but this is something I am still quite new to and am struggling to implement.</p>
<p>I tried something like:</p>
<pre><code>df = df.pivot_table(df,index='year',columns = 'platform',aggfunc = 'count')
</code></pre>
<p>but my output then is just the year.</p>
<p>clearly i am doing something wrong, and figure it is time to stop beating my virtual head on juypter notebook and ask for some advice.</p>
<p>I am fine with getting the original group method to work, or using a pivot table either way - I just would appreciate some pointers on what i'm doing wrong so i can correct it.</p>
<p>Thanks for your time in advance,</p>
<p>Jared</p>
<p>edit: here is the result from the first answer (which would be perfect if it had the aggfunc in it - not sure why that isn't there?):</p>
<pre><code>year    platform
1980.0  2600
1981.0  2600
1982.0  2600
1983.0  2600
        NES
1984.0  2600
        NES
</code></pre>
|
<p>Here is the solution with pivot table:</p>
<pre><code>res = pd.pivot_table(df,index=['year', 'platform'],aggfunc = 'size')
>>> print(res)
year platform
1984.0 NES 1
1985.0 NES 1
1989.0 GB 1
1990.0 SNES 1
1996.0 GB 1
1999.0 GB 1
2004.0 PS2 1
2005.0 DS 3
2006.0 DS 1
Wii 2
2007.0 Wii 1
2008.0 Wii 1
2009.0 Wii 3
2010.0 X360 1
2013.0 PS3 1
</code></pre>
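<p>To get back to the flat three-column layout you sketched, reset the index and name the counts (assuming <code>res</code> is the Series printed above):</p>
<pre><code>res = res.reset_index(name='num_titles')
</code></pre>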
|
python|pandas|pandas-groupby|pivot-table
| 1
|
6,470
| 65,578,017
|
Error: Error when checking input: expected dense_Dense1_input to have 3 dimension(s). but got array with shape 1,9
|
<p>I'm really new to tensorflow.js and I'm trying to build a simple model that tells you which side of the canvas you clicked on.</p>
<pre><code>const model = tf.sequential();
model.add(
tf.layers.dense({
units: 200,
activation: "sigmoid",
inputShape: [0, 1],
})
);
model.add(
tf.layers.dense({
units: 2,
activation: "softmax",
})
);
model.compile({
optimizer: optimizer,
loss: "categoricalCrossentropy",
metrics: ["accuracy"],
});
</code></pre>
<p>my training and testing data</p>
<pre><code> const yTrain = tf.tensor2d(
[10, 130, 60, 20, 150, 110, 3, 160, 99],
[1, 9]
);
const xTrain = tf.tensor2d([0, 1, 0, 0, 1, 1, 0, 1, 0], [1, 9]);
const yTest = tf.tensor2d(
[5, 106, 33, 88, 104, 140, 7, 60, 154],
[1, 9]
);
const xTest = tf.tensor2d([0, 1, 0, 0, 1, 1, 0, 0, 1], [1, 9]);
</code></pre>
|
<p>The <code>inputShape</code> has two dimensions, therefore the features (here <code>xTrain</code> and <code>xTest</code>) should have 3 dimensions (the batch dimension comes on top of the input shape).</p>
<p>Additionally, it does not make sense for a dimension to have size 0 (that would mean the tensor is empty).</p>
<p>Given your xTrain and xTest shape, <code>[a, b]</code>, the inputShape should be <code>[b]</code>.</p>
<p>This shape mismatch between the model and the training data has been discussed <a href="https://stackoverflow.com/questions/51790230/expected-dense-dense1-input-to-have-shape-a-but-got-array-with-shape-b/51791892#51791892">here</a> and <a href="https://stackoverflow.com/questions/54689808/how-to-add-images-in-a-tensorflow-js-model-and-train-the-model-for-given-images/54690176#54690176">there</a></p>
|
javascript|tensorflow|tensorflow.js
| 0
|
6,471
| 63,515,884
|
Classify text from a Pandas Series
|
<p>I have the following dataframe which is built from parsing raw text files into a list and then into the dataframe.</p>
<pre><code> Content
0 POLITICS
1 A Renewed Push in New York to Open Police Disciplinary Records
2 11:59 PM ET
3 CORRECTIONS
4 Corrections & Amplifications
5 11:25 PM ET
6 NEW YORK
7 New York City to Have Curfew as Protests Over George Floyds Death Continue
8 10:20 PM ET
9 U.S.
10 Fresh Data Shows Heavy Coronavirus Death Toll in Nursing Homes
11 8:49 PM ET
12 BUSINESS
13 Reports of Violence Against Journalists Mount as U.S. Protests Intensify
14 8:05 PM ET
15 MEDIA & MARKETING
16 Music Labels Suspend Work in Support of Demonstrations
17 7:32 PM ET
18 REVIEW & OUTLOOK
19 Dont Call in the Troops
20 7:31 PM ET
21 NEW YORK
22 Manhattan Stores Prepare for Another Night of Looting
23 7:31 PM ET
24 OPINION
25 Dave Patrick Underwood, RIP
26 7:30 PM ET
27 REVIEW & OUTLOOK
28 Courts Arent Financial Clearinghouses
29 7:27 PM ET
</code></pre>
<p>I would like to know if there is any way to split this column into 3 columns like these: <code>['Topic','Headline','Time']</code>. Each row contains data for one of these columns. I would like to split them without doing any manual work. I think the entire dataframe does not follow the pattern of Topic, Headline, Time. At some point the pattern changes, since the raw data was created by hand. So if the rows could be classified based on regex or something that allows maintaining the time-series structure, that would be great.</p>
|
<h2>Addresses: <em>At some point the pattern changes</em></h2>
<ul>
<li>Using a list comprehension, find data for each header
<ul>
<li>The order of list creation matters, <code>time</code>, <code>top</code>, and then <code>head</code>.</li>
<li>The <code>time</code> pattern must be consistent with 2 character time zones, contains <code>AM</code> or <code>PM</code>, and <code>hh:mm</code> or <code>h:mm</code>.</li>
<li>The <code>top</code> pattern should maintain the pattern of all uppercase characters and not in <code>time</code>.</li>
<li><code>head</code> is anything not in <code>time</code> or <code>top</code>.</li>
</ul>
</li>
<li>The following implementation uses fairly simple matching
<ul>
<li>There are undoubtedly more sophisticated regular expressions that could be applied.</li>
</ul>
</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import re
import pandas as pd

cont = df.Content.tolist()  # raw lines from the Content column

# find components for each list
time = [v for v in cont if (len(v) in [10, 11]) and (':' in v)]  # the time pattern must be consistent
top = [v for v in cont if ''.join(re.findall(r'\w', v)).isupper() and (v not in time)]  # topic characters must be all uppercase
head = [v for v in cont if v not in time + top]  # anything not in the other two lists

# create the dataframe
df = pd.DataFrame({'Time': time, 'Topic': top, 'Headline': head})
</code></pre>
<h2>Addresses: Column maintains a continuous pattern</h2>
<ul>
<li>I'd convert the column to a list and use string slicing</li>
<li>This only works for a continuous pattern
<ul>
<li>It doesn't address <em>At some point the pattern changes since the raw data was created by hand</em>.</li>
</ul>
</li>
</ul>
<pre class="lang-py prettyprint-override"><code># given your dataframe as df
# create a new dataframe with 3 columns
df_new = pd.DataFrame(columns=['cat', 'desc', 'time'])
# select data for columns
df_new.cat = df.Content.tolist()[0::3]
df_new.desc = df.Content.tolist()[1::3]
df_new.time = df.Content.tolist()[2::3]
# display(df_new)
cat desc time
0 POLITICS A Renewed Push in New York to Open Police Disciplinary Records 11:59 PM ET
1 CORRECTIONS Corrections & Amplifications 11:25 PM ET
2 NEW YORK New York City to Have Curfew as Protests Over George Floyds Death Continue 10:20 PM ET
3 U.S. Fresh Data Shows Heavy Coronavirus Death Toll in Nursing Homes 8:49 PM ET
4 BUSINESS Reports of Violence Against Journalists Mount as U.S. Protests Intensify 8:05 PM ET
5 MEDIA & MARKETING Music Labels Suspend Work in Support of Demonstrations 7:32 PM ET
6 REVIEW & OUTLOOK Dont Call in the Troops 7:31 PM ET
7 NEW YORK Manhattan Stores Prepare for Another Night of Looting 7:31 PM ET
8 OPINION Dave Patrick Underwood, RIP 7:30 PM ET
9 REVIEW & OUTLOOK Courts Arent Financial Clearinghouses 7:27 PM ET
</code></pre>
<h3>Using a loop</h3>
<pre class="lang-py prettyprint-override"><code>df_new = pd.DataFrame()
for i, col in enumerate(['Topic','Headline','Time']):
df_new[col] = df.Content.tolist()[i::3]
</code></pre>
|
python|pandas
| 3
|
6,472
| 53,458,023
|
for loop for row wise comparison in pandas
|
<p>I have following pandas dataframe</p>
<pre><code>code tank prod_receipt tank_prod
12345 1 MS MS
23452 2 MS No Data
23333 2 HS HS
14567 3 MS No Data
12343 2 MS MS
</code></pre>
<p>I want to generate a flag wherein it checks whether <code>prod_receipt</code> is equal to <code>tank_prod</code>. My desired dataframe is:</p>
<pre><code>code tank prod_receipt tank_prod Flag
12345 1 MS MS Equal
23452 2 MS No Data No Data
23333 2 HS HS Equal
14567 3 MS No Data No Data
12343 2 MS HS Not Equal
</code></pre>
<p>How can I do it in pandas?</p>
|
<p>Don't use loops here, because they are slow; better is <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.select.html" rel="nofollow noreferrer"><code>numpy.select</code></a>:</p>
<pre><code>import numpy as np

m1 = df['tank_prod'] == 'No Data'
m2 = df['prod_receipt'] == df['tank_prod']
df['new'] = np.select([m1, m2], ['No Data', 'Equal'],'Not Equal')
print (df)
code tank prod_receipt tank_prod new
0 12345 1 MS MS Equal
1 23452 2 MS No Data No Data
2 23333 2 HS HS Equal
3 14567 3 MS No Data No Data
4 12343 2 MS HS Not Equal
</code></pre>
<p>If need only one condition use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a>:</p>
<pre><code>m2 = df['prod_receipt'] == df['tank_prod']
df['new'] = np.where(m2, 'Equal','Not Equal')
print (df)
code tank prod_receipt tank_prod new
0 12345 1 MS MS Equal
1 23452 2 MS No Data Not Equal
2 23333 2 HS HS Equal
3 14567 3 MS No Data Not Equal
4 12343 2 MS HS Not Equal
</code></pre>
<p><strong>Performance</strong>:</p>
<p>It depends on the number of rows and the number of matched values:</p>
<pre><code>#4k rows
df = pd.concat([df] * 1000, ignore_index=True)
In [90]: %%timeit
...: m1 = df['tank_prod'] == 'No Data'
...: m2 = df['prod_receipt'] == df['tank_prod']
...: df['new'] = np.select([m1, m2], ['No Data', 'Equal'],'Not Equal')
...:
2.89 ms Β± 64.3 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each)
#loop solution
In [91]: %%timeit
...: df["Flag"] = df.apply(lambda x: "Equal" if x["prod_receipt"] == x["tank_prod"] else ("Not Equal" if x["prod_receipt"] != x["tank_prod"] and x["tank_prod"] != "No Data" else "No Data"), axis =1)
...:
278 ms Β± 7.04 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
</code></pre>
|
python|pandas
| 4
|
6,473
| 71,893,469
|
Converting a dataframe stringcolumn into multiple columns and rearrange each column based on the labels
|
<p>I want to convert a string column with multiple labels into separate columns, one for each label, and rearrange the dataframe so that identical labels end up in the same column. For e.g.:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Label</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>apple, tom, car</td>
</tr>
<tr>
<td>1</td>
<td>apple, car</td>
</tr>
<tr>
<td>2</td>
<td>tom, apple</td>
</tr>
</tbody>
</table>
</div>
<p>to</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Label</th>
<th>0</th>
<th>1</th>
<th>2</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>apple, tom, car</td>
<td>apple</td>
<td>car</td>
<td>tom</td>
</tr>
<tr>
<td>1</td>
<td>apple, car</td>
<td>apple</td>
<td>car</td>
<td>None</td>
</tr>
<tr>
<td>2</td>
<td>tom, apple</td>
<td>apple</td>
<td>None</td>
<td>tom</td>
</tr>
</tbody>
</table>
</div>
<pre><code>df["Label"].str.split(',',3, expand=True)
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>0</th>
<th>1</th>
<th>2</th>
</tr>
</thead>
<tbody>
<tr>
<td>apple</td>
<td>tom</td>
<td>car</td>
</tr>
<tr>
<td>apple</td>
<td>car</td>
<td>None</td>
</tr>
<tr>
<td>tom</td>
<td>apple</td>
<td>None</td>
</tr>
</tbody>
</table>
</div>
<p>I know how to split the string column, but I can't really figure out how to sort the label columns, especially since the number of labels per sample differs.</p>
|
<p>Try:</p>
<pre class="lang-py prettyprint-override"><code>df = df.assign(xxx=df.Label.str.split(r"\s*,\s*")).explode("xxx")
df["Col"] = df.groupby("xxx").ngroup()
df = (
df.set_index(["ID", "Label", "Col"])
.unstack(2)
.droplevel(0, axis=1)
.reset_index()
)
df.columns.name = None
print(df)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> ID Label 0 1 2
0 0 apple, tom, car apple car tom
1 1 apple, car apple car NaN
2 2 tom, apple apple NaN tom
</code></pre>
|
python|pandas|dataframe|sorting
| 1
|
6,474
| 55,224,109
|
Custom upsampling of images with TensorFlow
|
<p>I have trouble implementing a layer function in TensorFlow. Maybe someone with more experience has an idea how to solve this. The usage of the function should be the following:</p>
<p>In: a <code>[B x W x H x 2]</code> tensor called <code>A</code></p>
<p>Out: a new tensor called <code>B</code> of size <code>[B x p*W x q*W]</code> which is filled like:</p>
<pre><code>for b from 0 to B: #loop over batches
for w from 0 to W: # loop over width
for h from 0 to H: # loop over height
B[b,w*p:w*p+p,h*q:h*q+q] = tf.random.normal(shape=[p,q],
mean=A[b,w,h,0],
stddev=A[b,w,h,1])
</code></pre>
<p>What I basically want to do is up-sample an image with "random (gaussian) interpolation".</p>
<p>I fail at creating an empty tensor and filling it, as I would usually do according to the pseudocode. What I tried is TensorFlow's <code>tf.map_fn()</code> function, which unfortunately didn't work.</p>
<p>The idea is to use this layer later as an alternative to mean- or max-pooling.</p>
<p>Maybe there is a simpler way to do this?</p>
<p>Any help appreciated. Thanks.</p>
|
<p>You can do that in a vectorized way (which should be much faster than looping or mapping) like this:</p>
<pre><code>import tensorflow as tf
import numpy as np
def gaussian_upsampling(A, p, q):
s = tf.shape(A)
B, W, H, C = s[0], s[1], s[2], s[3]
# Add two dimensions to A for tiling
A_exp = tf.expand_dims(tf.expand_dims(A, 2), 4)
# Tile A along new dimensions
A_tiled = tf.tile(A_exp, [1, 1, p, 1, q, 1])
# Reshape
A_tiled = tf.reshape(A_tiled, [B, W * p, H * q, C])
# Extract mean and std
mean_tiled = A_tiled[:, :, :, 0]
std_tiled = A_tiled[:, :, :, 1]
# Make base random value
rnd = tf.random.normal(shape=[B, W * p, H * q], mean=0, stddev=1, dtype=A.dtype)
# Scale and shift random value
return rnd * std_tiled + mean_tiled
# Test
with tf.Graph().as_default(), tf.Session() as sess:
tf.random.set_random_seed(100)
mean = tf.constant([[[ 1.0, 2.0, 3.0],
[ 4.0, 5.0, 6.0]],
[[ 7.0, 8.0, 9.0],
[10.0, 11.0, 12.0]]])
std = tf.constant([[[0.1, 0.2, 0.3],
[0.4, 0.5, 0.6]],
[[0.7, 0.8, 0.9],
[1.0, 1.1, 1.2]]])
A = tf.stack([mean, std], axis=-1)
with np.printoptions(precision=2, suppress=True):
print(sess.run(gaussian_upsampling(A, 3, 2)))
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>[[[ 0.94 0.97 1.82 1.67 2.89 2.96]
[ 1.04 0.78 2.23 2.02 2.95 3.04]
[ 0.9 0.96 1.84 1.98 2.74 3.06]
[ 3.89 4.12 5.72 4.32 6.02 5.7 ]
[ 3.47 4.27 4.39 4.85 6.38 5.32]
[ 3.21 3.98 4.64 4.31 5.72 5.96]]
[[ 8.15 7.08 7.33 7.78 8.75 9.95]
[ 7.37 7.29 8.27 8.26 8.56 8.17]
[ 5.91 7.95 7.9 7.81 8.43 8.64]
[11.12 11.49 11.95 11.74 11.43 12.3 ]
[ 9.98 9.66 9.21 10.2 12.78 12.13]
[ 8.33 10.37 11.88 11.44 12.96 11.73]]]
</code></pre>
|
python|tensorflow
| 1
|
6,475
| 56,519,204
|
How to compute a moving average of value for the previous date untill the first event?
|
<p>I have a dataframe with data where the first column is an identification number, ID1, the second is a date, DATE, and the third is some value, VALUE.</p>
<pre><code>d = {'ID1': [1,2,3,4,1,2,4,1,3,2,4,1],
'DATE': ['1/06/2016', '1/06/2016','2/06/2016','1/06/2016','3/06/2016', '4/06/2016','2/06/2016','5/06/2016','1/06/2016', '2/06/2016','2/06/2016','4/06/2016'], 'VALUE':[1.0, 3.0, 4.0, 2.0, 5.0, 0.6, 9.0, 10.0, 8.0, 100.0, 23.0, 1.0]}
df = pd.DataFrame(d)
</code></pre>
<p>I want to compute the average value, for each ID1, for the past dates. So, for instance, in the first row, where ID1 = 1, I would have a value of 5.33, for the second row, where ID1=2, I would have 50.3, and so on and so forth. If the last value is reached (for instance, the last value of ID1=1), the moving average should be the value of VALUE (1.0 in this case).</p>
<p>I know of the existence of the rolling function, but I do not see exactly how to apply it here. I guess I should do some re-indexing with the DATE column and a groupby in order to group the data by the value of the ID1 column. </p>
<p>Can someone give me some advice?
Thanks!</p>
|
<p>I think you are looking for <code>expanding</code></p>
<pre><code>s=df.groupby('ID1').VALUE.expanding(min_periods=1).mean().reset_index(level=0,drop=True)
df['new']=s
</code></pre>
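<p>If the current row should be excluded from its own average (only strictly past dates), shift within each group first - a guess at the intent:</p>
<pre><code>df['new'] = (df.groupby('ID1').VALUE
               .apply(lambda v: v.shift().expanding().mean())
               .reset_index(level=0, drop=True))
</code></pre>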
|
python|pandas|moving-average
| 1
|
6,476
| 66,964,560
|
How do I calculate percentage change of a timeseries of daily data
|
<p>I have a daily timeseries of index data and want to take yearly percentage changes of it. If I use <code>DataFrame.pct_change(periods=...)</code> I will have to define the exact number of days till the same day last year, which is not correct as the number of working days differs from year to year. Does anyone have any idea how to get the changes from the same day a year back?</p>
<p>The code may look like:</p>
<pre><code>import pandas as pd
list=[]
list=[[7.71],[7.79],[6.80],[6.44],[6.46],[6.80]]
df = pd.DataFrame(list, columns=['index'], index=['2016-01-04','2016-01-05','2016-01-06','2017-01-04','2017-01-05','2017-01-06'])
</code></pre>
<p>and I want the output as follows:</p>
<pre><code>2017-01-04   -16.45%
2017-01-05   -17.10%
2017-01-06     0.00%
</code></pre>
|
<p><strong>EDIT</strong>: starting from the good answer from @Pablo C: given OP's definition of the DataFrame, we first need to convert the index to <code>DatetimeIndex</code>, otherwise @Pablo C's answer will throw <code>NotImplementedError: Not supported for type Index</code></p>
<pre><code>import pandas as pd
list=[]
list=[[7.71],[7.79],[6.80],[6.44],[6.46],[6.80]]
df = pd.DataFrame(list, columns=['index'], index=['2016-01-04','2016-01-05','2016-01-06','2017-01-04','2017-01-05','2017-01-06'])
df.index = pd.to_datetime(df.index)
do = pd.DateOffset(years = 1)
df.pct_change(freq = do).dropna().mul(100)
# index
# 2017-01-04 -16.472114
# 2017-01-05 -17.073171
# 2017-01-06 0.000000
</code></pre>
|
python|pandas|percentage
| 0
|
6,477
| 68,107,160
|
fill all nan values in a dataframe with values from a column
|
<p>I have a df like this</p>
<pre><code>index a b c
0 0 0 1
1 nan 1 2
2 0 1 3
3 1 nan 4
4 1 0 5
5 nan 0 6
6 nan nan 7
</code></pre>
<p>I want to fill the first 2 columns(actually first 20ish) with the value from the last column, i.e.</p>
<pre><code>index a b c
0 0 0 1
1 2 1 2
2 0 1 3
3 1 4 4
4 1 0 5
5 6 0 6
6 7 7 7
</code></pre>
<p>I was thinking of using something like <code>df.iloc[:,0:1].fillna(df['c'])</code>; however, it only works when I am selecting one column - if more than one is selected, everything remains nan. I want to fill the first 20ish columns all using the values from the last column, using iloc (or using indices, not column names).</p>
|
<p>Just do</p>
<pre><code>out = df.fillna(dict.fromkeys(list(df),df.c))
Out[206]:
a b c
0 0.0 0.0 1
1 2.0 1.0 2
2 0.0 1.0 3
3 1.0 4.0 4
4 1.0 0.0 5
5 6.0 0.0 6
6 7.0 7.0 7
</code></pre>
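<p>Equivalently, without building the dict, broadcast the column across the frame:</p>
<pre><code>out = df.apply(lambda col: col.fillna(df.c))
</code></pre>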
|
python|pandas|dataframe
| 2
|
6,478
| 59,081,876
|
How to overcome the u200d unicode while reading excel files using pandas
|
<p>The Excel file contains Indian-language data. The file is read correctly, but the displayed content shows \u200d in between characters. I need to remove it to do further processing of the data. Kindly help.</p>
|
<p>Try this:</p>
<pre><code>s = 'This is some \u200d text that has to be cleaned\u200d! it\u200d annoying!'
s = s.encode('ascii', 'ignore').decode('ascii')
print(s)
</code></pre>
<p>output:</p>
<pre><code>This is some  text that has to be cleaned! it annoying!
</code></pre>
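<p>For a whole DataFrame column read from Excel, the vectorised equivalent would be something like this (the column name is a placeholder):</p>
<pre><code>df['text'] = df['text'].str.replace('\u200d', '')
</code></pre>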
|
python|pandas|nlp
| 0
|
6,479
| 59,305,007
|
Compute dataframe columns from a string formula in variables?
|
<p>I use an Excel file in which I define sensor names and a formula that creates a new "synthetic" sensor based on real sensors. I would like to write the formula as a plain string, for example "y1 + y2 + y3" and not "df['y1'] + df['y2'] + df['y3']", but I don't see which method to use.</p>
<p>Excel file example:</p>
<p><a href="https://i.stack.imgur.com/Z7rRo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z7rRo.png" alt="Excel file" /></a></p>
<p>My script must therefore create a new sensor for each line of this excel file. This new sensor will then be uploaded to my database. The number of sensors to calculate the new value is variable!</p>
<p>Here is my code sample :</p>
<pre><code># From excel file
sensor_cell = '008253_sercit_sercit_batg_b_flr0_g_tctl_z1_tr_prval|officielles_darksky_bierset_temp|officielles_darksky_uccle_temp|005317_esa_001_hur_piece_030110'
formula_cell = "df['y1'] + df['y2'] + df['y3'] + df['y4']"
# formula_cell = 'y1+y2+y3+y4' --> what I would like to be able to write in my excel file cell
sensors = sensor_cell.split('|')
df = []
for position, sensor in enumerate(sensors, start=1):
    df_y = search_ES(sensor)  # Function that returns a df with timestamp and value from my database
    df_y = df_y.rename(columns={'value': "y" + str(position)})
    df.append(df_y)
df = pd.concat(df, axis=1, sort=True)
# I would like to have :
# y1 = df['y1']
# y2 = df['y2']
# y3 = df['y3']
# y4 = df['y4']
df = df.dropna()
print(df)
df['value'] = eval(formula_cell) # Formula from excel file
print(df)
</code></pre>
<p>df before applying the formula :</p>
<pre><code> y1 y2 y3 y4
2019-12-11 00:00:00 20.500000 5.62 6.03 29
2019-12-11 01:00:00 21.180000 5.54 6.15 30
2019-12-11 02:00:00 21.020000 5.28 6.29 30
2019-12-11 03:00:00 20.760000 4.99 6.36 29
2019-12-11 04:00:00 20.680000 4.80 6.26 30
2019-12-11 05:00:00 20.760000 4.63 6.07 30
2019-12-11 06:00:00 20.900000 4.49 5.91 30
2019-12-11 07:00:00 20.920000 4.20 6.05 30
2019-12-11 08:00:00 21.320000 4.15 5.95 30
2019-12-11 09:00:00 21.840000 4.42 5.81 30
2019-12-11 10:00:00 22.460000 4.24 5.81 30
2019-12-11 11:00:00 22.240000 4.11 5.89 31
2019-12-11 12:00:00 22.420000 4.43 6.15 32
2019-12-11 13:00:00 21.740000 4.37 6.14 32
2019-12-11 14:00:00 22.500000 4.48 6.24 31
2019-12-11 15:00:00 22.980000 4.87 6.46 32
2019-12-11 16:00:00 22.420000 4.56 6.21 32
2019-12-11 17:00:00 22.320000 4.40 5.92 32
2019-12-11 18:00:00 21.939999 4.52 6.19 32
2019-12-11 19:00:00 20.680000 4.30 5.35 32
2019-12-11 20:00:00 20.900000 4.28 4.94 32
2019-12-11 21:00:00 20.859999 4.55 5.21 32
2019-12-11 22:00:00 20.520000 4.28 4.73 32
2019-12-11 23:00:00 20.320000 4.24 4.90 32
</code></pre>
<p>df after applying the formula :</p>
<pre><code> y1 y2 y3 y4 value
2019-12-11 00:00:00 20.500000 5.62 6.03 29 61.150000
2019-12-11 01:00:00 21.180000 5.54 6.15 30 62.870000
2019-12-11 02:00:00 21.020000 5.28 6.29 30 62.590000
2019-12-11 03:00:00 20.760000 4.99 6.36 29 61.110000
2019-12-11 04:00:00 20.680000 4.80 6.26 30 61.740000
2019-12-11 05:00:00 20.760000 4.63 6.07 30 61.460000
2019-12-11 06:00:00 20.900000 4.49 5.91 30 61.300000
2019-12-11 07:00:00 20.920000 4.20 6.05 30 61.170000
2019-12-11 08:00:00 21.320000 4.15 5.95 30 61.420000
2019-12-11 09:00:00 21.840000 4.42 5.81 30 62.070000
2019-12-11 10:00:00 22.460000 4.24 5.81 30 62.510000
2019-12-11 11:00:00 22.240000 4.11 5.89 31 63.240000
2019-12-11 12:00:00 22.420000 4.43 6.15 32 65.000000
2019-12-11 13:00:00 21.740000 4.37 6.14 32 64.250000
2019-12-11 14:00:00 22.500000 4.48 6.24 31 64.220000
2019-12-11 15:00:00 22.980000 4.87 6.46 32 66.310000
2019-12-11 16:00:00 22.420000 4.56 6.21 32 65.190000
2019-12-11 17:00:00 22.320000 4.40 5.92 32 64.640000
2019-12-11 18:00:00 21.939999 4.52 6.19 32 64.649999
2019-12-11 19:00:00 20.680000 4.30 5.35 32 62.330000
2019-12-11 20:00:00 20.900000 4.28 4.94 32 62.120000
2019-12-11 21:00:00 20.859999 4.55 5.21 32 62.619999
2019-12-11 22:00:00 20.520000 4.28 4.73 32 61.530000
2019-12-11 23:00:00 20.320000 4.24 4.90 32 61.460000
</code></pre>
<h3>EDIT - SOLUTION:</h3>
<p>The problem was solved by using <code>df.eval</code>:</p>
<pre><code>formula_cell = 'fictive = y1+y2+y3+y4'
df.eval(formula_cell, inplace=True)
</code></pre>
|
<p>You are looking for <code>pandas.DataFrame.eval</code>, which evaluates a string expression using the frame's columns as variables (the <code>query</code> method only filters rows by a boolean expression): <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.eval.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.eval.html</a></p>
<p>Example:</p>
<pre><code>formula_cell = "y1 + y2 + y3 + y4"
df['value'] = df.eval(formula_cell)
</code></pre>
|
python|pandas|dataframe|variables|formula
| 1
|
6,480
| 46,129,407
|
How to extract a item from list of items in a python dataframe column?
|
<p>I have a DataFrame like this:</p>
<pre><code> Date sdate
0 2012-3-12 [2012, 03, 12]
1 2012-3-25 [2012, 03, 25]
2 2012-4-20 [2012, 04, 20]
3 2012-4-12 [2012, 04, 12]
4 2012-4-26 [2012, 04, 26]
</code></pre>
<p>I need to extract the year, month and day into separate columns, like this:</p>
<pre><code> Date sdate year month day
0 2012-3-12 [2012, 03, 12] 2012 03 12
1 2012-3-25 [2012, 03, 25] 2012 03 25
2 2012-4-20 [2012, 04, 20] 2012 04 20
3 2012-4-12 [2012, 04, 12] 2012 04 12
4 2012-4-26 [2012, 04, 26] 2012 04 26
</code></pre>
<p>Can I achieve this using a for loop?</p>
|
<p>Use <code>apply</code> with <code>pd.Series</code> and <code>rename</code> the columns</p>
<pre><code>In [784]: df.sdate.apply(pd.Series).rename(columns={0:'year',1:'month',2:'day'})
Out[784]:
year month day
0 2012 3 12
1 2012 3 25
2 2012 4 20
3 2012 4 12
4 2012 4 26
</code></pre>
<p><code>join</code> to original <code>df</code></p>
<pre><code>In [785]: df.join(df.sdate.apply(pd.Series).rename(columns={0:'year',1:'month',2:'day'}))
Out[785]:
Date sdate year month day
0 2012-3-12 [2012, 3, 12] 2012 3 12
1 2012-3-25 [2012, 3, 25] 2012 3 25
2 2012-4-20 [2012, 4, 20] 2012 4 20
3 2012-4-12 [2012, 4, 12] 2012 4 12
4 2012-4-26 [2012, 4, 26] 2012 4 26
</code></pre>
<p>Or, provide column names as <code>index</code></p>
<pre><code>In [786]: df.sdate.apply(lambda x: pd.Series(x, index=['year', 'month', 'day']))
Out[786]:
year month day
0 2012 3 12
1 2012 3 25
2 2012 4 20
3 2012 4 12
4 2012 4 26
</code></pre>
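<p>As a faster alternative sketch (avoiding the row-wise <code>apply</code>), you can build the new columns directly from the list values:</p>
<pre><code>parts = pd.DataFrame(df['sdate'].tolist(), index=df.index,
                     columns=['year', 'month', 'day'])
df = df.join(parts)
</code></pre>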
|
python|pandas|for-loop|dataframe
| 2
|
6,481
| 46,134,201
|
How to get original values after using factorize() in Python?
|
<p>I'm a beginner trying to create a predictive model using Random Forest in Python, using train and test datasets. train["ALLOW/BLOCK"] can take 1 out of 4 expected values (all strings). test["ALLOW/BLOCK"] is what needs to be predicted. </p>
<pre><code>y,_ = pd.factorize(train["ALLOW/BLOCK"])
y
Out[293]: array([0, 1, 0, ..., 1, 0, 2], dtype=int64)
</code></pre>
<p>I used <code>predict</code> for the prediction.</p>
<pre><code>clf.predict(test[features])
clf.predict(test[features])[0:10]
Out[294]: array([0, 0, 0, 0, 0, 2, 2, 0, 0, 0], dtype=int64)
</code></pre>
<p>How can I get the original values instead of the numeric ones? Is the following code actually comparing the actual and predicted values?</p>
<pre><code>z, _ = pd.factorize(test["ALLOW/BLOCK"])
z==clf.predict(test[features])
Out[296]: array([ True, False, False, ..., False, False, False], dtype=bool)
</code></pre>
|
<p>First, you need to save the <code>label</code> returned by <code>pd.factorize</code> as follows:</p>
<pre><code>y, label = pd.factorize(train["ALLOW/BLOCK"])
</code></pre>
<p>And then after you got the numeric predictions, you can extract the corresponding labels by <code>label[pred]</code>:</p>
<pre><code>pred = clf.predict(test[features])
pred_label = label[pred]
</code></pre>
<p><code>pred_label</code> contains predictions with the original values.</p>
<hr>
<p>No, you should not re-factorize the test labels, since the label ordering would very likely be different. Consider the following example:</p>
<pre><code>pd.factorize(['a', 'b', 'c'])
# (array([0, 1, 2]), array(['a', 'b', 'c'], dtype=object))
pd.factorize(['c', 'a', 'b'])
# (array([0, 1, 2]), array(['c', 'a', 'b'], dtype=object))
</code></pre>
<p>So the label depends on the order of the elements.</p>
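<p>A minimal round-trip sketch (with hypothetical labels) tying the two steps together:</p>
<pre><code>import numpy as np
import pandas as pd

y, label = pd.factorize(pd.Series(['allow', 'block', 'allow', 'audit']))
pred = np.array([0, 2, 1])   # pretend these came from clf.predict
print(label[pred])           # ['allow' 'audit' 'block']
</code></pre>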
|
python|pandas|random-forest|prediction
| 7
|
6,482
| 45,847,893
|
Concatenate rows in python dataframe
|
<p>This question may be very basic, but I would like to concatenate three columns (col1, col2 and col3) into col4 in a pandas DataFrame.<br>
I know in R this could be done quite easily with the paste function.</p>
<pre><code>df = pd.DataFrame({'col1': [2012, 2013, 2014], 'col2': 'q', 'col3': range(3)})
</code></pre>
<p>Edit: Code for clarity - I would like to generate col4 automatically:</p>
<pre><code>x = pd.DataFrame()
x['col1'] = [2012, 2013, 2014]
x['col2'] = ['q', 'q', 'q']
x['col3'] = [1, 2, 3]
x['col4'] = ['2012q1', '2013q2', '2014q3']
</code></pre>
|
<p>Use <code>pd.DataFrame.sum</code> with <code>axis=1</code> after converting to strings.<br>
I use <code>pd.DataFrame.assign</code> to create a copy with the new column</p>
<pre><code>df.assign(col4=df[['col1', 'col2', 'col3']].astype(str).sum(1))
col1 col2 col3 col4
0 2012 q 1 2012q1
1 2013 q 2 2013q2
2 2014 q 3 2014q3
</code></pre>
<p>Or you can add a column inplace</p>
<pre><code>df['col4'] = df[['col1', 'col2', 'col3']].astype(str).sum(1)
df
col1 col2 col3 col4
0 2012 q 1 2012q1
1 2013 q 2 2013q2
2 2014 q 3 2014q3
</code></pre>
<hr>
<p>If <code>df</code> only has the three columns, you can reduce code to</p>
<pre><code>df.assign(col4=df.astype(str).sum(1))
</code></pre>
<p>If <code>df</code> has more than three columns but the three you want to concat are the first three</p>
<pre><code>df.assign(col4=df.iloc[:, :3].astype(str).sum(1))
</code></pre>
|
python|pandas
| 4
|
6,483
| 51,089,531
|
Read CSV file with features and labels in the same row in Tensorflow
|
<p>I have a .csv file with around 5000 rows and 3757 columns. The first 3751 columns of each row are the features and the last 6 columns are the labels. Each row is a set of features-labels pair.</p>
<p>I'd like to know if there are built-in functions or any fast ways that I can:</p>
<ol>
<li>Parse the first 3751 columns as features (these columns don't have headers)</li>
<li>Parse ANY of the last 6 columns as labels, which means that I'd like to take any of the last 6 columns out as a label for training. </li>
</ol>
<p>Basically I want to train a DNN model with 3751 features and 1 label and I'd like the output of the parsing function be fed into the following function for training:</p>
<pre><code>train_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": np.array(training_set.data)},
y=np.array(training_set.target),
num_epochs=None,
shuffle=True)
</code></pre>
<p>I know some functions like "tf.contrib.learn.datasets.base.load_csv_without_header" can do similar things but it is already deprecated.</p>
|
<p>You could look into <code>tf.data.Dataset</code>'s input pipelines (<a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset" rel="nofollow noreferrer">LINK</a>). What you basically do is read a csv file, batch/shuffle/map it as needed, and create an iterator over the dataset. Whenever you evaluate <code>iterator.get_next()</code>, you get a number of rows from your csv equal to the batch size. For separating features and labels, you can then slice with standard Python syntax, e.g. <code>features = batch[:, :-6]</code> and <code>label = batch[:, -1]</code> on a batched 2-D tensor, and feed them to whatever function you like.</p>
<p>On the tensorflow site, there's an in-depth tutorial about how to use these input pipelines (<a href="https://www.tensorflow.org/programmers_guide/datasets" rel="nofollow noreferrer">LINK</a>).</p>
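<p>A minimal sketch of such a pipeline (assuming the TF 1.x-era API current when this was asked, a headerless CSV named <code>data.csv</code>, and all-float columns):</p>
<pre><code>import tensorflow as tf

NUM_COLS = 3757          # 3751 features + 6 label columns
LABEL_COL = -1           # pick any of the last 6 columns as the label

def parse_line(line):
    fields = tf.decode_csv(line, record_defaults=[[0.0]] * NUM_COLS)
    row = tf.stack(fields)
    features = row[:3751]
    label = row[LABEL_COL]
    return features, label

dataset = (tf.data.TextLineDataset("data.csv")
           .map(parse_line)
           .shuffle(1000)
           .batch(32))
iterator = dataset.make_one_shot_iterator()
features, labels = iterator.get_next()
</code></pre>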
|
python|python-3.x|pandas|tensorflow
| 2
|
6,484
| 51,078,007
|
Keras array shape error
|
<p>How come I am getting a shape <code>ValueError</code> for my neural network when what I am passing in is an array of shape <code>(8,)</code>, but the error says the network is getting a <code>(1,)</code>?</p>
<p>Neural Network:</p>
<pre><code>>>> observation_dimension
(8,)
>>> q_network = Sequential([
Dense(40, input_dim=observation_dimension, activation='relu'),
Dense(40, activation='relu'),
Dense(number_of_actions, activation='linear')
])
>>> obs
array([-0.00371828, 0.93953934, -0.37663383, -0.07161933, 0.00431531,
0.08531308, 0. , 0. ])
>>> obs.shape
(8,)
</code></pre>
<p>Error:</p>
<pre><code>>>> q_network.predict(obs)
Traceback (most recent call last):
...
...
ValueError: Error when checking input: expected dense_27_input to have shape (8,) but got array with shape (1,)
</code></pre>
|
<p><code>model.predict</code> takes a batch of samples; if you give it a single sample without a batch dimension, it will interpret the first dimension as the batch one.</p>
<p>A simple solution is to add a leading dimension of size one:</p>
<pre><code>q_network.predict(obs.reshape(1, 8))
</code></pre>
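<p>Equivalent sketches for adding the batch dimension:</p>
<pre><code>import numpy as np

q_network.predict(obs[np.newaxis, :])      # shape (1, 8)
q_network.predict(np.expand_dims(obs, 0))  # same thing
</code></pre>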
|
python|tensorflow|neural-network|keras
| 2
|
6,485
| 51,080,137
|
Adding missing rows from the another table based on 2 columns
|
<p>I have a subset of a dataframe like below</p>
<pre><code>ID var1 var2 var3
111 A 1 1
222 A 1 1
333 A 1 1
444 A 2 1
555 A 2 1
666 A 2 1
</code></pre>
<p>and I want to join the missing rows from the dataframe below, but only those IDs whose <code>var1</code> and <code>var2</code> combination appears in the subset:</p>
<pre><code>ID var1 var2 var3
111 A 1 1
222 A 1 1
333 A 1 1
777 A 1 0
888 A 1 0
444 A 2 1
555 A 2 1
666 A 2 1
999 A 2 0
123 B 3 1
456 B 4 0
789 C 5 1
</code></pre>
<p>So output should be </p>
<pre><code>ID var1 var2 var3
111 A 1 1
222 A 1 1
333 A 1 1
777 A 1 0
888 A 1 0
444 A 2 1
555 A 2 1
666 A 2 1
999 A 2 0
</code></pre>
<p>Thanks!</p>
|
<p>Use <code>merge</code></p>
<pre><code>In [164]: df2.merge(df1[['var1', 'var2']].drop_duplicates())
Out[164]:
ID var1 var2 var3
0 111 A 1 1
1 222 A 1 1
2 333 A 1 1
3 777 A 1 0
4 888 A 1 0
5 444 A 2 1
6 555 A 2 1
7 666 A 2 1
8 999 A 2 0
</code></pre>
|
python|pandas|numpy
| 1
|
6,486
| 66,573,190
|
how to get correct correlation plot on time series data with matplotlib/seaborn?
|
<p>I have time-series data collected on a weekly basis, and I want to see the correlation between two of its columns. I can find the correlation between the two columns, and I want to see how the rolling correlation moves each year. My current approach works, but I need to normalize the two columns before computing the rolling correlation and making a line plot. In my current attempt, I don't know how to show the 3-year and 5-year rolling correlations. Can anyone suggest a way of doing this in <code>matplotlib</code>?</p>
<p><strong>current attempt</strong>:</p>
<p>Here is my current attempt:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

dataPath = "https://gist.github.com/jerry-shad/503a7f6915b8e66fe4a0afbc52be7bfa#file-sample_data-csv"

def ts_corr_plot(dataPath, roll_window=4):
    df = pd.read_csv(dataPath)
    df['Date'] = pd.to_datetime(df['Date'])
    df['year'] = df['Date'].dt.year
    df['week'] = df['Date'].dt.strftime('%W').astype('uint8')

    def find_corr(x):
        dfc = df.loc[x.index]
        return dfc.iloc[:, 1].corr(dfc.iloc[:, 2])

    df['corr'] = df['week'].rolling(roll_window).apply(find_corr)

    fig, ax = plt.subplots(figsize=(7, 4), dpi=144)
    sns.lineplot(x='week', y='corr', hue='year', data=df, alpha=.8)
    plt.show()
    plt.close()
</code></pre>
<p><strong>update</strong>:</p>
<p>I want to see rolling correlation in different time window such as:</p>
<pre><code>plt_1 = ts_corr_plot(dataPath, roll_window=4)
plt_2 = ts_corr_plot(dataPath, roll_window=12)
plt_3 = ts_corr_plot(dataPath, roll_window=24)
</code></pre>
<p>I need to add 3-year and 5-year rolling correlations to the plots, but I couldn't find a good way of doing this. Can anyone point me to how to make a rolling-correlation line plot for time series data? How can I improve the current attempt? Any idea?</p>
<p><strong>desired plot</strong></p>
<p>this is my expected plot that I want to obtain:</p>
<p><a href="https://i.stack.imgur.com/Xo1HD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Xo1HD.png" alt="enter image description here" /></a></p>
|
<p>Customizing the legend in seaborn is painstaking, so I created the code in matplotlib.</p>
<ol>
<li>Corrected the method for calculating the correlation coefficient. Your code gave me an error, so please correct me if I'm wrong.</li>
<li>The line colors in the desired graph appear to be the tableau palette, so I used the 10 tableau colors defined in matplotlib.</li>
<li>To calculate the correlation coefficient for 3 years, I am using 156 line units, which is 3 years of weekly data. Please correct this logic if it is wrong.</li>
<li>I am creating 4-week and 3-year graphs in a loop process respectively.</li>
</ol>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
dataPath="https://gist.githubusercontent.com/jerry-shad/503a7f6915b8e66fe4a0afbc52be7bfa/raw/414a2fc2988fcf0b8e6911d77cccfbeb4b9e9664/sample_data.csv"
df = pd.read_csv(dataPath)
df['Date'] = pd.to_datetime(df['Date'])
df['week'] = df['Date'].dt.isocalendar().week
df['year'] = df['Date'].dt.year
df['week'] = df['Date'].dt.strftime('%W').astype('uint8')
def find_corr(x):
    dfc = df.loc[x.index]
    tmp = dfc.iloc[:, [1, 2]].corr()
    tmp = tmp.iloc[0, 1]
    return tmp
roll_window=4
df['corr'] = df['week'].rolling(roll_window).apply(find_corr)
df3 = df.copy() # three year
df3['corr3'] = df3['year'].rolling(156).apply(find_corr) # 3 year = 52 week x 3 year = 156
fig, ax = plt.subplots(figsize=(12, 4), dpi=144)
cmap = plt.get_cmap("tab10")
for i, y in enumerate(df['year'].unique()):
    tmp = df[df['year'] == y]
    ax.plot(tmp['week'], tmp['corr'], color=cmap(i), label=y)

for i, y in enumerate(df['year'].unique()):
    tmp = df3[df3['year'] == y]
    if tmp['corr3'].notnull().all():
        ax.plot(tmp['week'], tmp['corr3'], color=cmap(i), lw=3, linestyle='--', label=str(y)+' 3 year avg')
ax.grid(axis='both')
ax.legend(loc='upper left', bbox_to_anchor=(1.0, 1.0), borderaxespad=1)
plt.show()
# plt.close
</code></pre>
<p><a href="https://i.stack.imgur.com/V1l7W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V1l7W.png" alt="enter image description here" /></a></p>
|
python|pandas|matplotlib|seaborn
| 1
|
6,487
| 66,387,891
|
Pandas df comparing two dates condition
|
<p>I'd like to add 1 if <code>date_</code> is more than 12 months after <code>buy_date</code>, else 0.</p>
<p>example df</p>
<pre><code>customer_id date_ buy_date
34555 2019-01-01 2017-02-01
24252 2019-01-01 2018-02-10
96477 2019-01-01 2017-02-18
</code></pre>
<p>output df</p>
<pre><code>customer_id date_ buy_date buy_date>_than_12_months
34555 2019-01-01 2017-02-01 1
24252 2019-01-01 2018-02-10 0
96477 2019-01-01 2017-02-18 1
</code></pre>
|
<p>Based on what I understand, you can add a year to <code>buy_date</code>, subtract that from <code>date_</code>, and then check whether the resulting number of days is positive:</p>
<pre><code> df['buy_date>_than_12_months'] = ((df['date_'] -
(df['buy_date']+pd.offsets.DateOffset(years=1)))
.dt.days.gt(0).astype(int))
</code></pre>
<hr />
<pre><code>print(df)
customer_id date_ buy_date buy_date>_than_12_months
0 34555 2019-01-01 2017-02-01 1
1 24252 2019-01-01 2018-02-10 0
2 96477 2019-01-01 2017-02-18 1
</code></pre>
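<p>An equivalent boolean sketch, assuming the date columns are already datetime dtype: the purchase is more than 12 months old when <code>buy_date</code> plus one year is still before <code>date_</code>:</p>
<pre><code>df['buy_date>_than_12_months'] = (
    (df['buy_date'] + pd.offsets.DateOffset(years=1)) < df['date_']
).astype(int)
</code></pre>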
|
python|pandas
| 4
|
6,488
| 66,383,718
|
How to find column with specific row is zero in pandas
|
<p>I have pandas dataframe like below.</p>
<pre><code> index col_A col_B col_C col_D col_E
a 12 15 28 34 23
b 23 37 46 34 92
c 34 32 24 93 12
d 12 0 1 0 0
</code></pre>
<p>I want output like below.</p>
<pre><code> index col_B col_D col_E
a 15 34 23
b 37 34 92
c 32 93 12
d 0 0 0
</code></pre>
<p>The condition for the output frame: a column should appear in the output dataframe if its value in the <strong>index d</strong> row is 0.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a> to select row <code>d</code>, then filter with another <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>, using <code>:</code> for all rows and the condition to select columns:</p>
<pre><code>df = df.loc[:, df.loc['d'].eq(0)]
print (df)
col_B col_D col_E
index
a 15 34 23
b 37 34 92
c 32 93 12
d 0 0 0
</code></pre>
|
python|pandas
| 2
|
6,489
| 66,744,172
|
How to pass the image buffer to Tensorflow JS decodeImage method?
|
<p>So I have the following code;
<code>CLIENT</code></p>
<pre><code>const imageData = context.getImageData(0, 0, 320, 180);
const buffer = imageData.data.buffer;
socket.emit("signal", buffer); //Pass it to the server through websocket
</code></pre>
<p><code>BACKEND</code></p>
<pre><code>socket.on("signal", (data)=> {
γγconst buffer = new Uint8Array(data);
γγconst imgData = ts.node.decodeImage(buffer); //error throwns here.
})
</code></pre>
<p>on the backend, I've tried to decode the buffer, but this error was thrown out.</p>
<pre><code>throw new Error('Expected image (BMP, JPEG, PNG, or GIF), but got unsupported ' +
^
Error: Expected image (BMP, JPEG, PNG, or GIF), but got unsupported image type
at getImageType (/Users/xxx/app/server/node_modules/@tensorflow/tfjs-node/dist/image.js:351:15)
at Object.decodeImage (/Users/xxx/app/server/node_modules/@tensorflow/tfjs-node/dist/image.js:196:21)
at Socket.socket.on (/Users/xxx/app/server/app.js:37:29)
at Socket.emit (events.js:182:13)
at Socket.emitUntyped (/Users/xxx/app/server/node_modules/socket.io/dist/typed-events.js:69:22)
at process.nextTick (/Users/xxx/app/server/node_modules/socket.io/dist/socket.js:428:39)
at process._tickCallback (internal/process/next_tick.js:61:11)
</code></pre>
<p>Anybody has any clue as to why?</p>
|
<p>The buffer already contains decoded pixel data, so there is nothing left for <code>decodeImage</code> to decode. To get a tensor out of it on the server side, this is all it takes (note that canvas <code>getImageData</code> returns RGBA pixels, so the trailing <code>-1</code> resolves to 4 channels):</p>
<pre><code>tf.tensor(data).reshape([IMAGE_H, IMAGE_W, -1]);
// IMAGE_H is the height of the image, here 180
// IMAGE_W is the width of the image, here 320
</code></pre>
|
node.js|reactjs|socket.io|tensorflow.js
| 0
|
6,490
| 57,713,794
|
Pandas rank subset of rows based on condition column
|
<p>I want to rank the below dataframe by <code>score</code>, only for rows where<code>condition</code> is <code>False</code>. The rest should have a rank of <code>NaN</code>.</p>
<pre><code>df=pd.DataFrame(np.array([[34, 65, 12, 98, 5],[False, False, True, False, False]]).T, index=['A', 'B','C','D','E'], columns=['score', 'condition'])
</code></pre>
<p>The desired output with the (descending) conditional rank would be:</p>
<pre><code> score condition cond_rank
A 34 0 3
B 65 0 2
C 12 1 NaN
D 98 0 1
E 5 0 4
</code></pre>
<p>I know <code>pd.DataFrame.rank()</code> can handle <code>NaN</code> for the values that are being ranked, but in cases where the conditioning is intended on another column/series, what is the most efficient way to achieve this?</p>
|
<p>You can filter by the condition column and then <code>rank</code> (descending, to match the desired output):</p>
<pre><code>df['new'] = df.loc[~df['condition'].astype(bool), 'score'].rank(ascending=False)
print (df)
  score condition  new
A    34         0  3.0
B    65         0  2.0
C    12         1  NaN
D    98         0  1.0
E     5         0  4.0
</code></pre>
|
pandas|dataframe|conditional-statements|rank
| 4
|
6,491
| 57,574,467
|
How to get deviation from a reference value for multindex dataframe
|
<p>I'm interested in finding deviations from my simulated data from experiment, in a manner similar to the following:</p>
<pre><code>my_frame = pd.DataFrame(data={'simulation1':[71,4.8,65,4.7],
'simulation2':[71,4.8,69,4.7],
'simulation3':[70,3.8,68,4.9],
'experiment':[70.3,3.5,65,4.4],
'Material':['Copper','Copper',
'Aluminum','Aluminum'],
'Property':['Temperature','Weight',
'Temperature','Weight']})
my_frame.set_index(keys=['Material','Property'], inplace=True)
simulation1 simulation2 simulation3 experiment
Material Property
Copper Temperature 71.0 71.0 70.0 70.3
Weight 4.8 4.8 3.8 3.5
Aluminum Temperature 65.0 69.0 68.0 65.0
Weight 4.7 4.7 4.9 4.4
</code></pre>
<p>I would like to have a per-category deviation from a reference column (in my case experiment)</p>
<pre><code> simulation1 simulation2 simulation3 experiment
Material Property
Copper Temperature 71.0 71.0 70.0 70.3
Weight 4.8 4.8 3.8 3.5
ERROR(Temp -exp) 0.7 0.7 0.3 0.0
ERROR(Weight-exp) 1.3 1.3 0.3 0.0
Aluminum Temperature 65.0 69.0 68.0 65.0
Weight 4.7 4.7 4.9 4.4
ERROR(Temp -exp) 0.0 4.0 3.0 0.0
ERROR(Weight-exp) 0.3 0.3 0.5 0.0
</code></pre>
<p>I'm sure this can be done easily(ish) in pandas, but I'm not sure how.</p>
|
<p>Create new DataFrame by subtract column <code>experiment</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sub.html" rel="nofollow noreferrer"><code>DataFrame.sub</code></a> and then change <code>MultiIndex</code>:</p>
<pre><code>df = my_frame.sub(my_frame['experiment'], axis=0)
a = df.index.get_level_values(0) + '_ERR'
b = df.index.get_level_values(1)
df.index = [a, b]
print (df)
simulation1 simulation2 simulation3 experiment
Material Property
Copper_ERR Temperature 0.7 0.7 -0.3 0.0
Weight 1.3 1.3 0.3 0.0
Aluminum_ERR Temperature 0.0 4.0 3.0 0.0
Weight 0.3 0.3 0.5 0.0
</code></pre>
<p>Last use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_index.html" rel="nofollow noreferrer"><code>DataFrame.sort_index</code></a>:</p>
<pre><code>my_frame = pd.concat([my_frame, df]).sort_index()
print (my_frame)
simulation1 simulation2 simulation3 experiment
Material Property
Aluminum Temperature 65.0 69.0 68.0 65.0
Weight 4.7 4.7 4.9 4.4
Aluminum_ERR Temperature 0.0 4.0 3.0 0.0
Weight 0.3 0.3 0.5 0.0
Copper Temperature 71.0 71.0 70.0 70.3
Weight 4.8 4.8 3.8 3.5
Copper_ERR Temperature 0.7 0.7 -0.3 0.0
Weight 1.3 1.3 0.3 0.0
</code></pre>
<p>Another solution with change second level:</p>
<pre><code>df = my_frame.sub(my_frame['experiment'], axis=0)
a = df.index.get_level_values(0)
b = 'ERROR(' + df.index.get_level_values(1) + '-exp)'
df.index = [a, b]
print (df)
simulation1 simulation2 simulation3 \
Material Property
Copper ERROR(Temperature-exp) 0.7 0.7 -0.3
ERROR(Weight-exp) 1.3 1.3 0.3
Aluminum ERROR(Temperature-exp) 0.0 4.0 3.0
ERROR(Weight-exp) 0.3 0.3 0.5
experiment
Material Property
Copper ERROR(Temperature-exp) 0.0
ERROR(Weight-exp) 0.0
Aluminum ERROR(Temperature-exp) 0.0
ERROR(Weight-exp) 0.0
</code></pre>
<hr>
<pre><code>my_frame = pd.concat([my_frame, df]).sort_index(ascending=False)
print (my_frame)
simulation1 simulation2 simulation3 \
Material Property
Copper Weight 4.8 4.8 3.8
Temperature 71.0 71.0 70.0
ERROR(Weight-exp) 1.3 1.3 0.3
ERROR(Temperature-exp) 0.7 0.7 -0.3
Aluminum Weight 4.7 4.7 4.9
Temperature 65.0 69.0 68.0
ERROR(Weight-exp) 0.3 0.3 0.5
ERROR(Temperature-exp) 0.0 4.0 3.0
experiment
Material Property
Copper Weight 3.5
Temperature 70.3
ERROR(Weight-exp) 0.0
ERROR(Temperature-exp) 0.0
Aluminum Weight 4.4
Temperature 65.0
ERROR(Weight-exp) 0.0
ERROR(Temperature-exp) 0.0
</code></pre>
|
python|pandas
| 2
|
6,492
| 57,609,350
|
Can I use a boolean loop to find equal values btw two df cols then set df1['col1'] = df2['col2']
|
<p>I have to find the values of <code>df2 col1</code> that are equal to <code>df1 col1</code>, then replace <code>df1 col2</code> with <code>df2 col2</code> from the same row. </p>
<p>I've already tried <code>.isin()</code> (possibly incorrectly) and multiple conditions i.e. <code>if (df1['col1'] == df2['col1']) & (df1['col3'] == 'x index')</code></p>
<pre><code>i = 0
for i in df1:
    if df1['col1'].isin(df2['col1']):
        df1['col2'] = df2['col2']
    elif df1['col1'].isin(df3):
        df1['col2'] = df['col3']
    i += 1
</code></pre>
|
<p>If you can find a solution without loops, it is usually better. In your case, finding rows that appear in another column can be solved with an inner join. Here is code that I hope solves your issue.</p>
<pre class="lang-py prettyprint-override"><code>In [1]:
## Set the exemple with replicable code
import pandas as pd
cols = ['col1', 'col2']
data = [[100, 150],
[220, 240],
[80, 60]
]
df1 = pd.DataFrame(data=data, columns=cols).set_index('col1')
cols = ['col1', 'col2']
data = [[111, 0],
[220, 0],
[80, 0]
]
df2 = pd.DataFrame(data=data, columns=cols).set_index('col1')
## Get all the rows from df1 col1 that are in df2 col1
df_merge = df1.merge(df2, left_index=True, right_index=True, how='inner', suffixes=('_df1', '_df2'))
df_merge
Out [1]:
col2_df1 col2_df2
col1
220 240 0
80 60 0
</code></pre>
<p>Then do a left join to add the values from <code>df2</code>'s <code>col2</code> to <code>df1</code>:</p>
<pre class="lang-py prettyprint-override"><code>In [2]:
df1 = df1.merge(df_merge, how='left', left_index=True, right_index=True)
df1.drop(axis=1, columns=['col2', 'col2_df1'], inplace=True)
df1.rename(columns={'col2_df2': 'df2'}, inplace=True)
df1
Out [2]:
df2
col1
100 NaN
220 0.0
80 0.0
</code></pre>
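<p>For the original goal itself, replacing <code>df1['col2']</code> with <code>df2['col2']</code> wherever <code>col1</code> matches, a more direct sketch with <code>map</code> (assuming <code>col1</code> is still a regular column and unique in <code>df2</code>):</p>
<pre><code>mapping = df2.set_index('col1')['col2']
df1['col2'] = df1['col1'].map(mapping).fillna(df1['col2'])
</code></pre>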
|
python|pandas|loops|iterator
| 0
|
6,493
| 73,018,028
|
To check whether female household income is higher than the average male household income (using the dataframe given)
|
<p>This is the code I wrote to check whether each female customer's household income is higher than the male customers' average household income.</p>
<pre class="lang-py prettyprint-override"><code>#Get total number of female customers
df_Female = df[df['Gender']=='Female']
FemaleIncomeArray = df_Female.loc[:,'Income'].values #get female income
FemaleIncomeList = FemaleIncomeArray.tolist() #convert to list
#Get male average household income
df_Male = df[df['Gender']=='Male']
MaleAvgIncome = df_Male.groupby('Gender')['Income'].mean()
color_list = []
y_pos=[] #y-axis positions
for i in range(len(FemaleIncomeList)):
    if FemaleIncomeList[i] >= MaleAvgIncome:  # got error from this line
        color_list.append('y')
    else:
        color_list.append('r')
    y_pos.append(i+1)
</code></pre>
<p>However, I got an error:</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
|
<p>When you compare against a pandas Series, the result of the comparison is itself a Series, and Python cannot decide whether a whole Series is True or False, hence the error. Here <code>MaleAvgIncome</code> is a Series, because <code>groupby('Gender')['Income'].mean()</code> returns one mean per group rather than a scalar.</p>
<p>Either reduce it to a scalar first (e.g. <code>df_Male['Income'].mean()</code>, or <code>MaleAvgIncome.item()</code> since the group has a single entry), or use <code>.any()</code>/<code>.all()</code> if you genuinely mean an aggregate condition. Printing the value you compare against will show you that it is a Series.</p>
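<p>A sketch of one fix, computing a scalar average up front (it reuses the question's <code>df</code> with its <code>Gender</code> and <code>Income</code> columns):</p>
<pre><code>male_avg = df.loc[df['Gender'] == 'Male', 'Income'].mean()   # a float, not a Series

color_list = ['y' if income >= male_avg else 'r'
              for income in df.loc[df['Gender'] == 'Female', 'Income']]
y_pos = list(range(1, len(color_list) + 1))
</code></pre>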
|
python|pandas|dataframe
| 0
|
6,494
| 51,528,643
|
How to unpack multiple dictionary objects inside list within a row of dataframe?
|
<p>I have a dataframe where each row holds a list of dictionaries, and the lists are of different sizes per row, as below:</p>
<pre><code>ID unnest_column
1 [{'abc': 11, 'def': 1},{'abc': 15, 'def': 1},
{'abc': 16, 'def': 1},
{'abc': 17, 'def': 1},
{'abc': 18, 'def': 1, 'ghi': 'abc'},
{'abc': 23, 'xyz': 'xxx', 'def': 1},
{'abc': 23, 'xyz': 'xxx', 'def': 2},
{'abc': 23, 'xyz': 'xxx', 'def': 4}]
2 [{'abc': 11, 'def': 1}]
</code></pre>
<p>How do I unpack the dictionaries in the lists and turn the keys into columns?</p>
<p>new df potentially, not sure exactly how it will look, just need keys into columns:</p>
<pre><code>id abc def ghi
1 2 3 abc
</code></pre>
|
<p>IIUC, from</p>
<pre><code>df = pd.DataFrame()
df['x'] = [[{'QuestionId': 11, 'ResponseId': 1},{'QuestionId': 15, 'ResponseId': 1},
{'QuestionId': 16, 'ResponseId': 1},
{'QuestionId': 17, 'ResponseId': 1},
{'QuestionId': 18, 'ResponseId': 1, 'Value': 'abc'},
{'QuestionId': 23, 'DataLabel': 'xxx', 'ResponseId': 1},
{'QuestionId': 23, 'DataLabel': 'xxx', 'ResponseId': 2},
{'QuestionId': 23, 'DataLabel': 'xxx', 'ResponseId': 4}],
[{'QuestionId': 11, 'ResponseId': 1}]]
</code></pre>
<p>You can <code>sum</code> your lists to aggregate them, and use <code>DataFrame</code> constructor</p>
<pre><code>new_df = pd.DataFrame(df.x.values.sum())
DataLabel QuestionId ResponseId Value
0 NaN 11 1 NaN
1 NaN 15 1 NaN
2 NaN 16 1 NaN
3 NaN 17 1 NaN
4 NaN 18 1 abc
5 xxx 23 1 NaN
6 xxx 23 2 NaN
7 xxx 23 4 NaN
8 NaN 11 1 NaN
</code></pre>
<p>If you want to maintain the original indexes, you can build an <code>inds</code> list and pass it as an argument to the constructor:</p>
<pre><code>inds = [index for _ in ([i] * len(v) for i,v in df.x.iteritems()) for index in _]
pd.DataFrame(df.x.values.sum(), index=inds)
DataLabel QuestionId ResponseId Value
0 NaN 11 1 NaN
0 NaN 15 1 NaN
0 NaN 16 1 NaN
0 NaN 17 1 NaN
0 NaN 18 1 abc
0 xxx 23 1 NaN
0 xxx 23 2 NaN
0 xxx 23 4 NaN
1 NaN 11 1 NaN
</code></pre>
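<p>An alternative sketch that avoids the quadratic cost of summing Python lists, using <code>itertools.chain</code> to flatten them instead:</p>
<pre><code>from itertools import chain

new_df = pd.DataFrame(list(chain.from_iterable(df.x)))
</code></pre>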
|
python|python-3.x|pandas
| 3
|
6,495
| 36,238,101
|
Using Pandas to export multiple rows of data to a csv
|
<p>I have a matching algorithm which links students to projects. It's working, but I'm having trouble exporting the data to a csv file: it only exports the last value, when there are 200 values to be exported.</p>
<p>Also, each exported value is split into single characters: I get the three digits that make up <code>s</code> spread across three columns, when I would like the whole <code>s</code> as one value. I've attached the images below. Any help would be appreciated.</p>
<p><a href="http://i.stack.imgur.com/ZIhpH.png" rel="nofollow">What it looks like</a></p>
<p><a href="http://i.stack.imgur.com/MoX8i.png" rel="nofollow">What it should look like</a></p>
<pre><code>#Imports for Pandas
import pandas as pd
from pandas import DataFrame
SPA()
for m in M:
s = m['student']
l = m['lecturer']
Lecturer[l]['limit'] = Lecturer[l]['limit'] - 1
id = m['projectid']
p = Project[id]['title']
c = Project[id]['sourceid']
r = str(getRank("Single_Projects1copy.csv",s,c))
print(s+","+l+","+p+","+c+","+r)
dataPack = (s+","+l+","+p+","+c+","+r)
df = pd.DataFrame.from_records([dataPack])
df.to_csv('try.csv')
</code></pre>
|
<p>You keep overwriting in the loop so you only end up with the last bit of data, you need to append to the csv with <code>df.to_csv('try.csv',mode="a",header=False)</code> or create one df and append to that and write outside the loop, something like:</p>
<pre><code>df = pd.DataFrame()
for m in M:
    s = m['student']
    l = m['lecturer']
    Lecturer[l]['limit'] = Lecturer[l]['limit'] - 1
    id = m['projectid']
    p = Project[id]['title']
    c = Project[id]['sourceid']
    r = str(getRank("Single_Projects1copy.csv",s,c))
    print(s+","+l+","+p+","+c+","+r)
    dataPack = (s+","+l+","+p+","+c+","+r)
    df = df.append(pd.DataFrame.from_records([dataPack]))

df.to_csv('try.csv') # write all data once outside the loop
</code></pre>
<p>A better option would be to open a file and pass that file object to <code>to_csv</code>:</p>
<pre><code>with open('try.csv', 'w') as f:
    for m in M:
        s = m['student']
        l = m['lecturer']
        Lecturer[l]['limit'] = Lecturer[l]['limit'] - 1
        id = m['projectid']
        p = Project[id]['title']
        c = Project[id]['sourceid']
        r = str(getRank("Single_Projects1copy.csv",s,c))
        print(s+","+l+","+p+","+c+","+r)
        dataPack = (s+","+l+","+p+","+c+","+r)
        pd.DataFrame.from_records([dataPack]).to_csv(f, header=False)
</code></pre>
<p>You get individual chars because you are using from_records passing a single string <code>dataPack</code> as the value so it iterates over the chars:</p>
<pre><code>In [18]: df = pd.DataFrame.from_records(["foobar,"+"bar"])
In [19]: df
Out[19]:
0 1 2 3 4 5 6 7 8 9
0 f o o b a r , b a r
In [20]: df = pd.DataFrame(["foobar,"+"bar"])
In [21]: df
Out[21]:
0
0 foobar,bar
</code></pre>
<p>I think you basically want to leave <code>dataPack</code> as a tuple, <code>dataPack = (s, l, p, c, r)</code>, and use <code>pd.DataFrame([dataPack])</code>. You don't really need pandas at all here; the stdlib csv module would do all of this without creating DataFrames.</p>
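<p>For completeness, a sketch with the stdlib <code>csv</code> module, reusing the question's loop variables (<code>M</code>, <code>Project</code>, <code>getRank</code>):</p>
<pre><code>import csv

with open('try.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['student', 'lecturer', 'title', 'sourceid', 'rank'])  # hypothetical header
    for m in M:
        s = m['student']
        l = m['lecturer']
        id = m['projectid']
        p = Project[id]['title']
        c = Project[id]['sourceid']
        r = str(getRank("Single_Projects1copy.csv", s, c))
        writer.writerow([s, l, p, c, r])
</code></pre>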
|
python|csv|pandas
| 1
|
6,496
| 35,814,769
|
python pandas HDF5Store append new dataframe with long string columns fails
|
<p>For a given path, I process many gigabytes of files inside it and yield a dataframe for every processed file.
For every dataframe that is yielded, which includes two string columns of varying size, I want to dump it to disk using the very efficient HDF5 format. The error is raised when the HDFStore.append procedure is called, on the 4th or 5th iteration.</p>
<p>I use the following routine(simplified) to build the dataframes:</p>
<pre><code>def build_data_frames(path):
    data = df({'headline': [],
               'content': [],
               'publication': [],
               'file_ref': []},
              columns=['publication', 'file_ref', 'headline', 'content'])
    for curdir, subdirs, filenames in os.walk(path):
        for file in filenames:
            if zipfile.is_zipfile(os.path.join(curdir, file)):
                with zf(os.path.join(curdir, file), 'r') as arch:
                    for arch_file_name in arch.namelist():
                        if re.search('A[r|d]\d+.xml', arch_file_name) is not None:
                            xml_file_ref = arch.open(arch_file_name, 'r')
                            xml_file = xml_file_ref.read()
                            metadata = XML2MetaData(xml_file)
                            headlineTokens, contentTokens = XML2TokensParser(xml_file)

                            rows = [{'headline': " ".join(headlineTokens),
                                     'content': " ".join(contentTokens)}]
                            rows[0].update(metadata)
                            data = data.append(df(rows,
                                                  columns=['publication',
                                                           'file_ref',
                                                           'headline',
                                                           'content']),
                                               ignore_index=True)
                arch.close()
                yield data
</code></pre>
<p>Then I use the following method to write these dataframes to disk:</p>
<pre><code>def extract_data(path):
    hdf_fname = extract_name(path)
    hdf_fname += ".h5"
    data_store = HDFStore(hdf_fname)

    for dataframe in build_data_frames(path):
        data_store.append('df', dataframe, data_columns=True)
        ## passing min_itemsize doesn't work either
        ## data_store.append('df', dataframe, min_itemsize=8000)
        ## trying the "alternative" command didn't help
        ## dataframe.to_hdf(hdf_fname, 'df', format='table', append=True,
        ##                  min_itemsize=80000)

    data_store.close()
</code></pre>
<p>-></p>
<pre><code>%time load_data(publications_path)
</code></pre>
<p>And the ValueError I get is:</p>
<p>...</p>
<pre><code>ValueError: Trying to store a string with len [5761] in [values_block_0]
column but this column has a limit of [4430]!
Consider using min_itemsize to preset the sizes on these columns
</code></pre>
<p>I tried all the options, went through all the documentation necessary for this task, and tried all the tricks I saw on the Internet. Yet, no idea why it happens. </p>
<p>I use pandas ver: 0.17.0</p>
<p>Appreciate your help very much!</p>
|
<p>Have you seen this post? <a href="https://stackoverflow.com/questions/22710738/pandas-pytable-how-to-specify-min-itemsize-of-the-elements-of-a-multiindex">stackoverflow</a> </p>
<pre><code>data_store.append('df',dataframe,min_itemsize={ 'string' : 5761 })
</code></pre>
<p>Replace <code>'string'</code> with the name of the long column (here likely <code>'content'</code> or <code>'headline'</code>), and use a length at least as large as the longest string you expect.</p>
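<p>A sketch with the question's own columns (the lengths are illustrative assumptions; set them to the maxima you expect):</p>
<pre><code>data_store.append('df', dataframe, data_columns=True,
                  min_itemsize={'headline': 2000, 'content': 10000})
</code></pre>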
|
python|pandas|hdf5
| 0
|
6,497
| 38,028,762
|
OutOfRangeError: RandomShuffleQueue
|
<p>Hi, I am trying to run a convolutional neural network adapted from the MNIST tutorial in TensorFlow. I am getting the following error, but I am not sure what is going on:</p>
<pre><code>W tensorflow/core/framework/op_kernel.cc:909] Invalid argument: Shape mismatch in tuple component 0. Expected [784], got [6272]
W tensorflow/core/framework/op_kernel.cc:909] Invalid argument: Shape mismatch in tuple component 0. Expected [784], got [6272]
Traceback (most recent call last):
File "4_Treino_Rede_Neural.py", line 161, in <module>
train_accuracy = accuracy.eval(feed_dict={keep_prob: 1.0})
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 555, in eval
return _eval_using_default_session(self, feed_dict, self.graph, session)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3498, in _eval_using_default_session
return session.run(tensors, feed_dict)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 372, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 636, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 708, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 728, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.OutOfRangeError: RandomShuffleQueue '_0_input/shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
[[Node: input/shuffle_batch = QueueDequeueMany[_class=["loc:@input/shuffle_batch/random_shuffle_queue"], component_types=[DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](input/shuffle_batch/random_shuffle_queue, input/shuffle_batch/n)]]
Caused by op u'input/shuffle_batch', defined at:
File "4_Treino_Rede_Neural.py", line 113, in <module>
x, y_ = inputs(train=True, batch_size=FLAGS.batch_size, num_epochs=FLAGS.num_epochs)
File "4_Treino_Rede_Neural.py", line 93, in inputs
min_after_dequeue=1000)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/input.py", line 779, in shuffle_batch
dequeued = queue.dequeue_many(batch_size, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/data_flow_ops.py", line 400, in dequeue_many
self._queue_ref, n=n, component_types=self._dtypes, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 465, in _queue_dequeue_many
timeout_ms=timeout_ms, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 704, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2260, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1230, in __init__
self._traceback = _extract_stack()
</code></pre>
<p>My program is:</p>
<pre><code>from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os.path
import time
import numpy as np
import tensorflow as tf
# Basic model parameters as external flags.
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_integer('num_epochs', 2, 'Number of epochs to run trainer.')
flags.DEFINE_integer('batch_size', 100, 'Batch size.')
flags.DEFINE_string('train_dir', '/root/data', 'Directory with the training data.')
#flags.DEFINE_string('train_dir', '/root/data2', 'Directory with the training data.')
# Constants used for dealing with the files, matches convert_to_records.
TRAIN_FILE = 'train.tfrecords'
VALIDATION_FILE = 'validation.tfrecords'
# Set-up dos pacotes
sess = tf.InteractiveSession()
def read_and_decode(filename_queue):
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(
        serialized_example,
        # Defaults are not specified since both keys are required.
        features={
            'image_raw': tf.FixedLenFeature([], tf.string),
            'label': tf.FixedLenFeature([], tf.int64),
        })

    # Convert from a scalar string tensor (whose single string has
    # length mnist.IMAGE_PIXELS) to a uint8 tensor with shape
    # [mnist.IMAGE_PIXELS].
    image = tf.decode_raw(features['image_raw'], tf.uint8)
    image.set_shape([784])

    # OPTIONAL: Could reshape into a 28x28 image and apply distortions
    # here. Since we are not applying any distortions in this
    # example, and the next step expects the image to be flattened
    # into a vector, we don't bother.

    # Convert from [0, 255] -> [-0.5, 0.5] floats.
    image = tf.cast(image, tf.float32) * (1. / 255) - 0.5

    # Convert label from a scalar uint8 tensor to an int32 scalar.
    label = tf.cast(features['label'], tf.int32)

    return image, label


def inputs(train, batch_size, num_epochs):
    """Reads input data num_epochs times.

    Args:
      train: Selects between the training (True) and validation (False) data.
      batch_size: Number of examples per returned batch.
      num_epochs: Number of times to read the input data, or 0/None to
        train forever.

    Returns:
      A tuple (images, labels), where:
      * images is a float tensor with shape [batch_size, 30,26,1]
        in the range [-0.5, 0.5].
      * labels is an int32 tensor with shape [batch_size] with the true label,
        a number in the range [0, char letras).

    Note that an tf.train.QueueRunner is added to the graph, which
    must be run using e.g. tf.train.start_queue_runners().
    """
    if not num_epochs: num_epochs = None
    filename = os.path.join(FLAGS.train_dir,
                            TRAIN_FILE if train else VALIDATION_FILE)

    with tf.name_scope('input'):
        filename_queue = tf.train.string_input_producer(
            [filename], num_epochs=num_epochs)

        # Even when reading in multiple threads, share the filename
        # queue.
        image, label = read_and_decode(filename_queue)

        # Shuffle the examples and collect them into batch_size batches.
        # (Internally uses a RandomShuffleQueue.)
        # We run this in two threads to avoid being a bottleneck.
        images, sparse_labels = tf.train.shuffle_batch(
            [image, label], batch_size=batch_size, num_threads=2,
            capacity=1000 + 3 * batch_size,
            # Ensures a minimum amount of shuffling of examples.
            min_after_dequeue=1000)

    return images, sparse_labels


def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)


def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)


def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')


def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')
#Variaveis
x, y_ = inputs(train=True, batch_size=FLAGS.batch_size, num_epochs=FLAGS.num_epochs)
#onehot_y_ = tf.one_hot(y_, 36, dtype=tf.float32)
#y_ = tf.string_to_number(y_, out_type=tf.int32)
#Layer 1
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x, [-1,28,28,1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
#Layer 2
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
#Densely Connected Layer
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
#Dropout - reduz overfitting
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
#Readout layer
W_fc2 = weight_variable([1024, 36])
b_fc2 = bias_variable([36])
#y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
#Train and evaluate
#cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
#cross_entropy = tf.reduce_mean(-tf.reduce_sum(onehot_y_ * tf.log(y_conv), reduction_indices=[1]))
cross_entropy = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(y_conv, y_))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.initialize_all_variables())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
for i in range(20000):
    if i % 100 == 0:
        train_accuracy = accuracy.eval(feed_dict={keep_prob: 1.0})
        print("step %d, training accuracy %g" % (i, train_accuracy))
    train_step.run(feed_dict={keep_prob: 0.5})
x, y_ = inputs(train=True, batch_size=2000)
#y_ = tf.string_to_number(y_, out_type=tf.int32)
print("test accuracy %g"%accuracy.eval(feed_dict={keep_prob: 1.0}))
coord.join(threads)
sess.close()
</code></pre>
<p>I have tried changing the num_epochs to <code>10000</code> and to <code>None</code> but the same error message appears. I am wondering if anyone knows how to solve this.</p>
<p>Thanks
Marcelo</p>
|
<p>This looks like an issue with your <code>image.set_shape([784])</code>. The error says it expected something of size [784] but got [6272]. The images in this tutorial are 28x28, which gives 784 pixels, and 6272 = 8 x 784. A factor of exactly 8 usually means the <code>image_raw</code> bytes were serialized as 64-bit values (8 bytes per pixel) but are being decoded with <code>tf.decode_raw(..., tf.uint8)</code>, which yields 8 elements per pixel. Decode with the dtype that was used when the TFRecords were written (or convert the images to uint8 before writing them) and the shapes should line up.</p>
|
python|tensorflow|conv-neural-network
| 0
|
6,498
| 31,550,031
|
ValueError with Pandas - shaped of passed values
|
<p>I'm trying to use Pandas and PyODBC to pull from a SQL Server View and dump the contents to an excel file.</p>
<p>However, I'm getting the error when dumping the data frame (I can print the colums and dataframe content):</p>
<pre><code>ValueError: Shape of passed values is (1, 228), indices imply (2, 228)
</code></pre>
<p>There are several other issues on this forum pertaining to the same issue, but none discuss pulling from a SQL Server table.</p>
<p>I can't figure out what is causing this error, and altering the view to cast the source columns differently has no effect.</p>
<p>Here is the python code i'm using:</p>
<pre><code>import pyodbc
import pandas as pd
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=servername;DATABASE=dbname;UID=username;PWD=password')
cursor = cnxn.cursor()
script = """
SELECT * FROM schema.ActiveEnrollmentCount
"""
cursor.execute(script)
columns = [desc[0] for desc in cursor.description]
data = cursor.fetchall()
df = pd.DataFrame(list(data), columns=columns)
writer = pd.ExcelWriter('c:\temp\ActiveEnrollment.xlsx')
df.to_excel(writer, sheet_name='bar')
writer.save()
</code></pre>
<p>The 2 columns I'm trying to pull are both 3-digit integers.</p>
|
<p>To query data from a database, you can better use the built-in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_query.html" rel="nofollow"><code>read_sql_query</code></a> function instead of doing the execute and converting to dataframe manually.<br>
For your example, this would give something like:</p>
<pre><code>df = pd.read_sql_query(script, cnxn)
</code></pre>
<p>See the <a href="http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries" rel="nofollow">docs</a> for more explanation on <code>read_sql</code>/<code>to_sql</code>.</p>
|
python|pandas|pyodbc
| 2
|
6,499
| 64,509,263
|
How do I rewrite the following SQL code in Pandas to display the query and not just the headers?
|
<p>I have a dataset and I am trying to write SQL query into Pandas.</p>
<p>The SQL query code is:</p>
<pre><code>`SELECT Industry_type, No_of_Employees, Employee_Insurance_Premium, Percent_Female_Employees FROM cdc_new
WHERE Industry_type= 'Hospitals' AND Employee_Insurance_Premium='Decreased'
ORDER BY Percent_Female_Employees DESC;`
</code></pre>
<p>This is the code that I wrote in Pandas:</p>
<pre><code>pd.DataFrame(cdc_new[(cdc_new.Industry_type == 'Hospitals') & (cdc_new.Employee_Insurance_Premium == 'Decreased')][['No_of_Employees', 'Industry_type', 'Employee_Insurance_Premium', 'Percent_Female_Employees']].sort_values(['Percent_Female_Employees'], ascending=[False]))
</code></pre>
<p>and I get an output with ONLY the headers and no text.</p>
|
<p>Assuming you have read in the entire table from sql with something like:</p>
<pre><code>cdc_new = pd.read_sql(query, conn)
</code></pre>
<p>You can use the following syntax:</p>
<pre><code>df = (cdc_new.loc[(cdc_new['Industry_type'] == 'Hospitals') &
(cdc_new['Employee_Insurance_Premium'] == 'Decreased'),
['Industry_type',
'No_of_Employees',
'Employee_Insurance_Premium',
'Percent_Female_Employees']]
.sort_values('Percent_Female_Employees', ascending=False))
df
</code></pre>
|
python|sql|pandas
| 1
|