Unnamed: 0 | id | title | question | answer | tags | score |
|---|---|---|---|---|---|---|
2,700
| 66,468,134
|
Any doable approach to use multiple GPUs, multiple processes with TensorFlow?
|
<p>I am using a docker container to run my experiment. I have multiple GPUs available and I want to use all of them for one program. To do so, I used <code>tf.distribute.MirroredStrategy</code> as suggested on the TensorFlow site, but it is not working. Here are the <a href="https://gist.github.com/adamFlyn/5c2e2797f78acfc0136eeb283bb2ec77" rel="nofollow noreferrer">full error messages on gist</a>.</p>
<p>Here is the available GPU info:</p>
<pre><code>+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:6A:00.0 Off | 0 |
| N/A 31C P8 15W / 70W | 0MiB / 15109MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 Tesla T4 Off | 00000000:6B:00.0 Off | 0 |
| N/A 31C P8 15W / 70W | 0MiB / 15109MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 Tesla T4 Off | 00000000:6C:00.0 Off | 0 |
| N/A 34C P8 15W / 70W | 0MiB / 15109MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 Tesla T4 Off | 00000000:6D:00.0 Off | 0 |
| N/A 34C P8 15W / 70W | 0MiB / 15109MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
</code></pre>
<p><strong>my current attempt</strong></p>
<p>here is my attempt using <code>tf.distribute.MirroredStrategy</code>:</p>
<pre><code>device_type = "GPU"
devices = tf.config.experimental.list_physical_devices(device_type)
devices_names = [d.name.split("e:")[1] for d in devices]
strategy = tf.distribute.MirroredStrategy(devices=devices_names[:3])
with strategy.scope():
    model.compile(optimizer=opt, loss="categorical_crossentropy", metrics=["accuracy"])
</code></pre>
<p>The above attempt is not working and gives the error listed in the gist above. I have not found another way of using multiple GPUs for a single experiment.</p>
<p>Does anyone have a workable approach to make this happen? Any thoughts?</p>
|
<h3>Is MirroredStrategy the proper way to distribute the workload?</h3>
<p>The approach is correct, as long as the GPUs are on the same host. The TensorFlow <a href="https://www.tensorflow.org/tutorials/distribute/keras" rel="nofollow noreferrer">manual</a> has examples of how <code>tf.distribute.MirroredStrategy</code> can be used with Keras to train on the MNIST set.</p>
<h3>Is MirroredStrategy the only strategy?</h3>
<p>No, there are multiple strategies that can be used to achieve the workload distribution. For example, <code>tf.distribute.MultiWorkerMirroredStrategy</code> can also be used to distribute the work on multiple devices through multiple workers.</p>
<p>The TF <a href="https://www.tensorflow.org/guide/distributed_training" rel="nofollow noreferrer">documentation</a> explains the strategies, the limitations associated with the strategies and provides some examples to help kick-start the work.</p>
<h3>The strategy is throwing an error</h3>
<p>According to the <a href="https://github.com/tensorflow/tensorflow/issues/40366" rel="nofollow noreferrer">issue</a> from GitHub, the <code>ValueError: SyncOnReadVariable does not support 'assign_add' .....</code> is a bug in TensorFlow which is fixed in TF 2.4.</p>
<p>You can try to upgrade the TensorFlow libraries with</p>
<pre><code>pip install --ignore-installed --upgrade tensorflow
</code></pre>
<h3>Implementing variables that are not aware of distributed strategy</h3>
<p>If you have tried the standard <a href="https://www.tensorflow.org/tutorials/distribute/keras" rel="nofollow noreferrer">example</a> from the documentation, and it works fine, but your model is not working, you might be having variables that are incorrectly set-up or you are using <code>distributed variables</code> that do not have support for the aggregation functions required by the distributed strategy.</p>
<p>As per the TF documentation:</p>
<blockquote>
<p>..."
A distributed variable is variables created on multiple devices. As discussed in the glossary, mirrored variable and SyncOnRead variable are two examples.
"...</p>
</blockquote>
<p>To better understand how to implement custom support for distributed variables, check the following page in the <a href="https://www.tensorflow.org/api_docs/python/tf/distribute/StrategyExtended" rel="nofollow noreferrer">documentation</a>.</p>
|
docker|tensorflow|parallel-processing|gpu
| 1
|
2,701
| 66,621,093
|
To determine total runtime from excel using python pandas
|
<p>I have the below columns in an Excel file and I need to find the total runtime using Python pandas.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">Stage</th>
<th style="text-align: center;">JobName</th>
<th style="text-align: center;">BaseLineStartTime</th>
<th style="text-align: center;">BaseLineEndTime</th>
<th style="text-align: center;">StartTime-2Mar21</th>
<th style="text-align: center;">EndTime-2Mar21</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">App1</td>
<td style="text-align: center;">JobName1</td>
<td style="text-align: center;">20:00:00</td>
<td style="text-align: center;">20:11:45</td>
<td style="text-align: center;">20:05:31</td>
<td style="text-align: center;">20:18:43</td>
</tr>
<tr>
<td style="text-align: center;">App2</td>
<td style="text-align: center;">JobName2</td>
<td style="text-align: center;">20:00:00</td>
<td style="text-align: center;">20:12:11</td>
<td style="text-align: center;">20:05:31</td>
<td style="text-align: center;">20:23:11</td>
</tr>
<tr>
<td style="text-align: center;">App9</td>
<td style="text-align: center;">JobNamex</td>
<td style="text-align: center;">20:11:46</td>
<td style="text-align: center;">20:25:41</td>
<td style="text-align: center;">20:23:12</td>
<td style="text-align: center;">20:43:33</td>
</tr>
<tr>
<td style="text-align: center;">Day1</td>
<td style="text-align: center;">JobName1</td>
<td style="text-align: center;">20:25:42</td>
<td style="text-align: center;">20:30:42</td>
<td style="text-align: center;">20:43:44</td>
<td style="text-align: center;">20:48:44</td>
</tr>
<tr>
<td style="text-align: center;">Day2</td>
<td style="text-align: center;">JobName2</td>
<td style="text-align: center;">20:30:43</td>
<td style="text-align: center;">20:31:43</td>
<td style="text-align: center;">20:48:45</td>
<td style="text-align: center;">20:49:50</td>
</tr>
<tr>
<td style="text-align: center;">Day2</td>
<td style="text-align: center;">JobName3</td>
<td style="text-align: center;">20:30:43</td>
<td style="text-align: center;">20:40:43</td>
<td style="text-align: center;">20:48:45</td>
<td style="text-align: center;">20:58:45</td>
</tr>
</tbody>
</table>
</div>
<p>Note: I will have more columns based on the runtime dates.</p>
<p>To find the total run time, the logic is: App9(EndTime) - App1(StartTime), and Day2(EndTime of JobName2 or JobName3, whichever runs later) - Day1(StartTime).</p>
<p>I need to print the result in below format</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">Stage</th>
<th style="text-align: center;">BaseLineRunTime</th>
<th style="text-align: center;">Runtime-2Mar21</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">App</td>
<td style="text-align: center;">00:25:41</td>
<td style="text-align: center;">00:38:02</td>
</tr>
<tr>
<td style="text-align: center;">Day</td>
<td style="text-align: center;">00:15:01</td>
<td style="text-align: center;">00:15:01</td>
</tr>
</tbody>
</table>
</div>
|
<p>You can try something like this. If you need the column names and other things for multiple time frames, that's quite a bit more logic, but you can start here.</p>
<p>This makes the assumption you can sort rows correctly and the variable names are somewhat consistent so you can use <code>groupby()</code></p>
<p>See comments inline</p>
<pre><code>import io
import datetime
import pandas as pd
data = '''
Stage JobName BaseLineStartTime BaseLineEndTime StartTime-2Mar21 EndTime-2Mar21
App1 JobName1 20:00:00 20:11:45 20:05:31 20:18:43
App2 JobName2 20:00:00 20:12:11 20:05:31 20:23:11
App9 JobNamex 20:11:46 20:25:41 20:23:12 20:43:33
Day1 JobName1 20:25:42 20:30:42 20:43:44 20:48:44
Day2 JobName2 20:30:43 20:31:43 20:48:45 20:49:50
Day2 JobName3 20:30:43 20:40:43 20:48:45 20:58:45
'''
df = pd.read_csv(io.StringIO(data), sep='\s+', engine='python')
df.sort_values(['Stage', 'BaseLineStartTime'], inplace=True)
# define variable to group by; in this case creating column with 'App' and 'Day'
df['groupkey'] = df['Stage'].str[0:3]
dft = df.groupby('groupkey') # create the groupings
# create your output dataframe with Stage and empty strings for runtimes
df_final = pd.DataFrame({'Stage': ['App', 'Day'], 'BaseLineRunTime': ['',''], 'Runtime-2Mar21': ['','']})
# use [0] for first in group and [-1] for last in group
for gkey in dft.groups.keys():
    # print(dft.get_group(gkey))
    basestart = dft.get_group(gkey).iloc[0]['BaseLineStartTime']
    baseend = dft.get_group(gkey).iloc[-1]['BaseLineEndTime']
    m2Mar21start = dft.get_group(gkey).iloc[0]['StartTime-2Mar21']
    m2Mar21end = dft.get_group(gkey).iloc[-1]['EndTime-2Mar21']
    # print(m2Mar21end, m2Mar21start)
    basetime = datetime.datetime.strptime(baseend, '%H:%M:%S') - datetime.datetime.strptime(basestart, '%H:%M:%S')
    m2Mar21 = datetime.datetime.strptime(m2Mar21end, '%H:%M:%S') - datetime.datetime.strptime(m2Mar21start, '%H:%M:%S')
    df_final.loc[df_final['Stage'] == gkey, 'BaseLineRunTime'] = basetime
    df_final.loc[df_final['Stage'] == gkey, 'Runtime-2Mar21'] = m2Mar21
df_final  # print your output dataframe
</code></pre>
<p>Output</p>
<pre><code> Stage BaseLineRunTime Runtime-2Mar21
0 App 0:25:41 0:38:02
1 Day 0:15:01 0:15:01
</code></pre>
|
python|pandas|numpy
| 0
|
2,702
| 66,568,948
|
Overwrite Frame with other Frame after comparing
|
<p>I have 2 DataFrames and I want the 2nd frame to "overwrite"/update the 1st. Basically, where Table1-colB-value = Table2-oldB-value, overwrite Table1-colB-value with Table2-newB-value.</p>
<p>Table1</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>colA</th>
<th>colB</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>X</td>
</tr>
<tr>
<td>2</td>
<td>Y</td>
</tr>
<tr>
<td>3</td>
<td>Z</td>
</tr>
<tr>
<td>4</td>
<td>W</td>
</tr>
<tr>
<td>5</td>
<td>X</td>
</tr>
<tr>
<td>6</td>
<td>X</td>
</tr>
<tr>
<td>7</td>
<td>W</td>
</tr>
</tbody>
</table>
</div>
<p>Table2</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>oldB</th>
<th>newB</th>
</tr>
</thead>
<tbody>
<tr>
<td>X</td>
<td>L</td>
</tr>
<tr>
<td>Y</td>
<td>M</td>
</tr>
<tr>
<td>Z</td>
<td>N</td>
</tr>
<tr>
<td>W</td>
<td>O</td>
</tr>
</tbody>
</table>
</div>
<p>Desired result after overwriting:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>colA</th>
<th>colB</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>L</td>
</tr>
<tr>
<td>2</td>
<td>M</td>
</tr>
<tr>
<td>3</td>
<td>N</td>
</tr>
<tr>
<td>4</td>
<td>O</td>
</tr>
<tr>
<td>5</td>
<td>L</td>
</tr>
<tr>
<td>6</td>
<td>L</td>
</tr>
<tr>
<td>7</td>
<td>O</td>
</tr>
</tbody>
</table>
</div>
<p>Is something like that possible? I did something similar using <code>replace</code>, but comparing 2 frames would be way better; e.g., if the new-value input frame changes, you wouldn't have to adjust the replace code at every respective point.</p>
|
<p>There are plenty of ways to do this. A fast way would be to convert your df2 to a <code>dict</code>, using the oldB column as your keys, and then <code>map</code> it onto colB in df1:</p>
<pre><code>d = dict(zip(df2.oldB, df2.newB))
df1['new_colB'] = df1['colB'].map(d)
</code></pre>
<p>You will get back</p>
<pre><code> colA colB new_colB
0 1 X L
1 2 Y M
2 3 Z N
3 4 W O
4 5 X L
5 6 X L
6 7 W O
</code></pre>
<p><strong>EDIT</strong>: I have added a P in colB of <code>df1</code> to illustrate.</p>
<p>You can pretty much do the same process, just wrap it in <code>np.where()</code> and set the condition as below, or use <code>fillna()</code> to fill the <code>nulls</code> with 'colB':</p>
<pre><code># Using np.where
import numpy as np
df1['new_colB'] = np.where(df1['colB'].map(d).isnull(),df1['colB'],df1['colB'].map(d))
# Using fillna()
df1['new_colB'] = (df1['colB'].map(d)).fillna(df1['colB'])
</code></pre>
<p>I would go for the 2nd option because the first one maps twice.</p>
<p>Both will give you:</p>
<pre><code>df1
colA colB new_colB
0 1 X L
1 2 Y M
2 3 Z N
3 4 W O
4 5 X L
5 6 X L
6 7 W O
7 8 P P
</code></pre>
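<p>Since the question mentions <code>replace</code>: the same dict can also be passed straight to <code>Series.replace</code>, which leaves unmatched values (like the extra <code>P</code>) untouched without needing <code>fillna</code>. A sketch, with the hypothetical extra row included:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'colA': [1, 2, 3, 4, 5, 6, 7, 8],
                    'colB': ['X', 'Y', 'Z', 'W', 'X', 'X', 'W', 'P']})
df2 = pd.DataFrame({'oldB': ['X', 'Y', 'Z', 'W'],
                    'newB': ['L', 'M', 'N', 'O']})

d = dict(zip(df2.oldB, df2.newB))
df1['new_colB'] = df1['colB'].replace(d)  # 'P' has no mapping and is left as-is
print(df1)
```

<p>Note that <code>replace</code> with a dict can be slower than <code>map</code> + <code>fillna</code> on large frames, but it reads cleanly and needs no null handling.</p>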
|
python|pandas
| 2
|
2,703
| 66,350,225
|
Select rows in a numpy array where there are no "holes", the holes being 0 (example ...,1,0,1,...)
|
<p>How can I select rows in a numpy array where there are no "holes", the holes being 0? For example, if my input is:</p>
<pre><code>M = np.array([[0,10,0,20,30],[15,0,0,25,35],[0,40,40,40,0],[50,0,50,0,50]])
</code></pre>
<p>I would like the output to be :</p>
<pre><code>M = np.array([[15,0,0,25,35],[0,40,40,40,0]])
</code></pre>
<p>The first and last rows were not selected because they contain the sequence "non-zero integer, 0, non-zero integer".</p>
|
<p>To detect a "hole" in a row, define the following function:</p>
<pre><code>def hasHole(row):
wrk = np.vstack([np.roll(row, -1), (row == 0).astype(int), np.roll(row, 1)])[:, 1:-1]
return np.not_equal(wrk, 0).all(0).any()
</code></pre>
<p>Then, to find boolean indices of rows with holes, run:</p>
<pre><code>idx = np.apply_along_axis(hasHole, axis=1, arr=M)
</code></pre>
<p>And finally, to get the expected result, run:</p>
<pre><code>result = M[~idx]
</code></pre>
<p>The result is:</p>
<pre><code>array([[15, 0, 0, 25, 35],
[ 0, 40, 40, 40, 0]])
</code></pre>
|
python|arrays|numpy|select|filter
| 1
|
2,704
| 66,696,564
|
How can I slice a TF dataset so that there's 500 negative examples and 500 positive examples? (IMDB dataset)
|
<p>I have the following dataset:</p>
<pre><code>train = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=64, validation_split=0.2,
subset='training', seed=123)
test = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=64, validation_split=0.2,
subset='validation', seed=123)
</code></pre>
<p>and I am trying to run BERT on this dataset; however, I only want 1000 examples total (500 +ve and 500 -ve examples). Is there a quick and neat way to do this? I am quite new to TF datasets, so I'm not sure how I can manipulate them...</p>
|
<p>As your dataset is of type <code>tf.data.Dataset</code>, this makes everything a lot easier.
You first have to filter the positive and the negative examples from the training and validation datasets, and then take 500 of each.</p>
<p>A few considerations follow. I use the IMDB dataset from the <code>tfds</code> package, but you can apply the concept to your example as well. I just don't know exactly how your dataset is built up; I am assuming it to be the same.</p>
<pre><code># import tensorflow_datasets package.
import tensorflow_datasets as tfds
# load the imdb dataset from the tfds, here you can have your own dataset as well.
dataset, info = tfds.load('imdb_reviews/plain_text', with_info=True, as_supervised=True, shuffle_files=True)
# Here the data is of type tuple and x is the imdb review whereas y is the label.
# 1 means positive and 0 means negative
updated_train_pos = dataset['train'].filter(lambda x,y: y == 1).take(500)
updated_train_neg = dataset['train'].filter(lambda x,y: y == 0).take(500)
train = updated_train_pos.concatenate(updated_train_neg)
# just reshuffle your dataset so that your batch might get positive as well as negative samples for training.
train = train.shuffle(1000, reshuffle_each_iteration=True)
</code></pre>
<p>Follow the same steps for getting your validation dataset ready.</p>
|
python|tensorflow|dataset|tensorflow-datasets
| 1
|
2,705
| 70,915,528
|
New to Pandas - Indexing
|
<p>I have loaded a .CSV file into a pandas dataframe (with pd.read_csv) in Jupyter and am trying to replace a NaN value with the boolean value 'False'. I was able to identify the row where the given NaN value is by: <br></p>
<p><code>dataframe[dataframe['columnname'].isnull()] </code><br></p>
<p>However, now I am having trouble selecting the field in order to replace it with the desired 'False' value. I recall that in numpy it was sufficient to name the row and column number (e.g. <code>dataframe[3,5]</code>) to extract the desired location.</p>
<p>In the current case, whenever I try to use a row number I get an 'invalid key' error.
Here are some methods that have failed:</p>
<p><code>dataframe[dataframe['columnname'].isnull()].fillna('False') </code> - fills in every NAN value in row with 'False', instead of filling in one cell <br>
<code>dataframe[rownumber, columnnumber]</code> - invalid key error<br>
<code>dataframe[rownumber, 'ColumnName']</code> - throws invalid key as well<br></p>
<p>I am really sorry to bother you with such a silly question; I would really appreciate any hint.</p>
|
<p>You have to read <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html" rel="nofollow noreferrer">Indexing and selecting data</a>:</p>
<blockquote>
<p>dataframe[dataframe['columnname'].isnull()].fillna('False') - fills in every NAN value in row with 'False', instead of filling in one cell</p>
</blockquote>
<pre><code>df['columnname'] = df['columnname'].fillna(False)
</code></pre>
<p>Note: <em>in one cell</em>, do you mean in one column?</p>
<blockquote>
<p>dataframe[rownumber, columnnumber] - invalid key error</p>
</blockquote>
<pre><code>df.iloc[rownumber, columnnumber]
</code></pre>
<blockquote>
<p>dataframe[rownumber, 'ColumnName'] - throws invalid key as well</p>
</blockquote>
<pre><code>df.iloc[rownumber, df.columns.get_loc('ColumnName')]
</code></pre>
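<p>A minimal sketch tying these together, with a hypothetical column name <code>flag</code>: locate the NaN rows, then overwrite a single cell with <code>.loc</code> rather than filling the whole column:</p>

```python
import numpy as np
import pandas as pd

# Hypothetical data: 'flag' plays the role of 'columnname' in the question
df = pd.DataFrame({'flag': [True, np.nan, True], 'x': [1, 2, 3]})

# Index labels of the rows where 'flag' is NaN
nan_rows = df.index[df['flag'].isnull()]

# Overwrite just one cell: .loc takes (row label, column name)
df.loc[nan_rows[0], 'flag'] = False
print(df)
```

<p>Use <code>.iloc</code> instead when you have integer positions rather than labels, as shown in the answer above.</p>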
|
python|pandas|dataframe
| 0
|
2,706
| 71,079,012
|
Design pattern - Python function applied to different objects
|
<p>I have modified Numpy's <a href="https://numpy.org/doc/stable/reference/generated/numpy.apply_along_axis.html" rel="nofollow noreferrer"><code>apply_along_axis()</code></a> function and I want to be able to use this function on different objects (such as <code>xarray.DataArray</code> and other multi-dimensional data structures). My modified <code>apply_along_axis()</code> coerces the input to a numpy array using <a href="https://numpy.org/doc/stable/reference/generated/numpy.asanyarray.html?highlight=asanyarray#numpy.asanyarray" rel="nofollow noreferrer"><code>np.asanyarray()</code></a>. I would like to have a design pattern where i can apply this function to different objects but preserve the meta-data associated with those objects. The meta-data/attributes are now lost because of the coercion to a numpy array.</p>
<p>I do not want to modify the core functionality of <code>apply_along_axis()</code> or do type checking on the arguments within <code>apply_along_axis()</code>. I want to be able to easily add support for new data structures in the future... I was thinking of either wrapping the input data structures or the <code>apply_along_axis()</code> function, but I'm not sure how to go about it. Looking for elegant solutions.</p>
<p>Here is the code for the <code>apply_along_axis()</code> function:</p>
<pre><code>import numpy as np

def apply_along_axis(func1d, axis, arr, progress_bar=None, *args, **kwargs):
    '''
    Modified from numpy's apply_along_axis function
    '''
    arr = np.asanyarray(arr)
    nd = arr.ndim
    # arr, with the iteration axis at the end
    in_dims = list(range(nd))
    inarr_view = np.transpose(arr, in_dims[:axis] + in_dims[axis+1:] + [axis])
    # compute indices for the iteration axes
    inds = iter(np.ndindex(inarr_view.shape[:-1]))
    # invoke the function on the first item
    try:
        ind0 = next(inds)
    except StopIteration:
        raise ValueError(
            'Cannot apply_along_axis when any iteration dimensions are 0'
        ) from None
    res = func1d(inarr_view[ind0], *args, **kwargs)
    # build a buffer for storing evaluations of func1d.
    # remove the requested axis, and add the new ones on the end.
    # laid out so that each write is contiguous.
    # for a tuple index inds, buff[inds] = func1d(inarr_view[inds])
    buff = np.zeros(inarr_view.shape[:-1] + res.shape, res.dtype)
    # permutation of axes such that out = buff.transpose(buff_permute)
    buff_dims = list(range(buff.ndim))
    buff_permute = (
        buff_dims[0 : axis] +
        buff_dims[buff.ndim-res.ndim : buff.ndim] +
        buff_dims[axis : buff.ndim-res.ndim]
    )
    # save the first result, then compute and save all remaining results
    buff[ind0] = res
    if progress_bar:
        inds = progress_bar(list(inds))
    for ind in inds:
        buff[ind] = func1d(inarr_view[ind], *args, **kwargs)
    # finally, rotate the inserted axes back to where they belong
    return np.transpose(buff, buff_permute)
</code></pre>
<p>Edit: I realised that the part I left out from the original numpy function might actually be part of the solution. I think i'm looking for a functionality similar to <code>__array_wrap__</code>:</p>
<pre><code># wrap the array, to preserve subclasses
buff = res.__array_wrap__(buff)
</code></pre>
<p>Still not sure how to adapt or write my own <code>__array_wrap__</code> to preserve Xarray and other class attributes... Any ideas are welcome</p>
|
<p>Just to highlight a couple of features in the function.</p>
<p>First make a view that has <code>axis</code> last:</p>
<pre><code>inarr_view = np.transpose(arr, in_dims[:axis] + in_dims[axis+1:] + [axis])
</code></pre>
<p>Then make an iterator - on all axes except the last one:</p>
<pre><code>inds = iter(ndindex(inarr_view.shape[:-1]))
</code></pre>
<p>This is a trial calculation, used to determine the <code>dtype</code> and <code>shape</code> of what <code>func1d</code> does:</p>
<pre><code>res = func1d(inarr_view[ind0], *args, **kwargs)
</code></pre>
<p>Then it makes an array to receive results:</p>
<pre><code>buff = np.zeros(inarr_view.shape[:-1] + res.shape, res.dtype)
</code></pre>
<p>The work is an iteration over all the <code>inds</code> indices:</p>
<pre><code>for ind in inds:
buff[ind] = func1d(inarr_view[ind], *args, **kwargs)
</code></pre>
<p>where <code>inarr_view[ind]</code> is a 1d array.</p>
<p>Imagine <code>arr</code> is a 2d array, and we want to pass rows to <code>func1d</code>. We could just do <code>np.array([func1d(x) for x in arr])</code>. That is, just iterate on rows, passing them one at time to <code>func1d</code>, and make an array from that. <code>apply...</code> is just a generalization of that pattern.</p>
<p>to illustrate:</p>
<pre><code>In [29]: def foo(x):
...: print('x',x)
...: return x**2
...:
In [30]: arr = np.arange(12).reshape(3,4)
In [31]: np.apply_along_axis(foo, 1, arr)
x [0 1 2 3]
x [4 5 6 7]
x [ 8 9 10 11]
Out[31]:
array([[ 0, 1, 4, 9],
[ 16, 25, 36, 49],
[ 64, 81, 100, 121]])
In [32]: [foo(row) for row in arr]
x [0 1 2 3]
x [4 5 6 7]
x [ 8 9 10 11]
Out[32]: [array([0, 1, 4, 9]), array([16, 25, 36, 49]), array([ 64, 81, 100, 121])]
</code></pre>
<p>times (without the print):</p>
<pre><code>In [35]: timeit np.apply_along_axis(foo, 1, arr)
75.7 µs ± 2.4 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [36]: timeit np.array([foo(row) for row in arr])
11.4 µs ± 18.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
</code></pre>
<p><code>apply</code> scales better, and may be faster for large enough arrays.</p>
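<p>On the metadata question itself: a minimal sketch of the <code>__array_wrap__</code>-style idea, using a plain <code>ndarray</code> subclass that carries an attribute dict through. This is only an illustration of the wrapping pattern, not actual xarray support:</p>

```python
import numpy as np

class MetaArray(np.ndarray):
    """ndarray subclass that carries a .meta dict alongside the data."""
    def __new__(cls, arr, meta=None):
        obj = np.asarray(arr).view(cls)
        obj.meta = dict(meta or {})
        return obj

    def __array_finalize__(self, obj):
        # Called on views/new instances; copy metadata from the source if any
        self.meta = getattr(obj, 'meta', {})

a = MetaArray(np.arange(12).reshape(3, 4), meta={'units': 'm'})
out = np.apply_along_axis(lambda x: x ** 2, 1, a)

# Re-wrap the result so the metadata survives the round trip
wrapped = MetaArray(out, meta=a.meta)
print(wrapped.meta)
```

<p>A generic wrapper around <code>apply_along_axis()</code> could do the same re-attachment step for each supported container type, keeping the core function untouched.</p>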
|
python|numpy|design-patterns|wrapper
| 0
|
2,707
| 70,850,236
|
How to adjust row height with pandas Styler
|
<p>I have a styled pandas DataFrame that I want to export to HTML with smaller rows than default. I don't know much about CSS so I haven't found a way to make it work so far. Can it be achieved and if so, how?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({"foo": [1, 2, 3], "bar": [4, 5, 6]})
styler = df.style
... # Here I adjust a few styling options
html = styler.to_html()
</code></pre>
|
<p>Set the styles for the row and data-cell line height, and reset the padding:</p>
<pre><code>styler.set_table_styles([
{"selector": "tr", "props": "line-height: 12px;"},
{"selector": "td,th", "props": "line-height: inherit; padding: 0;"}
])
</code></pre>
|
python|html|css|pandas
| 2
|
2,708
| 70,821,976
|
Efficiency of ordering the elements of a numpy array, by occurrences in a different array
|
<p>I have the following code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
def suborder(x, y):
pos = np.in1d(x, y, assume_unique=True)
return x[pos]
</code></pre>
<p><code>x</code> and <code>y</code> are 1d numpy integer arrays, and the elements of <code>y</code> are a subset of those in <code>x</code>, and neither array has repeats. The result is the elements of <code>y</code>, in the order they appear in <code>x</code>. The code gives the result I want. But the intermediate array <code>pos</code> is the same size as <code>x</code> and in many use cases <code>y</code> is much, much smaller than <code>x</code>. Is there a way I can more directly get the result without allocating the intermediate array <code>pos</code> so as to save some memory?</p>
<p><code>x</code> is not sorted. In my case its elements are the ids of objects, taking the values 0 to len(x)-1 in an unspecified order, and it is sorted in order of a score assigned to each object. The purpose of <code>suborder</code> is to order subsets in that same score order.</p>
<p><code>x</code> is around 10 million elements, and I have many different values for <code>y</code>, some approaching the size of <code>x</code>, all the way down to just a handful of elements.</p>
<p>Edit: I get <code>x</code> from doing an <code>argsort</code> on a set of scores for objects. I had imagined that it would be better to sort once for all scores, and then use that ordering to impose an order on the subsets. It may actually be better to take <code>scores[y]</code>, then <code>argsort</code> that and take the elements of <code>y</code> in that order (for each <code>y</code>).</p>
|
<p><code>in1d</code> starts with:</p>
<pre><code>if len(ar2) < 10 * len(ar1) ** 0.145 or contains_object:
...
mask = np.zeros(len(ar1), dtype=bool)
for a in ar2:
mask |= (ar1 == a)
return mask
</code></pre>
<p>In other words, it does an equality test for each element of <code>y</code>. If your size difference isn't that large, it uses a different method, one based on concatenating the arrays and doing an <code>argsort</code>.</p>
<p>I can imagine using <code>np.flatnonzero(ar1==a)</code> to get the equivalent indices and concatenating them. But that will preserve the <code>y</code> order.</p>
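<p>Following the edit in the question, a sketch of the lighter-memory route: instead of building a full-size boolean mask over <code>x</code>, argsort just the scores of the subset (assuming <code>scores</code>, the array that <code>x</code> was argsorted from, is still available):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random(20)          # hypothetical per-object scores
x = np.argsort(scores)           # all object ids, ordered by score
y = np.array([7, 3, 12, 0])      # a subset of ids, in arbitrary order

def suborder_small(scores, y):
    # Sort only the subset by its own scores: O(len(y)) memory instead of O(len(x))
    return y[np.argsort(scores[y])]

# Matches the mask-based result (unique scores assumed, so no ties)
mask = np.isin(x, y, assume_unique=True)
assert np.array_equal(suborder_small(scores, y), x[mask])
print(suborder_small(scores, y))
```

<p>For small <code>y</code> this touches only len(y) score values, so no array the size of <code>x</code> is allocated per query.</p>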
|
python|numpy
| 0
|
2,709
| 51,564,118
|
Convert string to pandas dataframe
|
<pre><code>import pandas as pd
from StringIO import StringIO
msg1 = '" feature1 feature2 UIN Comop YYYYMM Sales Month grain\\n0 212 212 F1230901 220ES 201202 212 2 F1230901220ES\\n"'
def result_trans(res_str):
    print res_str
    res_str = StringIO(res_str)
    part_hist_df = pd.read_csv(res_str, sep="\s+")
    return part_hist_df

print result_trans(msg1)
</code></pre>
<p>I need this string to be converted to a pandas dataframe as below. Can anyone please help?</p>
<pre><code>feature1 feature2 UIN Comop YYYYMM Sales Month grain
212 212 F1230901 220ES 201202 212 2 F1230901220ES
</code></pre>
|
<p>IIUC, use <code>pd.read_table</code> with <code>io.StringIO</code>:</p>
<pre><code>import io

msg1 = '" feature1 feature2 UIN Comop YYYYMM Sales Month grain\\n0 212 212 F1230901 220ES 201202 212 2 F1230901220ES\\n"'
pd.read_table(io.StringIO(msg1.strip('"').replace('\\n', '\n')), delim_whitespace=True)
</code></pre>
<p>If the data is just the string, there is no need to <code>strip</code>:</p>
<pre><code>pd.read_table(io.StringIO(msg1), delim_whitespace=True)
</code></pre>
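<p>For recent pandas, where <code>delim_whitespace</code> is deprecated, an equivalent sketch with <code>read_csv</code> and <code>sep=r'\s+'</code>:</p>

```python
import io
import pandas as pd

msg1 = '" feature1 feature2 UIN Comop YYYYMM Sales Month grain\\n0 212 212 F1230901 220ES 201202 212 2 F1230901220ES\\n"'

# Drop the surrounding quotes and turn the literal \n into real newlines
cleaned = msg1.strip('"').replace('\\n', '\n')
df = pd.read_csv(io.StringIO(cleaned), sep=r'\s+')
print(df)
```

<p>The data row has one more field than the header (the leading <code>0</code>), so pandas treats that first field as the index, leaving the eight named columns.</p>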
|
python|pandas
| 0
|
2,710
| 35,806,305
|
Build an ever growing 3D numpy array
|
<p>I have a function (<code>MyFunct(X)</code>) that, depending on the value of <code>X</code>, will return <em>either</em> a 3D numpy array (e.g <code>np.ones((5,2,3))</code> or an empty array (<code>np.array([])</code>). </p>
<pre><code>RetVal = MyFunct(X) # RetVal can be np.array([]) or np.ones((5,2,3))
</code></pre>
<p><strong>NB</strong> I'm using <code>np.ones((5,2,3))</code> as a way to generate fake data - in reality the content of the RetVal is all integers.</p>
<p><code>MyFunct</code> is called with a range of different X values, some of which will lead to an empty array being returned while others don't. </p>
<p>I'd like to create a new 3D numpy array (<code>OUT</code>) which is an <code>n</code> by 2 by 3 concatenated array of all the returned values from <code>MyFunct()</code>. The issue is that trying to concatenate a 3D array and an empty array causes an exception (understandably!) rather than just silently doing nothing. There are various ways around this:</p>
<ul>
<li>Explicitly checking if the <code>RetVal</code> is empty or not and then use <code>np.concatenate()</code></li>
<li>Using a try/except block and catching exceptions </li>
<li>Adding each value to a list and then post-processing by removing empty entries</li>
</ul>
<p>But these all feel ugly. Is there an efficient/fast way to do this 'correctly'?</p>
|
<p>You can reshape the arrays to a compatible shape:</p>
<pre><code>concatenate([MyFunct(X).reshape((-1,2,3)) for X in values])
</code></pre>
<p>Example :</p>
<pre><code>In [2]: def MyFunc(X): return ones((5,2,3)) if X%2 else array([])
In [3]: concatenate([MyFunc(X).reshape((-1,2,3)) for X in range(6)]).shape
Out[3]: (15, 2, 3)
</code></pre>
|
python|arrays|numpy
| 1
|
2,711
| 37,307,616
|
convert indices into corresponding pandas dataframe values
|
<p>I have a matrix of indices. I'd like to get the same matrix filled with values taken from a predefined pandas dataframe column, corresponding to the index at each position.</p>
<p>For example, index matrix</p>
<pre><code>[[0 1 2]
[1 0 2]
[2 1 3]
[3 4 2]]
</code></pre>
<p>pd.DataFrame["id"]:</p>
<pre><code>100
200
300
400
500
600
700
800
900
</code></pre>
<p>Expected result:</p>
<pre><code> [[100 200 300]
[200 100 300]
[300 100 400]
[400 500 300]]
</code></pre>
<p>It appears that</p>
<pre><code>t_ind = [ td[(td.index.isin(ind[:,0]))]["id"].values,
td[(td.index.isin(ind[:,1]))]["id"].values,
td[(td.index.isin(ind[:,2]))]["id"].values ]
</code></pre>
<p>breaks the structure and returns only unique values, while the full list is expected.</p>
<p>Any idea how to make the conversion properly?</p>
<p>NB: The data set is huge, so going element by element is not acceptable; the conversion should be done in a single operation.</p>
|
<h1>Setup</h1>
<p><code>i_s</code> is a list of list. This works equally well if it were a numpy array.</p>
<pre><code>i_s = [[0, 1, 2],
[1, 0, 2],
[2, 1, 3],
[3, 4, 2]]
s = pd.DataFrame([100, 200, 300, 400, 500, 600, 700, 800, 900])
</code></pre>
<p><code>s</code> does not have to be a <code>DataFrame</code>. I made it so to be consistent with the OP's question.</p>
<h1>Solution</h1>
<pre><code>pd.DataFrame([[s.iloc[i, 0] for i in i_s[j]] for j in range(len(i_s))])
0 1 2
0 100 200 300
1 200 100 300
2 300 200 400
3 400 500 300
</code></pre>
|
python|pandas
| 0
|
2,712
| 37,439,014
|
Possible for pandas dataframe to be rendered in a new window?
|
<p>We all love using pandas and how the dataframe shows us a snippet of the data. However within the ipython notebook, sometimes it is difficult to view all the data especially if it contains too many columns.</p>
<p>Was just wondering if there is an option to view the dataframe in a new window like Matplotlib when it renders its graph.</p>
|
<p>You can create a temporary file containing the HTML of the whole table, and then use the <code>webbrowser</code> module to open it. It would probably be best to simply create a function for displaying data frames in a new window:</p>
<pre><code>import webbrowser
import pandas as pd
from tempfile import NamedTemporaryFile
def df_window(df):
with NamedTemporaryFile(delete=False, suffix='.html') as f:
df.to_html(f)
webbrowser.open(f.name)
df = pd.DataFrame({'a': [10, 10, 10, 11], 'b': [8, 8 ,8, 9]})
df_window(df)
</code></pre>
<p><a href="https://i.stack.imgur.com/DeuGq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DeuGq.png" alt="enter image description here"></a></p>
<p><strong>Edit:</strong> In my answer <a href="https://stackoverflow.com/questions/38893448/pagination-on-pandas-dataframe-to-html/49134917#49134917">here</a> I show how to display a table in a new window with pagination, search, sort and other cool stuff using JQuery+DataTables.</p>
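If opening a browser window is more than needed, another option (a sketch) is to temporarily lift pandas' display limits so the notebook itself renders every column:

```python
import pandas as pd

df = pd.DataFrame({f'col{i}': range(3) for i in range(30)})

# Inside the context manager, no columns are truncated with "..."
with pd.option_context('display.max_columns', None, 'display.width', None):
    text = repr(df)
```

The settings revert automatically when the `with` block ends, so the rest of the notebook keeps the compact default display.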
|
python|pandas
| 3
|
2,713
| 37,305,370
|
create multiple columns from 1 column pandas
|
<p>I am trying to duplicate a column multiple time from a df such as</p>
<pre><code>df.head()
close
date
2015-09-23 17:00:00 1.3324
2015-09-23 17:01:00 1.3325
2015-09-23 17:02:00 1.3323
2015-09-23 17:03:00 1.3323
2015-09-23 17:04:00 1.3323
</code></pre>
<p>from a certain list of name, I want to duplicate that colum as many time as there is name in my list:</p>
<pre><code>list =['a','b','c']
</code></pre>
<p>and get</p>
<pre><code> df.head()
close a b c
date
2015-09-23 17:00:00 1.3324 1.3324 1.3324 1.3324
2015-09-23 17:01:00 1.3325 1.3325 1.3325 1.3325
2015-09-23 17:02:00 1.3323 1.3323 1.3323 1.3323
2015-09-23 17:03:00 1.3323 1.3323 1.3323 1.3323
2015-09-23 17:04:00 1.3323 1.3323 1.3323 1.3323
</code></pre>
<p>I tried</p>
<pre><code>df[list] = df
</code></pre>
<p>but I get "columns must be same length as key". Thanks for your help!</p>
|
<p>The simplest way would be to iterate through your list and create a new column for each key (side note: you should probably avoid using <code>list</code> as the name of a variable, since you'll overwrite the native <code>list</code>):</p>
<pre><code>keys = ['a','b','c']
for k in keys:
df[k] = df['close']
</code></pre>
<p>If you want to do it in one line, without a loop, you could do the following:</p>
<pre><code>keys = ['a','b','c']
df = df.join(pd.concat([df.close]*len(keys), keys=keys, axis=1))
</code></pre>
<p>Moving outwards from the middle, <code>[df.close]*len(keys)</code> creates a list with as many copies of the original dataframe column as there are keys in your list. These are then combined column-wise into one dataframe using <code>pd.concat()</code> (note <code>axis=1</code>, which also lets <code>keys=keys</code> set the column names). Now that you have a dataframe with your duplicate columns, you can add it to the original dataframe using <code>df.join()</code>.</p>
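Another concise option (a sketch, not from the original answer) is <code>DataFrame.assign</code> with a dict comprehension, which builds all the duplicate columns in one call:

```python
import pandas as pd

df = pd.DataFrame({'close': [1.3324, 1.3325, 1.3323]})
keys = ['a', 'b', 'c']

# assign(**mapping) adds one column per key, each a copy of 'close'
df = df.assign(**{k: df['close'] for k in keys})
```

Like the `join` approach, this returns a new dataframe and leaves the original `close` column untouched.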
|
python|pandas|multiple-columns
| 3
|
2,714
| 37,718,584
|
Removing outliers automatically in pandas data frame
|
<p>I have a pandas data frame:</p>
<pre><code>data = pd.read_csv(path)
</code></pre>
<p>I'm looking for a good way to remove outlier rows that have an extreme value in any of the features (I have 400 features in the data frame) before I run some prediction algorithms. </p>
<p>Tried a few ways but they don't seem to solve the issue: </p>
<ul>
<li><p><code>data[data.apply(lambda x: np.abs(x - x.mean()) / x.std() < 3).all(axis=1)]</code> </p></li>
<li><p>using <code>Standard Scaler</code> </p></li>
</ul>
|
<p>I think you can check your output by comparing both indexes with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.difference.html" rel="nofollow"><code>Index.difference</code></a>, because I think your solution works very nicely:</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(1234)
df = pd.DataFrame(np.random.randn(100, 3), columns=list('ABC'))
</code></pre>
<pre><code>print (df)
A B C
0 0.471435 -1.190976 1.432707
1 -0.312652 -0.720589 0.887163
2 0.859588 -0.636524 0.015696
3 -2.242685 1.150036 0.991946
4 0.953324 -2.021255 -0.334077
5 0.002118 0.405453 0.289092
6 1.321158 -1.546906 -0.202646
7 -0.655969 0.193421 0.553439
8 1.318152 -0.469305 0.675554
9 -1.817027 -0.183109 1.058969
10 -0.397840 0.337438 1.047579
11 1.045938 0.863717 -0.122092
12 0.124713 -0.322795 0.841675
13 2.390961 0.076200 -0.566446
14 0.036142 -2.074978 0.247792
15 -0.897157 -0.136795 0.018289
16 0.755414 0.215269 0.841009
17 -1.445810 -1.401973 -0.100918
18 -0.548242 -0.144620 0.354020
19 -0.035513 0.565738 1.545659
20 -0.974236 -0.070345 0.307969
21 -0.208499 1.033801 -2.400454
22 2.030604 -1.142631 0.211883
23 0.704721 -0.785435 0.462060
24 0.704228 0.523508 -0.926254
25 2.007843 0.226963 -1.152659
26 0.631979 0.039513 0.464392
27 -3.563517 1.321106 0.152631
28 0.164530 -0.430096 0.767369
29 0.984920 0.270836 1.391986
</code></pre>
<pre><code>df1 = df[df.apply(lambda x: np.abs(x - x.mean()) / x.std() < 3).all(axis=1)]
print (df1)
A B C
0 0.471435 -1.190976 1.432707
1 -0.312652 -0.720589 0.887163
2 0.859588 -0.636524 0.015696
3 -2.242685 1.150036 0.991946
4 0.953324 -2.021255 -0.334077
5 0.002118 0.405453 0.289092
6 1.321158 -1.546906 -0.202646
7 -0.655969 0.193421 0.553439
8 1.318152 -0.469305 0.675554
9 -1.817027 -0.183109 1.058969
10 -0.397840 0.337438 1.047579
11 1.045938 0.863717 -0.122092
12 0.124713 -0.322795 0.841675
13 2.390961 0.076200 -0.566446
14 0.036142 -2.074978 0.247792
15 -0.897157 -0.136795 0.018289
16 0.755414 0.215269 0.841009
17 -1.445810 -1.401973 -0.100918
18 -0.548242 -0.144620 0.354020
19 -0.035513 0.565738 1.545659
20 -0.974236 -0.070345 0.307969
22 2.030604 -1.142631 0.211883
23 0.704721 -0.785435 0.462060
24 0.704228 0.523508 -0.926254
25 2.007843 0.226963 -1.152659
26 0.631979 0.039513 0.464392
28 0.164530 -0.430096 0.767369
29 0.984920 0.270836 1.391986
30 0.079842 -0.399965 -1.027851
31 -0.584718 0.816594 -0.081947
</code></pre>
<pre><code>idx = df.index.difference(df1.index)
print (idx)
Int64Index([21, 27], dtype='int64')
print (df.loc[idx])
A B C
21 -0.208499 1.033801 -2.400454
27 -3.563517 1.321106 0.152631
</code></pre>
|
python-2.7|pandas|outliers
| 0
|
2,715
| 64,425,558
|
I am trying to create and implement a function that identifies outliers in datasets in Python using the numpy module, keep getting 'ValueError'
|
<p>I am attempting to create an 'Outlier' function that detects outliers in data sets. I am then trying to call the function into a for loop but it keeps giving me a <code>ValueError</code>. I have a brief understanding of why the error occurs. It's because numpy doesn't let you set arrays as Booleans (Please correct me if I'm wrong). I was just wondering if there was a way around this, and how would I implement the <code>a.any()</code>, <code>a.all()</code> suggestions the error is giving me.</p>
<p>Code:</p>
<pre><code>import numpy as np
def Outlier(a, IQR, Q1, Q3):
if a < Q1 - 1.5 * IQR or a > Q3 + 1.5 * IQR:
outlier = True
else:
outlier = False
return(outlier)
data_clean = []
Q1 = np.percentile(data, 25)
Q3 = np.percentile(data, 75)
print("Q1 = {:,.2f}".format(Q1))
print("Q3 = {:,.2f}".format(Q3))
n= len(data)
for i in range(n):
outlier[i] = Outlier(data, IQR, Q1, Q3) # Error
if outlier[i] == False : # Error
data_clean.append(data[i])
else:
print("value removed (outlier) = {:,.2f}".format(data[i]))
data_clean = np.asarray(data_clean)
n = len(data_clean)
print("n = {:.0f}".format(n))
print("data_clean = {}".format(data_clean))
</code></pre>
<p>Full error:</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-30-f686bd0a0718> in <module>
---> 19 outlier[i] = Outlier(data, IQR, Q1, Q3)
---> 20 if outlier[i] == False : #check for outlier
<ipython-input-29-1f034e2a09b6> in Outlier(a, IQR, Q1, Q3)
3 def Outlier(a, IQR, Q1, Q3):
4
---> 5 if a < Q1 - 1.5 * IQR or a > Q3 + 1.5 * IQR:
6
7 outlier = True
ValueError: The truth value of an array with more than one element is ambiguous.
Use a.any() or a.all()
</code></pre>
<p>Thanks in advance.
Just to clarify, the code above is meant to check for outliers in a dataset and then add the non-outliers to a <code>data_clean</code> list, leaving the dataset with no outliers.</p>
|
<p>I think that, rather than iterating, you could do everything inside the function without looping:</p>
<pre><code>import numpy as np
def detect_outliers(arr, pct_bounds=(75, 25), multiplier=1.5):
upper, lower = np.percentile(arr, pct_bounds)
spread = upper - lower
# Create a mask where the data are more than or less than 1.5 * IQR from the median
mask = (arr < (np.median(arr) - (multiplier * spread))) | (arr > (np.median(arr) + (multiplier * spread)))
outliers = arr[mask]
# Invert mask
new_array = arr[~mask]
return outliers, new_array
test_data = np.random.normal(size=100)
outliers, data = detect_outliers(test_data)
</code></pre>
|
python|numpy|statistics|dataset
| 2
|
2,716
| 64,573,085
|
Python: Replace the special character by NULL in each column in pandas dataframe
|
<p>I have a dataframe as follows:</p>
<pre><code>ID Date_Loading Date_delivery Value
001 01.11.2017 20.11.2017 200.34
002 %^&**##_ 15.01.2018 300.05
003 11.12.2018 _%67* 7*7%
</code></pre>
<p>As we can see that except <code>ID</code> column I have special character in all columns.</p>
<p>Objective: To replace those special character by <code>None</code>. So the final dataframe should look like:</p>
<pre><code>ID Date_Loading Date_delivery Value
001 01.11.2017 20.11.2017 200.34
002 Null 15.01.2018 300.05
003 11.12.2018 Null Null
</code></pre>
<p>Then, as a next step, I want to parse the date columns to YYYY-MM-DD format.</p>
<p>In order to accomplish this I am using the following code snippet:</p>
<pre><code>for c in df.columns.tolist():
df[c] = df[c].astype(str).str.replace(r"[^A-Za-z0-9]"," ")
df['Date_Loading'] = pd.to_datetime(df['Date_Loading'],error='coerce',format='YYYY-MM-DD')
df['Date_delivery'] = pd.to_datetime(df['Date_Loading'],error='coerce',format='YYYY-MM-DD')
</code></pre>
<p>But the above code is just not working!!! Even if I am trying to replace, it is not working.</p>
<p>Am I missing out anything?</p>
<p>P.S.: I have tried in SO - > <a href="https://stackoverflow.com/questions/50846719/cannot-replace-special-characters-in-a-python-pandas-dataframe">this</a> and <a href="https://stackoverflow.com/questions/45596529/replacing-special-characters-in-pandas-dataframe">this</a> but so far no luck</p>
|
<p>You can specify the format of the datetimes in the input data, here <code>DD.MM.YYYY</code>, by <code>'%d.%m.%Y'</code>, and for converting numbers use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_numeric.html" rel="nofollow noreferrer"><code>to_numeric</code></a>:</p>
<pre><code># no blanket character replace is needed: errors='coerce' below
# already turns unparseable values such as '%^&**##_' into NaT/NaN
df['Date_Loading'] = pd.to_datetime(df['Date_Loading'],errors='coerce',format='%d.%m.%Y')
df['Date_delivery'] = pd.to_datetime(df['Date_delivery'],errors='coerce',format='%d.%m.%Y')
df['Value'] = pd.to_numeric(df['Value'],errors='coerce')
print (df)
ID Date_Loading Date_delivery Value
0 1 2017-11-01 2017-11-20 200.34
1 2 NaT 2018-01-15 300.05
2 3 2018-12-11 NaT NaN
print (df.dtypes)
ID int64
Date_Loading datetime64[ns]
Date_delivery datetime64[ns]
Value float64
dtype: object
</code></pre>
<p>EDIT:</p>
<pre><code>dateparse = lambda x: pd.to_datetime(x, format='%d.%m.%Y', errors='coerce',)
df = pd.read_csv(file, parse_dates=['Date_Loading','Date_delivery'], date_parser=dateparse)
print (df)
ID Date_Loading Date_delivery Value
0 1 2017-11-01 2017-11-20 200.34
1 2 NaT 2018-01-15 300.05
2 3 2018-12-11 NaT 7*7%
</code></pre>
|
python|pandas
| 0
|
2,717
| 64,249,983
|
UnboundLocalError: local variable 'coordi' referenced before assignment
|
<p>Below is a python function that detects an object and returns its bounding box coordinates:</p>
<pre><code>def back_project_dilation(hist1,hist2,hsv,min_area,max_area,min_aspect,max_aspect,min_rect,max_rect):
Ratio=hist1/hist2
#calculating the M_blue
#spliting the target channels
h,s,v = cv2.split(hsv)
#backprojecting the R_red hist
B = Ratio[h.ravel(),s.ravel()]
B = np.minimum(B,1)
B = B.reshape(hsv_1.shape[:2])
B[B >0 ]+=1
B = np.minimum(B,1)
Open=cv2.morphologyEx(B,cv2.MORPH_OPEN,kernel)
dilation = cv2.dilate(Open,kernel,iterations = 2)
#dilation_special=dilation_special.astype(np.uint8)
dilation*= 255
dilation=dilation.astype(np.uint8)
contours =cv2.findContours(dilation, cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
cntss = imutils.grab_contours(contours)
if len(cntss) > 0:
# print('ture')
for cr in cntss:
area=cv2.contourArea(cr)
(x,y,w,h)=cv2.boundingRect(cr)
aspect=w/float(h)
if area>=min_area and area<=max_area and aspect>=min_aspect and aspect<=max_aspect:
center, (width,height),_=cv2.minAreaRect(cr)
rect=area/float((width*height))
if rect>=min_rect and rect<=max_rect:
coordi = cv2.boundingRect(cr)
return coordi
</code></pre>
<p>The detection method used inside the function is called "Histogram_back_projection"</p>
<p>The function is called four times inside a while loop, as it is used to detect small cars in two videos simultaneously:</p>
<pre><code>while True:
ret1,frame1=cap1.read()
ret2,frame2=cap2.read()
cv2.imwrite('image22.jpg',frame1)
if ret1 and ret2 !=True:
print('cannot open Videos, please cheack your source')
break
else:
hsv_1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2HSV)
hsv_hist_1 = cv2.calcHist([hsv_1.copy()],[0,1],None,[180,256],[0,180,0,256])
hsv_2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2HSV)
hsv_hist_2 = cv2.calcHist([hsv_2.copy()],[0,1],None,[180,256],[0,180,0,256])
B_red_coord=back_project_dilation(hist_red,hsv_hist_1,hsv_1,619,1090,1.2432432432432432,3.0588235294117645,0.7296458280876743,1)
B_red_2_coord = back_project_dilation(hist_red,hsv_hist_2,hsv_2,619,1090,1.2432432432432432,3.0588235294117645,0.7296458280876743,1)
B_blue_coord = back_project_dilation(hist_blue,hsv_hist_1,hsv_1,522,903,0.875,2.68,0.76504332843943325,1.000000410476877)
B_blue_2_coord = back_project_dilation(hist_blue,hsv_hist_2,hsv_2,522,903,0.875,2.68,0.76504332843943325,1.000000410476877)
</code></pre>
<p>So as a result i should obtain 4 outputs as the function is called 4 times (these are Centroids coordinates of two cars in two videos)</p>
<p>When i run the code, the following error occurs:</p>
<pre><code><module>
B_red_2_coord = back_project_dilation(hist_red,hsv_hist_2,hsv_2,619,1090,1.2432432432432432,3.0588235294117645,0.7296458280876743,1)
back_project_dilation
return coordi
UnboundLocalError: local variable 'coordi' referenced before assignment
</code></pre>
<p>I looked the problem up on Stack Overflow and found a similar one; however, I didn't know how to apply the suggested solution to my problem.</p>
<p>So, I would be grateful for some Help and thanks in Advance</p>
<p>Khaled Jbaili</p>
|
<p>I solved the problem by declaring the variable "coordi" as global.</p>
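A more robust alternative (a sketch with a toy stand-in for the OpenCV contour logic) is to initialize <code>coordi</code> before the loop and return <code>None</code> when no contour qualifies, then handle the no-detection case explicitly at the call site:

```python
def find_box(candidates, min_area=100):
    """Toy stand-in for the contour loop: return the first qualifying box."""
    coordi = None  # defined up front, so it always exists when we return
    for area, box in candidates:
        if area > min_area:
            coordi = box
            break
    return coordi

# The call site must handle the "nothing detected" case explicitly
box = find_box([(50, (0, 0, 5, 5)), (150, (10, 10, 20, 20))])
if box is not None:
    x, y, w, h = box
```

This avoids both the `UnboundLocalError` and the risk of a global holding a stale value from a previous frame.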
|
python|python-3.x|numpy|opencv|image-processing
| 0
|
2,718
| 47,542,212
|
Python: Splitting a string at any uppercase letter (as part of a rename for a column name)
|
<p>I would like to rename columns in a Pandas dataframe using the <em>rename</em> function, and therefore I would like to split each name (a string) at any uppercase letter within it.
So for example my column names are something like 'FooBar' or 'SpamEggs' and one column is called 'Monty-Python'. My goal is column names like 'foo_bar', 'spam_eggs' and 'monty_python'.</p>
<p>I know that </p>
<pre><code>'-'.join(re.findall('[A-Z][a-z]*', 'FooBar'))
</code></pre>
<p>will give me
<code>Foo-Bar</code></p>
<p>But this cannot be included in my <em>rename</em> function:</p>
<pre><code>df.rename(columns=lambda x: x.strip().lower().replace("-", "_"), inplace=True)
</code></pre>
<p>(should go between <em>strip</em> and <em>lower</em> but gives back a Syntax Error).</p>
<p>Can anyone help me to include the snippet to <em>rename</em> or help me find another solution than <em>findall</em>?</p>
|
<ol>
<li>Remove anything that is not a letter</li>
<li>Prepend an underscore (<code>_</code>) to uppercase letters that are not at the start of the string</li>
<li>Lowercase the result</li>
</ol>
<pre><code>df.columns
Index(['FooBar', 'SpamEggs', 'Monty-Python'], dtype='object')
df.columns.str.replace(r'[\W]', '', regex=True)\
          .str.replace(r'(?<!^)([A-Z])', r'_\1', regex=True)\
          .str.lower()
Index(['foo_bar', 'spam_eggs', 'monty_python'], dtype='object')
</code></pre>
<p>This solution generalises quite nicely. Assign the result back to <code>df.columns</code>.</p>
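To fold the same transformation directly into the OP's <code>rename</code> call, a sketch with <code>re.sub</code>:

```python
import re
import pandas as pd

df = pd.DataFrame(columns=['FooBar', 'SpamEggs', 'Monty-Python'])

# Strip non-word characters, underscore-separate the camel case, lowercase
df = df.rename(columns=lambda x: re.sub(r'(?<!^)([A-Z])', r'_\1',
                                        re.sub(r'\W', '', x.strip())).lower())
```

The inner `re.sub` removes characters like `-`, and the outer one inserts `_` before every non-initial capital before lowercasing.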
|
python|regex|pandas|dataframe
| 2
|
2,719
| 49,270,966
|
Create a DataFrame of combinations for each group with pandas
|
<p>The inputs are as follows.</p>
<pre class="lang-python prettyprint-override"><code>Out[178]:
group value
0 A a
1 A b
2 A c
3 A d
4 B c
5 C d
6 C e
7 C a
</code></pre>
<p>For this input, I want to create a combination for each group and create one DataFrame. How can I do it?</p>
<p>The output I want to get:</p>
<pre class="lang-python prettyprint-override"><code>Out[180]:
group 0 1
0 A a b
1 A a c
2 A a d
3 A b c
4 A b d
5 A c d
0 C d e
1 C d a
2 C e a
</code></pre>
|
<p>Using <code>combinations</code> in a comprehension</p>
<pre><code>from itertools import combinations
pd.DataFrame([
[n, x, y]
for n, g in df.groupby('group').value
for x, y in combinations(g, 2)
], columns=['group', 0, 1])
group 0 1
0 A a b
1 A a c
2 A a d
3 A b c
4 A b d
5 A c d
6 C d e
7 C d a
8 C e a
</code></pre>
|
python|pandas
| 14
|
2,720
| 49,331,882
|
Vectorizing a for loop with Numpy
|
<p>I have a for loop that I would like to vectorize with numpy. In the below snippet, <code>R</code>, <code>A</code>, and <code>done</code> are numpy arrays of length <code>num_rows</code>, while Q and Q1 are matrices of size <code>(num_rows, num_cols)</code>. Also worth noting, all elements of <code>A</code> are between <code>0</code> and <code>num_cols - 1</code>, and all elements of <code>done</code> are either <code>0</code> or <code>1</code>. I basically want to do the same thing as the below for-loop, but taking advantage of numpy vectorization.</p>
<p>Important Info:</p>
<ul>
<li><code>R</code> is a numpy array of length <code>num_rows</code>. Arbitrary values</li>
<li><code>A</code> is a numpy array of length <code>num_rows</code>. Values can be integers between 0 and <code>num_cols - 1</code></li>
<li><code>done</code> is a numpy array of length <code>num_rows</code>. Values are either 0 or 1</li>
<li><code>Q</code> is a 2D numpy array with shape <code>(num_rows, num_cols)</code></li>
<li><code>Q1</code> is also a 2D numpy array with shape <code>(num_rows, num_cols)</code></li>
</ul>
<p>Here is the loop:</p>
<pre><code> y = np.zeros((num_rows, num_cols))
for i in range(num_rows):
r = R[i]
a = A[i]
q = Q[i]
adjustment = r
if not done[i]:
adjustment += (gamma*max(Q1[i]))
q[a] = adjustment
y[i, :] = q
</code></pre>
<p>I think that I have gotten my "adjustments" in a vectorized way with the following lines, I just need to do the assignment to the <code>Q</code> matrix and output the correct <code>y</code> matrix.</p>
<p>These are the lines that I am using to vectorize the first part:</p>
<pre><code> q_max_adjustments = np.multiply(gamma * Q1.max(1), done) # This would be numpy array of length num_rows
true_adjustments = R + q_max_adjustments # Same dimension numpy array
</code></pre>
<p>An example input and output would be </p>
<pre><code>gamma = 0.99
R = numpy.array([1,2,0,3,2])
A = numpy.array([0,2,0,1,1])
done = numpy.array([0,1,0,0,1])
Q = numpy.array([[1,2,3],
[4,5,6],
[7,8,9],
[10,11,12],
[13,14,15]])
Q1 = numpy.array([[1,2,3],
[4,5,6],
[7,8,9],
[10,11,12],
[13,14,15]])
output y should be array([[ 3.97, 2. , 3. ],
[ 4. , 5. , 2. ],
[ 8.91, 8. , 9. ],
[10. , 14.88, 12. ],
[13. , 2. , 15. ]])
</code></pre>
<p><strong>EDIT</strong></p>
<p>So I think that I hacked something together that works, using sparse matrices as masks and such... But it seems like this probably isn't particularly performant, given the number of steps required. Is there a more efficient way to achieve the same goal? Code is below</p>
<pre><code> q_max_adjustments = np.multiply(gamma * Q1.max(1), 1-done)
true_adjustments = R + q_max_adjustments
mask = np.full((num_rows, num_cols), False)
mask[np.arange(num_rows), A] = True
value_mask = np.multiply(np.vstack(true_adjustments), mask)
np.copyto(Q, value_mask, where=mask)
</code></pre>
|
<p>Your vectorized solution has all the right elements, but contains a couple of unnecessary complications. A streamlined version using advanced indexing would be:</p>
<pre><code>>>> y = Q.astype(float)
>>> D, = np.where(1-done)
>>> y[np.arange(A.size), A] = R
>>> y[D, A[D]] += gamma * Q1[D].max(axis=1)
>>> y
array([[ 3.97, 2. , 3. ],
[ 4. , 5. , 2. ],
[ 8.91, 8. , 9. ],
[10. , 14.88, 12. ],
[13. , 2. , 15. ]]
</code></pre>
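Putting the answer's four lines together with the sample inputs from the question gives a runnable sanity check:

```python
import numpy as np

gamma = 0.99
R = np.array([1, 2, 0, 3, 2])
A = np.array([0, 2, 0, 1, 1])
done = np.array([0, 1, 0, 0, 1])
Q = np.arange(1, 16).reshape(5, 3).astype(float)
Q1 = Q.copy()

y = Q.copy()
D, = np.where(1 - done)                      # rows that are not "done"
y[np.arange(A.size), A] = R                  # write the rewards
y[D, A[D]] += gamma * Q1[D].max(axis=1)      # add the discounted row maxima
```

Every step is a whole-array operation, so the cost no longer scales with a Python-level loop over `num_rows`.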
|
python|numpy
| 2
|
2,721
| 58,884,590
|
Estimate pandas dataframe size without loading into memory
|
<p>Is there a way to estimate the size a dataframe would be without loading it into memory? I already know that I do not have enough memory for the dataframe that I am trying to create but I do not know how much more memory would be required to fully create it.</p>
|
<p>You can calculate for one row, and estimate based on it:</p>
<pre><code>data = {'name': ['Bill'],
'year': [2012],
'num_sales': [4]}
df = pd.DataFrame(data, index = ['sales'])
df.memory_usage(index=True).sum() #-> 32
</code></pre>
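To extend this to an on-disk file that cannot be loaded whole, one sketch is to read a small sample, measure bytes per row with <code>deep=True</code> (so string columns are counted properly), and scale by the file's row count:

```python
import io
import pandas as pd

# Stand-in for a large CSV on disk; with a real file, pass its path instead
csv_text = "name,year,num_sales\n" + "Bill,2012,4\n" * 10000

sample = pd.read_csv(io.StringIO(csv_text), nrows=100)   # load only a sample
per_row = sample.memory_usage(index=True, deep=True).sum() / len(sample)
estimated_bytes = per_row * 10000                        # scale to full row count
```

This stays an estimate (dtype inference can differ between the sample and the full file), but it gives a usable order-of-magnitude figure before committing memory.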
|
python|pandas|dataframe|dask
| 2
|
2,722
| 70,117,098
|
How to add dummy rows to make number of rows equal for each column value of a particular column in pandas
|
<p>I have a pandas dataframe like so:</p>
<pre><code>ID A B C D
1 a1 b1 c1 d1
1 a2 b2 c2 d2
1 a3 b3 c3 d3
1 a4 b4 c4 d4
1 a5 b5 c5 d5
2 a6 b6 c6 d6
2 a7 b7 c7 d7
3 a8 b8 c8 d8
</code></pre>
<p>Is there some way I can add rows (with dummy values a0 b0 c0 d0 for remaining columns) for ID 2 and 3 (and others) so all ID values have the same number of rows (5). Please note that I only need to add rows as I already have executed a groupby to have at max 5 rows per ID.</p>
<pre><code>df = df.groupby('id').head(5)
</code></pre>
<p>The dummy rows need to have same values (a0, b0, c0, d0) aside from the ID. Please ask for any further information that might be required.</p>
<p>EXPECTED OUTPUT</p>
<pre><code>ID A B C D
1 a1 b1 c1 d1
1 a2 b2 c2 d2
1 a3 b3 c3 d3
1 a4 b4 c4 d4
1 a5 b5 c5 d5
2 a6 b6 c6 d6
2 a7 b7 c7 d7
2 a0 b0 c0 d0
2 a0 b0 c0 d0
2 a0 b0 c0 d0
3 a8 b8 c8 d8
3 a0 b0 c0 d0
3 a0 b0 c0 d0
3 a0 b0 c0 d0
3 a0 b0 c0 d0
</code></pre>
|
<p>Create a new dataframe with missing <code>ID</code> values then append it to your original dataframe and finally fill missing values.</p>
<pre><code>out = df.append(pd.DataFrame({'ID': df['ID'].unique().repeat(5 - df['ID'].value_counts())})) \
.fillna({'A': 'a0', 'B': 'b0', 'C': 'c0', 'D': 'd0'}) \
.sort_values('ID').reset_index(drop=True)
print(out)
# Output:
ID A B C D
0 1 a1 b1 c1 d1
1 1 a2 b2 c2 d2
2 1 a3 b3 c3 d3
3 1 a4 b4 c4 d4
4 1 a5 b5 c5 d5
5 2 a6 b6 c6 d6
6 2 a7 b7 c7 d7
7 2 a0 b0 c0 d0
8 2 a0 b0 c0 d0
9 2 a0 b0 c0 d0
10 3 a8 b8 c8 d8
11 3 a0 b0 c0 d0
12 3 a0 b0 c0 d0
13 3 a0 b0 c0 d0
14 3 a0 b0 c0 d0
</code></pre>
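Since <code>DataFrame.append</code> is deprecated in newer pandas versions, the same idea works with <code>pd.concat</code> (a sketch on the sample data):

```python
import pandas as pd

df = pd.DataFrame({'ID': [1, 1, 1, 1, 1, 2, 2, 3],
                   'A': ['a1', 'a2', 'a3', 'a4', 'a5', 'a6', 'a7', 'a8'],
                   'B': ['b1', 'b2', 'b3', 'b4', 'b5', 'b6', 'b7', 'b8'],
                   'C': ['c1', 'c2', 'c3', 'c4', 'c5', 'c6', 'c7', 'c8'],
                   'D': ['d1', 'd2', 'd3', 'd4', 'd5', 'd6', 'd7', 'd8']})

counts = df['ID'].value_counts().sort_index()             # rows per ID
pad = pd.DataFrame({'ID': counts.index.repeat(5 - counts)})
out = (pd.concat([df, pad])
         .fillna({'A': 'a0', 'B': 'b0', 'C': 'c0', 'D': 'd0'})
         .sort_values('ID', kind='stable')                # keep real rows first
         .reset_index(drop=True))
```

The `kind='stable'` sort guarantees the original rows stay ahead of the dummy rows within each ID.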
|
python|pandas
| 0
|
2,723
| 70,189,349
|
How to calculate the summation for values based on consecutive days and two other columns
|
<p>How can I do summation just for consecutive days and for the same name and same supplier?
For instance, for A and Supplier Wal, I need to do summation for 2021-05-31 and 2021-06-01 and then do another summation for 2021-06-08 and 2021-06-09. I need to add a new column for summation. Please take a look at the example below:</p>
<p><a href="https://i.stack.imgur.com/i5sYv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/i5sYv.png" alt="enter image description here" /></a></p>
<p>Here is the Pandas DataFrame code for the table:</p>
<pre><code>df = pd.DataFrame({'Name': ['A', 'A', 'A','A','B','B','C','C','C','C','C','C','C','C','C'],
'Supplier': ['Wal', 'Wal', 'Wal', 'Wal', 'Co', 'Co', 'Mc', 'Mc', 'St', 'St', 'St', 'St', 'St', 'To', 'To'],
'Date': ['2021-05-31', '2021-06-01', '2021-06-08', '2021-06-09', '2021-05-17', '2021-05-18'
, '2021-04-07', '2021-04-08', '2021-05-11', '2021-05-12', '2021-05-13', '2021-05-18'
, '2021-05-19', '2021-03-30', '2021-03-31'],
'Amount': [27, 400, 410, 250, 100, 50, 22, 78, 60, 180, 100, 240, 140, 30, 110],
'Summation': [427,427,660,660,150,150,100,100,340,340,340,380,380,140,140 ]})
</code></pre>
|
<p>Like this?</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Name': ['A', 'A', 'A','A','B','B','C','C','C','C','C','C','C','C','C'],
'Supplier': ['Wal', 'Wal', 'Wal', 'Wal', 'Co', 'Co', 'Mc', 'Mc', 'St', 'St', 'St', 'St', 'St', 'To', 'To'],
'Date': ['2021-05-31', '2021-06-01', '2021-06-08', '2021-06-09', '2021-05-17', '2021-05-18'
, '2021-04-07', '2021-04-08', '2021-05-11', '2021-05-12', '2021-05-13', '2021-05-18'
, '2021-05-19', '2021-03-30', '2021-03-31'],
'Amount': [27, 400, 410, 250, 100, 50, 22, 78, 60, 180, 100, 240, 140, 30, 110]})
df['Date'] = pd.to_datetime(df['Date'])
filt = df.loc[((df['Date'] - df['Date'].shift(-1)).abs() == pd.Timedelta('1d')) | (df['Date'].diff() == pd.Timedelta('1d'))]
breaks = filt['Date'].diff() != pd.Timedelta('1d')
df['Summation'] = df.groupby(['Name','Supplier',breaks.cumsum()])['Amount'].transform('sum')
print(df)
</code></pre>
<p>output:</p>
<pre><code> Name Supplier Date Amount Summation
0 A Wal 2021-05-31 27 427
1 A Wal 2021-06-01 400 427
2 A Wal 2021-06-08 410 660
3 A Wal 2021-06-09 250 660
4 B Co 2021-05-17 100 150
5 B Co 2021-05-18 50 150
6 C Mc 2021-04-07 22 100
7 C Mc 2021-04-08 78 100
8 C St 2021-05-11 60 340
9 C St 2021-05-12 180 340
10 C St 2021-05-13 100 340
11 C St 2021-05-18 240 380
12 C St 2021-05-19 140 380
13 C To 2021-03-30 30 140
14 C To 2021-03-31 110 140
</code></pre>
|
python|pandas|dataframe|jupyter-notebook|calculated-columns
| 1
|
2,724
| 56,076,205
|
Recognize scene with deep learning
|
<p>What is the approach to recognizing a scene with deep learning (preferably Keras)?
There are many examples showing how to classify images of limited scope, e.g. dogs/cats, hand-written letters etc. There are also some examples for detecting a searched object within a big image.</p>
<p>But what is the best approach to recognize, e.g., whether it is a classroom, a bedroom or a dining room? Create a dataset of such images? I think not. I think one should train a model on the many things that may appear in the scene, build a vector of the things found in the analysed image, and then classify the scene with a second classifier (SVM or a simple NN). Is that the right approach? </p>
<p>P.S.: Actually, I'm facing another problem, which IMHO is the same. My "scene" is a microscope image. The images contain different sets of cells and artifacts. Depending on the set, a doctor makes a diagnosis. So I aim to train a CNN on the artifacts, which I extract with simple morphological methods. These artifacts (e.g. biological cells) will be my features. So the first level of the recognition - feature extraction - is done by the CNN, the later classification by an SVM. I just wanted to be sure that I'm not reinventing the wheel.</p>
|
<p>In my opinion, your room scenes and your biological scenes are different problems, especially since your scene is a microscope image (probably from a limited, predefined domain).</p>
<p>In this case, pure classification should work (without seeing the data). In other words, the neural network should be able to figure out what it is seeing without you having to hand-craft features (if you need interpretability, that's a whole new discussion). </p>
<p>There are also lots of approaches for scene understanding in this <a href="https://ieeexplore.ieee.org/document/8969051" rel="nofollow noreferrer">paper</a>.</p>
|
tensorflow|machine-learning|image-processing|keras|deep-learning
| 0
|
2,725
| 56,257,407
|
Is there a method in Pandas to check if a cell is bolded?
|
<p>I have a column with names. I want to build a list containing all the names from my column that are bolded. Is there a method in Pandas available to do this?</p>
<pre><code>import pandas as pd
df = pd.read_excel("mydatafile.xlsx")
print("Column Headings:")
mylist = []
for i in df.index:
if df['Names'][i].celltype == bold
mylist.append(cell)
</code></pre>
|
<p><code>pandas</code> does not read styles from Excel. You will have to use another library that does. One such library is <a href="https://github.com/deepspace2/StyleFrame" rel="nofollow noreferrer">styleframe</a> (full disclosure, I'm one of the authors of this library).</p>
<p>Then, using this code</p>
<pre><code>from styleframe import StyleFrame
# 'from StyleFrame import StyleFrame' in older versions (< 3.0)
sf = StyleFrame.read_excel('test.xlsx', read_style=True, use_openpyxl_styles=False)
for name in sf.Names:
if name.style.bold:
print(name)
</code></pre>
<p>With this Excel sheet:</p>
<p><a href="https://i.stack.imgur.com/1cSxP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1cSxP.png" alt="enter image description here" /></a></p>
<p>Outputs</p>
<pre><code>bold
bold
</code></pre>
|
python|pandas|conditional|cell-formatting
| 4
|
2,726
| 56,177,013
|
Import data with each value containing column labels
|
<p>I have data in a text file with no headers. The values in each row have a label indicating which column they belong to. I want to take those labels as column names and feed the data under the columns.</p>
<p>I want to import a text file containing this:</p>
<pre><code>Column1=variable11&Column2=variable12&Column3=variable13&Column4=variable14
Column1=variable12&Column2=variable22&Column3=variable23
Column1=variable13&Column2=variable32&Column3=variable33&Column4=variable34&Column5=variable35
</code></pre>
<p>I expect the result to be a table like this:</p>
<pre><code>Column1 Column2 Column3 Column4 Column5
variable11 variable12 variable13 variable14
variable21 variable22 variable23
variable31 variable32 variable33 variable34 variable35
</code></pre>
|
<p>I assume here that <code>Column1=variable12</code> on line 2 and <code>Column1=variable13</code> on line 3 are typos for <code>variable21</code> and <code>variable31</code>. </p>
<pre><code>df = pd.read_csv('file', header=None)
df = df[0].str.split('=|&', expand=True)
tmp = df.loc[:,1::2].copy()
tmp.columns = df.loc[:,::2].apply(lambda x: x.dropna().iloc[0])
</code></pre>
<p>output</p>
<pre><code>     Column1     Column2     Column3     Column4     Column5
0 variable11 variable12 variable13 variable14 None
1 variable21 variable22 variable23 None None
2 variable31 variable32 variable33 variable34 variable35
</code></pre>
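An alternative sketch (using the corrected <code>variable21</code>/<code>variable31</code> values) parses each line into a dict and lets the DataFrame constructor align the columns, which tolerates lines with different key sets without positional slicing:

```python
import pandas as pd

lines = [
    "Column1=variable11&Column2=variable12&Column3=variable13&Column4=variable14",
    "Column1=variable21&Column2=variable22&Column3=variable23",
    "Column1=variable31&Column2=variable32&Column3=variable33&Column4=variable34&Column5=variable35",
]

# Each line becomes {label: value}; labels missing on a line become NaN
records = [dict(pair.split('=', 1) for pair in line.split('&')) for line in lines]
df = pd.DataFrame(records)
```

With a real file, `lines` would come from reading the text file, e.g. `open(path).read().splitlines()`.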
|
python|pandas|dataframe|import
| 1
|
2,727
| 55,675,345
|
Should I use softmax as output when using cross entropy loss in pytorch?
|
<p>I have a problem with classifying fully connected deep neural net with 2 hidden layers for <strong>MNIST dataset in pytorch</strong>.</p>
<p>I want to use <strong>tanh</strong> as activations in both hidden layers, but in the end, I should use <strong>softmax</strong>.</p>
<p>For the loss, I am choosing <code>nn.CrossEntropyLoss()</code> in PyTOrch, which (as I have found out) does not want to take one-hot encoded labels as true labels, but takes LongTensor of classes instead.</p>
<p>My model is <code>nn.Sequential()</code> and when I am using softmax in the end, it gives me worse results in terms of accuracy on testing data. Why?</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch import nn
inputs, n_hidden0, n_hidden1, out = 784, 128, 64, 10
n_epochs = 500
model = nn.Sequential(
nn.Linear(inputs, n_hidden0, bias=True),
nn.Tanh(),
nn.Linear(n_hidden0, n_hidden1, bias=True),
nn.Tanh(),
nn.Linear(n_hidden1, out, bias=True),
nn.Softmax() # SHOULD THIS BE THERE?
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.5)
for epoch in range(n_epochs):
y_pred = model(X_train)
loss = criterion(y_pred, Y_train)
print('epoch: ', epoch+1,' loss: ', loss.item())
optimizer.zero_grad()
loss.backward()
optimizer.step()
</code></pre>
|
<p>As stated in the <a href="https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss" rel="nofollow noreferrer"><code>torch.nn.CrossEntropyLoss()</code></a> doc:</p>
<blockquote>
<p>This criterion combines <a href="https://pytorch.org/docs/stable/generated/torch.nn.LogSoftmax.html" rel="nofollow noreferrer"><code>nn.LogSoftmax()</code></a> and <a href="https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html" rel="nofollow noreferrer"><code>nn.NLLLoss()</code></a> in one single class.</p>
</blockquote>
<p>Therefore, you should <strong>not</strong> use softmax before.</p>
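<p>A small sketch (assuming PyTorch is available) that makes this concrete: feeding raw logits to <code>CrossEntropyLoss</code> matches <code>LogSoftmax</code> + <code>NLLLoss</code>, while putting a <code>Softmax</code> layer in the model first effectively applies softmax twice and distorts the loss:</p>

```python
import torch
from torch import nn

logits = torch.tensor([[2.0, -1.0, 0.5]])
target = torch.tensor([0])

ce = nn.CrossEntropyLoss()

# correct: raw logits straight into CrossEntropyLoss
loss_logits = ce(logits, target)

# equivalent by definition: NLLLoss on log-softmax output
loss_manual = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), target)

# wrong: softmax before CrossEntropyLoss squashes the logits,
# so the loss (and its gradients) are distorted
loss_double = ce(nn.Softmax(dim=1)(logits), target)

print(loss_logits.item(), loss_manual.item(), loss_double.item())
```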
|
python|pytorch|mnist|softmax
| 50
|
2,728
| 56,009,451
|
FFT to find autocorrelation function
|
<p>I am trying to find the correlation function of the following stochastic process: <a href="https://i.stack.imgur.com/izFUa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/izFUa.png" alt="enter image description here"></a>
where beta and D are constants and xi(t) is a Gaussian noise term.</p>
<p>After simulating this process with the Euler method, I want to find the auto correlation function of this process. First of all, I have found an analytical solution for the correlation function and already used the definition of correlation function to simulate it and the two results were pretty close (please see the photo, the corresponding code is at the end of this post). </p>
<p><a href="https://i.stack.imgur.com/e4FCv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e4FCv.png" alt="figure 1, correlation function analytically and numerically"></a></p>
<p>(Figure 1)</p>
<hr>
<p>Now I want to use the Wiener-Khinchin theorem (using fft) to find the correlation function: take the fft of the realization, multiply it by its conjugate, and then take the ifft to get the correlation function. But I am getting results that are way off the expected correlation function, so I am pretty sure there is something I misunderstood in the code that gives these wrong results. </p>
<p>Here is my code for the solution of the stochastic process (which I am sure is right, although my code might be sloppy) and my attempt to find the autocorrelation with the fft:</p>
<pre><code>N = 1000000
dt=0.01
gamma = 1
D=1
v_data = []
v_factor = math.sqrt(2*D*dt)
v=1
for t in range(N):
F = random.gauss(0,1)
v = v - gamma*dt + v_factor*F
if v<0: ###boundary conditions.
v=-v
v_data.append(v)
def S(x,dt): ### power spectrum
N=len(x)
fft=np.fft.fft(x)
s=fft*np.conjugate(fft)
# n=N*np.ones(N)-np.arange(0,N) #divide res(m) by (N-m)
return s.real/(N)
c=np.fft.ifft(S(v_data,0.01)) ### correlation function
t=np.linspace(0,1000,len(c))
plt.plot(t,c.real,label='fft method')
plt.xlim(0,20)
plt.legend()
plt.show()
</code></pre>
<p>And this is what I would get using this method for the correlation function,
<a href="https://i.stack.imgur.com/6ZxOf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6ZxOf.png" alt="enter image description here"></a></p>
<p>And this is my code for the correlation function using the definition:</p>
<pre><code>def c_theo(t,b,d): ##this was obtained by integrating the solution of the SDE
I1=((-t*d)+((d**2)/(b**2))-((1/4)*(b**2)*(t**2)))*special.erfc(b*t/(2*np.sqrt(d*t)))
I2=(((d/b)*(np.sqrt(d*t/np.pi)))+((1/2)*(b*t)*(np.sqrt(d*t/np.pi))))*np.exp(-((b**2)*t)/(4*d))
return I1+I2
## this is the correlation function that was plotted in the figure 1 using the definition of the autocorrelation.
Ntau = 500
sum2=np.zeros(Ntau)
c=np.zeros(Ntau)
v_mean=0
for i in range (0,N):
v_mean=v_mean+v_data[i]
v_mean=v_mean/N
for itau in range (0,Ntau):
for i in range (0,N-10*itau):
sum2[itau]=sum2[itau]+v_data[i]*v_data[itau*10+i]
sum2[itau]=sum2[itau]/(N-itau*10)
c[itau]=sum2[itau]-v_mean**2
t=np.arange(Ntau)*dt*10
plt.plot(t,c,label='numericaly')
plt.plot(t,c_theo(t,1,1),label='analyticaly')
plt.legend()
plt.show()
</code></pre>
<p>So would someone please point out where the mistake in my code is, and how I could simulate this better to get the right correlation function? </p>
|
<p>There are two issues with the code that I can see.</p>
<ol>
<li><p>As <a href="https://stackoverflow.com/questions/56009451/fft-to-find-autocorrelation-function?noredirect=1#comment98715027_56009451">francis said in a comment</a>, you need to subtract the mean from your signal to get the autocorrelation to reach zero.</p></li>
<li><p>You plot your autocorrelation function with a wrong x-axis values.</p>
<p><code>v_data</code> is defined with:</p>
<pre><code> N = 1000000 % 1e6
dt = 0.01 % 1e-2
</code></pre>
<p>meaning that <code>t</code> goes from 0 to 1e4. However:</p>
<pre><code> t = np.linspace(0,1000,len(c))
</code></pre>
<p>meaning that you plot with <code>t</code> from 0 to 1e3. You should probably define <code>t</code> with</p>
<pre><code> t = np.arange(N) * dt
</code></pre>
<p>Looking at the plot, I'd say that stretching the blue line by a factor 10 would make it line up with the red line quite well.</p></li>
</ol>
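<p>A minimal sketch of the fix, assuming NumPy: subtract the mean before applying the Wiener-Khinchin theorem, and note that without zero padding the FFT route computes the <em>circular</em> autocorrelation, which a direct circular computation reproduces exactly (the plotting axis would then be <code>t = np.arange(N) * dt</code>):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4096)

def autocorr_fft(x):
    """Autocorrelation via the Wiener-Khinchin theorem."""
    x = x - x.mean()                       # subtract the mean first
    f = np.fft.fft(x)
    return np.fft.ifft(f * np.conjugate(f)).real / len(x)

def autocorr_direct(x, nlags):
    """Circular autocorrelation computed from the definition."""
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x, np.roll(x, -m)) / n for m in range(nlags)])

acf_fft = autocorr_fft(x)
acf_dir = autocorr_direct(x, 50)
print(np.max(np.abs(acf_fft[:50] - acf_dir)))
```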
|
python|numpy|signal-processing|fft|correlation
| 1
|
2,729
| 65,045,653
|
Pandas once condition is met in column delete n rows then go to next section
|
<p>I have a DataFrame <code>df = pd.DataFrame({'col1': [.8,.9,1,1,1,.9,.7,.9,1,1,.8,.8,.9]})</code> which looks like</p>
<pre><code> col1
0 0.8
1 0.9
2 1.0
3 1.0
4 1.0
5 0.9
6 0.7
7 0.9
8 1.0
9 1.0
10 0.8
11 0.8
12 0.9
</code></pre>
<p>Once a 1 is located in <code>col1</code> I want the code to remove the three rows below it. It should remove those three and then look to see if there are any further 1's below. The output should look like...</p>
<pre><code> col1
0 0.8
1 0.9
2 1.0
6 0.7
7 0.9
8 1.0
12 0.9
</code></pre>
<p>I asked a prior question and the answer said:</p>
<pre><code>integers = np.r_[: df.twofour.eq(1).idxmax() + 1,
range(df.twofour.eq(1).idxmax() + 30, len(df))
]
df = df.iloc[integers]
</code></pre>
<p>This code can remove the first instance of a 1; can I have help extending it to handle all 1's that are not themselves removed? Are there more elegant ways to do this?</p>
|
<p>you can just iterate over your dataframe and get all indexes to delete:</p>
<pre><code>index_to_delete = []
for row in df.itertuples():
idx = row[0]
value = row.col1
if value == 1 and idx not in index_to_delete:
index_to_delete += [idx+1, idx+2, idx+3]
df = df.loc[~df.index.isin(index_to_delete)]
print(df)
</code></pre>
<p>Output:</p>
<pre><code> col1
0 0.8
1 0.9
2 1.0
6 0.7
7 0.9
8 1.0
12 0.9
</code></pre>
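<p>An alternative sketch that avoids building a delete list: walk an index pointer through the column and jump it forward by 4 (the 1 itself plus the three rows to drop) whenever a 1 is hit:</p>

```python
import pandas as pd

df = pd.DataFrame({'col1': [.8, .9, 1, 1, 1, .9, .7, .9, 1, 1, .8, .8, .9]})

keep = []
vals = df['col1'].to_numpy()
i = 0
while i < len(vals):
    keep.append(i)
    # keep the 1 itself, then skip the three rows that follow it
    i += 4 if vals[i] == 1 else 1

result = df.iloc[keep]
print(result)
```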
|
python|pandas
| 3
|
2,730
| 64,894,455
|
How to convert columns of dataframe based on best-fit types of values in them?
|
<p>Following <a href="https://stackoverflow.com/questions/15891038/change-column-type-in-pandas">this answer</a> (number 4) I am trying to use <code>df.convert_dtypes()</code></p>
<p>Pandas version: 0.25.3</p>
<p>Reproducible:</p>
<pre><code>import pandas as pd
import numpy as np
data = {
"int": np.zeros((5, ), dtype=np.int),
"float": np.zeros((5, ), dtype=np.float),
"string": ["a", "b", "c", "ddd", "FFFFF"],
"bool": [True, False, True, True, False]
}
df = pd.DataFrame(data)
print(df)
print(df.dtypes)
</code></pre>
<p>output so far:</p>
<blockquote>
<pre><code> int float string bool
0 0 0.0 a True
1 0 0.0 b False
2 0 0.0 c True
3 0 0.0 ddd True
4 0 0.0 FFFFF False
int int32
float float64
string object
bool bool
dtype: object
</code></pre>
</blockquote>
<pre><code>df.convert_dtypes()
print(df.dtypes)
</code></pre>
<p>I would expect the trivial types, rather than <code>object</code>, but instead I am getting</p>
<blockquote>
<p><code>AttributeError: 'DataFrame' object has no attribute 'convert_dtypes'</code></p>
</blockquote>
<p>What's the way to do this?</p>
|
<p><code>pip install --upgrade pandas</code></p>
<p>Thanks to @jezrael: <code>convert_dtypes</code> was added in pandas 1.0, so you need version <code>1.+</code>.</p>
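<p>Note also that <code>convert_dtypes</code> returns a <em>new</em> DataFrame rather than converting in place, so the result has to be assigned back. A sketch, assuming pandas 1.0+:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "int": np.zeros(5, dtype=int),
    "float": np.zeros(5, dtype=float),
    "string": ["a", "b", "c", "ddd", "FFFFF"],
    "bool": [True, False, True, True, False],
})

# reassign: convert_dtypes leaves the original df untouched
df = df.convert_dtypes()
print(df.dtypes)
```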
|
python|pandas
| 0
|
2,731
| 64,663,665
|
Extracting numbers from column
|
<p>I have a dataset with many columns. I would like to search one of these columns for any numbers:</p>
<pre><code>Column_to_look_at
10 days ago I was ...
How old are you?
I am 24 years old
I do not know. Maybe 23.12?
I could21n ....
</code></pre>
<p>I would need to create two columns: one which extracts the numbers included in that column, and another which just has boolean values indicating whether a row contains a number or not.</p>
<p>Output I would expect</p>
<pre><code>Column_to_look_at Numbers Bool
10 days ago I was ... [10] 1
How old are you? [] 0
I am 24 years old [24] 1
I do not know. Maybe 23.12 or 23.14? [23.12, 23.14] 1
I could21n .... [21] 1
</code></pre>
<p>The code I applied to select numbers is this</p>
<pre><code>df[df.applymap(np.isreal).all(1)]
</code></pre>
<p>but this does not give me the expected output (at least for the number selection).
Any suggestions on how to extract digits from that column would be appreciated. Thanks</p>
|
<p>This will do it (note the <code>re</code> import):</p>
<pre><code>import re

def checknum(x):
num_list = re.findall(r"[+-]?\d+(?:\.\d+)?", x['Column_to_look_at'])
return num_list
df['Numbers'] = df.apply(checknum, axis=1)
df['Bool'] = df.apply(lambda x: 1 if len(x['Numbers']) > 0 else 0, axis=1)
</code></pre>
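<p>The same result can be had without <code>apply</code>, using the vectorized string methods with the same regex; a sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({'Column_to_look_at': [
    '10 days ago I was ...',
    'How old are you?',
    'I am 24 years old',
    'I do not know. Maybe 23.12 or 23.14?',
    'I could21n ....',
]})

# findall returns a list of matches per row; Bool flags non-empty lists
df['Numbers'] = df['Column_to_look_at'].str.findall(r'[+-]?\d+(?:\.\d+)?')
df['Bool'] = (df['Numbers'].str.len() > 0).astype(int)
print(df)
```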
|
python|pandas
| 0
|
2,732
| 39,615,238
|
how could i delete rows with repeating/duplicate index from dataframe
|
<p>I have a dataframe</p>
<pre><code>>>> df
zeroa zerob zeroc zerod zeroe zero
FSi
1 10 100 a ok NaN ok
1 11 110 temp NaN NaN
2 12 120 c temp NaN NaN
3 NaN NaN NaN NaN ok NaN
</code></pre>
<p>I want to keep only unique indexes; as index 1 repeats, I want its second instance to be dropped. How could I do that? I want my result to be</p>
<pre><code>>>> df
zeroa zerob zeroc zerod zeroe zero
FSi
1 10 100 a ok NaN ok
2 12 120 c temp NaN NaN
3 NaN NaN NaN NaN ok NaN
</code></pre>
|
<p>Ok something like this should help:</p>
<pre><code>df = df.reset_index().drop_duplicates(subset='FSi', keep='first').set_index('FSi')
</code></pre>
<p>Explanation: first we <code>reset_index</code>, which turns the index into a column <strong>FSi</strong>, because <code>drop_duplicates</code> works on columns and not on the index. We keep the first occurrence of each value and then <code>set_index</code> back to <strong>FSi</strong>.</p>
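<p>A shorter sketch that works directly on the index, without the reset/set round trip (shown here on a cut-down version of the example frame):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {'zeroa': [10, 11, 12, np.nan], 'zerob': [100, 110, 120, np.nan]},
    index=pd.Index([1, 1, 2, 3], name='FSi'),
)

# index.duplicated marks repeats; ~ keeps only the first occurrence
out = df[~df.index.duplicated(keep='first')]
print(out)
```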
|
python|pandas|dataframe
| 1
|
2,733
| 39,785,820
|
Python compute a specific inner product on vectors
|
<p>Assume two arrays of shape m x 6 and n x 6 (note that the shape must be passed as a tuple):</p>
<pre><code>import numpy as np
a = np.random.random((m, 6))
b = np.random.random((n, 6))
</code></pre>
<p>using np.inner works as expected and yields</p>
<pre><code>np.inner(a,b).shape
(m,n)
</code></pre>
<p>with every element being the scalar product of each combination. I now want to compute a special inner product (namely the Plucker product). Right now I'm using</p>
<pre><code>def pluckerSide(a,b):
a0,a1,a2,a3,a4,a5 = a
b0,b1,b2,b3,b4,b5 = b
return a0*b4+a1*b5+a2*b3+a4*b0+a5*b1+a3*b2
</code></pre>
<p>with a and b sliced by a for loop, which is way too slow. All my attempts at vectorizing fail, mostly with broadcast errors due to wrong shapes. I can't get np.vectorize to work either.
Maybe someone can help here?</p>
|
<p>There seems to be an indexing based on some random indices for pairwise multiplication and summing on those two input arrays with function <code>pluckerSide</code>. So, I would list out those indices, index into the arrays with those and finally use <code>matrix-multiplication</code> with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html" rel="nofollow"><code>np.dot</code></a> to perform the sum-reduction.</p>
<p>Thus, one approach would be like this -</p>
<pre><code>a_idx = np.array([0,1,2,4,5,3])
b_idx = np.array([4,5,3,0,1,2])
out = a[a_idx].dot(b[b_idx])
</code></pre>
<p>If you are doing this in a loop across all rows of <code>a</code> and <code>b</code> and thus generating an output array of shape <code>(m,n)</code>, we can vectorize that, like so -</p>
<pre><code>out_all = a[:,a_idx].dot(b[:,b_idx].T)
</code></pre>
<p>To make things a bit easier, we can re-arrange <code>a_idx</code> such that it becomes <code>range(6)</code> and re-arrange <code>b_idx</code> with that pattern. So, we would have :</p>
<pre><code>a_idx = np.array([0,1,2,3,4,5])
b_idx = np.array([4,5,3,2,0,1])
</code></pre>
<p>Thus, we can skip indexing into <code>a</code> and the solution would be simply -</p>
<pre><code>a.dot(b[:,b_idx].T)
</code></pre>
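<p>A quick check (with arbitrary small m and n) that the vectorized form reproduces the looped <code>pluckerSide</code>:</p>

```python
import numpy as np

def pluckerSide(a, b):
    a0, a1, a2, a3, a4, a5 = a
    b0, b1, b2, b3, b4, b5 = b
    return a0*b4 + a1*b5 + a2*b3 + a4*b0 + a5*b1 + a3*b2

rng = np.random.default_rng(0)
m, n = 4, 5
a = rng.random((m, 6))
b = rng.random((n, 6))

# a_idx is the identity here, so only b needs re-indexing
b_idx = np.array([4, 5, 3, 2, 0, 1])
out_vec = a.dot(b[:, b_idx].T)                       # shape (m, n)

out_loop = np.array([[pluckerSide(ai, bj) for bj in b] for ai in a])
print(np.allclose(out_vec, out_loop))
```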
|
python|numpy|vector|vectorization
| 3
|
2,734
| 39,502,630
|
Numpy: Single loop vectorized code slow compared to two loop iteration
|
<p>The following code iterates over each element of two arrays to compute the pairwise Euclidean distances. </p>
<pre><code>def compute_distances_two_loops(X, Y):
num_test = X.shape[0]
num_train = Y.shape[0]
dists = np.zeros((num_test, num_train))
for i in range(num_test):
for j in range(num_train):
dists[i][j] = np.sqrt(np.sum((X[i] - Y[j])**2))
return dists
</code></pre>
<p>The following code serves the same purpose but with single loop.</p>
<pre><code>def compute_distances_one_loop(X, Y):
num_test = X.shape[0]
num_train = Y.shape[0]
dists = np.zeros((num_test, num_train))
for i in range(num_test):
dists[i, :] = np.sqrt(np.sum((Y - X[i])**2, axis=1))
return dists
</code></pre>
<p>Below are time comparison for both.</p>
<pre><code>two_loop_time = time_function(compute_distances_two_loops, X, Y)
print ('Two loop version took %f seconds' % two_loop_time)
>> Two loop version took 20.3 seconds
one_loop_time = time_function(compute_distances_one_loop, X, Y)
print ('One loop version took %f seconds' % one_loop_time)
>> One loop version took 80.9 seconds
</code></pre>
<p>Both X and Y are numpy.ndarray with shape - </p>
<p>X: (500, 3000)
Y: (5000, 3000)</p>
<p>Intuitively these results seem wrong: the single-loop version should run at least as fast. What am I missing here? </p>
<p>PS: The result is not from a single run. I ran the code number of times, on different machines, the results are similar.</p>
|
<p>The reason is the size of the arrays within the loop body.
The two-loop variant works on two arrays of 3000 elements each. This easily fits into at least the level 2 cache of a CPU, which is much faster than main memory, but it is also large enough that computing the distance takes much longer than the Python loop iteration overhead.</p>
<p>In the second case the loop body works on 5000 * 3000 elements. That is so much data that it needs to go through main memory in each computation step (first the Y-X[i] subtraction into a temporary array, then squaring the temporary into another temporary, then reading it back to sum it).
Main memory is much slower than the CPU for the simple operations involved, so it takes much longer despite removing a loop.
You could speed it up a bit by using in-place operations writing into a preallocated temporary array, but it will still be slower than the two-loop variant for these array sizes.</p>
<p>Note that <code>scipy.spatial.distance.cdist(X, Y)</code> is probably going to be the fastest, as it does not need any temporaries at all.</p>
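<p>For reference, a sketch of the <code>cdist</code> route (assuming SciPy is installed), checked against a broadcast computation on small inputs:</p>

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.random((50, 30))
Y = rng.random((80, 30))

dists_cdist = cdist(X, Y)          # Euclidean metric by default

# broadcast reference computation for comparison
dists_ref = np.sqrt(((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1))
print(np.allclose(dists_cdist, dists_ref))
```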
|
python|performance|numpy|time|nearest-neighbor
| 2
|
2,735
| 69,613,815
|
How to specify a forced_bos_token_id when using Facebook's M2M-100 HuggingFace model through AWS SageMaker?
|
<p>The <a href="https://huggingface.co/facebook/m2m100_1.2B" rel="nofollow noreferrer">model page</a> provides this snippet for how the model should be used:</p>
<pre class="lang-py prettyprint-override"><code>from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_1.2B")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")
# translate Hindi to French
tokenizer.src_lang = "hi"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "La vie est comme une boîte de chocolat."
# translate Chinese to English
tokenizer.src_lang = "zh"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Life is like a box of chocolate."
</code></pre>
<p>It also provides a snippet for how to deploy and use it with AWS SageMaker:</p>
<pre class="lang-py prettyprint-override"><code>from sagemaker.huggingface import HuggingFaceModel
import sagemaker
role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'facebook/m2m100_1.2B',
'HF_TASK':'text2text-generation'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
transformers_version='4.6.1',
pytorch_version='1.7.1',
py_version='py36',
env=hub,
role=role,
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.m5.xlarge' # ec2 instance type
)
predictor.predict({
'inputs': "The answer to the universe is"
})
</code></pre>
<p>It is not clear, however, how to specify the source language or the target language with the AWS setup. I tried:</p>
<pre class="lang-py prettyprint-override"><code>predictor.predict({
'inputs': "The answer to the universe is",
'forced_bos_token_id': "fr"
})
</code></pre>
<p>but my parameter was ignored.</p>
<p>I haven't managed to find any documentation that would explain what the expect format is through this API.</p>
|
<p>In any case, the following packages need to be installed so the tokenizer can be imported:</p>
<pre><code>pip install transformers
pip install sentencepiece
</code></pre>
<p>Then the target language id from the tokenizer needs to be passed via the <code>parameters</code> key:</p>
<pre class="lang-py prettyprint-override"><code>from transformers import M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")
predictor.predict({
'inputs': "The answer to the universe is",
'parameters': {
'forced_bos_token_id': tokenizer.get_lang_id("it")
}
})
</code></pre>
|
amazon-web-services|facebook|amazon-sagemaker|huggingface-transformers|machine-translation
| 0
|
2,736
| 69,562,174
|
check if column is blank in pandas dataframe
|
<p>I have the following CSV file:</p>
<pre><code>A|B|C
1100|8718|2021-11-21
1104|21|
</code></pre>
<p>I want to create a dataframe that gives me the date output as follows:</p>
<pre><code> A B C
0 1100 8718 20211121000000
1 1104 21 ""
</code></pre>
<p>This means</p>
<pre><code>if C is empty:
put doublequotes
else:
format date to yyyymmddhhmmss (adding 0s to hhmmss)
</code></pre>
<p>My code:</p>
<pre><code>df['C'] = np.where(df['C'].empty, df['C'].str.replace('', '""'), df['C'] + '000000')
</code></pre>
<p>but it gives me the following:</p>
<pre><code> A B C
0 1100 8718 2021-11-21
1 1104 21 0
</code></pre>
<p>I have tried another piece of code:</p>
<pre><code>if df['C'].empty:
df['C'] = df['C'].str.replace('', '""')
else:
df['C'] = df['C'].str.replace('-', '') + '000000'
</code></pre>
<p>OUTPUT:</p>
<pre><code> A B C
0 1100 8718 20211121000000
1 1104 21 0000000
</code></pre>
|
<p>Use <code>dt.strftime</code>:</p>
<pre><code>df = pd.read_csv('data.csv', sep='|', parse_dates=['C'])
df['C'] = df['C'].dt.strftime('%Y%m%d%H%M%S').fillna('""')
print(df)
# Output:
A B C
0 1100 8718 20211121000000
1 1104 21 ""
</code></pre>
|
python|pandas
| 0
|
2,737
| 69,635,114
|
Raising arrays to a fractional index produces NaN?
|
<p>I have an array, <em>k</em>, as demonstrated in my MWE. I want to raise this array to the power of 3/2, but when I do, the output contains NaN values, as quoted below.</p>
<pre><code>import numpy as np
N = 2**15
dx = 0.1
k = (2 * np.pi / (N * dx)) * np.r_[0:N / 2, 0, -N / 2 + 1:0][None, :]
kpower = k**(3/2)
print(kpower)
</code></pre>
<blockquote>
<p>array([[0. , 0.0018999 , 0.00537372, ..., nan, nan,
nan]])</p>
</blockquote>
<p>Why does this happen? I tried using np.power as well as</p>
<blockquote>
<p>scipy.linalg import fractional_matrix_power</p>
</blockquote>
<p>What is the deal here since the array is real and features only integers.</p>
|
<p>Your array contains negative entries, and raising a negative real number to a fractional power has no real-valued result, so NumPy returns <code>nan</code> for those elements. Cast the entries of your array to complex numbers so that NumPy can perform the exponentiation:</p>
<pre><code>import numpy as np
N = 5
dx = 2
k = (2 * np.pi / (N * dx)) * np.r_[0: N / 2, 0, -N / 2 + 1:0][None, :]
# convert array to complex datatype
k = k.astype(np.cdouble)
kpower = k**(3/2)
print(kpower)
</code></pre>
<p>Output:</p>
<pre><code>[[ 0.00000000e+00+0.j 4.98046397e-01+0.j
1.40868794e+00+0.j 0.00000000e+00+0.j
-1.68077199e-16-0.91496966j -3.23464720e-17-0.17608599j]]
</code></pre>
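<p>The underlying behaviour in miniature: a negative real base with a fractional exponent has no real-valued result, so NumPy returns <code>nan</code> (with a runtime warning), while the same operation on a complex input succeeds:</p>

```python
import numpy as np

with np.errstate(invalid='ignore'):
    real_result = np.power(-1.0, 1.5)        # nan: no real-valued answer exists

complex_result = np.power(-1.0 + 0j, 1.5)    # (-1)^1.5 = exp(1.5j*pi) = -1j
print(real_result, complex_result)
```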
|
python|numpy|nan
| 0
|
2,738
| 69,512,620
|
How to append a numpy Array into another one
|
<p>This is my code...</p>
<pre><code>image_paths = dt_labels['image']
train_X = np.ndarray([])
for image_path in image_paths:
path = './Dataset/' + image_path
img = cv2.imread(path, 0)
vectorized_img = img.reshape(img.shape[0] * img.shape[1], 1)
train_X = np.append(train_X, vectorized_img, axis=1)
</code></pre>
<p>As you can see, I have an ndarray in the variable named train_X. I read an image, reshape it into a one-dimensional column vector, and when I try to append it to the ndarray train_X, I get this error:</p>
<blockquote>
<p>zero-dimensional arrays cannot be concatenated</p>
</blockquote>
<p>I just want to concatenate the multiple "vectorized_img" arrays into the train_X ndarray horizontally.</p>
|
<pre><code>In [103]: train_X = np.ndarray([])
...: print(train_X.shape)
...: for i in range(3):
...: vectorized_img = np.ones((4, 1))
...: train_X = np.append(train_X, vectorized_img, axis=1)
...:
()
Traceback (most recent call last):
File "<ipython-input-103-26d2205beb4e>", line 5, in <module>
train_X = np.append(train_X, vectorized_img, axis=1)
File "<__array_function__ internals>", line 5, in append
File "/usr/local/lib/python3.8/dist-packages/numpy/lib/function_base.py", line 4745, in append
return concatenate((arr, values), axis=axis)
File "<__array_function__ internals>", line 5, in concatenate
ValueError: zero-dimensional arrays cannot be concatenated
</code></pre>
<p><code>np.append</code> just calls <code>np.concatenate</code>. One argument has shape (), the other (4,1). The error is that it can't join those.</p>
<p><code>np.ndarray([])</code> is NOT a clone of <code>[]</code>, and <code>np.append</code> is not a clone of list <code>append</code>. <code>concatenate</code> says that the number of dimensions must match, and the size of the dimensions must also match (except for the concatenate one).</p>
<p>To join 'columns' we need to start with a 'column'</p>
<pre><code>In [111]: train_X = np.ones((4,0))
...: for i in range(3):
...: vectorized_img = np.ones((4, 1))
...: train_X = np.append(train_X, vectorized_img, axis=1)
...: train_X.shape
Out[111]: (4, 3)
</code></pre>
<p>Or we could start with (0,1) and join on <code>axis=0</code>.</p>
<p>But it's faster, and less prone to errors, if we stick with list append:</p>
<pre><code>In [114]: alist = []
...: for i in range(3):
...: vectorized_img = np.ones((4, 1))
...: alist.append(vectorized_img)
...: np.concatenate(alist, axis=1)
Out[114]:
array([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]])
</code></pre>
|
python|numpy
| 1
|
2,739
| 69,304,833
|
Loading a tensorflow.keras trained model using load_model returns JSON decode error, while untrained model loads normally
|
<p>I have a trained Keras model built and trained using the tensorflow.keras API and saved using the <code>tf.keras.save_model()</code> method with no optional arguments. Tensorflow is up to date and my Python version is 3.8. From my understanding, this method should save the model using the default "tf" format, which is recommended in TF 2.X, and then using <code>load_model()</code> should work fine.</p>
<p>Loading the model again, however, produces the following:</p>
<pre><code>model = tf.keras.models.load_model("/Volumes/thesis_drive/thesis_project_local_new/trained_model_640x64/")
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
/var/folders/c1/8tq0t8y90195qxyt5hppjjtr0000gq/T/ipykernel_933/1131710361.py in <module>
----> 1 model = tf.keras.models.load_model("/Volumes/thesis_drive/thesis_project_local_new/trained_model_640x64/")
~/miniforge3/lib/python3.9/site-packages/tensorflow/python/keras/saving/save.py in load_model(filepath, custom_objects, compile, options)
204 filepath = path_to_string(filepath)
205 if isinstance(filepath, str):
--> 206 return saved_model_load.load(filepath, compile, options)
207
208 raise IOError(
~/miniforge3/lib/python3.9/site-packages/tensorflow/python/keras/saving/saved_model/load.py in load(path, compile, options)
153
154 # Finalize the loaded layers and remove the extra tracked dependencies.
--> 155 keras_loader.finalize_objects()
156 keras_loader.del_tracking()
157
~/miniforge3/lib/python3.9/site-packages/tensorflow/python/keras/saving/saved_model/load.py in finalize_objects(self)
624
625 # Initialize graph networks, now that layer dependencies have been resolved.
--> 626 self._reconstruct_all_models()
627
628 def _unblock_model_reconstruction(self, layer_id, layer):
~/miniforge3/lib/python3.9/site-packages/tensorflow/python/keras/saving/saved_model/load.py in _reconstruct_all_models(self)
643 all_initialized_models.add(model_id)
644 model, layers = self.model_layer_dependencies[model_id]
--> 645 self._reconstruct_model(model_id, model, layers)
646 _finalize_config_layers([model])
647
~/miniforge3/lib/python3.9/site-packages/tensorflow/python/keras/saving/saved_model/load.py in _reconstruct_model(self, model_id, model, layers)
659 def _reconstruct_model(self, model_id, model, layers):
660 """Reconstructs the network structure."""
--> 661 config = json_utils.decode(
662 self._proto.nodes[model_id].user_object.metadata)['config']
663
~/miniforge3/lib/python3.9/site-packages/tensorflow/python/keras/saving/saved_model/json_utils.py in decode(json_string)
60
61 def decode(json_string):
---> 62 return json.loads(json_string, object_hook=_decode_helper)
63
64
~/miniforge3/lib/python3.9/json/__init__.py in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
357 if parse_constant is not None:
358 kw['parse_constant'] = parse_constant
--> 359 return cls(**kw).decode(s)
~/miniforge3/lib/python3.9/json/decoder.py in decode(self, s, _w)
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
339 if end != len(s):
~/miniforge3/lib/python3.9/json/decoder.py in raw_decode(self, s, idx)
353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
--> 355 raise JSONDecodeError("Expecting value", s, err.value) from None
356 return obj, end
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
</code></pre>
<p>To test whether this is an error with <code>save_model()</code> or <code>load_model()</code>, I built the same model again in a Jupyter notebook, saved it, and reloaded it with no error:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.layers import Dense, Activation, Flatten, Dropout, Conv2D, MaxPooling2D, Input
from tensorflow.keras.losses import CategoricalCrossentropy
from tensorflow.keras import optimizers
from tensorflow.keras.models import Model
def build_model():
_input = Input(shape=(640,64,3))
x = Conv2D(filters=64, kernel_size=4, input_shape=(640, 64, 3))(_input)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(4, 4))(x)
x = Dropout(0.5)(x)
x = Conv2D(filters=128, kernel_size=4, input_shape=(640, 64, 3))(_input)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(4, 4))(x)
x = Dropout(0.5)(x)
x = Conv2D(filters=256, kernel_size=4, input_shape=(640, 64, 3))(_input)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Dropout(0.5)(x)
x = Flatten()(x)
output = Dense(161, activation = 'softmax')(x)
model = Model(_input,output)
model.compile(optimizer=optimizers.Adam(), loss="categorical_crossentropy")
tf.keras.models.save_model(model,"model_test")
model = build_model()
Metal device set to: Apple M1
2021-09-23 13:40:46.234438: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2021-09-23 13:40:46.234631: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
2021-09-23 13:40:47.112730: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
INFO:tensorflow:Assets written to: model_test/assets
del model
model = tf.keras.models.load_model("model_test")
</code></pre>
<p>Further details: the model was trained on another machine (a supercomputer I have access to through my university) running Linux, and transferred to my Apple M1 machine via SCP, where it is now exhibiting this loading error.</p>
<p>I don't know why the JSON module is being called - there doesn't appear to be a JSON file anywhere in the directory. However, given that rebuilding the model without training and loading it produced no error, I am suspicious that the save did not execute correctly.</p>
|
<p>I faced the same issue with TensorFlow 2.3.1.
Updating to 2.7 made it work again!</p>
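<p>If downgrading is not an option, here is a hedged sketch of a possible fix for newer PyTorch versions: the accumulators are NumPy arrays while the right-hand sides are tensors, and mixing the two in <code>+=</code> is what triggers the error, so the tensor sums can be converted to NumPy before accumulating:</p>

```python
import numpy as np
import torch

class F1:
    __name__ = 'F1 macro'
    def __init__(self, n=28):
        self.n = n
        self.TP = np.zeros(self.n)
        self.FP = np.zeros(self.n)
        self.FN = np.zeros(self.n)

    def __call__(self, preds, targs, th=0.0):
        preds = (preds > th).int()
        targs = targs.int()
        # .cpu().numpy() converts each tensor sum so it can be added
        # to the NumPy accumulators under PyTorch >= 1.0
        self.TP += (preds * targs).float().sum(dim=0).cpu().numpy()
        self.FP += (preds > targs).float().sum(dim=0).cpu().numpy()
        self.FN += (preds < targs).float().sum(dim=0).cpu().numpy()
        score = (2.0 * self.TP / (2.0 * self.TP + self.FP + self.FN + 1e-6)).mean()
        return score

f1 = F1(n=3)
score = f1(torch.tensor([[0.5, -0.5, 0.5]]), torch.tensor([[1, 0, 0]]))
print(score)
```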
|
python|tensorflow|machine-learning|keras|deep-learning
| 1
|
2,740
| 69,321,347
|
Pandas: summing nanoseconds time delta to timestamp
|
<p>This seems somehow a simple question, but I could not find answers to it (maybe using wrong search keywords):</p>
<pre><code>import pandas as pd
pd.Timestamp(1) + pd.Timedelta(seconds=1e-9)
Out: Timestamp('1970-01-01 00:00:00.000000001') # why? Un-intuitive
pd.Timestamp(1) + pd.Timedelta(microseconds=1e-3)
Out: Timestamp('1970-01-01 00:00:00.000000001') # why? Un-intuitive
pd.Timestamp(1) + pd.Timedelta(nanoseconds=1)
Out: Timestamp('1970-01-01 00:00:00.000000002') # ok, but not consistent with above ones
</code></pre>
<p>Even if timestamps being based on the epoch in nanoseconds would be the argument, then why this (eg) inconsistency:</p>
<pre><code>pd.Timestamp(1) + pd.Timedelta(seconds=1e-9)
Out: Timestamp('1970-01-01 00:00:00.000000001')
pd.Timestamp(1) + pd.Timedelta(seconds=1e-6)
Out: Timestamp('1970-01-01 00:00:00.000001001')
</code></pre>
|
<p>This was a bug and was fixed <a href="https://github.com/pandas-dev/pandas/pull/45108" rel="nofollow noreferrer">here</a>.</p>
|
python-3.x|pandas|datetime|timedelta
| 1
|
2,741
| 53,938,977
|
TypeError: add(): argument 'other' (position 1) must be Tensor, not numpy.ndarray
|
<p>I am testing a ResNet-34 trained model using PyTorch and fastai on a Linux system with the latest Anaconda3. To run it as a batch job, I commented out the GUI-related lines. It ran for a few hours, then stopped in the validation step; the error message is below. </p>
<pre><code>...
^M100%|█████████▉| 452/453 [1:07:07<00:08, 8.75s/it,
loss=1.23]^[[A^[[A^[[A
^MValidation: 0%| | 0/40 [00:00<?, ?it/s]^[[A^[[A^[[ATraceback
(most recent call last):
File "./resnet34_pretrained_PNG_nogui_2.py", line 279, in <module>
learner.fit(lr,1,callbacks=[f1_callback])
File "/project/6000192/jemmyhu/resnet_png/fastai/learner.py", line 302,
in fit
return self.fit_gen(self.model, self.data, layer_opt, n_cycle,
**kwargs)
File "/project/6000192/jemmyhu/resnet_png/fastai/learner.py", line 249,
in fit_gen
swa_eval_freq=swa_eval_freq, **kwargs)
File "/project/6000192/jemmyhu/resnet_png/fastai/model.py", line 162, in
fit
vals = validate(model_stepper, cur_data.val_dl, metrics, epoch,
seq_first=seq_first, validate_skip = validate_skip)
File "/project/6000192/jemmyhu/resnet_png/fastai/model.py", line 241, in
validate
res.append([to_np(f(datafy(preds), datafy(y))) for f in metrics])
File "/project/6000192/jemmyhu/resnet_png/fastai/model.py", line 241, in
<listcomp>
res.append([to_np(f(datafy(preds), datafy(y))) for f in metrics])
File "./resnet34_pretrained_PNG_nogui_2.py", line 237, in __call__
self.TP += (preds*targs).float().sum(dim=0)
TypeError: add(): argument 'other' (position 1) must be Tensor, not
numpy.ndarray
</code></pre>
<p>The link for the original code is
<a href="https://www.kaggle.com/iafoss/pretrained-resnet34-with-rgby-0-460-public-lb" rel="nofollow noreferrer">https://www.kaggle.com/iafoss/pretrained-resnet34-with-rgby-0-460-public-lb</a></p>
<p>lines 279 and 237 in my copy are shown below:</p>
<pre><code>226 class F1:
227 __name__ = 'F1 macro'
228 def __init__(self,n=28):
229 self.n = n
230 self.TP = np.zeros(self.n)
231 self.FP = np.zeros(self.n)
232 self.FN = np.zeros(self.n)
233
234 def __call__(self,preds,targs,th=0.0):
235 preds = (preds > th).int()
236 targs = targs.int()
237 self.TP += (preds*targs).float().sum(dim=0)
238 self.FP += (preds > targs).float().sum(dim=0)
239 self.FN += (preds < targs).float().sum(dim=0)
240 score = (2.0*self.TP/(2.0*self.TP + self.FP + self.FN + 1e-6)).mean()
241 return score
276 lr = 0.5e-2
277 with warnings.catch_warnings():
278 warnings.simplefilter("ignore")
279 learner.fit(lr,1,callbacks=[f1_callback])
</code></pre>
<p>Does anyone have a clue about this issue? </p>
<p>Many thanks,
Jemmy</p>
|
<p>OK, the error seems to be specific to the latest PyTorch (1.0.0); after downgrading to PyTorch 0.4.1 the code seems to work (it gets past the failing lines at this point). I still have no idea how to make the code work with PyTorch 1.0.0.</p>
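<p>For anyone hitting this on PyTorch 1.0, one plausible workaround (a sketch, not verified against the original Kaggle notebook) is to keep the metric's accumulators as torch tensors rather than NumPy arrays, so the in-place additions never mix the two types:</p>

```python
import torch

class F1:
    """F1-macro metric with torch-tensor accumulators, avoiding
    'add(): argument must be Tensor, not numpy.ndarray' on newer PyTorch."""
    __name__ = 'F1 macro'

    def __init__(self, n=28):
        self.n = n
        self.TP = torch.zeros(n)
        self.FP = torch.zeros(n)
        self.FN = torch.zeros(n)

    def __call__(self, preds, targs, th=0.0):
        preds = (preds > th).int()
        targs = targs.int()
        self.TP += (preds * targs).float().sum(dim=0)
        self.FP += (preds > targs).float().sum(dim=0)
        self.FN += (preds < targs).float().sum(dim=0)
        score = (2.0 * self.TP / (2.0 * self.TP + self.FP + self.FN + 1e-6)).mean()
        return score
```

<p>If the tensors live on the GPU, the accumulators would need to be created on the same device (or moved with <code>.to(preds.device)</code>).</p>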
|
machine-learning|pytorch|kaggle|resnet|fast-ai
| 2
|
2,742
| 38,097,181
|
Distributed TensorFlow example doesn't work on TensorFlow 0.9
|
<p>I'm trying out <a href="http://shipengfei92.cn/play_distributed_tensorflow/" rel="nofollow">this tensorflow distributed tutorial</a> with the same operating system and python version on my own computer. I create the first script and run it in a terminal, then I open another terminal and run the second script and get the following error:</p>
<pre><code>E0629 10:11:01.979187251 15265 tcp_server_posix.c:284] bind addr=[::]:2222: Address already in use
E0629 10:11:01.979243221 15265 server_chttp2.c:119] No address added out of total 1 resolved
Traceback (most recent call last):
File "worker0.py", line 7, in <module>
task_index=0)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/server_lib.py", line 142, in __init__
server_def.SerializeToString(), status)
File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
self.gen.next()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/errors.py", line 450, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors.InternalError: Could not start gRPC server
</code></pre>
<p>I get a similar error when trying the <a href="https://www.tensorflow.org/versions/r0.9/how_tos/distributed/index.html" rel="nofollow">official distributed tutorial</a>.</p>
<p><strong>Edit:</strong> I tried this on another machine I have with the same packages and now I get the following error log:</p>
<pre><code>E0629 11:17:44.500224628 18393 tcp_server_posix.c:284] bind addr=[::]:2222: Address already in use
E0629 11:17:44.500268362 18393 server_chttp2.c:119] No address added out of total 1 resolved
Segmentation fault (core dumped)
</code></pre>
<p>What could be the issue?</p>
|
<p>The problem is probably that you are using the same port number (2222) for both workers. Each port number can only be used by one process on any given host. That's what the error "bind addr=[::]:2222: Address already in use" means.</p>
<p>I'm guessing either you have "localhost:2222" twice in your cluster specification, or you have specified the same task_index to two tasks.</p>
<p>I hope that helps!</p>
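<p>The underlying cause is plain TCP port binding; the same failure can be reproduced with the standard library's sockets, no TensorFlow involved (a minimal demo using an OS-assigned port instead of the tutorial's hardcoded 2222):</p>

```python
import socket

# Reproduce "Address already in use": only one process may listen
# on a given host:port pair at a time.
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(("127.0.0.1", 0))        # 0 = let the OS pick a free port
port = s1.getsockname()[1]       # the tutorial hardcodes 2222 here
s1.listen(1)

s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s2.bind(("127.0.0.1", port))  # second bind on the same port fails
    bind_failed = False
except OSError as e:
    print("bind failed:", e)      # EADDRINUSE, same cause as the gRPC error
    bind_failed = True
finally:
    s1.close()
    s2.close()
```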
|
python-2.7|machine-learning|tensorflow|tensorflow-serving
| 3
|
2,743
| 38,224,388
|
Set column name for size()
|
<p>I'm trying to rename the <code>size()</code> column as shown <a href="https://stackoverflow.com/questions/17995024/how-to-assign-a-name-to-the-a-size-column">here</a> like this:</p>
<pre><code>x = monthly.copy()
x["size"] = x\
.groupby(["sub_acct_id", "clndr_yr_month"]).transform(np.size)
</code></pre>
<p>But what I'm getting is</p>
<pre><code>ValueError: Wrong number of items passed 15, placement implies 1
</code></pre>
<p>Why is this not working for my dataframe?</p>
<hr>
<p>If I simple print the copy:</p>
<pre><code>x = monthly.copy()
print x
</code></pre>
<p>this is how the table looks like:</p>
<pre><code>sub_acct_id clndr_yr_month
12716D 201601 219
201602 265
12716G 201601 221
201602 262
12716K 201601 181
201602 149
...
</code></pre>
<p>what I try to accomplish is to set the name of the column:</p>
<pre><code>sub_acct_id clndr_yr_month size
12716D 201601 219
201602 265
12716G 201601 221
201602 262
12716K 201601 181
201602 149
...
</code></pre>
|
<p>You need:</p>
<pre><code>x["size"] = x.groupby(["sub_acct_id", "clndr_yr_month"])['sub_acct_id'].transform('size')
</code></pre>
<p>Sample:</p>
<pre><code>df = pd.DataFrame({'sub_acct_id': ['x', 'x', 'x','x','y','y','y','z','z']
, 'clndr_yr_month': ['a', 'b', 'c','c','a','b','c','a','b']})
print (df)
clndr_yr_month sub_acct_id
0 a x
1 b x
2 c x
3 c x
4 a y
5 b y
6 c y
7 a z
8 b z
df['size'] = df.groupby(['sub_acct_id', 'clndr_yr_month'])['sub_acct_id'].transform('size')
print (df)
clndr_yr_month sub_acct_id size
0 a x 1
1 b x 1
2 c x 2
3 c x 2
4 a y 1
5 b y 1
6 c y 1
7 a z 1
8 b z 1
</code></pre>
<hr>
<p>Another solution with aggregating output:</p>
<pre><code>df = df.groupby(['sub_acct_id', 'clndr_yr_month']).size().reset_index(name='Size')
print (df)
sub_acct_id clndr_yr_month Size
0 x a 1
1 x b 1
2 x c 2
3 y a 1
4 y b 1
5 y c 1
6 z a 1
7 z b 1
</code></pre>
|
pandas
| 1
|
2,744
| 38,064,449
|
TensorFlow - Tflearning error feed_dict
|
<p>I am working on a classification problem in Python. Truth is, I'm not good at TensorFlow yet. I've had the same problem for a long time now and I don't know how to fix it. I hope you can help me :)</p>
<p>This is my data :</p>
<p>X : 8000 pictures : 32*32px and 3 colors (rgb), so I load a matrix X.shape = (8000,32,32,3)</p>
<p>Y : 4 classes (1,2,3 and 4): Y. shape = (8000,1)</p>
<p>This is my code :</p>
<pre><code>network = input_data(shape=[None, 32, 32, 3], name='iput')
# Step 1: Convolution
network = conv_2d(network, 32, 3, activation='relu')
# Step 2: Max pooling
network = max_pool_2d(network, 2)
# Step 3: Convolution again
network = conv_2d(network, 64, 3, activation='relu')
# Step 4: Convolution yet again
network = conv_2d(network, 64, 3, activation='relu')
# Step 5: Max pooling again
network = max_pool_2d(network, 2)
# Step 6: Fully-connected 512 node neural network
network = fully_connected(network, 512, activation='relu')
# Step 7: Dropout - throw away some data randomly during training to prevent over-fitting
network = dropout(network, 0.5)
# Step 8: Fully-connected neural network with 4 outputs
network = fully_connected(network, 4, activation='softmax')
# Tell tflearn how we want to train the network
network = regression(network, optimizer='adam',
loss='categorical_crossentropy',
learning_rate=0.001)
model = tflearn.DNN(network)
model.fit(X, Y)
</code></pre>
<p>This is my erros</p>
<blockquote>
<p>Traceback (most recent call last):</p>
<p>File "", line 3, in </p>
<p>model.fit(X, Y)</p>
<p>File "/home/side/anaconda3/lib/python3.5/site-packages/tflearn/models/dnn.py",</p>
<p>line 157, in fit</p>
<p>self.targets)</p>
<p>File "/home/side/anaconda3/lib/python3.5/site-packages/tflearn/utils.py",
line 267, in feed_dict_builder
feed_dict[net_inputs[i]] = x
IndexError: list index out of range</p>
</blockquote>
<p>I have also tried to pass X as an (8000, 3072) matrix
and Y as an (8000, 4) matrix, for example:</p>
<pre><code>[0 0 1 0   <-- Y[0] = 3
 0 1 0 0   <-- Y[1] = 2
 ...]
</code></pre>
<p>I reuse this code : <a href="https://github.com/tflearn/tflearn/blob/master/examples/images/convnet_cifar10.py" rel="nofollow">https://github.com/tflearn/tflearn/blob/master/examples/images/convnet_cifar10.py</a> , used to class cifar10 data.</p>
<p>Thank you for your help,</p>
<p>Celia</p>
|
<p>Another option is to add:</p>
<pre><code>tf.reset_default_graph()
</code></pre>
<p>as the first line of your code.</p>
|
python|tensorflow|deep-learning
| 5
|
2,745
| 66,139,250
|
tf.data: function fails tries to convert to numpy array?
|
<p>I'm trying to build a <code>tf.data</code> pipeline, ultimately to compute skipgrams, but I get an error</p>
<pre><code>NotImplementedError: Cannot convert a symbolic Tensor (cond/Identity:0) to a numpy array.
This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
</code></pre>
<p>My pipeline:</p>
<pre><code>text_vector_ds = (
text_ds
.batch(1024)
.map(vectorize_layer)
.map(my_func)
)
</code></pre>
<p>where</p>
<pre><code>text_ds = tf.data.TextLineDataset(file)
vectorize_layer = tf.keras.layers.experimental.preprocessing.TextVectorization(
standardize='lower_and_strip_punctuation',
max_tokens=4096,
output_mode='int',
output_sequence_length=5)
class MyFunc():
    def __init__(self, window=2):
        # 'window' is used below; its definition was omitted from the snippet
        self.window = window
    def _make_fat_diagonal(self, size: int) -> tf.Tensor:
fat_ones = tf.linalg.band_part(
tf.ones([size,size], dtype=tf.int64),
num_lower=self.window,
num_upper=self.window
)
return tf.linalg.set_diag(fat_ones, tf.zeros(size, dtype=tf.int64))
def __call__(self, input):
# Ensure the input is rank 2
if tf.rank(input) == 1:
input = tf.expand_dims(input, axis=0)
input_shape = tf.shape(input)
num_input_cols = input_shape[1]
        return self._make_fat_diagonal(num_input_cols)
my_func = MyFunc()
</code></pre>
<p>A partial stacktrace is</p>
<pre><code>../testw2v/skipgram/skipgram.py:333 _make_fat_diagonal *
fat_ones = tf.linalg.band_part(
/opt/conda/envs/emb2/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper **
return target(*args, **kwargs)
/opt/conda/envs/emb2/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:3120 ones
output = _constant_if_small(one, shape, dtype, name)
/opt/conda/envs/emb2/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:2804 _constant_if_small
if np.prod(shape) < 1000:
<__array_function__ internals>:6 prod
/opt/conda/envs/emb2/lib/python3.7/site-packages/numpy/core/fromnumeric.py:3031 prod
keepdims=keepdims, initial=initial, where=where)
/opt/conda/envs/emb2/lib/python3.7/site-packages/numpy/core/fromnumeric.py:87 _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
/opt/conda/envs/emb2/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:855 __array__
" a NumPy call, which is not supported".format(self.name))
NotImplementedError: Cannot convert a symbolic Tensor (cond/Identity:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
</code></pre>
<p>I suppose that it doesn't like extracting a dimension to run <code>_make_fat_diagonal()</code>, although I'm not sure how else I would express this. Outside of a pipeline the function works just fine on individual elements of the <code>text_ds</code> dataset. As you can see I'm careful to only use Tensorflow methods.</p>
<p>What's the correct approach?</p>
|
<p>As of today, it does seem to be a result of the bug <a href="https://github.com/tensorflow/models/issues/9706" rel="nofollow noreferrer">https://github.com/tensorflow/models/issues/9706</a>. Reverting to python 3.6 makes it work as expected.</p>
|
python|tensorflow|tf.data.dataset
| 0
|
2,746
| 66,137,047
|
A 2d matrix can be reconstructed, in which a mask has been used with numpy where and flattened
|
<p>As the title says, I have a 2D matrix (1000, 2000), to which I apply a condition with the numpy where function, like this:</p>
<pre><code>import numpy as np
A = np.random.randn(1000, 2000)
print(A.shape)
(1000, 2000)
mask = np.where((A >=0.1) & (A <= 0.5))
A = A[mask]
print(A.shape)
(303112,)
</code></pre>
<p>and I get a flattened array which I use as input to a Fortran program that only supports 1D arrays. The output of this program has the same dimension as the 1D input, <strong>(303112,)</strong>. Is there any method or function to reconstruct the flattened array into its original 2D form? I was thinking of saving the indexes in a boolean matrix and using these to reconstruct the matrix. Any numpy method or suggestion would be of great help.</p>
<p>Greetings.</p>
|
<p>IIUC you need to maintain the 1D indexes and 2D indexes of the mask so that when you try to update those values using a FORTRAN program, you can switch to 1D for input and then switch back to 2D to update the original array.</p>
<p>You can use <code>np.ravel_multi_index</code> to convert 2D indexes to 1D. Then you can use these 1D indexes to convert them back to 2D using <code>np.unravel_index</code> (though since you already have the 2D mask, you don't need to convert 1D to 2D again.)</p>
<pre><code>import numpy as np
A = np.random.randn(1000, 2000)
mask = np.where((A >=0.1) & (A <= 0.5))
idx_flat = np.ravel_multi_index(mask, (1000,2000)) #FLAT 1D indexes using original mask
idx_2d = np.unravel_index(idx_flat, (1000,2000)) #2D INDEXES using the FLAT 1D indexes
#Comparing the method of using flat indexes and A[mask]
print(np.allclose(A.ravel()[idx_flat],A[mask]))
### True
#Comparing the freshly created 2d indexes to the original mask
print(np.allclose(idx_2d,mask))
### True
</code></pre>
<hr />
<p>Here is a dummy test case with end to end code for a (3,3) matrix.</p>
<pre><code>import numpy as np
#Dummy matrix A and mask
A = np.random.randn(3, 3) #<---- shape (3,3)
mask = np.where(A <= 0.5)
mask[0].shape #Number of indexes in 2D mask
###Output: (6,)
#########################################################
#Flatten 2D indexes to 1D
idx_flat = np.ravel_multi_index(mask, (3,3)) #<--- shape (3,3)
idx_flat.shape #Number of indexes in flattened mask
###Output: (6,)
#########################################################
#Feed the 6 length array to fortran function
def fortran_function(x):
return x**2
flat_array = A.ravel()[idx_flat]
fortran_output = fortran_function(flat_array)
#Number of values in fortran_output
fortran_output.shape
###Output: (6,)
#########################################################
#Create a empty array
new_arr = np.empty((3,3)) #<---- shape (3,3)
new_arr[:] = np.nan
new_arr[mask] = fortran_output #Feed the 1D array to the 2D masked empty array
new_arr
</code></pre>
<pre><code>array([[5.63399114e-04, nan, 7.86255167e-01],
[3.94992857e+00, 4.88932044e-02, 2.45489069e+00],
[3.51957270e-02, nan, nan]])
</code></pre>
|
python|python-3.x|numpy
| 1
|
2,747
| 66,183,174
|
Identifying a pattern in a dataframe
|
<p>I have the following DF:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Game</th>
<th>Bookmaker</th>
<th>Over1.5</th>
<th>Under1.5</th>
<th>Over2.25</th>
<th>Under2.25</th>
<th>Over2.5</th>
<th>Under2.5</th>
<th>Over3</th>
<th>Under3</th>
</tr>
</thead>
<tbody>
<tr>
<td>A vs B</td>
<td>Asianodds</td>
<td>1,2</td>
<td>3,4</td>
<td>2,1</td>
<td>4,1</td>
<td>3,1</td>
<td>4,7</td>
<td>3,3</td>
<td>4,9</td>
</tr>
<tr>
<td>A vs B</td>
<td>Pinnacle</td>
<td>1,2</td>
<td>3,4</td>
<td>2,1</td>
<td>4,0</td>
<td>3,1</td>
<td>4,6</td>
<td>3,3</td>
<td>4,9</td>
</tr>
</tbody>
</table>
</div>
<p>How do I need to set up my python code to recognize a pattern as follows:</p>
<p>Find games where Pinnacle < Asianodds for Under2.25 and Under2.5, while the Under odds in all other columns are equal across bookmakers and the Over odds in all columns are equal across bookmakers.</p>
<p>So far I tried this, but it does not give the desired outcome...:</p>
<pre class="lang-py prettyprint-override"><code>o_u_types= [1.50,1.75,2.00,2.25,2.50,2.75,3.00,3.50]
for i in o_u_types:
data_df["Over_{}".format(i)]=Over #In these two lines Over and Under are the two previously scraped values for over and under
data_df["Under{}".format(i)]=Under
#Identify patterns
try:
Asian=data_df.iloc[0] #Asian is always line 1 of the df, Pin always line 2
Pin=data_df.iloc[1]
if Asian['Over_{}'.format(i)] > Pin['Over_{}'.format(i)]:
data_df['over_dominance_{}'.format(i)]=='AsianDominant'
elif Asian['Over_{}'.format(i)] < Pin['Over_{}'.format(i)]:
data_df['over_dominance_{}'.format(i)]=='PinDominant'
elif Asian['Over_{}'.format(i)] == Pin['Over_{}'.format(i)]:
data_df['over_dominance_{}'.format(i)]=='Parity'
if Asian['Under_{}'.format(i)] > Pin['Under_{}'.format(i)]:
data_df['under_dominance_{}'.format(i)]=='AsianDominant'
elif Asian['Under_{}'.format(i)] < Pin['Under_{}'.format(i)]:
data_df['under_dominance_{}'.format(i)]=='PinDominant'
elif Asian['Under_{}'.format(i)] == Pin['Under_{}'.format(i)]:
data_df['under_dominance_{}'.format(i)]=='Parity'
# data_df['over_diff']= Asian['Over'] != Pin['Over']
# data_df['under_diff']= Asian['Under'] != Pin['Under']
# print(data_df)
except:
data_df['over_dominance_{}'.format(i)]= "n/a"
data_df['under_dominance_{}'.format(i)]= "n/a"
</code></pre>
<p>Sadly, this code returns n/a for all lines, even if there is a difference in the Over/Under values..</p>
<p>After I have established the dominance of one line over another, I want to identify a pattern where
Pinnacle < Asianodds for Under2.25 & Under2.5 while Under in all other columns is equal for bookmaker & Over in all columns is equal for bookmaker</p>
|
<p>Try to see if this is helpful to you.</p>
<p>You can Transpose your dataframe and do this as follows:</p>
<pre><code>df = df.T
df.loc[df[0] > df[1],'over_dominance'] = 'AsianDominant'
df.loc[df[0] < df[1],'over_dominance'] = 'PinDominant'
df.loc[df[0] == df[1],'over_dominance'] = 'Parity'
print (df)
</code></pre>
<p>You can ignore the first 2 rows (Game and Bookmaker).</p>
<p>The output will be as follows:</p>
<pre><code> 0 1 over_dominance
Game A vs B A vs B Parity
Bookmaker Asianodds Pinnacle PinDominant
Over1.5 1,2 1,2 Parity
Under1.5 3,4 3,4 Parity
Over2.25 2,1 2,1 Parity
Under2.25 4,1 4,0 AsianDominant
Over2.5 3,1 3,1 Parity
Under2.5 4,7 4,6 AsianDominant
Over3 3,3 3,3 Parity
Under3 4,9 4,9 Parity
</code></pre>
<p>This may not be the final answer but I am trying to give you options to consider.</p>
|
python|pandas|dataframe
| 0
|
2,748
| 52,774,506
|
Count nonzero elements and plot
|
<p>I want to count the number of nonzero elements in the list below:
<a href="https://i.stack.imgur.com/4pelh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4pelh.png" alt="enter image description here"></a></p>
<p>I tried this code,</p>
<pre><code>nzcnt = [nonzero.count(0),nonzero.count(1),nonzero.count(2),nonzero.count(3),nonzero.count(4),nonzero.count(5),nonzero.count(6),nonzero.count(7),nonzero.count(8),nonzero.count(9)]
</code></pre>
<p>But it is not really <em>pythonic</em>. How can I make this more <em>pythonic</em>?</p>
<p>NOTE: allowed library : <code>numpy</code>, <code>pandas</code>, <code>matplotlib</code>, <code>copy</code></p>
|
<p>You can flatten the list and then count the number of zeros. You can use <code>w_check.flatten()</code> for that.</p>
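<p>Building on that, a compact way to get all ten digit counts at once is <code>np.bincount</code> on the flattened array (here <code>w_check</code> is a hypothetical stand-in for the array in the screenshot):</p>

```python
import numpy as np

# Hypothetical stand-in for the array shown in the screenshot.
w_check = np.array([[0, 3, 3],
                    [9, 0, 1],
                    [1, 1, 2]])

# Occurrences of each digit 0..9 in the flattened array, in one call.
nzcnt = np.bincount(w_check.flatten(), minlength=10)
print(nzcnt)  # ten counts, one per digit 0..9
```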
|
python|pandas|numpy
| 1
|
2,749
| 52,704,777
|
NumPy - Vectorizing loops involving range iterators
|
<p>Is there any way to make this work without for loops?</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
L = 1
N = 255
dh = 2*L/N
dh2 = dh*dh
phi_0 = 1
c = int(N/2)
r_0 = L/2
arr = np.empty((N, N))
for i in range(N):
for j in range(N):
arr[i, j] = phi_0 if (i - c)**2 + (j - c)**2 < r_0**2/dh2 else 0
plt.imshow(arr)
</code></pre>
<p>I've tried calling function(x[None,:], y[:, None]), where:</p>
<pre><code>def function(i, j):
return phi_0 if (i - c)**2 + (j - c)**2 < r_0**2/dh2 else 0
</code></pre>
<p>but it raises the ambiguous-truth-value error asking for <code>.any()</code> or <code>.all()</code>. I'm looking specifically for a function-less method (without <code>fromfunction</code> and vectorization).
Big thanks!</p>
|
<h3>Vectorized solution using open grids</h3>
<p>We could use two <em>open</em> range/grid arrays for <code>N</code> simulating the same behavior as the iterators -</p>
<pre><code>I = np.arange(N)
mask = (I[:,None] - c)**2 + (I - c)**2 < r_0**2/dh2
out = np.where(mask,phi_0,0)
</code></pre>
<p><strong>For a generic range on the two loops</strong></p>
<p>For the generic case where we would iterate through two loops that extend till say <code>M</code> and <code>N</code> respectively, we could make use of <a href="https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.ogrid.html" rel="nofollow noreferrer"><code>np.ogrid</code></a> to create those open grids and then use on the same lines -</p>
<pre><code>I,J = np.ogrid[:M,:N]
mask = (I - c)**2 + (J - c)**2 < r_0**2/dh2
</code></pre>
<p><strong>For a generic number of loops</strong></p>
<p>For a generic number of loops, simply create as many variables as the number of loops. Hence, for three loops :</p>
<pre><code>for i in range(M):
for j in range(N):
for k in range(P):
</code></pre>
<p>, we would have :</p>
<pre><code>I,J,K = np.ogrid[:M,:N,:P]
</code></pre>
<p>, then use <code>I,J,K</code> instead of <code>i,j,k</code> respectively for element-wise operations like we have here.</p>
<hr>
<p><strong>Alternative to replace last step for this specific case</strong></p>
<p>Last step could also be implemented with elementwise multiplication by scaling to <code>phi_0</code> with <code>mask</code> as the <code>else</code> part is setting to <code>0s</code> -</p>
<pre><code>out = mask*phi_0
</code></pre>
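<p>As a quick sanity check, the open-grid version reproduces the question's double loop exactly:</p>

```python
import numpy as np

# Parameters from the question
L, N = 1, 255
dh = 2 * L / N
dh2 = dh * dh
phi_0 = 1
c = int(N / 2)
r_0 = L / 2

# Loop version (question)
arr = np.empty((N, N))
for i in range(N):
    for j in range(N):
        arr[i, j] = phi_0 if (i - c)**2 + (j - c)**2 < r_0**2 / dh2 else 0

# Open-grid version (answer)
I = np.arange(N)
mask = (I[:, None] - c)**2 + (I - c)**2 < r_0**2 / dh2
out = np.where(mask, phi_0, 0)

assert np.array_equal(arr, out)  # identical results
```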
|
python|numpy|vectorization
| 7
|
2,750
| 46,342,608
|
simplegui based pygame to exe file, numpy-atlas.dll error
|
<p>I have a working game developed with CodeSkulptor's simplegui tools. I converted it to Pygame through SimpleGUICS2Pygame. When I tried to convert it to an exe, it raised this error:
[Errno 2] No such file or directory: 'numpy-atlas.dll'</p>
<p>I looked into this thread: <a href="https://stackoverflow.com/questions/36191770/py2exe-errno-2-no-such-file-or-directory-numpy-atlas-dll?newreg=6d39cae323044bb28f8744410b31199a">Py2Exe, [Errno 2] No such file or directory: 'numpy-atlas.dll'</a></p>
<p>I tried copying numpy-atlas.dll to the code directory. That worked, but when I try to run the exe file, the console window just pops up and disappears.</p>
<p>I found that the last answer there works, though I don't know how/where to run such code:</p>
<pre><code> from distutils.core import setup
import py2exe
import numpy
import os
import sys
# add any numpy directory containing a dll file to sys.path
def numpy_dll_paths_fix():
paths = set()
np_path = numpy.__path__[0]
for dirpath, _, filenames in os.walk(np_path):
for item in filenames:
if item.endswith('.dll'):
paths.add(dirpath)
sys.path.append(*list(paths))
numpy_dll_paths_fix()
setup(...)
</code></pre>
<p>I recompiled it using pyinstaller, it succeeded, but no functionality, here is what the spec file looks like:</p>
<pre><code># -*- mode: python -*-
block_cipher = None
a = Analysis(['balling.py'],
pathex=['C:\\Users\\SamsunG\\Desktop\\Python 2017\\convert'],
binaries=[],
datas=[],
hiddenimports=[],
hookspath=[],
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher)
pyz = PYZ(a.pure, a.zipped_data,
cipher=block_cipher)
exe = EXE(pyz,
a.scripts,
exclude_binaries=True,
name='balling',
debug=False,
strip=False,
upx=True,
console=True )
coll = COLLECT(exe,
a.binaries,
a.zipfiles,
a.datas,
strip=False,
upx=True,
name='balling')
</code></pre>
|
<p>I have never used py2exe or py2app and never will. I always used cx_Freeze, then dropped it and installed PyInstaller because it offers a simple <code>run bat or shell -> delete build dir -> go to dist dir -> run that exe</code> workflow. To use it:</p>
<ol>
<li><p>Install Pyinstaller: type <code>pip install pyinstaller</code> in your shell. On Windows, do not forget to run it with Admin rights if Python is installed for all users.</p></li>
<li><p><code>cd</code> into dir with your Python file.</p></li>
<li><p>Make sure you have removed the "build" folder (if present). Type <code>pyinstaller <python_file_name> --noconsole -F</code>. <code>--noconsole</code> removes the console window from the exe; <code>-F</code> compiles to a single file. This is why I prefer PyInstaller.</p></li>
<li><p>Wait some time. Then get your .exe!</p></li>
<li><p>Delete the appeared "build" folder.</p></li>
</ol>
<p>PyInstaller is cross-platform, but each platform's PyInstaller only builds for that platform (PyInstaller for Windows compiles only for Windows, for Linux only for Linux, for macOS only for macOS, etc.).</p>
<hr>
<p>Set <code>console</code> to False or use <code>--noconsole</code> when compiling games or GUI apps.</p>
|
python|numpy|dll|pygame
| 0
|
2,751
| 58,402,843
|
Filtering large dataframes by unique indices with Pandas
|
<p>I have a 30 million row by 30 column dataframe that I want to filter by using a list of unique indices.</p>
<p>Basically the <strong>input</strong> would be:</p>
<pre><code>df = pd.DataFrame({'column':[0,1,2,3,4,5,6,7,8,9,10]})
indices = [1, 7, 10]
df_filtered = df[df.index.isin(indices)]
</code></pre>
<p>With the <strong>output</strong> being:</p>
<pre><code>df_filtered
column
1
7
10
</code></pre>
<p>This works well with 'manageable' dataframes but when trying to match a (30,000,000, 30) dataframe with a list of ~33,000 unique indices this runs me into a local <code>MemoryError</code>.</p>
<p>Is there a way I can parallelize this process or break it into pieces more efficiently?</p>
|
<p>The actual answer depends on what you want to do with the DataFrame, but a general idea when running into memory errors is to do the operation in chunks.</p>
<p>In your case, chunks of size N are N sequential elements from the <code>indices</code> list:</p>
<pre><code>df = pd.DataFrame() # placeholder for your huge dataframe
indices = [] # placeholder for your long list of indices
chunksize = 50 # size of each chunk (50 rows)
for chunk in range(0, len(indices), chunksize):
current_indices = indices[chunk:chunk+chunksize]
df_filtered = df[df.index.isin(current_indices)]
# do what you want with df_filtered here
</code></pre>
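<p>For instance, if the end goal is the filtered frame itself, the chunk results can be collected and concatenated (toy sizes shown; the pattern scales):</p>

```python
import pandas as pd

df = pd.DataFrame({'column': range(100)})   # stand-in for the huge frame
indices = [1, 7, 10, 55, 99]                # stand-in for the long index list
chunksize = 2

pieces = []
for chunk in range(0, len(indices), chunksize):
    current_indices = indices[chunk:chunk + chunksize]
    pieces.append(df[df.index.isin(current_indices)])

df_filtered = pd.concat(pieces)
# Identical to filtering with the whole list in one shot
assert df_filtered.equals(df[df.index.isin(indices)])
```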
|
python|python-3.x|pandas|dataframe|out-of-memory
| 2
|
2,752
| 58,253,269
|
What does this line do? df = df[~df[runner].str.contains("[a-z]").fillna(False)]
|
<p>May I check what this line does? </p>
<pre><code>df = df[~df[runner].str.contains("[a-z]").fillna(False)]
</code></pre>
<p>Does this code remove all rows that contain a string starting with a letter?
My second question is: what is the purpose of <code>~</code>? What does it do?</p>
<p>Thanks</p>
|
<p><strong>This code is masking a DataFrame.</strong></p>
<p>The RegEx <code>"[a-z]"</code> means contains any character 'a to z' (not 'starting with', as this would be <code>"^[a-z]"</code>).</p>
<p>The <code>.fillna(False)</code> means every NaN is treated as False for this Mask.</p>
<p>The <code>~</code> is inverting the Mask, so that the unselected rows are returned.</p>
<p>Be aware that the rows containing NaN are included. If this is not intended you must use <code>.fillna(True)</code>.</p>
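<p>A small runnable illustration (with <code>runner</code> standing in for the column name from the question):</p>

```python
import pandas as pd

# 'runner' stands in for the column name used in the question.
df = pd.DataFrame({"runner": ["FAST", "slowpoke", None, "42"]})

mask = df["runner"].str.contains("[a-z]").fillna(False)
# mask -> [False, True, False, False]; the NaN row became False
result = df[~mask]
# kept after inverting: "FAST", the NaN row, and "42"
```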
|
python|pandas
| 1
|
2,753
| 58,293,899
|
How can I upgrade from Tensorflow 2.0 alpha to beta given the exception: ERROR: Cannot uninstall 'wrapt'
|
<p>When trying to install the beta version of tensorflow 2.0 I am getting the following exception:</p>
<pre><code>ERROR: Failed building wheel for wrapt
Running setup.py clean for wrapt
Failed to build wrapt
Installing collected packages: tf-estimator-nightly, wrapt, tb-nightly, google-pasta, tensorflow
Found existing installation: tf-estimator-nightly 1.14.0.dev2019030115
Uninstalling tf-estimator-nightly-1.14.0.dev2019030115:
Successfully uninstalled tf-estimator-nightly-1.14.0.dev2019030115
Found existing installation: wrapt 1.10.11
ERROR: Cannot uninstall 'wrapt'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
</code></pre>
<p>What can I do about it?</p>
<hr>
<p><strong>ERROR MESSAGE WHEN I TRY TO INSTALL WRAPT</strong></p>
<pre><code>(base) C:\WINDOWS\system32>pip install wrapt==1.10.0 --ignore-installed
Collecting wrapt==1.10.0
Downloading https://files.pythonhosted.org/packages/ab/43/5453a18b5a06b0d714fd50f4634524c09af4bc41214f3dddf97f59090b23/wrapt-1.10.0.tar.gz
Building wheels for collected packages: wrapt
Building wheel for wrapt (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: 'c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Alienware\\AppData\\Local\\Temp\\pip-install-jdld9mo_\\wrapt\\setup.py'"'"'; __file__='"'"'C:\\Users\\Alienware\\AppData\\Local\\Temp\\pip-install-jdld9mo_\\wrapt\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\Alienware\AppData\Local\Temp\pip-wheel-mmlqban_' --python-tag cp36
cwd: C:\Users\Alienware\AppData\Local\Temp\pip-install-jdld9mo_\wrapt\
Complete output (47 lines):
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.6
creating build\lib.win-amd64-3.6\wrapt
copying src\arguments.py -> build\lib.win-amd64-3.6\wrapt
copying src\decorators.py -> build\lib.win-amd64-3.6\wrapt
copying src\importer.py -> build\lib.win-amd64-3.6\wrapt
copying src\wrappers.py -> build\lib.win-amd64-3.6\wrapt
copying src\__init__.py -> build\lib.win-amd64-3.6\wrapt
running build_ext
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Alienware\AppData\Local\Temp\pip-install-jdld9mo_\wrapt\setup.py", line 79, in <module>
run_setup(with_extensions=True)
File "C:\Users\Alienware\AppData\Local\Temp\pip-install-jdld9mo_\wrapt\setup.py", line 55, in run_setup
setup(**setup_kwargs_tmp)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\dist.py", line 955, in run_commands
self.run_command(cmd)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\site-packages\wheel\bdist_wheel.py", line 202, in run
self.run_command('build')
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\command\build.py", line 135, in run
self.run_command(cmd_name)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "C:\Users\Alienware\AppData\Local\Temp\pip-install-jdld9mo_\wrapt\setup.py", line 25, in run
build_ext.run(self)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\command\build_ext.py", line 308, in run
force=self.force)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\ccompiler.py", line 1031, in new_compiler
return klass(None, dry_run, force)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\cygwinccompiler.py", line 285, in __init__
CygwinCCompiler.__init__ (self, verbose, dry_run, force)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\cygwinccompiler.py", line 129, in __init__
if self.ld_version >= "2.10.90":
TypeError: '>=' not supported between instances of 'NoneType' and 'str'
----------------------------------------
ERROR: Failed building wheel for wrapt
Running setup.py clean for wrapt
Failed to build wrapt
Installing collected packages: wrapt
Running setup.py install for wrapt ... error
ERROR: Command errored out with exit status 1:
command: 'c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Alienware\\AppData\\Local\\Temp\\pip-install-jdld9mo_\\wrapt\\setup.py'"'"'; __file__='"'"'C:\\Users\\Alienware\\AppData\\Local\\Temp\\pip-install-jdld9mo_\\wrapt\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Alienware\AppData\Local\Temp\pip-record-tbyepppf\install-record.txt' --single-version-externally-managed --compile
cwd: C:\Users\Alienware\AppData\Local\Temp\pip-install-jdld9mo_\wrapt\
Complete output (49 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.6
creating build\lib.win-amd64-3.6\wrapt
copying src\arguments.py -> build\lib.win-amd64-3.6\wrapt
copying src\decorators.py -> build\lib.win-amd64-3.6\wrapt
copying src\importer.py -> build\lib.win-amd64-3.6\wrapt
copying src\wrappers.py -> build\lib.win-amd64-3.6\wrapt
copying src\__init__.py -> build\lib.win-amd64-3.6\wrapt
running build_ext
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Alienware\AppData\Local\Temp\pip-install-jdld9mo_\wrapt\setup.py", line 79, in <module>
run_setup(with_extensions=True)
File "C:\Users\Alienware\AppData\Local\Temp\pip-install-jdld9mo_\wrapt\setup.py", line 55, in run_setup
setup(**setup_kwargs_tmp)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\dist.py", line 955, in run_commands
self.run_command(cmd)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\site-packages\setuptools\command\install.py", line 61, in run
return orig.install.run(self)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\command\install.py", line 545, in run
self.run_command('build')
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\command\build.py", line 135, in run
self.run_command(cmd_name)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "C:\Users\Alienware\AppData\Local\Temp\pip-install-jdld9mo_\wrapt\setup.py", line 25, in run
build_ext.run(self)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\command\build_ext.py", line 308, in run
force=self.force)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\ccompiler.py", line 1031, in new_compiler
return klass(None, dry_run, force)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\cygwinccompiler.py", line 285, in __init__
CygwinCCompiler.__init__ (self, verbose, dry_run, force)
File "c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\lib\distutils\cygwinccompiler.py", line 129, in __init__
if self.ld_version >= "2.10.90":
TypeError: '>=' not supported between instances of 'NoneType' and 'str'
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\program files (x86)\microsoft visual studio\shared\anaconda3_64\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Alienware\\AppData\\Local\\Temp\\pip-install-jdld9mo_\\wrapt\\setup.py'"'"'; __file__='"'"'C:\\Users\\Alienware\\AppData\\Local\\Temp\\pip-install-jdld9mo_\\wrapt\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Alienware\AppData\Local\Temp\pip-record-tbyepppf\install-record.txt' --single-version-externally-managed --compile Check the logs for full command output.
</code></pre>
|
<p>I also faced this error, but you can safely ignore it.
You can read more about it <a href="https://github.com/tensorflow/tensorboard/issues/2296#issuecomment-497883063" rel="nofollow noreferrer">here</a> in this issue.</p>
<p>Alternatively, you can first install/update the <code>wrapt</code> package using <code>pip install wrapt==1.11.0</code> (or whatever version TF Alpha requires), then continue with the TF Alpha installation.</p>
|
upgrade|tensorflow2.0
| 0
|
2,754
| 68,988,616
|
Plotting different groups of a dataframe in different subplots
|
<p>I want to plot 4 different scatter subplots in one main plot. The data are coming from a grouped dataframe which is read from a .csv file. The initial dataframe looks like this:</p>
<pre><code>df.to_csv("File.csv", index=False)
</code></pre>
<p>df:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Category1</th>
<th>Category2</th>
<th>X</th>
<th>Y</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>A</td>
<td>x</td>
<td>4</td>
<td>5.1</td>
</tr>
<tr>
<td>1</td>
<td>B</td>
<td>x</td>
<td>3</td>
<td>4.2</td>
</tr>
<tr>
<td>2</td>
<td>A</td>
<td>y</td>
<td>2</td>
<td>7.1</td>
</tr>
<tr>
<td>3</td>
<td>A</td>
<td>z</td>
<td>9</td>
<td>6.1</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>97</td>
<td>A</td>
<td>z</td>
<td>4</td>
<td>5.1</td>
</tr>
<tr>
<td>98</td>
<td>A</td>
<td>w</td>
<td>3</td>
<td>4.2</td>
</tr>
<tr>
<td>99</td>
<td>B</td>
<td>y</td>
<td>2</td>
<td>7.1</td>
</tr>
<tr>
<td>100</td>
<td>B</td>
<td>z</td>
<td>9</td>
<td>6.1</td>
</tr>
</tbody>
</table>
</div>
<p>As you can see, Category1 has only two kinds of values (A, B) while Category2 has 4 kinds of values (x, y, z, w). The X and Y values are random and for display purposes only.</p>
<p>The grouped df was created using following command:</p>
<pre><code>dfGrouped = df.groupby(["Category1 ","Category2"])
</code></pre>
<p>dfGrouped:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th></th>
<th>X</th>
<th>Y</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>x</td>
<td>4</td>
<td>5.1</td>
</tr>
<tr>
<td>A</td>
<td></td>
<td>7</td>
<td>9.1</td>
</tr>
<tr>
<td></td>
<td>y</td>
<td>3</td>
<td>4.2</td>
</tr>
<tr>
<td></td>
<td></td>
<td>3</td>
<td>4.2</td>
</tr>
<tr>
<td></td>
<td></td>
<td>3</td>
<td>4.2</td>
</tr>
<tr>
<td></td>
<td>z</td>
<td>2</td>
<td>7.1</td>
</tr>
<tr>
<td></td>
<td>w</td>
<td>9</td>
<td>6.1</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>B</td>
<td>x</td>
<td>4</td>
<td>5.1</td>
</tr>
<tr>
<td></td>
<td>y</td>
<td>3</td>
<td>4.2</td>
</tr>
<tr>
<td></td>
<td>z</td>
<td>2</td>
<td>7.1</td>
</tr>
<tr>
<td></td>
<td></td>
<td>2</td>
<td>7.1</td>
</tr>
<tr>
<td></td>
<td>w</td>
<td>9</td>
<td>6.1</td>
</tr>
</tbody>
</table>
</div>
<p>I tried to plot them individually, but it didn't work:</p>
<pre><code>fig, ax = plt.subplots(figsize=(8, 6))
ax.margins(0.05)
for name, group in dfGrouped:
ax.plot(group.X, group.Y, marker='o', linestyle='', ms=2, label=name)
</code></pre>
<p>I even tried to call the groups using get_group but I was not successful.</p>
<pre><code>dfGrouped= dfGrouped.get_group(("A","x"))
</code></pre>
<p>Is there any way to plot 4 different scatter subplots (Based on "category2": x,y,z,w) in one main plot in a way that each plot contains 2 sets values with 2 different colors(Based on "Category1": A, B)?</p>
|
<p>You could use <a href="https://seaborn.pydata.org/generated/seaborn.relplot.html#seaborn.relplot" rel="nofollow noreferrer"><code>seaborn.relplot</code></a>:</p>
<pre><code>import numpy as np
import pandas as pd
import seaborn as sns
# dummy data
df = pd.DataFrame({'Category1': np.random.choice(['A','B'], size=100),
'Category2': np.random.choice(['w','x', 'y', 'z'], size=100),
'x': np.random.random(size=100),
'y': np.random.random(size=100),
})
# plot
sns.relplot(data=df, x='x', y='y', col='Category2', col_wrap=2, hue='Category1')
</code></pre>
<p>Output:
<img src="https://i.stack.imgur.com/XPja7.png" alt="seaborn relplot" /></p>
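<p>If you prefer plain matplotlib with an explicit <code>groupby</code>, the same layout can be sketched as below. The dummy data and column names mirror the question and are assumptions, not the asker's exact frame:</p>

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use('Agg')  # headless backend so the script runs anywhere
import matplotlib.pyplot as plt

np.random.seed(0)
df = pd.DataFrame({'Category1': np.random.choice(['A', 'B'], size=100),
                   'Category2': np.random.choice(['w', 'x', 'y', 'z'], size=100),
                   'X': np.random.random(size=100),
                   'Y': np.random.random(size=100)})

# One subplot per Category2 value; colors distinguish Category1 within each panel
fig, axes = plt.subplots(2, 2, figsize=(8, 6))
for ax, (cat2, sub) in zip(axes.flat, df.groupby('Category2')):
    for cat1, grp in sub.groupby('Category1'):
        ax.scatter(grp['X'], grp['Y'], s=12, label=cat1)
    ax.set_title('Category2 = {}'.format(cat2))
    ax.legend()
fig.tight_layout()
fig.savefig('groups.png')
```

<p>The outer loop assigns one axis per <code>Category2</code> group; the inner loop draws each <code>Category1</code> subset in its own color on that axis.</p>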
|
python|pandas|dataframe|plot|group-by
| 1
|
2,755
| 69,194,635
|
UTF error when reading in GPX files using list comprehension in Python
|
<p>I am trying to take a batch of GPX files and concatenate them into a pandas dataframe to then export as a CSV for analysis elsewhere (QGIS).</p>
<p>Problem is, when I do my list comprehension step, it gives me a UTF-8 encoding error. I took a look at one of the GPX files, and it explicitly declares the encoding at the beginning of the file as expected. Not sure what I am missing. Code and error message below.</p>
<p>CODE</p>
<pre><code>import os
import gpxpy
import pandas as pd

INDIR = r'/Path/to/data'
OUTDIR = r'/Path/to/data/out'
os.chdir(INDIR)
def parsegpx(f):
#Parse a GPX file into a list of dictionaries.
points2 = []
with open(f, 'r') as gpxfile:
# print f
gpx = gpxpy.parse(gpxfile)
for track in gpx.tracks:
for segment in track.segments:
for point in segment.points:
dict = {'Timestamp' : point.time,
'Latitude' : point.latitude,
'Longitude' : point.longitude,
'Elevation' : point.elevation
}
points2.append(dict)
return points2
files = os.listdir(INDIR)
df2 = pd.concat([pd.DataFrame(parsegpx(f)) for f in files])
</code></pre>
<p>ERROR</p>
<pre><code><ipython-input-44-408221aa1cb1> in <module>
----> 1 df2 = pd.concat([pd.DataFrame(parsegpx(f)) for f in files])
2 # df2.head(5)
<ipython-input-44-408221aa1cb1> in <listcomp>(.0)
----> 1 df2 = pd.concat([pd.DataFrame(parsegpx(f)) for f in files])
2 # df2.head(5)
<ipython-input-4-8ece4e512e75> in parsegpx(f)
6 with open(f, 'r') as gpxfile:
7 # print f
----> 8 gpx = gpxpy.parse(gpxfile)
9 for track in gpx.tracks:
10 for segment in track.segments:
/usr/local/anaconda3/lib/python3.8/site-packages/gpxpy/__init__.py in parse(xml_or_file, version)
35 from . import parser as mod_parser
36
---> 37 parser = mod_parser.GPXParser(xml_or_file)
38
39 return parser.parse(version)
/usr/local/anaconda3/lib/python3.8/site-packages/gpxpy/parser.py in __init__(self, xml_or_file)
68 """
69 self.xml = ""
---> 70 self.init(xml_or_file)
71 self.gpx = mod_gpx.GPX()
72
/usr/local/anaconda3/lib/python3.8/site-packages/gpxpy/parser.py in init(self, xml_or_file)
80
81 """
---> 82 text = xml_or_file.read() if hasattr(xml_or_file, 'read') else xml_or_file # type: ignore
83 if isinstance(text, bytes):
84 text = text.decode()
/usr/local/anaconda3/lib/python3.8/codecs.py in decode(self, input, final)
320 # decode input (taking the buffer into account)
321 data = self.buffer + input
--> 322 (result, consumed) = self._buffer_decode(data, self.errors, final)
323 # keep undecoded input until the next call
324 self.buffer = data[consumed:]
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x87 in position 27: invalid start byte
</code></pre>
<p>Any help is much appreciated!</p>
|
<p>You likely need to specify the file's encoding when opening it:</p>
<pre><code>
with open(f, 'r', encoding="desired_encoding") as gpxfile:
</code></pre>
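<p>A minimal sketch of the idea (the stray byte 0x87 mirrors the traceback): if you don't know the real encoding, <code>errors='replace'</code> lets the read succeed by substituting undecodable bytes, and the resulting text can then be handed to <code>gpxpy.parse</code>. Also worth checking: <code>os.listdir</code> returns every file in the directory (hidden files like <code>.DS_Store</code> included), so filtering to <code>*.gpx</code> files first may remove the offender entirely.</p>

```python
import os
import tempfile

# Write a file containing a byte that is invalid UTF-8 (0x87, as in the error)
path = os.path.join(tempfile.mkdtemp(), 'track.gpx')
with open(path, 'wb') as fh:
    fh.write(b'<gpx version="1.1">\x87</gpx>')

# errors='replace' swaps undecodable bytes for U+FFFD instead of raising
with open(path, 'r', encoding='utf-8', errors='replace') as fh:
    text = fh.read()

print('\ufffd' in text)  # True
```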
|
python|pandas|dataframe|gis|gpx
| 1
|
2,756
| 44,757,059
|
How to transform a NumPy array of tuples into a 2D NumPy array
|
<p>I have numpy array like this:</p>
<pre><code>a = [(20111205000000, 15.94, 16.04, 15.7 , 15.95, 11349137.)
(20111206000000, 15.95, 15.95, 15.95, 15.95, 0.)
(20111207000000, 15.9 , 16.15, 15.86, 16.05, 14862428.)
(20111208000000, 16.05, 16.13, 15.81, 15.94, 18705208.)]
</code></pre>
<p>I can't use a slice like a[1:3, 2:3] on it, so I want to change this array to:</p>
<pre><code> [[20111205000000 15.94 16.04 15.7 15.95 11349137]
[20111206000000 15.95 15.95 15.95 15.95 0]
[20111207000000 15.9 16.15 15.86 16.05 14862428]
[20111208000000 16.05 16.13 15.81 15.94 18705208]]
</code></pre>
<p>Please help me, thank you.</p>
|
<p>You can use this:</p>
<pre><code>np.array([list(i) for i in a])
</code></pre>
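<p>To see the one-liner in action — assuming <code>a</code> is the structured array implied by the question's printout (the field names below are illustrative) — the result is a plain 2-D float array that supports the <code>a[1:3, 2:3]</code> style slicing the question wants:</p>

```python
import numpy as np

# Structured array resembling the question's data (field names are assumptions)
a = np.array([(20111205000000, 15.94, 16.04, 15.70, 15.95, 11349137.0),
              (20111206000000, 15.95, 15.95, 15.95, 15.95, 0.0),
              (20111207000000, 15.90, 16.15, 15.86, 16.05, 14862428.0)],
             dtype=[('date', 'f8'), ('open', 'f8'), ('high', 'f8'),
                    ('low', 'f8'), ('close', 'f8'), ('volume', 'f8')])

b = np.array([list(i) for i in a])  # regular 2-D float ndarray
print(b.shape)       # (3, 6)
print(b[1:3, 2:3])   # 2-D slicing now works
```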
|
python|numpy|vector
| 0
|
2,757
| 44,802,939
|
Hyperparameter Tuning of Tensorflow Model
|
<p>I've used Scikit-learn's GridSearchCV before to optimize the hyperparameters of my models, but just wondering if a similar tool exists to optimize hyperparameters for Tensorflow (for instance <strong>number of epochs, learning rate, sliding window size etc.</strong>)</p>
<p>And if not, how can I implement a snippet that effectively runs all different combinations?</p>
|
<p>Even though it does not seem to be explicitly documented (in version 1.2), the package <a href="https://www.tensorflow.org/get_started/tflearn" rel="noreferrer"><code>tf.contrib.learn</code></a> (included in TensorFlow) defines classifiers that are supposed to be compatible with scikit-learn... However, looking at <a href="https://github.com/tensorflow/tensorflow/blob/r1.2/tensorflow/contrib/learn/python/learn/estimators/_sklearn.py" rel="noreferrer">the source</a>, it seems you need to explicitly set the environment variable <code>TENSORFLOW_SKLEARN</code> (e.g. to <code>"1"</code>) to actually get this compatibility. If this works, you can already use <code>GridSearchCV</code> (<a href="https://github.com/tensorflow/tensorflow/blob/r1.2/tensorflow/contrib/learn/python/learn/grid_search_test.py" rel="noreferrer">see this test case</a>).</p>
<p>That said, there are a few alternatives. I don't know about any specific to TensorFlow, but <a href="https://hyperopt.github.io/hyperopt/" rel="noreferrer">hyperopt</a>, <a href="https://scikit-optimize.github.io/" rel="noreferrer">Scikit-Optimize</a> or <a href="https://automl.github.io/SMAC3/" rel="noreferrer">SMAC3</a> should all be valid options. <a href="https://github.com/Yelp/MOE" rel="noreferrer">MOE</a> and <a href="https://github.com/HIPS/Spearmint" rel="noreferrer">Spearmint</a> look like used to be good choices but now don't seem too maintained.</p>
<p>Alternatively, you can look into a service like <a href="https://sigopt.com/" rel="noreferrer">SigOpt</a> (a company by the original author of MOE).</p>
<p><em>Edit</em></p>
<p>About running all possible combinations of parameters, the core logic, if you want to implement it yourself, is not really complicated. You can just define lists with the possible values for each parameter and then run through all the combinations with <a href="https://docs.python.org/3/library/itertools.html#itertools.product" rel="noreferrer"><code>itertools.product</code></a>. Something like:</p>
<pre><code>from itertools import product
param1_values = [...]
param2_values = [...]
param3_values = [...]
for param1, param2, param3 in product(param1_values, param2_values, param3_values):
run_experiment(param1, param2, param3)
</code></pre>
<p>Note however that grid search can be prohibitively expensive to run in many cases, and even doing just a random search in the parameters space will probably be more efficient (more about that <a href="https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf" rel="noreferrer">in this publication</a>).</p>
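<p>Building on that last point, a random search over the same grid is a small change: sample a fixed budget of combinations instead of iterating all of them. The parameter names and values below are placeholders:</p>

```python
import random
from itertools import product

param_grid = {'learning_rate': [1e-2, 1e-3, 1e-4],
              'epochs': [10, 50],
              'window_size': [5, 10, 20]}

keys = list(param_grid)
all_combos = list(product(*param_grid.values()))  # 3 * 2 * 3 = 18 combinations

random.seed(0)
for combo in random.sample(all_combos, k=5):  # evaluate only 5 of them
    params = dict(zip(keys, combo))
    print(params)  # run_experiment(**params) in real code
```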
|
python|tensorflow|convolution|hyperparameters
| 20
|
2,758
| 60,923,428
|
pandas csv reader produces wrong result
|
<p>I have a python script that produces a wrong date format.</p>
<pre><code>import csv
import urllib
import requests
import numpy as np
from urllib.request import urlopen
from matplotlib.dates import DateFormatter
import matplotlib.pyplot as plt
import pandas as pd
import io
link = 'https://health-infobase.canada.ca/src/data/covidLive/covid19.csv'
s = requests.get(link).content
coviddata = pd.read_csv(io.StringIO(s.decode('utf-8')),
parse_dates=['date'],
index_col= ['date'],
na_values=['999.99'])
prinput = 'Quebec'
ispr = coviddata['prname'] == prinput
covidpr = coviddata[ispr]
print(covidpr)
</code></pre>
<p>The data it produces seems to garble up dates as shown below.</p>
<pre><code> pruid prname prnameFR ... numtotal numtoday numtested
</code></pre>
<p>date ...
<strong>2020-01-03 24 Quebec Québec ... 1 1 NaN
2020-03-03 24 Quebec Québec ... 1 0 NaN
2020-05-03 24 Quebec Québec ... 2 1 NaN
2020-06-03 24 Quebec Québec ... 2 0 NaN
2020-07-03 24 Quebec Québec ... 2 0 NaN
2020-08-03 24 Quebec Québec ... 3 1 NaN
2020-09-03 24 Quebec Québec ... 4 1 NaN
2020-11-03 24 Quebec Québec ... 7 3 NaN
2020-12-03 24 Quebec Québec ... 13 6 NaN</strong>
2020-03-13 24 Quebec Québec ... 17 4 NaN
2020-03-14 24 Quebec Québec ... 17 0 NaN</p>
<p>By contrast, here is another code snippet that works.</p>
<pre><code>import csv
import urllib
import requests
from urllib.request import urlopen
from matplotlib.dates import DateFormatter
import matplotlib.pyplot as plt
from datetime import datetime
link = 'https://health-infobase.canada.ca/src/data/covidLive/covid19.csv'
text = requests.get(link).text
lines = text.splitlines()
infile = csv.DictReader(lines)
prinput = input("Enter province(EN):")
xvalues=[]
yvalues=[]
for row in infile:
if(row['prname']==prinput):
xvalues.append(row['date'])
yvalues.append(row['numconf'])
print(row['prname'],row['date'],row['numconf'])
</code></pre>
<p>It produces the right dates
<strong>Quebec 01-03-2020 1
Quebec 03-03-2020 1
Quebec 05-03-2020 2
Quebec 06-03-2020 2
Quebec 07-03-2020 2
Quebec 08-03-2020 3
Quebec 09-03-2020 4
Quebec 11-03-2020 7
Quebec 12-03-2020 13</strong>
Quebec 13-03-2020 17
Quebec 14-03-2020 17
Quebec 15-03-2020 24
Quebec 16-03-2020 39
Quebec 17-03-2020 50</p>
<p>What is wrong with the first script?</p>
|
<p>Because you've used the <code>parse_dates</code> parameter, pandas interprets the 'date' column as a datetime object. This can be very useful for plotting data over time, or for resampling your data over given time periods. If you want to restructure the datetime format for printing your dataset, you can use the <code>dt.strftime</code> attribute of the datetime series, i.e.:</p>
<pre><code># Import pandas
import pandas as pd
# Read in dataframe from url
covid_df = pd.read_csv("https://health-infobase.canada.ca/src/data/covidLive/covid19.csv",
parse_dates=['date'], na_values=[999.99])
# Create new column date-str that's the string interpretation of the 'date' column
covid_df['date-str'] = covid_df['date'].dt.strftime("%d-%m-%Y")
# Show the top of the dataframe
covid_df.head()
"""
pruid prname prnameFR date ... numtotal numtoday numtested date-str
0 35 Ontario Ontario 2020-01-31 ... 3 3 NaN 31-01-2020
1 59 British Columbia Colombie-Britannique 2020-01-31 ... 1 1 NaN 31-01-2020
2 1 Canada Canada 2020-01-31 ... 4 4 NaN 31-01-2020
3 35 Ontario Ontario 2020-08-02 ... 3 0 NaN 02-08-2020
4 59 British Columbia Colombie-Britannique 2020-08-02 ... 4 3 NaN 02-08-2020
"""
# Show dtypes and properties of each column of the dataframe
covid_df.info()
"""
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 302 entries, 0 to 301
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 pruid 302 non-null int64
1 prname 302 non-null object
2 prnameFR 302 non-null object
3 date 302 non-null datetime64[ns]
4 numconf 302 non-null int64
5 numprob 302 non-null int64
6 numdeaths 302 non-null int64
7 numtotal 302 non-null int64
8 numtoday 302 non-null int64
9 numtested 0 non-null float64
10 date-str 302 non-null object
dtypes: datetime64[ns](1), float64(1), int64(6), object(3)
memory usage: 26.1+ KB
"""
</code></pre>
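<p>Note also where the garbled ordering in the first script most likely comes from: pandas guesses month-first for ambiguous strings like <code>01-03-2020</code>, so it reads January 3rd instead of March 1st. Passing <code>dayfirst=True</code> to <code>read_csv</code> alongside <code>parse_dates</code> fixes the interpretation. A minimal reproduction:</p>

```python
import io
import pandas as pd

csv_text = "date,numconf\n01-03-2020,1\n12-03-2020,13\n"

# Default parsing treats 01-03-2020 as January 3rd
default = pd.read_csv(io.StringIO(csv_text), parse_dates=['date'])
print(default['date'].iloc[0])  # 2020-01-03 00:00:00

# dayfirst=True treats it as March 1st, matching the source data
fixed = pd.read_csv(io.StringIO(csv_text), parse_dates=['date'], dayfirst=True)
print(fixed['date'].iloc[0])    # 2020-03-01 00:00:00
```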
|
pandas
| 0
|
2,759
| 60,882,663
|
ModuleNotFoundError: No module named 'tensorflow.contrib' with tensorflow=2.0.0
|
<p>I am using TensorFlow version 2.0.0 and Python version 3.7.3. I am trying to import the statement below:</p>
<pre><code>from tensorflow.contrib import rnn
</code></pre>
<p>And it gives the error: Module 'tensorflow' has no attribute 'contrib'. How can I resolve this?</p>
|
<p>From the TensorFlow docs:</p>
<p><a href="https://www.tensorflow.org/guide/upgrade#compatibility_modules" rel="nofollow noreferrer">https://www.tensorflow.org/guide/upgrade#compatibility_modules</a></p>
<blockquote>
<p>Because of TensorFlow 2.x module deprecations (for example, tf.flags and tf.contrib), some changes can not be worked around by switching to compat.v1. Upgrading this code may require using an additional library (for example, absl.flags) or switching to a package in tensorflow/addons.</p>
</blockquote>
<p>And as described in this thread:</p>
<p><code>tensorflow.contrib doesn't exist in 2.0.</code></p>
<p><a href="https://github.com/tensorflow/tensorflow/issues/31350#issuecomment-518749548" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/31350#issuecomment-518749548</a></p>
|
python|tensorflow|nlg
| 1
|
2,760
| 42,270,757
|
Pandas maximum length of object type with None values
|
<p>I’ve written a short function to output the maximum values (or for strings, maximum length) for each column in a data frame, with adjustments for various data types.</p>
<pre><code>def maxDFVals(df):
for c in df:
if str(df[c].dtype) in ('datetime64[ns]'):
print('Max datetime of column {}: {}\n'.format(c, df[c].max()))
elif str(df[c].dtype) in ('object', 'string_', 'unicode_'):
df[c].fillna(value='', inplace=True)
print('Max length of column {}: {}\n'.format(c, df[c].map(len).max()))
elif str(df[c].dtype) in ('int64', 'float64'):
print('Max value of column {}: {}\n'.format(c, df[c].max()))
else:
print('Unknown data type for column {}!\n'.format(c))
</code></pre>
<p>It works fine, but I just wanted to check whether there is a better alternative to line 6, using fillna, which I needed in order to deal with None values. Ideally I would just ignore None, but I couldn’t discover a way of using something like skipna=True.</p>
<p>If I really wanted to I guess I could add </p>
<pre><code> df[c].replace([''], [None], inplace=True)
</code></pre>
<p>after line 7 to return the None values, but that is hardly what anyone would call Pythonic…</p>
<p>Does anyone have any better suggestions?</p>
|
<p>Try this:</p>
<pre><code>def maxDFVals(df):
for c in df:
if str(df[c].dtype) in ('datetime64[ns]'):
print('Max datetime of column {}: {}\n'.format(c, df[c].max()))
elif str(df[c].dtype) in ('object', 'string_', 'unicode_'):
print('Max length of column {}: {}\n'.format(c, df[c].dropna().map(len).max()))
elif str(df[c].dtype) in ('int64', 'float64'):
print('Max value of column {}: {}\n'.format(c, df[c].max()))
else:
print('Unknown data type for column {}!\n'.format(c))
</code></pre>
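<p>A small check of the difference, plus a built-in alternative: <code>Series.dropna()</code> skips the missing entries without mutating the column (unlike the in-place <code>fillna</code>), and pandas' own <code>Series.str.len()</code> ignores NaN natively, so either avoids the side effect:</p>

```python
import pandas as pd

s = pd.Series(['abc', None, 'de'])

print(s.dropna().map(len).max())  # 3
print(s.str.len().max())          # 3.0 (NaN entries are skipped)
```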
|
python|pandas|dataframe|fillna
| 2
|
2,761
| 69,769,246
|
How to convert list object type in 3rd dimension of 3D numpy array?
|
<p>A bit of background:
Initially, I had the error <code>ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type list).</code> after attempting to convert <code>my_list</code> into a tensor using <code>tf.convert_to_tensor()</code> .</p>
<p>I have a 3D numpy array <code>my_list</code> with the following properties:</p>
<p><a href="https://i.stack.imgur.com/6GeMJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6GeMJ.png" alt="enter image description here" /></a></p>
<p>As you can see in run [321] the 3rd dimension is a type <code>list</code>. I would like to convert it into a <code>numpy.ndarray</code> type too. Thank you!</p>
|
<p>Make sure:</p>
<ul>
<li>my_list only contains actual numbers, not other objects like a string</li>
<li>All entries of the same hierarchy have the same length, i.e. len(my_list[0]) == len(my_list[1])</li>
</ul>
<p>To convert into a NumPy array (in all dimensions at once):</p>
<pre><code>my_array = np.array(my_list)
</code></pre>
<p>To replace a certain element with an array:</p>
<pre><code>my_array[0][0] = np.array(my_array[0][0])
</code></pre>
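<p>A quick sanity check of the conversion (the shapes here are made up): with equal-length nested lists, <code>np.array</code> produces ndarrays at every level. Ragged lengths would instead yield an object-dtype array, which is what triggers the original tensor-conversion error.</p>

```python
import numpy as np

my_list = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]  # equal lengths at each level
my_array = np.array(my_list)

print(my_array.shape)          # (2, 2, 2)
print(type(my_array[0][0]))    # <class 'numpy.ndarray'>
```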
|
python|numpy|tensorflow|numpy-ndarray
| 0
|
2,762
| 43,213,708
|
What is the "gate_gradients" attribute in the TensorFlow minimize() function in the Optimizer class?
|
<p>This is the link to TF optimizer class <strong><a href="https://www.tensorflow.org/versions/r0.12/api_docs/python/train/optimizers" rel="noreferrer">https://www.tensorflow.org/versions/r0.12/api_docs/python/train/optimizers</a></strong></p>
|
<p>GATE_NONE: Take the simple case of a matmul op on two vectors 'x' and 'y', and let the output be L. The gradient of L wrt x is y, and the gradient of L wrt y is xT (x transpose). With GATE_NONE it could happen that the gradient wrt x is applied to modify x before the gradient for y is even calculated. When the gradient wrt y is then calculated, it would be computed from the modified x, which is an error. Of course it won't happen in such a simple case, but you could imagine it happening in more complex/extreme cases.</p>
<p>GATE_OP: For each Op, make sure all gradients are computed before they are used. This prevents race conditions for Ops that generate gradients for multiple inputs where the gradients depend on the inputs. (You could see how this prevents the problem of GATE_NONE, though at the price of some parallelism).</p>
<p>GATE_GRAPH: Make sure all gradients for all variables are computed before any one of them is used. This provides the least parallelism but can be useful if you want to process all gradients before applying any of them.(an example of use case is clipping gradients according to global norm before applying them)</p>
|
tensorflow|deep-learning
| 15
|
2,763
| 43,439,206
|
Importing modules for visual studio
|
<p>Right now i'm trying to create an array like this: </p>
<pre><code>import NumPy as np
V = np.array([3, 9, 7, 7, 7, 1, 5, 5, 5, 5])
print V
</code></pre>
<p>but the terminal is saying "Unable to import NumPy" and "Missing module docstring"</p>
<p>I am fairly new to coding on visual studio and I don't often use Stackoverflow either... Basically I'm not a pro at coding. I've googled for solutions but I'm still confused. Apologies for the simple question. </p>
<p>I know you can download python tools on a github website but I don't know how that will link to my VS. I tried finding out the version I have and I think it is Version 1.10.2 (1.10.2).</p>
<p>Thanks!!!</p>
|
<p>First:
Make sure that numpy is actually installed. Try this outside VS:</p>
<pre><code>import numpy
</code></pre>
<p>You will get a ModuleNotFoundError if it isn't installed.</p>
<p>Second: Update your environment database in Visual Studio.</p>
<p>Third: Make sure the environment settings are correct.</p>
<p>Fourth: If the previous steps don't work, you may delete your current environment,
define a new one, and create a new project.</p>
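<p>Independently of the environment setup, the snippet in the question has two likely problems: the module name is lowercase (<code>import NumPy</code> will generally fail with ModuleNotFoundError, since Python imports are case-sensitive), and <code>print V</code> is Python 2 syntax. A corrected version:</p>

```python
import numpy as np  # lowercase module name

V = np.array([3, 9, 7, 7, 7, 1, 5, 5, 5, 5])
print(V)  # Python 3 requires print(...) with parentheses
```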
|
python|arrays|numpy
| 1
|
2,764
| 43,436,541
|
Reading from csv to pandas, chardet and error bad lines options do not work in my case
|
<p>I checked similar questions before writing here; I also tried try/except (where try does nothing and except prints the bad line) but couldn't solve my issue. So currently I have:</p>
<pre><code>import csv
import pandas as pd
import chardet
# Read the file
with open("full_data.csv", 'rb') as f:
result = chardet.detect(f.read()) # or readline if the file is large
df1 = pd.read_csv("full_data.csv", sep=';',
encoding=result['encoding'], error_bad_lines=False, low_memory=False, quoting=csv.QUOTE_NONE)
</code></pre>
<p>But I still get the error:</p>
<pre><code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xba in position 9: invalid start byte
</code></pre>
<p>Is there any option similar to error = 'replace' in open csv ? Or any other solutions</p>
|
<p>Using the engine option solves my problem:</p>
<pre><code>df1 = pd.read_csv("full_data.csv", sep=";", engine="python")
</code></pre>
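<p>If you would rather keep the faster C engine, another option is an encoding that can never fail to decode, such as latin-1, which maps every possible byte to a character. The byte 0xba below mirrors the one in the error message:</p>

```python
import io
import pandas as pd

raw = b"a;b\n1;caf\xba\n"  # 0xba is not a valid UTF-8 start byte

# latin-1 decodes any byte sequence, so reading cannot raise UnicodeDecodeError
df1 = pd.read_csv(io.BytesIO(raw), sep=';', encoding='latin-1')
print(df1.shape)  # (1, 2)
```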
|
python|csv|pandas|unicode
| 1
|
2,765
| 43,123,334
|
Multidimensional Convolution in python
|
<p>I'm trying to convolve a 3-dimensional array of values of shape (10, 100, 100) with a gaussian of shape (10, 100, 100). When I use the convolve function I get a ValueError. </p>
<pre><code>from numpy import sqrt, exp, pi, mean, std

def gaussian(x, mu, sigma):
g = (1./ (sigma * sqrt(2*pi))) * exp(-(x - mu)**2 / sigma**2)
return g
gauss_I = gaussian( values, mean(values), std(values) )
import numpy as np
np.convolve( values, gauss_I)
convolve(values, gauss_I)
</code></pre>
<p>Traceback (most recent call last):</p>
<p>File "", line 1, in
convolve(values, gauss_I)</p>
<pre><code> File "/Users/Me/Applications/anaconda/lib/python3.5/site-packages/numpy/core/numeric.py", line 1013, in convolve
return multiarray.correlate(a, v[::-1], mode)
ValueError: object too deep for desired array
</code></pre>
<p>I've also used the correlate function but that gives me the same error.</p>
|
<p>I have been having the same problem for some time. As already mentioned in the comments the function <code>np.convolve</code> supports only 1-dimensional convolution. One alternative I found is the scipy function <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.fftconvolve.html#scipy.signal.fftconvolve" rel="nofollow noreferrer"><code>scipy.signal.fftconvolve</code></a> which works for N-dimensional arrays.</p>
<p>For example here I test the convolution for 3D arrays with shape (100,100,100)</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as sps
# create 3D meshgrid and function cos(x+y)
bins = np.linspace(-10,10,100)
x, y, z = np.meshgrid(bins, bins, bins)
cos = np.cos(x+y)
print(cos.shape) # (100, 100, 100)
# plot projection of function on x-y plane
plt.title(r'$\cos(x+y)$')
plt.contourf(x[:,:,0], y[:,:,0], np.sum(cos,axis=2))
plt.colorbar()
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.show()
# perform convolution of function with itself
conv = sps.fftconvolve(cos, cos, mode='same')
print(conv.shape) # (100, 100, 100)
# plot projection of convolution on x-y plane
plt.title('numerical convolution')
plt.contourf(x[:,:,0], y[:,:,0], np.sum(conv,axis=2))
plt.colorbar()
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.show()
</code></pre>
<p>I obtain the following images:</p>
<p><a href="https://i.stack.imgur.com/K8doW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K8doW.png" alt="cos"></a></p>
<p><a href="https://i.stack.imgur.com/zFXPP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zFXPP.png" alt="convolution"></a></p>
<p>Hope it is helpful!</p>
|
python|numpy|math|multidimensional-array|signals
| 1
|
2,766
| 72,485,457
|
How to expand a dataframe with a tree structure
|
<p>I have a data frame, which has an embedded tree structure inside it, like this:</p>
<p>df1</p>
<pre><code> Category Type Dependent-Category
0 1 ~ A
1 1 ~ B
2 1 ~ C
3 1 ~ 14
4 1 ~ D
5 1 P NaN
6 A ~ C
7 A ~ D
8 A ~ 3
9 A P NaN
10 B ~ D
11 B ~ C
12 B ~ 12
13 B P NaN
14 C ~ D
15 C ~ 9
16 C P NaN
17 D ~ 12
18 D ~ 3
19 D ~ 8
20 D P NaN
</code></pre>
<p>Category D rows are made out of numerical categories only, which are defined as:</p>
<pre><code>D ~ 12
D ~ 3
D ~ 8
D P NaN
</code></pre>
<p>Category C rows are made out of a numerical Category and the previous Category D, which is defined as:</p>
<pre><code>C ~ D
C ~ 9
C P NaN
</code></pre>
<p>Category B are made out of a numerical Category and the previous 2 Categories D and C, which is defined as:</p>
<pre><code>B ~ D
B ~ C
B ~ 12
B P NaN
</code></pre>
<p>Category A are made out of a numerical Category and the previous 2 Categories C and D, which is defined as:</p>
<pre><code>A ~ C
A ~ D
A ~ 3
A P NaN
</code></pre>
<p>Category 1 are made out of a numerical Category and the previous 4 Categories A,B,C and D, which is defined as:</p>
<pre><code>1 ~ A
1 ~ B
1 ~ C
1 ~ 14
1 ~ D
1 P NaN
</code></pre>
<p>The aim is to get a final dataframe which is a total concatenation of all the Categories - for example, when <code>Category</code> A is mentioned in the <code>Dependent-Category</code> column in df1, I need to replace that entire row with the rows of <code>Category</code> A.</p>
<p>The aim is to make the <code>Dependent-Category</code> column only consist of numbers (and NaNs):</p>
|
<p>A possible solution is to separate out the different categories into their own dataframes, merge them sequentially with the original dataframe, and then apply conditional replacement with <code>np.where</code>.</p>
<p>Here is my solution doing the same:</p>
<pre><code>import numpy as np
import pandas as pd

d = {
"Category": [1, 1, 1, 1, "A", "A", "A", "B", "B"],
"Type": ["~", "~", "~", "P", "~", "~", "P", "~", "P"],
"D_C": ["B", "A", 14, "missing", 4, "B", "missing", 12, "missing"],
}
df = pd.DataFrame(d)
df_A = df.loc[df.Category == "A"]
df_B = df.loc[df.Category == "B"]
df1 = df.merge(df_A, left_on=df.D_C, right_on=df_A.Category, how="outer")
df1_A = pd.DataFrame()
df1_A["Category"] = np.where(
df1.D_C_x == df1.Category_y, df1.Category_y, df1.Category_x
)
df1_A["Type"] = np.where(df1.D_C_x == df1.Category_y, df1.Type_y, df1.Type_x)
df1_A["D_C"] = np.where(df1.D_C_x == df1.Category_y, df1.D_C_y, df1.D_C_x)
df2 = df1_A.merge(df_B, left_on=df1_A.D_C, right_on=df_B.Category, how="outer")
df_final = pd.DataFrame()
df_final["Category"] = np.where(
df2.D_C_x == df2.Category_y, df2.Category_y, df2.Category_x
)
df_final["Type"] = np.where(df2.D_C_x == df2.Category_y, df2.Type_y, df2.Type_x)
df_final["D_C"] = np.where(df2.D_C_x == df2.Category_y, df2.D_C_y, df2.D_C_x)
print(df_final)
</code></pre>
<p>Let me know if you have questions around it.</p>
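<p>If more categories are added later, chaining one merge per category gets tedious. Below is a hedged sketch of a recursive alternative on a reduced sample (the <code>expand</code> helper and the sample frame are my own, and it assumes the dependency references contain no cycles):</p>

```python
import pandas as pd

# reduced sample in the same Category / Type / D_C shape as above
d = {
    "Category": ["C", "C", "C", "D", "D", "D"],
    "Type": ["~", "~", "P", "~", "~", "P"],
    "D_C": ["D", 9, "missing", 12, 3, "missing"],
}
df = pd.DataFrame(d)

def expand(category, df):
    """Recursively inline the rows of any referenced category."""
    rows = []
    for _, row in df[df.Category == category].iterrows():
        if row["D_C"] in df.Category.values:   # reference to another category
            rows.extend(expand(row["D_C"], df))
        else:                                  # numeric leaf (or placeholder)
            rows.append(row)
    return rows

expanded = pd.DataFrame(expand("C", df)).reset_index(drop=True)
print(expanded["D_C"].tolist())  # [12, 3, 'missing', 9, 'missing']
```

<p>With this, df1 from the question could be flattened category by category without writing one merge block per level.</p>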
|
python|pandas|dataframe|tree
| 2
|
2,767
| 72,251,535
|
A bug in the code working with excel data
|
<p>I have an Excel file like this, and I want the numbers in the date field to be converted to dates like 2021.7.22 and written back into the date field using Python.</p>
<p><a href="https://i.stack.imgur.com/O5Vmp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O5Vmp.png" alt="enter image description here" /></a></p>
<p>A friend sent me a code that almost answered me, but there is still a bug in the code.</p>
<p>This is the code I used</p>
<pre><code>import pandas as pd
dfs = pd.read_excel('apal.xlsx', sheet_name=None)
output = {}
for ws, df in dfs.items():
if 'date' in df.columns:
df['date'] = df['date'].apply(lambda x: f'{str(x)[:4]}.'
f'{str(x)[4:6 if len(str(x)) > 7 else 5]}.{str(x)[-2:]}')
output[ws] = df
writer = pd.ExcelWriter('TestOutput.xlsx')
for ws, df in output.items():
df.to_excel(writer, index=None, sheet_name=ws)
writer.save()
writer.close()
</code></pre>
<p>But the output has a bug: in some rows, the month digits are duplicated next to the day digits.</p>
<p><a href="https://i.stack.imgur.com/QAnGH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QAnGH.png" alt="enter image description here" /></a></p>
<p>For example 2021.3.32; in fact, such a date did not exist in my original data at all.</p>
|
<p>You need to resolve the ambiguity for dates like <code>2021111</code>. As a first step, you can use <code>pd.to_datetime</code>:</p>
<pre><code>df['date2'] = pd.to_datetime(df['date'], format='%Y%m%d').dt.strftime('%Y.%-m.%-d')
print(df)
# Output
date date2
0 2021227 2021.2.27
1 2021228 2021.2.28
2 202131 2021.3.1
3 202132 2021.3.2
4 202133 2021.3.3
5 202136 2021.3.6
6 202137 2021.3.7
7 202138 2021.3.8
8 202139 2021.3.9
9 2021310 2021.3.10
10 2021313 2021.3.13
11 2021314 2021.3.14
12 2021315 2021.3.15
13 2021111 2021.11.1 # <- default interpretation of 2021111
</code></pre>
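<p>Note that some values are genuinely ambiguous rather than wrong: a 7-digit number like <code>2021111</code> parses as either 2021-01-11 or 2021-11-01, and <code>pd.to_datetime</code> silently picks one. A small sketch (on made-up sample values) that flags the entries where both readings are valid dates, so they can be reviewed by hand:</p>

```python
import pandas as pd

raw = pd.Series([2021227, 202131, 2021111, 2021310]).astype(str)  # sample values

def is_valid(s):
    """True if s parses as a valid YYYYMMDD date."""
    try:
        pd.to_datetime(s, format="%Y%m%d")
        return True
    except ValueError:
        return False

# A 7-digit value is ambiguous when both month/day splits are valid dates,
# e.g. 2021111 -> 2021-01-11 (pad the month) or 2021-11-01 (pad the day).
ambiguous = raw.apply(
    lambda s: len(s) == 7
    and is_valid(s[:4] + "0" + s[4:])  # single-digit month reading
    and is_valid(s[:6] + "0" + s[6])   # single-digit day reading
)
print(ambiguous.tolist())  # [False, False, True, False]
```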
|
python|pandas
| 1
|
2,768
| 72,417,073
|
Equivalent of Partial Matching XLOOKUP in Python
|
<p>The following code will tell me if there are partial matches (via the True values in the final column):</p>
<pre><code>import pandas as pd
x = {'Non-Suffix' : ['1234567', '1234568', '1234569', '1234554'], 'Suffix' : ['1234567:C', '1234568:VXCF', '1234569-01', '1234554-01:XC']}
x = pd.DataFrame(x)
x['"Non-Suffix" Partial Match in "Suffix"?'] = x.apply(lambda row: row['Non-Suffix'] in row['Suffix'], axis=1)
x
</code></pre>
<p><a href="https://i.stack.imgur.com/6DkdH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6DkdH.png" alt="enter image description here" /></a></p>
<p>However, if I re-arrange the values in the second column, I'll get False values:</p>
<pre><code>x = {'Non-Suffix' : ['1234567', '1234568', '1234569', '1234554'], 'Suffix' : ['1234568:VXCF', '1234567:C', '1234554-01:XC', '1234569-01']}
x = pd.DataFrame(x)
x['"Non-Suffix" Partial Match in "Suffix"?'] = x.apply(lambda row: row['Non-Suffix'] in row['Suffix'], axis=1)
x
</code></pre>
<p><a href="https://i.stack.imgur.com/nSdp4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nSdp4.png" alt="enter image description here" /></a></p>
<p>Is there a way I can get the second block of code to find these partial matches even if they're not in the same row?</p>
<p>Also, instead of 'True/False' values, is there a way for me to have the value of 'Partial Match Exists!' instead of True, and 'Partial Match Does Not Exist!' instead of False?</p>
|
<p>You can join the <code>Non-Suffix</code> column values with <code>|</code> and then use <code>Series.str.contains</code> to check whether <code>Suffix</code> contains any of them:</p>
<pre class="lang-py prettyprint-override"><code>x['"Non-Suffix" Partial Match in "Suffix"?'] = x['Suffix'].str.contains('|'.join(x['Non-Suffix']))
</code></pre>
<pre><code>print(x)
Non-Suffix Suffix "Non-Suffix" Partial Match in "Suffix"?
0 1234567 1234568:VXCF True
1 1234568 1234567:C True
2 1234569 1234554-01:XC True
3 1234554 1234569-01 True
</code></pre>
<p>Above solution checks if <code>Suffix</code> contains any of <code>Non-Suffix</code>, if you want to do the reverse, you might do</p>
<pre class="lang-py prettyprint-override"><code>x['"Non-Suffix" Partial Match in "Suffix"?'] = x['Non-Suffix'].apply(lambda v: x['Suffix'].str.contains(v).any())
</code></pre>
<pre><code>print(x)
Non-Suffix Suffix "Non-Suffix" Partial Match in "Suffix"?
0 879 1234568:VXCF False
1 1234568 1234567:C True
2 1234569 1234554-01:XC True
3 1234554 1234569-01 True
</code></pre>
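<p>For the second part of the question (custom labels instead of True/False), <code>Series.map</code> can be layered on top of either approach above. A sketch, with a made-up non-matching row added so both labels appear:</p>

```python
import pandas as pd

x = pd.DataFrame({
    "Non-Suffix": ["1234567", "1234568", "1234569", "1234554"],
    "Suffix": ["1234568:VXCF", "1234567:C", "1234554-01:XC", "879-01"],  # last row: no match
})

matched = x["Suffix"].str.contains("|".join(x["Non-Suffix"]))
# map the booleans to the requested labels
x["Result"] = matched.map({
    True: "Partial Match Exists!",
    False: "Partial Match Does Not Exist!",
})
print(x["Result"].tolist())
# ['Partial Match Exists!', 'Partial Match Exists!', 'Partial Match Exists!', 'Partial Match Does Not Exist!']
```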
|
python|python-3.x|pandas|dataframe
| 1
|
2,769
| 50,284,366
|
How do I give a text key to a dataframe stored as a value in a dictionary?
|
<p>So I have 3 dataframes - df1,df2.df3. I'm trying to loop through each dataframe so that I can run some preprocessing - set date time, extract hour to a separate column etc. However, I'm running into some issues:</p>
<p>If I store the df in a dict as in <code>df_dict = {'df1' : df1, 'df2' : df2, 'df3' : df3}</code> and then loop through it as in</p>
<pre><code>for k, v in df_dict.items():
if k == 'df1':
v['Col1']....
else:
v['Coln']....
</code></pre>
<p>I get a <code>NameError: name 'df1' is not defined</code></p>
<p>What am I doing wrong? I initially thought I was not reading in the df1..3 data, but that seems to operate ok (as in it doesn't fail, and it's clearly reading the files in given the time lag; they are big files). The code preceding it (for load) is:</p>
<pre><code>DF_DATA = { 'df1': 'df1.csv','df2': 'df2.csv', 'df3': 'df3.csv' }
for k,v in DF_DATA.items():
print(k, v) #this works to print out both key and value
k = pd.read_csv(v) #this does not
</code></pre>
<p>I am thinking this may be the cause, but I'm not sure. I'm expecting the load loop to create the 3 dataframes and put them into memory. Then for the loop on the top of the page, I want to reference the string key in my if block condition so that each df can get a slightly different preprocessing treatment.</p>
<p>Thanks very much in advance for your assist.</p>
|
<p>You never actually created <code>df_dict</code>: assigning to the loop variable <code>k</code> just rebinds that local name on each iteration; it does not create variables named <code>df1</code>..<code>df3</code> or populate a dictionary. Build the dictionary directly:</p>
<pre><code>DF_DATA = { 'df1': 'df1.csv','df2': 'df2.csv', 'df3': 'df3.csv' }
df_dict= {k:pd.read_csv(v) for k,v in DF_DATA.items()}
</code></pre>
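<p>For completeness, here is a self-contained sketch (using <code>StringIO</code> stand-ins for the CSV files, with made-up column names) showing why the original loop failed and how the fixed dictionary feeds the preprocessing loop:</p>

```python
import pandas as pd
from io import StringIO

# StringIO stand-ins for the real CSV files (column names are made up)
DF_DATA = {"df1": StringIO("Col1\n1\n2"), "df2": StringIO("Coln\n3\n4")}

# `k = pd.read_csv(v)` only rebinds the loop variable each iteration; it
# never creates variables named df1/df2 and never fills a dictionary.
# A dict comprehension stores each frame under its key instead:
df_dict = {k: pd.read_csv(v) for k, v in DF_DATA.items()}

# now the per-frame preprocessing loop from the question works
for k, v in df_dict.items():
    if k == "df1":
        v["Col1"] = v["Col1"] * 10
    else:
        v["Coln"] = v["Coln"] + 1

print(df_dict["df1"]["Col1"].tolist())  # [10, 20]
print(df_dict["df2"]["Coln"].tolist())  # [4, 5]
```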
|
python-3.x|pandas|loops|dictionary|dataframe
| 0
|
2,770
| 50,475,348
|
How to save the model for text-classification in tensorflow?
|
<p>Reading <a href="https://www.tensorflow.org/tutorials/text_classification_with_tf_hub" rel="nofollow noreferrer">tensorflow documentation</a> for text-classification, I have put up a script below that I used to train a model for text classification (positive/negative). I am not sure on one thing. How could I save the model to reuse it later? Also, how can I test for the input test-set I have?</p>
<pre><code>import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
# Load all files from a directory in a DataFrame.
def load_directory_data(directory):
data = {}
data["sentence"] = []
data["sentiment"] = []
for file_path in os.listdir(directory):
with tf.gfile.GFile(os.path.join(directory, file_path), "r") as f:
data["sentence"].append(f.read())
data["sentiment"].append(re.match("\d+_(\d+)\.txt", file_path).group(1))
return pd.DataFrame.from_dict(data)
# Merge positive and negative examples, add a polarity column and shuffle.
def load_dataset(directory):
pos_df = load_directory_data(os.path.join(directory, "pos"))
neg_df = load_directory_data(os.path.join(directory, "neg"))
pos_df["polarity"] = 1
neg_df["polarity"] = 0
return pd.concat([pos_df, neg_df]).sample(frac=1).reset_index(drop=True)
# Download and process the dataset files.
def download_and_load_datasets(force_download=False):
dataset = tf.keras.utils.get_file(
fname="aclImdb.tar.gz",
origin="http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz",
extract=True)
train_df = load_dataset(os.path.join(os.path.dirname(dataset),
"aclImdb", "train"))
test_df = load_dataset(os.path.join(os.path.dirname(dataset),
"aclImdb", "test"))
return train_df, test_df
# Reduce logging output.
tf.logging.set_verbosity(tf.logging.ERROR)
train_df, test_df = download_and_load_datasets()
train_df.head()
# Training input on the whole training set with no limit on training epochs.
train_input_fn = tf.estimator.inputs.pandas_input_fn(
train_df, train_df["polarity"], num_epochs=None, shuffle=True)
# Prediction on the whole training set.
predict_train_input_fn = tf.estimator.inputs.pandas_input_fn(
train_df, train_df["polarity"], shuffle=False)
# Prediction on the test set.
predict_test_input_fn = tf.estimator.inputs.pandas_input_fn(
test_df, test_df["polarity"], shuffle=False)
embedded_text_feature_column = hub.text_embedding_column(
key="sentence",
module_spec="https://tfhub.dev/google/nnlm-en-dim128/1")
estimator = tf.estimator.DNNClassifier(
hidden_units=[500, 100],
feature_columns=[embedded_text_feature_column],
n_classes=2,
optimizer=tf.train.AdagradOptimizer(learning_rate=0.003))
# Training for 1,000 steps means 128,000 training examples with the default
# batch size. This is roughly equivalent to 5 epochs since the training dataset
# contains 25,000 examples.
estimator.train(input_fn=train_input_fn, steps=1000);
train_eval_result = estimator.evaluate(input_fn=predict_train_input_fn)
test_eval_result = estimator.evaluate(input_fn=predict_test_input_fn)
print("Training set accuracy: {accuracy}".format(**train_eval_result))
print("Test set accuracy: {accuracy}".format(**test_eval_result))
</code></pre>
<p>Currently, if I run the above script it retrains the complete model. I want to reuse the model and have it output for some sample texts that I have. How could I do this?</p>
<p><em>I have tried the following to save:</em></p>
<pre><code>sess = tf.Session()
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
saver.save(sess, 'test-model')
</code></pre>
<p>but this throws an error, saying <code>Value Error: No variables to save</code></p>
|
<p>You can train and predict on a saved/loaded Estimator model simply by passing the <code>model_dir</code> parameter to both the Estimator instance and a <code>tf.estimator.RunConfig</code> instance that is passed to the <code>config</code> parameter of pre-made estimators (since about Tensorflow 1.4--still works on Tensorflow 1.12):</p>
<pre><code> model_path = '/path/to/model'
run_config = tf.estimator.RunConfig(model_dir=model_path,
tf_random_seed=72, #Default=None
save_summary_steps=100,
# save_checkpoints_steps=_USE_DEFAULT, #Default=1000
# save_checkpoints_secs=_USE_DEFAULT, #Default=60
session_config=None,
keep_checkpoint_max=12, #Default=5
keep_checkpoint_every_n_hours=10000,
log_step_count_steps=100,
train_distribute=None,
device_fn=None,
protocol=None,
eval_distribute=None,
experimental_distribute=None)
classifier = tf.estimator.DNNLinearCombinedClassifier(
config=run_config,
model_dir=model_path,
...
)
</code></pre>
<p>You'll then be able to call <code>classifier.train()</code> and <code>classifier.predict()</code>, re-run the script skipping the <code>classifier.train()</code> call, and receive the same results after again calling <code>classifier.predict()</code>.</p>
<p>This works using a <code>hub.text_embedding_column</code> feature column, and when using <code>categorical_column_with_identity</code> and <code>embedding_column</code> feature columns with a manually saved/restored <code>VocabularyProcessor</code> dictionary.</p>
|
python|python-3.x|tensorflow|machine-learning
| 0
|
2,771
| 50,668,168
|
Vectorized implementation for Euclidean distance
|
<p>I am trying to compute a vectorized implementation of Euclidean distance(between each element in X and Y using inner product). The data as follows:</p>
<pre><code>X = np.random.uniform(low=0, high=1, size=(10000, 5))
Y = np.random.uniform(low=0, high=1, size=(10000, 5))
</code></pre>
<p>What I did was:</p>
<pre><code>euclidean_distances_vectorized = np.array(np.sqrt(np.sum(X**2, axis=1) - 2 * np.dot(X, Y.T) + np.sum(Y**2, axis=1)))
</code></pre>
<p>Although this gives 'some output' the answer is wrong as each row still contains 5 elements. </p>
<p>Does anyone know what I am doing wrong?</p>
|
<p>If I understood correctly this should do</p>
<pre><code>np.linalg.norm(X - Y, axis=1)
</code></pre>
<p>Or with <code>einsum</code> (square root of the dot product of each difference pair along the first axis)</p>
<pre><code>np.sqrt(np.einsum('ij,ij->i', X - Y, X - Y))
</code></pre>
<p>If you want all pairwise distances</p>
<pre><code>from scipy.spatial.distance import cdist
cdist(X, Y)
</code></pre>
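<p>If the goal really was the full pairwise distance matrix via the inner-product identity, as in the question, the original attempt fails only because the first sum term lacks an axis for broadcasting. A sketch of the corrected version, cross-checked against explicit broadcasting:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(50, 5))
Y = rng.uniform(size=(60, 5))

# The row norms of X need an extra axis ([:, None]) so the three terms
# broadcast to an (n_x, n_y) matrix instead of colliding on the feature axis.
d2 = np.sum(X**2, axis=1)[:, None] - 2 * X @ Y.T + np.sum(Y**2, axis=1)
dists = np.sqrt(np.maximum(d2, 0))  # clip tiny negatives from round-off

# cross-check against explicit broadcasting
expected = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
assert np.allclose(dists, expected)
print(dists.shape)  # (50, 60)
```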
|
python|numpy|vectorization|euclidean-distance
| 5
|
2,772
| 45,281,597
|
Counting number of consecutive zeros in a Dataframe
|
<p>I want to count the number of consecutive zeros in each row of my DataFrame, shown below. Help please!</p>
<pre class="lang-html prettyprint-override"><code> DEC JAN FEB MARCH APRIL MAY consecutive zeros
0 X X X 1 0 1 0
1 X X X 1 0 1 0
2 0 0 1 0 0 1 2
3 1 0 0 0 1 1 3
4 0 0 0 0 0 1 5
5 X 1 1 0 0 0 3
6 1 0 0 1 0 0 2
7 0 0 0 0 1 0 4
</code></pre>
|
<p>For each row, you want <code>cumsum(1-row)</code> with a reset at every point where <code>row == 1</code>. Then you take the row max.</p>
<p>For example</p>
<pre><code>ts = pd.Series([0,0,0,0,1,1,0,0,1,1,1,0])
ts2 = 1-ts
tsgroup = ts.cumsum()
consec_0 = ts2.groupby(tsgroup).transform(pd.Series.cumsum)
consec_0.max()
</code></pre>
<p>will give you 4 as needed.</p>
<p>Write that in a function and apply to your dataframe</p>
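<p>That last step might look something like the sketch below; the helper name and the skipping of non-numeric placeholders like <code>X</code> are my own assumptions about the data:</p>

```python
import pandas as pd

def max_consec_zeros(row):
    """Longest run of zeros in a row; non-numeric placeholders are skipped."""
    ts = pd.Series([v for v in row if v in (0, 1, "0", "1")]).astype(int)
    ts2 = 1 - ts
    tsgroup = ts.cumsum()  # increments at every 1, so each zero-run shares a group
    return int(ts2.groupby(tsgroup).cumsum().max())

df = pd.DataFrame(
    [[0, 0, 1, 0, 0, 1],
     [1, 0, 0, 0, 1, 1],
     [0, 0, 0, 0, 0, 1],
     ["X", 1, 1, 0, 0, 0]],
    columns=["DEC", "JAN", "FEB", "MARCH", "APRIL", "MAY"],
)
df["consecutive zeros"] = df.apply(max_consec_zeros, axis=1)
print(df["consecutive zeros"].tolist())  # [2, 3, 5, 3]
```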
|
python|python-2.7|pandas|numpy
| 1
|
2,773
| 62,506,567
|
python random data generator expected str instance, numpy.datetime64 found
|
<p>Hello, I have been trying to write random data with random dates into a CSV file, but I am getting the following error: <code>expected str instance, numpy.datetime64 found</code></p>
<p>code for data generator</p>
<pre><code>import pandas as pd
import numpy as np
import string
import random
def gen_random_email():
domains = [ "hotmail.com", "gmail.com", "aol.com", "mail.com" , "mail.kz", "yahoo.com"]
letters = string.ascii_letters +'.'*5
email = ''.join(np.random.choice(list(letters),10))+'@'+ np.random.choice(domains)
email = email.replace('.@', '@')
return email, "Email"
def gen_random_float():
num = np.random.random()*np.random.randint(2000)
decimal_points = np.random.randint(8)
num = int(num*10**(decimal_points))/10**decimal_points
return str(num), 'Float'
def gen_random_sentence():
nouns = ["puppy", "car", "rabbit", "girl", "monkey"]
verbs = ["runs", "hits", "jumps", "drives", "barfs"]
adv = ["crazily", "dutifully", "foolishly", "merrily", "occasionally"]
adj = ["adorable.", "clueless.", "dirty.", "odd.", "stupid."]
random_entry = lambda x: x[random.randrange(len(x))]
random_entry = " ".join([random_entry(nouns), random_entry(verbs),
random_entry(adv), random_entry(adj)])
return random_entry, 'String'
def gen_random_int():
num = np.random.randint(1000000)
return str(num), 'Int'
def gen_random_date():
monthly_days = np.arange(0, 30)
base_date = np.datetime64('2020-01-01')
random_date = base_date + np.random.choice(monthly_days)
return random_date, 'Date'
def gen_dataset(filename, size=5000):
randomizers = [gen_random_email, gen_random_float, gen_random_int, gen_random_sentence,gen_random_date]
with open(filename, 'w') as file:
file.write("Text, Type\n")
for _ in range(size):
file.write(",".join(random.choice(randomizers)())+"\n")
gen_dataset('dataaaa.csv')
</code></pre>
<pre><code>TypeError: sequence item 0: expected str instance, numpy.datetime64 found
</code></pre>
|
<p>First, catch the error and see what is causing it.</p>
<pre><code>def gen_dataset(filename, size=5000):
randomizers = [gen_random_email, gen_random_float, gen_random_int, gen_random_sentence,gen_random_date]
with open(filename, 'w') as file:
file.write("Text, Type\n")
for _ in range(size):
f = random.choice(randomizers)
result = f()
try:
file.write(",".join(result)+"\n")
except TypeError:
print(result)
raise
</code></pre>
<hr />
<pre><code>>>>
(numpy.datetime64('2020-01-09'), 'Date')
Traceback (most recent call last):
File "C:\pyProjects\tmp.py", line 80, in <module>
gen_dataset('dataaaa.csv')
File "C:\pyProjects\tmp.py", line 75, in gen_dataset
file.write(",".join(result)+"\n")
TypeError: sequence item 0: expected str instance, numpy.datetime64 found
</code></pre>
<p>Hmm, I wonder if <code>join</code> only accepts strings as arguments?</p>
<p>Yep, <a href="https://docs.python.org/3/library/stdtypes.html#str.join" rel="nofollow noreferrer">from the docs</a>:</p>
<blockquote>
<p>A TypeError will be raised if there are any non-string values in iterable, including bytes objects.</p>
</blockquote>
<p>I wonder how I can turn a numpy datetime64 to a string. Searching with <code>numpy datetime64 to string</code> is productive: <a href="https://stackoverflow.com/questions/19502506/convert-numpy-datetime64-to-string-object-in-python">Convert numpy.datetime64 to string object in python</a></p>
<p>These work</p>
<pre><code>>>> q = gen_random_date()[0]
>>> q
numpy.datetime64('2020-01-27')
>>> np.datetime_as_string(q)
'2020-01-27'
>>> q.astype(str)
'2020-01-27'
>>>
</code></pre>
<p>Then just modify the <code>try/except</code>.</p>
<pre><code>def gen_dataset(filename, size=5000):
randomizers = [gen_random_email, gen_random_float, gen_random_int, gen_random_sentence,gen_random_date]
with open(filename, 'w') as file:
file.write("Text, Type\n")
for _ in range(size):
f = random.choice(randomizers)
a,b = f()
try:
q = ",".join([a,b,"\n"])
except TypeError:
a = np.datetime_as_string(a)
q = ",".join([a,b,"\n"])
file.write(q)
</code></pre>
<p>Or simply <em>preemptively</em> make the first item a string.</p>
<pre><code>def gen_dataset(filename, size=5000):
randomizers = [gen_random_email, gen_random_float, gen_random_int, gen_random_sentence,gen_random_date]
with open(filename, 'w') as file:
file.write("Text, Type\n")
for _ in range(size):
f = random.choice(randomizers)
a,b = f()
q = ",".join([str(a),b,"\n"])
file.write(q)
</code></pre>
|
python|pandas|numpy|keras
| 0
|
2,774
| 73,733,311
|
How to rank a url in one column using a list of urls in another column in Pandas?
|
<p>My data frame looks something like this with URLs instead of letters:</p>
<p><a href="https://i.stack.imgur.com/p2kzx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p2kzx.png" alt="enter image description here" /></a></p>
<p>.csv code:</p>
<pre><code>query,ranks
a,"[k, g, y, l, a]"
h,"[f, g, l, h, p]"
x,"[b, x, y, a, g]"
w,"[w, I, b, d, g]"
r,"[I, r, n, f, g]"
</code></pre>
<p>I want the outcome to be like this:</p>
<p><a href="https://i.stack.imgur.com/rNglz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rNglz.png" alt="enter image description here" /></a></p>
<p>.csv code:</p>
<pre><code>query,ranks,rank
a,"[k, g, y, l, a]",5
h,"[f, g, l, h, p]",4
x,"[b, x, y, a, g]",2
w,"[w, I, b, d, g]",1
r,"[I, r, n, f, g]",2
</code></pre>
<p>As you can see, each letter (URL) has been ranked according to its position.</p>
<p>Edit: Sometimes the 'ranks' value (dtype: list, of strings) doesn't have the 'query' value.</p>
|
<p>A basic solution with <code>apply</code> and accounting for possibly missing values (I set -1 as default value but you can set whatever you need):</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'query': ['a', 'h', 'x', 'w', 'r'],
'ranks': [['k', 'g', 'y', 'l', 'a'],
['f', 'g', 'l', 'h', 'p'],
['b', 'x', 'y', 'a', 'g'],
['w', 'I', 'b', 'd', 'g'],
['I', 'r', 'n', 'f', 'g']]})
</code></pre>
<pre class="lang-py prettyprint-override"><code>>>> df["rank"] = df.apply(lambda row: next((i for i,rank in enumerate(row.ranks, start=1) if rank == row.query), -1), axis=1)
>>> df
query ranks rank
0 a [k, g, y, l, a] 5
1 h [f, g, l, h, p] 4
2 x [b, x, y, a, g] 2
3 w [w, I, b, d, g] 1
4 r [I, r, n, f, g] 2
</code></pre>
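<p>For larger frames, an <code>apply</code>-free alternative uses <code>explode</code> plus <code>cumcount</code>; this sketch assumes each query value appears at most once per list:</p>

```python
import pandas as pd

df = pd.DataFrame({"query": ["a", "h", "x", "w", "r"],
                   "ranks": [["k", "g", "y", "l", "a"],
                             ["f", "g", "l", "h", "p"],
                             ["b", "x", "y", "a", "g"],
                             ["w", "I", "b", "d", "g"],
                             ["I", "r", "n", "f", "g"]]})

# one row per list element; the original index repeats, so a per-index
# cumcount recovers each element's 1-based position within its list
exploded = df.explode("ranks")
exploded["pos"] = exploded.groupby(level=0).cumcount().to_numpy() + 1
hits = exploded.loc[exploded["ranks"] == exploded["query"], "pos"]
df["rank"] = hits.reindex(df.index, fill_value=-1).astype(int)
print(df["rank"].tolist())  # [5, 4, 2, 1, 2]
```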
|
python|pandas|dataframe
| 2
|
2,775
| 71,273,770
|
How to use .iloc and .loc to get row data from specific columns value
|
<p>I am trying to make a scatter plot of a dataset: the 1st column is +1 or -1, the 2nd column is the X axis, and the 3rd column is the Y axis. I need to plot the positive x and y values in one color, and the negative x and y values in another color.</p>
<pre><code> 1 0.107143 0.60307
0 1 0.093318 0.649854
1 1 0.097926 0.705409
2 1 0.155530 0.784357
3 1 0.210829 0.866228
4 1 0.328341 0.929094
.. .. ... ...
805 -1 0.595622 0.871053
806 -1 0.625576 0.869298
807 -1 0.648618 0.857018
808 -1 0.637097 0.839474
809 -1 0.641705 0.804386
[810 rows x 3 columns]
</code></pre>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
d1 = pd.read_csv('data1.txt', sep='\s+')
y = d1[d1.iloc[:,2].values == -1]
x = d1[d1.iloc[:,1].values == 1]
y1 = d1[d1.iloc[0:,2].values == -1]
x1 = d1[d1.iloc[0:,1].values == 1]
plt.scatter(x1, y1, color='red', marker='o', label='-1')
plt.scatter(x, y, color='blue', marker='x', label='+1')
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc='upper left')
plt.show()
print(y)
print(d1)
</code></pre>
|
<p>If you are ok with a solution which does not use <code>iloc</code> or <code>loc</code>, try this. The only change is in how the positive and negative rows are selected; see the comments inline.</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
df = pd.DataFrame([[1, 0.807143, 0.60307],[-1, 0.207143, 0.50307],[1, 0.177143, 0.80307],[-1, 0.307143, 0.90307],[1, 0.107143, 0.6087]], columns=['val','x','y'])
# Get positive and negative values based on the colum
positive = df[df['val'] == 1]
negative = df[df['val'] == -1]
#your existing code. Update X and Y values
plt.scatter(negative['x'], negative['y'], color='red', marker='o', label='-1')
plt.scatter(positive['x'], positive['y'], color='blue', marker='x', label='+1')
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc='upper left')
plt.show()
</code></pre>
<p><strong>Output:</strong>
<a href="https://i.stack.imgur.com/mHYnD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mHYnD.png" alt="enter image description here" /></a></p>
|
python|pandas|numpy|matlab
| 0
|
2,776
| 71,289,607
|
retrieve stock price data from specific dates in pandas df
|
<p>I have a pandas dataframe that has the earnings dates of certain stocks, the actual and estimated EPS, as well as the actual and estimated revenue. For my sample, I only have 10 tickers with all their earnings dates, but I want to eventually incorporate all Nasdaq tickers. Anyway, what is the fastest way to go through the pandas dataframe, retrieve the specific date and symbol, and pull the stock price for that day (open, high, low, close)? I know how to retrieve stock prices individually from the Yahoo Finance API (i.e., downloading a specific ticker and retrieving stock prices between a start date and an end date), but I'm unsure of how to connect the two. Thanks.</p>
<p>Below is my sample df and what I would like to see...</p>
<pre><code> date symbol eps epsEstimated time revenue revenueEstimated
0 2022-01-27 CMCSA 0.77 0.73 bmo 3.033600e+10 3.046110e+10
1 2021-10-28 CMCSA 0.87 0.75 bmo 3.029800e+10 2.976570e+10
2 2021-07-29 CMCSA 0.84 0.67 bmo 2.854600e+10 2.717460e+10
3 2021-04-29 CMCSA 0.76 0.59 bmo 2.720500e+10 2.680920e+10
4 2021-01-28 CMCSA 0.56 0.48 bmo 2.770800e+10 2.309000e+10
.. ... ... ... ... ... ... ...
34 2013-07-24 FB 0.19 0.14 amc 1.813000e+09 1.335895e+09
35 2013-05-01 FB 0.12 0.13 amc 1.458000e+09 1.579500e+09
36 2013-01-30 FB 0.17 0.15 amc 1.585000e+09 1.398529e+09
37 2012-10-23 FB 0.12 0.11 amc 1.262000e+09 1.156833e+09
38 2012-07-26 FB 0.12 0.12 amc 1.184000e+09 1.184000e+09
</code></pre>
<p>My desired result (but with values under the new columns):</p>
<pre><code> date symbol eps epsEstimated revenue revenueEstimated Open High Low Close
0 2022-01-27 CMCSA 0.77 0.73 3.033600e+10 3.046110e+10
1 2021-10-28 CMCSA 0.87 0.75 3.029800e+10 2.976570e+10
2 2021-07-29 CMCSA 0.84 0.67 2.854600e+10 2.717460e+10
3 2021-04-29 CMCSA 0.76 0.59 2.720500e+10 2.680920e+10
4 2021-01-28 CMCSA 0.56 0.48 2.770800e+10 2.309000e+10
.. ... ... ... ... ... ... ...
34 2013-07-24 FB 0.19 0.14 1.813000e+09 1.335895e+09
35 2013-05-01 FB 0.12 0.13 1.458000e+09 1.579500e+09
36 2013-01-30 FB 0.17 0.15 1.585000e+09 1.398529e+09
37 2012-10-23 FB 0.12 0.11 1.262000e+09 1.156833e+09
38 2012-07-26 FB 0.12 0.12 1.184000e+09 1.184000e+09
</code></pre>
<p><strong>UPDATE EDIT::This is what I have so far...</strong></p>
<p>the earnings df is called data1. I created three columns: Day_0, Day_1 and Day_0_Close. In the time column, the value is either amc or bmo. 'amc' means after market close and 'bmo' means before market open. In order for me to analyze the earnings reaction in the stock price, I need to possibly readjust the dates, which is why I created those new columns. For example, for bmo, since earnings are released before the market opens on the current day, I need yesterday's date and its closing price as Day_0. For amc, I need today's date and closing price as Day_0_Close. Eventually I need to get the next day's prices, but I am keeping it to Day_0_Close for now until I can resolve this issue.</p>
<pre><code>
date symbol eps epsEstimated time revenue revenueEstimated Day_0 Day_1 Day_0_Close
0 2022-01-27 CMCSA 0.770000 0.7300 bmo 3.033600e+10 3.046110e+10 0.0
1 2021-10-28 CMCSA 0.870000 0.7500 bmo 3.029800e+10 2.976570e+10 0.0
2 2021-07-29 CMCSA 0.840000 0.6700 bmo 2.854600e+10 2.717460e+10 0.0
3 2021-04-29 CMCSA 0.760000 0.5900 bmo 2.720500e+10 2.680920e+10 0.0
</code></pre>
<p>I have another df called price1 which has all the stocks price data.</p>
<pre><code> Date Open High ... Adj Close Volume ticker
0 1980-03-17 0.000000 0.101881 ... 0.070243 138396 CMCSA
1 1980-03-18 0.000000 0.101881 ... 0.070243 530518 CMCSA
2 1980-03-19 0.000000 0.100798 ... 0.069462 738113 CMCSA
3 1980-03-20 0.000000 0.108385 ... 0.074925 1360895 CMCSA
4 1980-03-21 0.000000 0.111636 ... 0.077267 461320 CMCSA
... ... ... ... ... ... ... ...
71942 2022-02-18 209.389999 210.750000 ... 206.160004 37049400 FB
71943 2022-02-22 202.339996 207.479996 ... 202.080002 39852400 FB
</code></pre>
<p>I then created a for loop to go through each row in data1, pull the stock ticker and date, and get prices. But now I'm getting an error, "IndexError: index 0 is out of bounds for axis 0 with size 0". It errors out at:</p>
<pre><code>day_0_close = price1.loc[(price1.ticker == symbol) & (price1.Date == date_0), 'Adj Close'].values[0].
</code></pre>
<p>I don't know why it errors out: sometimes the code works, but then it stops several rows in.</p>
<p>See below</p>
<pre><code> date symbol eps epsEstimated time revenue revenueEstimated \
0 2022-01-27 CMCSA 0.77 0.73 bmo 3.033600e+10 3.046110e+10
1 2021-10-28 CMCSA 0.87 0.75 bmo 3.029800e+10 2.976570e+10
2 2021-07-29 CMCSA 0.84 0.67 bmo 2.854600e+10 2.717460e+10
3 2021-04-29 CMCSA 0.76 0.59 bmo 2.720500e+10 2.680920e+10
4 2021-01-28 CMCSA 0.56 0.48 bmo 2.770800e+10 2.309000e+10
Day_0 Day_1 Day_0_Close
0 2022-01-26 2022-01-27 48.459999
1 2021-10-27 2021-10-28 0.000000
2 0.000000
</code></pre>
<p>Here is what i have so far on my for loop</p>
<pre><code>for idx, row in data1.iterrows():
orig_day = pd.to_datetime(row['date'])
temp_day = orig_day + pd.tseries.offsets.CustomBusinessDay(1, holidays=nyse.holidays().holidays)
prev_temp_day = orig_day - pd.tseries.offsets.CustomBusinessDay(1, holidays=nyse.holidays().holidays)
if row['time'] == 'amc':
data1.at[idx, 'Day_0'] = orig_day.strftime("%Y-%m-%d")
data1.at[idx, 'Day_1'] = temp_day.strftime("%Y-%m-%d")
else:
data1.at[idx, 'Day_0'] = prev_temp_day.strftime("%Y-%m-%d")
data1.at[idx, 'Day_1'] = orig_day.strftime("%Y-%m-%d")
symbol = row['symbol']
date_0 = row['Day_0']
date_1 = row['Day_1']
day_0_close = price1.loc[(price1.ticker == symbol) & (price1.Date == date_0), 'Adj Close'].values[0]
print(day_0_close)
data1.at[idx, 'Day_0_Close'] = day_0_close
</code></pre>
<p>Thank you for any help you can give</p>
|
<p>This solution includes the data collection step as well; feel free to use it, or adapt just the merging part of the code.</p>
<p>First, setting up the dataframe to test this solution:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'Date':['2022-01-27','2021-10-28','2021-07-29','2021-04-29','2021-01-28','2013-07-24','2013-05-01','2013-01-30','2012-10-23','2012-07-26'],
'symbol':['CMCSA','CMCSA','CMCSA','CMCSA','CMCSA','FB','FB','FB','FB','FB'],
'eps' :[0.77,0.87,0.84,0.76,0.56,0.19,0.12,0.17,0.12,0.12],
'epsEstimated' :[0.73,0.75,0.67,0.59,0.48,0.14,0.13,0.15,0.11,0.12],
'time' :['bmo','bmo','bmo','bmo','bmo','amc','amc','amc','amc','amc'],
'revenue' :[3.033600e+10,3.029800e+10,2.854600e+10,2.720500e+10,2.770800e+10,1.813000e+09,1.458000e+09,1.585000e+09,1.262000e+09,1.184000e+09],
'revenueEstimated':[3.046110e+10,3.046110e+10,2.717460e+10,2.680920e+10,2.309000e+10,1.335895e+09,1.579500e+09,1.398529e+09,1.156833e+09,1.184000e+09]})
df['Date'] = pd.to_datetime(df['Date'])
</code></pre>
<p>Please notice I named the <code>Date</code> column with a capital <code>D</code>.</p>
<pre class="lang-none prettyprint-override"><code>df
Date symbol eps epsEstimated time revenue revenueEstimated
0 2022-01-27 CMCSA 0.77 0.73 bmo 3.033600e+10 3.046110e+10
1 2021-10-28 CMCSA 0.87 0.75 bmo 3.029800e+10 3.046110e+10
2 2021-07-29 CMCSA 0.84 0.67 bmo 2.854600e+10 2.717460e+10
3 2021-04-29 CMCSA 0.76 0.59 bmo 2.720500e+10 2.680920e+10
4 2021-01-28 CMCSA 0.56 0.48 bmo 2.770800e+10 2.309000e+10
5 2013-07-24 FB 0.19 0.14 amc 1.813000e+09 1.335895e+09
6 2013-05-01 FB 0.12 0.13 amc 1.458000e+09 1.579500e+09
7 2013-01-30 FB 0.17 0.15 amc 1.585000e+09 1.398529e+09
8 2012-10-23 FB 0.12 0.11 amc 1.262000e+09 1.156833e+09
9 2012-07-26 FB 0.12 0.12 amc 1.184000e+09 1.184000e+09
</code></pre>
<p>Downloading your database with OHLC information:</p>
<pre class="lang-py prettyprint-override"><code>import yfinance as yf
df_ohlc = yf.download(df['symbol'].unique().tolist(), start=df['Date'].min())[['Open','High','Low','Close']]
df_ohlc
</code></pre>
<p>Output (<em>could not format it properly using text, hence the figure</em>):</p>
<p><a href="https://i.stack.imgur.com/O8ssY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O8ssY.png" alt="df_ohlc_figure_output" /></a></p>
<p>Now we stack the symbol-level index, rename it, and reset all indexes; we want both <code>symbol</code> and <code>Date</code> as columns so we can properly merge the data:</p>
<pre class="lang-py prettyprint-override"><code>df_ohlc = df_ohlc.stack(level=1).reset_index().rename(columns={'level_1':'symbol'})
data1 = df.merge(df_ohlc, how='inner', on=['Date','symbol'])
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>data1
Date symbol eps epsEstimated time revenue revenueEstimated Close High Low Open
0 2022-01-27 CMCSA 0.77 0.73 bmo 3.033600e+10 3.046110e+10 48.009998 50.070000 45.470001 45.470001
1 2021-10-28 CMCSA 0.87 0.75 bmo 3.029800e+10 3.046110e+10 51.900002 52.740002 49.799999 50.400002
2 2021-07-29 CMCSA 0.84 0.67 bmo 2.854600e+10 2.717460e+10 58.110001 59.700001 58.060001 59.200001
3 2021-04-29 CMCSA 0.76 0.59 bmo 2.720500e+10 2.680920e+10 56.400002 56.490002 55.279999 55.980000
4 2021-01-28 CMCSA 0.56 0.48 bmo 2.770800e+10 2.309000e+10 51.599998 52.290001 49.779999 50.000000
5 2013-07-24 FB 0.19 0.14 amc 1.813000e+09 1.335895e+09 26.510000 26.530001 26.049999 26.320000
6 2013-05-01 FB 0.12 0.13 amc 1.458000e+09 1.579500e+09 27.430000 27.920000 27.309999 27.850000
7 2013-01-30 FB 0.17 0.15 amc 1.585000e+09 1.398529e+09 31.240000 31.490000 30.879999 30.980000
8 2012-10-23 FB 0.12 0.11 amc 1.262000e+09 1.156833e+09 19.500000 19.799999 19.100000 19.250000
9 2012-07-26 FB 0.12 0.12 amc 1.184000e+09 1.184000e+09 26.850000 28.230000 26.730000 27.750000
</code></pre>
<p>Done: we got the corresponding OHLC values without any kind of loop.</p>
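<p>The same stack-and-merge pattern can be sketched offline, without the network call, using a tiny hand-built price table shaped like the yfinance output (the values below are assumptions for illustration only):</p>

```python
import pandas as pd

# Hypothetical earnings rows (a subset of the df above)
df = pd.DataFrame({
    'Date': pd.to_datetime(['2021-01-28', '2013-05-01']),
    'symbol': ['CMCSA', 'FB'],
    'eps': [0.56, 0.12],
})

# Hand-built stand-in for the yfinance download: the columns are a
# MultiIndex of (field, symbol) and the index is the Date
ohlc = pd.DataFrame(
    {('Close', 'CMCSA'): [51.6, 40.0], ('Close', 'FB'): [30.0, 27.43]},
    index=pd.Index(pd.to_datetime(['2021-01-28', '2013-05-01']), name='Date'),
)

# Stack the symbol level into the index, then merge on Date and symbol
flat = ohlc.stack(level=1).reset_index().rename(columns={'level_1': 'symbol'})
merged = df.merge(flat, how='inner', on=['Date', 'symbol'])
```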
|
python|pandas|stock|yfinance
| 0
|
2,777
| 52,427,141
|
Check TPU workload/utilization
|
<p>I am training a model, and when I open the TPU in the Google Cloud Platform console, it shows me the CPU utilization (on the TPU, I suppose). It is really, really low (like 0.07%), so maybe it is the VM's CPU? I am wondering whether the training is really working properly or if the TPUs are just that powerful.</p>
<p>Is there any other way to check the TPU usage? Maybe with a <code>ctpu</code> command?</p>
|
<p>I would recommend using the TPU profiling tools that plug into TensorBoard. A good tutorial for install and use of these tools can be found <a href="https://cloud.google.com/tpu/docs/cloud-tpu-tools" rel="noreferrer">here</a>.</p>
<p>You'll run the profiler while your TPU is training. It will add an extra tab to your TensorBoard with TPU-specific profiling information. Among the most useful:</p>
<ul>
<li>Average step time</li>
<li>Host idle time (how much time the CPU spends idling)</li>
<li>TPU idle time</li>
<li>Utilization of TPU Matrix units</li>
</ul>
<p>Based on these metrics, the profiler will suggest ways to start optimizing your model to train well on a TPU. You can also dig into the more sophisticated profiling tools like a trace viewer, or a list of the most expensive graph operations.</p>
<p>For some guidelines on performance tuning (in addition to those ch_mike already linked) you can look at the <a href="https://cloud.google.com/tpu/docs/performance-guide" rel="noreferrer">TPU performance guide</a>.</p>
|
tensorflow|google-cloud-platform|google-compute-engine|google-cloud-tpu
| 6
|
2,778
| 60,578,609
|
Extract list element in pandas series and convert to datetime
|
<p>The series which I am handing now looks like this:</p>
<pre><code>qa_answers['date_of_birth']
1 []
2 []
...
2600 [1988/11/23]
2601 [1992/7/15]
2602 [1993/11/8"]
2603 [1997/08/31]
2604 [1971/2/11]
2605 [1979/11/1"]
2606 [1993/9/19]
2607 [1985/01/12]
2608 [1977/11/3"]
2609 [1981/7/2"]
2610 [1952/4/9"]
2611 [1991/8/20]
2612 [1993/1/31]
Name: date_of_birth, dtype: object
</code></pre>
<p>This problem might consist of two parts: </p>
<ol>
<li>I want to convert the type of the series (object) to datetime.</li>
<li>But when I tried to use to_datetime, I got this error.</li>
</ol>
<pre><code>qa_answers['date_of_birth'] = pd.to_datetime(qa_answers['date_of_birth'],errors='coerce')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-147-96dff0351764> in <module>()
28 qa_answers['date_of_birth2']= qa_answers['answers'].str.findall(dob2)
29 qa_answers['date_of_birth'] = qa_answers['date_of_birth1'] + qa_answers['date_of_birth2']
---> 30 qa_answers['date_of_birth'] = pd.to_datetime(qa_answers['date_of_birth'],errors='coerce')
31
32
4 frames
/usr/local/lib/python3.6/dist-packages/pandas/core/algorithms.py in unique(values)
403
404 table = htable(len(values))
--> 405 uniques = table.unique(values)
406 uniques = _reconstruct_data(uniques, dtype, original)
407 return uniques
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.unique()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable._unique()
TypeError: unhashable type: 'list'
</code></pre>
<p>So I guess I should try to extract the element out of the list first. How can I do this?</p>
<p>P.S. Also, could you give some tips for removing the stray <code>"</code> in the elements?</p>
|
<p>You must first convert each non-empty list to its first element (cleaning out the stray <code>"</code>) and each empty list to an empty string:</p>
<pre><code>df.date_of_birth.apply(lambda x: x[0].replace('"', '') if len(x) > 0 else '')
</code></pre>
<p>gives:</p>
<pre><code>1
2
...
2600 1988/11/23
2601 1992/7/15
2602 1993/11/8
2603 1997/08/31
2604 1971/2/11
2605 1979/11/1
2606 1993/9/19
2607 1985/01/12
2608 1977/11/3
2609 1981/7/2
2610 1952/4/9
2611 1991/8/20
2612 1993/1/31
</code></pre>
<p>Then you can easily convert that to a datetime column:</p>
<pre><code>pd.to_datetime(df.date_of_birth.apply(lambda x: x[0].replace('"', '') if len(x) > 0 else ''))
</code></pre>
<p>you get:</p>
<pre><code>1 NaT
2 NaT
2600 1988-11-23
2601 1992-07-15
2602 1993-11-08
2603 1997-08-31
2604 1971-02-11
2605 1979-11-01
2606 1993-09-19
2607 1985-01-12
2608 1977-11-03
2609 1981-07-02
2610 1952-04-09
2611 1991-08-20
2612 1993-01-31
Name: date_of_birth, dtype: datetime64[ns]
</code></pre>
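<p>A more compact alternative (a sketch, not part of the original answer): the <code>.str</code> accessor also indexes element-wise into lists, so <code>.str[0]</code> pulls out the first element and yields <code>NaN</code> for empty lists:</p>

```python
import pandas as pd

s = pd.Series([[], ['1988/11/23'], ['1993/11/8"']], name='date_of_birth')

# .str[0] takes the first list element (NaN for empty lists),
# .str.replace strips the stray quote, then to_datetime coerces the rest
cleaned = s.str[0].str.replace('"', '', regex=False)
dates = pd.to_datetime(cleaned, errors='coerce')
```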
|
python|pandas|list|datetime
| 2
|
2,779
| 60,640,113
|
What the best possible way to concatenate pandas column? From a list of column
|
<p>I have dataframe like this:</p>
<pre><code>A B C D E F
aa bb cc dd ee ff
NA ba NA da ea NA
list_col = ['A', 'B', 'C']
</code></pre>
<p>So I just want to merge only the columns that are in the list. Moreover, I don't want NA values in the merged result. Is there any way to do this?</p>
<p>Desired output:</p>
<pre><code> A B C D E F desired_col
aa bb cc dd ee ff aa-bb-cc
NA ba NA da ea NA ba
</code></pre>
|
<p>You can use <code>apply(..., axis=1)</code> to process a dataframe row-wise. But you want to ignore NaN values, so you will have to exclude them. You could use:</p>
<pre><code>df[list_col].apply(lambda x: '-'.join(x.dropna()), axis=1)
</code></pre>
<p>It gives:</p>
<pre><code>0 aa-bb-cc
1 ba
dtype: object
</code></pre>
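<p>For reference, a self-contained version of the same idea, rebuilding the question's frame (the NaN positions are taken from the question):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': ['aa', np.nan], 'B': ['bb', 'ba'], 'C': ['cc', np.nan],
                   'D': ['dd', 'da'], 'E': ['ee', 'ea'], 'F': ['ff', np.nan]})
list_col = ['A', 'B', 'C']

# Row-wise join of the selected columns, skipping NaN values
df['desired_col'] = df[list_col].apply(lambda x: '-'.join(x.dropna()), axis=1)
```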
|
python|pandas|join|merge
| 1
|
2,780
| 72,531,718
|
Changing list to dataframe in dictionary
|
<p>I am writing a dictionary that has to separate a dataframe into multiple small dataframes based on items that are repeated in the list <code>calvo_massflow</code>. If an item isn't repeated yet, it'll make a new list in the dictionary. In the second for loop, the dictionary will add the index item from the <code>df</code> dataframe to one of the dictionary lists, if the key (<code>l</code>) and <code>e</code> are the same.</p>
<p>This is what I currently got:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from scipy.stats import linregress
from scipy.optimize import curve_fit
calvo_massflow = [1, 2, 1, 2, 2, 1, 1]
df = pd.DataFrame({"a":[1, 2, 3, 4, 11, 2, 4, 6, 7, 3],
"b":[5, 6, 7, 8, 10, 44, 23, 267, 4, 66]})
dic = {}
massflows = []
for i, e in enumerate(calvo_massflow):
if e not in massflows:
massflows.append(e)
dic[e] = []
for l in dic:
if e == l:
dic[e].append(pd.DataFrame([df.iloc[i]]))
</code></pre>
<p>The problem with the output is that each index is a separate dataframe in the dictionary. I would like to have all the dataframes combined. I tried doing something with <code>pd.concat</code>, but I didn't figure it out. Moreover, the entries in the dictionary (if that's what you call them) are lists, and I'd prefer them to be dataframes. However, if I change my list to a dataframe like I've done here:</p>
<pre><code>dic3 = {}
massflows = []
for i, e in enumerate(calvo_massflow):
if e not in massflows:
massflows.append(e)
dic3[e] = pd.DataFrame([])
for l in dic3:
if e == l:
dic3[e].append(df.iloc[i])
</code></pre>
<p>I can't seem to add dataframes to the dataframes made by the dictionary.</p>
<p>My ideal scenario would be a dictionary with two dataframes: one with the key <code>1</code> and one with the key <code>2</code>. Both of those dataframes include all the information from the dataframe <code>df</code>, instead of how it is right now with a separate dataframe for each index. Preferably the dataframes aren't in lists like they are now, but it won't be a disaster.</p>
<p>Let me know if you guys can help me out or need more context!</p>
|
<p>IIUC you want to select the rows of <code>df</code> up to the length of <code>calvo_massflow</code>, group by calvo_massflow and convert to dict. This might look like this:</p>
<pre><code>calvo_massflow = [1, 2, 1, 2, 2, 1, 1]
df = pd.DataFrame({"a":[1, 2, 3, 4, 11, 2, 4, 6, 7, 3],
"b":[5, 6, 7, 8, 10, 44, 23, 267, 4, 66]})
dic = dict(iter(df.iloc[:len(calvo_massflow)]
.groupby(calvo_massflow)))
print(dic)
</code></pre>
<p>resulting in a dictionary with keys 1 and 2 containing two filtered DataFrames:</p>
<pre><code>{1: a b
0 1 5
2 3 7
5 2 44
6 4 23,
2: a b
1 2 6
3 4 8
4 11 10}
</code></pre>
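<p>If you'd rather keep an explicit loop (closer in spirit to the question's second attempt), one sketch is to collect the row positions per key first and slice with <code>iloc</code> once at the end, so each dictionary value ends up as a single DataFrame instead of a list of one-row frames:</p>

```python
import pandas as pd

calvo_massflow = [1, 2, 1, 2, 2, 1, 1]
df = pd.DataFrame({"a": [1, 2, 3, 4, 11, 2, 4, 6, 7, 3],
                   "b": [5, 6, 7, 8, 10, 44, 23, 267, 4, 66]})

# Collect the row positions belonging to each massflow value
positions = {}
for i, e in enumerate(calvo_massflow):
    positions.setdefault(e, []).append(i)

# One iloc call per key yields a single DataFrame per dictionary entry
dic = {k: df.iloc[v] for k, v in positions.items()}
```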
|
python|pandas|dataframe|dictionary|pandas-groupby
| 0
|
2,781
| 72,738,731
|
How can I group by a datetime column with timezone?
|
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'StartDate':['2020-01-01 00:00:00-04:00', '2020-01-01 01:00:00-04:00', '2020-01-01 01:55:00-04:00', '2020-01-02 02:00:00-02:00', '2020-01-02 02:00:00-04:00'],
'Weight':[100, 110, 120, 125, 155]
}
)
df['StartDate'] = pd.to_datetime(df['StartDate'])
df
</code></pre>
<p><a href="https://i.stack.imgur.com/5WhSo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5WhSo.png" alt="df1" /></a></p>
<p>I want to group the data <em><strong>by the hour</strong></em> and sum up the Weight column. So, the end result would be a df with 3 rows: current index 0, current indexes 1&2, current indexes 3&4.</p>
<p>I came across the <a href="https://pandas.pydata.org/docs/reference/api/pandas.Grouper.html" rel="nofollow noreferrer">Grouper</a> function and I tried the following but it didn't work:</p>
<pre><code>df = df.groupby(pd.Grouper(key='StartDate', freq='H')).sum()
</code></pre>
<p>I get the following error:</p>
<blockquote>
<p>TypeError: Only valid with DatetimeIndex, TimedeltaIndex or
PeriodIndex, but got an instance of 'Index'</p>
</blockquote>
<p>Does anyone know what I'm doing wrong or can someone provide a solution?</p>
<p>Thanks</p>
|
<p>You first need to convert to datetime, taking into account the timezones:</p>
<pre><code>df['StartDate'] = pd.to_datetime(df['StartDate'], utc=True)
df.groupby(pd.Grouper(key='StartDate', freq='H')).sum()
</code></pre>
<p>Output:</p>
<pre><code> Weight
StartDate
2020-01-01 04:00:00+00:00 100
2020-01-01 05:00:00+00:00 230
2020-01-01 06:00:00+00:00 0
2020-01-01 07:00:00+00:00 0
2020-01-01 08:00:00+00:00 0
2020-01-01 09:00:00+00:00 0
2020-01-01 10:00:00+00:00 0
2020-01-01 11:00:00+00:00 0
2020-01-01 12:00:00+00:00 0
2020-01-01 13:00:00+00:00 0
2020-01-01 14:00:00+00:00 0
2020-01-01 15:00:00+00:00 0
2020-01-01 16:00:00+00:00 0
2020-01-01 17:00:00+00:00 0
2020-01-01 18:00:00+00:00 0
2020-01-01 19:00:00+00:00 0
2020-01-01 20:00:00+00:00 0
2020-01-01 21:00:00+00:00 0
2020-01-01 22:00:00+00:00 0
2020-01-01 23:00:00+00:00 0
2020-01-02 00:00:00+00:00 0
2020-01-02 01:00:00+00:00 0
2020-01-02 02:00:00+00:00 0
2020-01-02 03:00:00+00:00 0
2020-01-02 04:00:00+00:00 125
2020-01-02 05:00:00+00:00 0
2020-01-02 06:00:00+00:00 155
</code></pre>
<h4>without "blanks"</h4>
<pre><code>df.groupby(pd.to_datetime(df['StartDate'], utc=True).dt.floor('h'))['Weight'].sum()
StartDate
2020-01-01 04:00:00+00:00 100
2020-01-01 05:00:00+00:00 230
2020-01-02 04:00:00+00:00 125
2020-01-02 06:00:00+00:00 155
Name: Weight, dtype: int64
</code></pre>
|
python|pandas
| 2
|
2,782
| 59,673,802
|
how to install the latest version of Tensorflow for CPU
|
<p>I am trying to install TensorFlow and I get this error when I try to import it :</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\USER\Anaconda3\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\USER\Anaconda3\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\USER\Anaconda3\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\USER\Anaconda3\lib\imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\USER\Anaconda3\lib\imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: DLL load failed: The specified module could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\USER\Anaconda3\lib\site-packages\tensorflow\__init__.py", line 101, in <module>
from tensorflow_core import *
File "C:\Users\USER\Anaconda3\lib\site-packages\tensorflow_core\__init__.py", line 40, in <module>
from tensorflow.python.tools import module_util as _module_util
File "C:\Users\USER\Anaconda3\lib\site-packages\tensorflow\__init__.py", line 50, in __getattr__
module = self._load()
File "C:\Users\USER\Anaconda3\lib\site-packages\tensorflow\__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "C:\Users\USER\Anaconda3\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "C:\Users\USER\Anaconda3\lib\site-packages\tensorflow_core\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\USER\Anaconda3\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Users\USER\Anaconda3\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\USER\Anaconda3\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\USER\Anaconda3\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\USER\Anaconda3\lib\imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\USER\Anaconda3\lib\imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: DLL load failed: The specified module could not be found.
Failed to load the native TensorFlow runtime.
</code></pre>
<p>python version : 3.7.6</p>
<p>conda version : 4.8.1</p>
<p>tensorflow version: 2.1.0 (the latest)</p>
<p>conda packages have the same versions and Microsoft C++ Redist 2015 is installed</p>
|
<p>Try installing or re-installing the C++ Redistributable from <a href="https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads" rel="nofollow noreferrer">here</a>. Make sure you are downloading the "Visual Studio 2015, 2017 and 2019" x64 version and restart your computer after the installation is complete.</p>
|
tensorflow
| 0
|
2,783
| 59,801,932
|
Raise TypeError (TypeError: object of type <class 'numpy.float64'> cannot be safely interpreted as an integer)
|
<p>I am using a package that needs NumPy. Until now it was working fine, but today, because I was extending my code, I needed the newest version of NumPy. The old version was 1.17.something and I installed the latest one. After that I am facing the issue mentioned below. <a href="https://github.com/numpy/numpy/issues/15345#issue-551771795" rel="nofollow noreferrer">The detailed question on GitHub</a></p>
<pre><code> File "C:\Users\AppData\Local\Programs\Python\Python38-32\lib\site-packages\numpy\core\function_base.py", line 119, in linspace
raise TypeError(
TypeError: object of type <class 'numpy.float64'> cannot be safely interpreted as an integer.
</code></pre>
|
<p>Downgrade your <strong>numpy</strong> version to: <strong>1.16</strong></p>
<pre><code>pip install numpy==1.16
</code></pre>
<p>This worked for me.</p>
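<p>Downgrading works around the problem, but the underlying cause can usually also be fixed at the call site: newer NumPy releases require size arguments such as the <code>num</code> parameter of <code>linspace</code> to be integers. Casting the count explicitly keeps you on the latest NumPy — a sketch:</p>

```python
import numpy as np

n = np.float64(5.0)  # e.g. a count that came out of a float computation

# np.linspace(0, 1, n) raises the TypeError above on newer NumPy;
# casting the count to int restores the old behaviour
xs = np.linspace(0, 1, int(n))
```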
|
python|numpy|package|numpy-ufunc
| 2
|
2,784
| 40,563,628
|
How to convert a 2 1d arrays to one 1d arrays but both values should be inside one element
|
<p>I really don't know how to phrase this properly, so I apologise in advance.
So let's say I have two 1D arrays:</p>
<pre><code>array1 = [2000, 2100, 2800]
array2 =[20, 80, 40]
</code></pre>
<p>Now how do I convert them into a 2D array in Python, like the one shown below?</p>
<pre><code>2dArray = [[2000, 20], [2100, 80], [2800, 40]]
</code></pre>
<p>So: two 1D arrays combined to look like the one above, in Python.</p>
|
<p>Simple NumPy solution - <code>np.array([...]).T</code>:</p>
<pre><code>In [6]: np.array([a1, a2]).T
Out[6]:
array([[2000, 20],
[2100, 80],
[2800, 40]])
</code></pre>
<p>Another NumPy solution, which uses <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html" rel="nofollow noreferrer">vstack()</a> method:</p>
<pre><code>In [142]: np.vstack((array1, array2)).T
Out[142]:
array([[2000, 20],
[2100, 80],
[2800, 40]])
</code></pre>
<p>or using <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.column_stack.html" rel="nofollow noreferrer">np.column_stack()</a>:</p>
<pre><code>In [144]: np.column_stack([array1, array2])
Out[144]:
array([[2000, 20],
[2100, 80],
[2800, 40]])
</code></pre>
<p>Another "slow" solution would be to use the built-in <a href="https://docs.python.org/3/library/functions.html#zip" rel="nofollow noreferrer">zip()</a> function:</p>
<pre><code>In [131]: np.array(list(zip(array1, array2)))
Out[131]:
array([[2000, 20],
[2100, 80],
[2800, 40]])
</code></pre>
<p>Explanation:</p>
<pre><code>In [132]: list(zip(array1, array2))
Out[132]: [(2000, 20), (2100, 80), (2800, 40)]
</code></pre>
<p><strong>Timing</strong> for two 1M elements arrays:</p>
<pre><code>In [145]: a1 = np.random.randint(0, 10**6, 10**6)
In [146]: a2 = np.random.randint(0, 10**6, 10**6)
In [147]: a1.shape
Out[147]: (1000000,)
In [148]: a2.shape
Out[148]: (1000000,)
In [149]: %timeit np.array(list(zip(a1, a2)))
1 loop, best of 3: 1.78 s per loop
In [150]: %timeit np.vstack((a1, a2)).T
100 loops, best of 3: 6.4 ms per loop
In [151]: %timeit np.column_stack([a1, a2])
100 loops, best of 3: 7.62 ms per loop
In [14]: %timeit np.array([a1, a2]).T
100 loops, best of 3: 6.36 ms per loop # <--- WINNER!
</code></pre>
|
python|arrays|pandas|numpy
| 3
|
2,785
| 40,637,615
|
Bar graph plot with values on top python
|
<p>I have a table in my pandas df.</p>
<pre><code> Total_orders frequency
0.0 18679137
1.0 360235
2.0 68214
3.0 20512
4.0 7211
... ...
50.0 12
</code></pre>
<p>I want to plot a bar graph of total orders vs. frequency, with the frequency values displayed on top of each bar.</p>
<p>I am running these three pieces of code.</p>
<p>Code1:</p>
<pre><code>plt.figure(figsize=(12,6))
df2 = df.groupby('Total_orders')['frequency'].plot(kind='bar')
plt.xlabel('Total_orders')
plt.ylabel('frequency')
for rect in df2.patches:
height = rect.get_height()
df2.text(rect.get_x() + rect.get_width()/2., 1.05*height+100,
'%d' % int(height),ha='center', va='bottom', rotation=90)
</code></pre>
<p>Code2:(for loop)</p>
<pre><code>for ii,rect in enumerate(df2.patches):
height = rect.get_height()
df2.text(rect.get_x() + rect.get_width()/2., 1.14*height+100,'%d' % int(height),ha='center', va='bottom', rotation=90)
</code></pre>
<p>Code3</p>
<pre><code>for p in df2.patches:
df2.annotate(str(p.get_height()), (p.get_x() * 1.005, p.get_height() *1.05),rotation=90)
</code></pre>
<p>but when I run the code it shows me this error:</p>
<blockquote>
<p>AttributeError: 'Series' object has no attribute 'patches'</p>
</blockquote>
<p>Any idea why this is happening, and how to fix it?</p>
|
<p>Try this:</p>
<pre><code>def autolabel(rects, height_factor=1.05):
# attach some text labels
for rect in rects:
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2., height_factor*height,
'%d' % int(height),
ha='center', va='bottom')
In [48]: df
Out[48]:
Total_orders frequency
0 0.0 18679137
1 1.0 360235
2 2.0 68214
3 3.0 20512
4 4.0 7211
5 50.0 12
In [49]: import matplotlib
In [50]: matplotlib.style.use('ggplot')
In [51]: ax = df.plot.bar(x='Total_orders', y='frequency', rot=0, width=0.85, alpha=0.6, figsize=(14,12))
In [52]: autolabel(ax.patches, 1.02)
</code></pre>
<p><a href="https://i.stack.imgur.com/c3suz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c3suz.png" alt="enter image description here"></a></p>
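<p>On Matplotlib 3.4+ there is also <code>Axes.bar_label</code>, which replaces the manual <code>ax.text</code> loop entirely — a sketch using a subset of the data:</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere
import pandas as pd

df = pd.DataFrame({'Total_orders': [0.0, 1.0, 2.0],
                   'frequency': [18679137, 360235, 68214]})

ax = df.plot.bar(x='Total_orders', y='frequency', rot=0)
# One call annotates every bar in the container
labels = ax.bar_label(ax.containers[0], fmt='%d', rotation=90, padding=3)
```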
|
python|python-3.x|python-2.7|pandas|matplotlib
| 2
|
2,786
| 40,660,933
|
Why "CopyFrom" is used during the creation of the constant Tensor?
|
<p>During the creation process of the constant Tensor there is the following <a href="https://github.com/tensorflow/tensorflow/blob/beb10ceb086fe94a6b1247b45397aafddd47e05d/tensorflow/python/framework/constant_op.py#L162" rel="nofollow noreferrer">line</a>:</p>
<pre><code> tensor_value.tensor.CopyFrom(
tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape))
</code></pre>
<p><code>CopyFrom</code> creates a copy of a newly created Tensor proto. However, this looks like a waste of resources for copying, since <code>make_tensor_proto</code>, according to the doc, creates a new object. Would it be more efficient just to do the following:</p>
<pre><code> tensor_value.tensor =
tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape)
</code></pre>
<p>This should not create a new object, plus it is also a valid usage of OneOf protobuf fields.</p>
|
<p>You cannot assign a proto to a field of a proto as explained in this doc: <a href="https://developers.google.com/protocol-buffers/docs/reference/python-generated" rel="nofollow noreferrer">https://developers.google.com/protocol-buffers/docs/reference/python-generated</a></p>
<blockquote>
<p>You cannot assign a value to an embedded message field. Instead, assigning a value to any field within the child message implies setting the message field in the parent.</p>
</blockquote>
<p>If you remove the CopyFrom, you will get the following error:</p>
<pre><code>AttributeError: Assignment not allowed to field "tensor" in protocol message object.
</code></pre>
|
python|tensorflow|protocol-buffers
| 4
|
2,787
| 18,608,924
|
Python - New variable indicator
|
<p>So I want to add a new indicator variable to my dataframe (<code>df</code>).
Basically I want it to read "Split" unless another field (<code>AssetClass</code>) is "Future", in which case I want the new indicator to read "NotSplit".
The code I'm using at the moment is:</p>
<pre><code>df['Category'] = 'Split'
df[df.AssetClass == "Future"].Category = 'NotSplit'
</code></pre>
<p>but so far it just seems to make the new variable, set it all to "Split", and then skip over the next line.
Can anyone see any problems here?</p>
|
<p>Like this?</p>
<pre><code>df['Category'] = 'NotSplit' if df.AssetClass == 'Future' else 'Split'
</code></pre>
<p>Some background: currently you will get a new key in your dict (either <code>True</code> or <code>False</code>) because the result of the <code>df.AssetClass == 'Future'</code> evaluation is used. On other words, you get the following:</p>
<pre><code>df['Category'] = 'Split'
df[True] = 'NotSplit'
</code></pre>
<p>If you just want to update the entry, reuse the same key name.</p>
<p>The above solution in multiple lines comes down to the following:</p>
<pre><code>df['Category'] = 'Split'
if df.AssetClass == 'Future':
df['Category'] = 'NotSplit'
</code></pre>
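<p>Note that if <code>df</code> is a pandas DataFrame rather than a plain dict, <code>df.AssetClass == 'Future'</code> returns a boolean Series, and a Python ternary on it raises an "ambiguous truth value" error. A vectorised sketch for that case, with made-up sample data:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'AssetClass': ['Future', 'Equity', 'Future']})

# np.where picks per-row between the two labels based on the boolean mask;
# df.loc[mask, 'Category'] = ... would work equally well
df['Category'] = np.where(df.AssetClass == 'Future', 'NotSplit', 'Split')
```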
|
python|numpy
| 0
|
2,788
| 61,630,046
|
Skipping tuples without attributes Python NLTK
|
<p>I have a script that is mostly working for the Natural Language Tool Kit. It works by using NLTK to tokenize and label (categorize) individual words. </p>
<p>When my list includes names and entities it works fine. </p>
<p>Where it breaks down is if the list includes function words such as "The", "a", "and", etc.</p>
<p>These are words that are not going to receive labels from NLTK (Persons, Organization, Geographic Location etc..) </p>
<p>My question is: is there a way to skip the tuples that will give me an error because they will not return a label attribute?</p>
<p>Example dataframe:</p>
<pre><code>Order Text results
0 0 John
1 1 Paul
2 2 George
3 3 Ringo
</code></pre>
<p>(Obviously not perfect, but better than nothing.)</p>
<p>Code:</p>
<pre><code>for i in range(len(text)):
SENT_DETECTOR = nltk.data.load('tokenizers/punkt/english.pickle')
ne_tree = nltk.ne_chunk(pos_tag(word_tokenize(text[i])))
df['results'][i] = ne_tree[0].label()
print(df)
</code></pre>
<p>Output:</p>
<pre><code> Order Text results
0 0 John PERSON
1 1 Paul PERSON
2 2 George GPE
3 3 Ringo GPE
</code></pre>
<p>Example dataframe 2:</p>
<pre><code> Order Text
0 0 John
1 1 Paul
2 2 George
3 3 to
</code></pre>
<p>Error:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-6-dff2775636f0> in <module>
2 SENT_DETECTOR = nltk.data.load('tokenizers/punkt/english.pickle')
3 ne_tree = nltk.ne_chunk(pos_tag(word_tokenize(text[i])))
----> 4 df['results'][i] = ne_tree[0].label()
5 print(df)
AttributeError: 'tuple' object has no attribute 'label'
</code></pre>
<p>The "to" is causing it to crash because "to" would not get a label. If I'm dealing with thousands of words it would not be practical to find all the words that would cause it to crash and remove them manually. Ideally I would like to skip problematic lines, but I'm not sure if it is possible. </p>
<p>Thanks for the help.</p>
|
<p>My first suggestion is to remove stop words (to, the, a, etc.). Example code:</p>
<pre><code>from nltk.corpus import stopwords, wordnet
stop_words = set(stopwords.words('english'))
df['TextRemovedStopWords'] = df['Text']
df.loc[df['Text'].isin(stop_words),'TextRemovedStopWords'] = None
</code></pre>
<p>After that you can use try and except to handle the edge cases</p>
<pre><code>from nltk.tokenize import word_tokenize
from nltk import pos_tag
import nltk
nltk.download('maxent_ne_chunker')
def get_result(text):
if text is not None:
try:
ne_tree = nltk.ne_chunk(pos_tag(word_tokenize(text)))
return ne_tree[0]
except Exception as e:
print(e, text)
return None
else:
return None
df['results'] = df['TextRemovedStopWords'].apply(lambda x:get_result(x))
</code></pre>
<p>You can also skip the removing stop words part, but in general it is better to always remove stop words. Hope it helps.</p>
|
python|python-3.x|pandas|jupyter-notebook|nltk
| 0
|
2,789
| 57,870,748
|
Vectorize QR in Numpy Python
|
<p>Hi, I am trying to vectorise the QR decomposition in NumPy as the documentation suggests <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.vectorize.html" rel="nofollow noreferrer">here</a>; however, I keep getting dimension issues. I am confused as to what I am doing wrong, as I believe the following follows the documentation. Does anyone know what is wrong with this:</p>
<pre><code>import numpy as np
X = np.random.randn(100,50,50)
vecQR = np.vectorize(np.linalg.qr)
vecQR(X)
</code></pre>
|
<p>From the doc: "By default, pyfunc is assumed to take scalars as input and output."
So you need to give it a signature:</p>
<pre><code>vecQR = np.vectorize(np.linalg.qr, signature='(m,n)->(m,p),(p,n)')
</code></pre>
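<p>Putting it together with the question's array (and note that recent NumPy versions, 1.22 onwards if I recall correctly, let <code>np.linalg.qr</code> accept stacked matrices directly, making the wrapper unnecessary):</p>

```python
import numpy as np

X = np.random.randn(100, 50, 50)

# The signature tells vectorize to loop over the leading axis only,
# treating each (50, 50) slice as one input matrix
vecQR = np.vectorize(np.linalg.qr, signature='(m,n)->(m,p),(p,n)')
Q, R = vecQR(X)
```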
|
python|numpy|vectorization|qr-code
| 2
|
2,790
| 57,793,496
|
Pandas: Sum Previous N Rows by Group
|
<p>I want to sum the prior N periods of data for each group. I have seen how to do each individually (sum by group, or <a href="https://stackoverflow.com/questions/43787059/how-to-compute-cumulative-sum-of-previous-n-rows-in-pandas">sum prior N periods</a>), but can't figure out a clean way to do both together.</p>
<p>I'm currently doing the following:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
sample_data = {'user': ['a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b'],\
'clicks': [0,1,2,3,4,5,6,7,8,9]}
df = pd.DataFrame(sample_data)
df['clicks.1'] = df.groupby(['user'])['clicks'].shift(1)
df['clicks.2'] = df.groupby(['user'])['clicks'].shift(2)
df['clicks.3'] = df.groupby(['user'])['clicks'].shift(3)
df['total_clicks_prior3'] = df[['clicks.1','clicks.2', 'clicks.3']].sum(axis=1)
</code></pre>
<p>I don't want the 3 intermediate lagged columns, I just want the sum of those, so my desired output is:</p>
<pre class="lang-py prettyprint-override"><code>>>> df[['clicks','user','total_clicks_prior3']]
clicks user total_clicks_prior3
0 0 a NaN
1 1 a 0.0
2 2 a 1.0
3 3 a 3.0
4 4 a 6.0
5 5 b NaN
6 6 b 5.0
7 7 b 11.0
8 8 b 18.0
9 9 b 21.0
</code></pre>
<p>Note: I could obviously drop the 3 columns after creating them, but given that I will be creating multiple columns of different numbers of lagged periods, I feel like there has to be an easier way.</p>
|
<p>This is <code>groupby</code> + <code>rolling</code> + <code>shift</code></p>
<pre><code>df.groupby('user')['clicks'].rolling(3, min_periods=1).sum().groupby(level=0).shift()
</code></pre>
<p></p>
<pre><code>user
a 0 NaN
1 0.0
2 1.0
3 3.0
4 6.0
b 5 NaN
6 5.0
7 11.0
8 18.0
9 21.0
Name: clicks, dtype: float64
</code></pre>
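<p>To write the result back as a column (matching the desired output in the question), drop the added group level from the index first so the Series aligns with the original rows — a sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({'user': ['a'] * 5 + ['b'] * 5, 'clicks': range(10)})

rolled = (df.groupby('user')['clicks']
            .rolling(3, min_periods=1).sum()
            .groupby(level=0).shift())
# reset_index(level=0, drop=True) removes the 'user' level so the
# values align with df's original index on assignment
df['total_clicks_prior3'] = rolled.reset_index(level=0, drop=True)
```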
|
python|pandas
| 6
|
2,791
| 34,355,059
|
OpenCV-Python - How to format numpy arrays when using calibration functions
|
<p>I'm trying to calibrate a fisheye camera using the OpenCV 3.0.0 Python bindings (with an asymmetric circle grid), but I have problems formatting the object and image point arrays correctly. My current source looks like this:</p>
<pre><code>import cv2
import glob
import numpy as np
def main():
circle_diameter = 4.5
circle_radius = circle_diameter/2.0
pattern_width = 4
pattern_height = 11
num_points = pattern_width*pattern_height
images = glob.glob('*.bmp')
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
imgpoints = []
objpoints = []
obj = []
for i in range(pattern_height):
for j in range(pattern_width):
obj.append((
float(2*j + i % 2)*circle_radius,
float(i*circle_radius),
0
))
for name in images:
image = cv2.imread(name)
grayimage = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
retval, centers = cv2.findCirclesGrid(grayimage, (pattern_width, pattern_height), flags=(cv2.CALIB_CB_ASYMMETRIC_GRID + cv2.CALIB_CB_CLUSTERING))
imgpoints_tmp = np.zeros((num_points, 2))
if retval:
for i in range(num_points):
imgpoints_tmp[i, 0] = centers[i, 0, 0]
imgpoints_tmp[i, 1] = centers[i, 0, 1]
imgpoints.append(imgpoints_tmp)
objpoints.append(obj)
# Convertion to numpy array
imgpoints = np.array(imgpoints, dtype=np.float32)
objpoints = np.array(objpoints, dtype=np.float32)
K, D = cv2.fisheye.calibrate(objpoints, imgpoints, image_size=(1280, 800), K=None, D=None)
if __name__ == '__main__':
main()
</code></pre>
<p>The error message is:</p>
<pre><code>OpenCV Error: Assertion failed (objectPoints.type() == CV_32FC3 || objectPoints.type() == CV_64FC3) in cv::fisheye::calibrate
</code></pre>
<p><code>objpoints</code> has shape <code>(31,44,3)</code>.</p>
<p>So the <code>objpoints</code> array needs to be formatted in a different way, but I'm not able to achieve the correct layout. Maybe someone can help here?</p>
|
<p>In the sample of OpenCV (<a href="https://docs.opencv.org/4.0.1/dc/dbb/tutorial_py_calibration.html" rel="nofollow noreferrer">Camera Calibration</a>) they set the objp to <code>objp2 = np.zeros((8*9,3), np.float32)</code></p>
<p>However, for an omnidirectional or fisheye camera, it should be:
<code>objp = np.zeros((1,8*9,3), np.float32)</code></p>
<p>Idea is from here <a href="https://medium.com/@kennethjiang/calibrate-fisheye-lens-using-opencv-333b05afa0b0" rel="nofollow noreferrer">Calibrate fisheye lens using OpenCV — part 1</a></p>
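<p>A minimal sketch (numpy only; the 31 views and 4×11 asymmetric grid are taken from the question) of building <code>objpoints</code> with the per-view <code>(1, N, 3)</code> float32 layout that <code>cv2.fisheye.calibrate</code> asserts on:</p>

```python
import numpy as np

pattern_width, pattern_height = 4, 11
num_points = pattern_width * pattern_height  # 44
circle_radius = 4.5 / 2.0

# object points for one view, shaped (1, N, 3) float32 (CV_32FC3)
obj = np.zeros((1, num_points, 3), np.float32)
for i in range(pattern_height):
    for j in range(pattern_width):
        obj[0, i * pattern_width + j] = ((2 * j + i % 2) * circle_radius,
                                         i * circle_radius, 0)

# one entry per successfully detected view (31 in the question)
objpoints = [obj] * 31
print(objpoints[0].shape)  # (1, 44, 3)
```

<p>The image points should analogously be a list of <code>(1, N, 2)</code> float32 arrays, one per view.</p>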
|
python|opencv|numpy|opencv3.0|camera-calibration
| 1
|
2,792
| 36,799,642
|
pandas - groupby and select variable amount of random values according to column
|
<p>Starting from this simple dataframe <code>df</code>:</p>
<pre><code>df = pd.DataFrame({'c':[1,1,2,2,2,2,3,3,3], 'n':[1,2,3,4,5,6,7,8,9], 'N':[1,1,2,2,2,2,2,2,2]})
</code></pre>
<p>I'm trying to select <code>N</code> random values from <code>n</code> for each <code>c</code>. So far I have managed to group by and get a single element per group with:</p>
<pre><code>sample = df.groupby('c').apply(lambda x :x.iloc[np.random.randint(0, len(x))])
</code></pre>
<p>that returns:</p>
<pre><code> N c n
c
1 1 1 2
2 2 2 4
3 2 3 8
</code></pre>
<p>My expected output would be something like:</p>
<pre><code> N c n
c
1 1 1 2
2 2 2 4
2 2 2 3
3 2 3 8
3 2 3 7
</code></pre>
<p>so getting 1 sample from c=1 and 2 samples for c=2 and c=3, according to the <code>N</code> column.</p>
|
<p>Pandas objects now have a <code>.sample</code> method to return a random sample of rows:</p>
<pre><code>>>> df.groupby('c').apply(lambda g: g.n.sample(g.N.iloc[0]))
c
1 1 2
2 5 6
2 3
3 6 7
7 8
Name: n, dtype: int64
</code></pre>
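<p>A quick check (my addition) that the per-group sample sizes follow <code>N</code> even though the rows drawn are random; <code>group_keys=False</code> is used here to keep a flat index:</p>

```python
import pandas as pd

df = pd.DataFrame({'c': [1, 1, 2, 2, 2, 2, 3, 3, 3],
                   'n': [1, 2, 3, 4, 5, 6, 7, 8, 9],
                   'N': [1, 1, 2, 2, 2, 2, 2, 2, 2]})

# draw g.N.iloc[0] random rows from each group
sample = df.groupby('c', group_keys=False).apply(lambda g: g.sample(g.N.iloc[0]))
print(sample.groupby('c').size().to_dict())  # {1: 1, 2: 2, 3: 2}
```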
|
python|numpy|pandas|random
| 1
|
2,793
| 54,990,022
|
Convolving sobel operator in x direction in frequency domain
|
<p>I implemented the code <a href="https://stackoverflow.com/a/54977551/7328782">given by Cris Luengo for convolution in frequency in domain</a>, however I'm not getting the intended gradient image in x direction.</p>
<p>Image without flipping the kernel in x and y direction:</p>
<p><img src="https://i.stack.imgur.com/JjIjB.png" alt="Image with normal kernel"></p>
<p>Image after flipping the kernel:</p>
<p><img src="https://i.stack.imgur.com/1K2Aq.png" alt="enter image description here"></p>
<p>If you notice, the second image is the same as the one given by the <code>ImageKernel</code> filter from the Pillow library. Also, note that I don't have to flip the kernel if I apply the Sobel kernel in the y direction; then I get exactly the intended image.</p>
<p>This is my code:</p>
<pre><code>import numpy as np
from scipy import misc
from scipy import fftpack
import matplotlib.pyplot as plt
from PIL import Image,ImageDraw,ImageOps,ImageFilter
from pylab import figure, title, imshow, hist, grid,show
im1=Image.open("astronaut.png").convert('L')
# im1=ImageOps.grayscale(im1)
img=np.array(im1)
# kernel = np.ones((3,3)) / 9
# kernel=np.array([[0,-1,0],[-1,4,-1],[0,-1,0]])
kernel=np.array([[-1,0,1],[-2,0,2],[-1,0,1]])
kernel=np.rot90(kernel,2)
print(kernel)
sz = (img.shape[0] - kernel.shape[0], img.shape[1] - kernel.shape[1])  # total amount of padding
kernel = np.pad(kernel, (((sz[0]+1)//2, sz[0]//2), ((sz[1]+1)//2, sz[1]//2)), 'constant')
kernel = fftpack.ifftshift(kernel)
filtered = np.real(fftpack.ifft2(fftpack.fft2(img) * fftpack.fft2(kernel))) + np.imag(fftpack.ifft2(fftpack.fft2(img) * fftpack.fft2(kernel)))
filtered=np.maximum(0,np.minimum(filtered,255))
im2=Image.open("astronaut.png").convert('L')
u=im2.filter(ImageFilter.Kernel((3,3), [-1,0,1,-2,0,2,-1,0,1],
scale=1, offset=0))
fig2=figure()
ax1 = fig2.add_subplot(221)
ax2 = fig2.add_subplot(222)
ax3 = fig2.add_subplot(223)
ax1.title.set_text('Original Image')
ax2.title.set_text('After convolving in freq domain')
ax3.title.set_text('imagefilter conv')
ax1.imshow(img,cmap='gray')
ax2.imshow(filtered,cmap='gray')
ax3.imshow(np.array(u),cmap='gray')
show()
</code></pre>
|
<p>We can use the <code>np.fft</code> module's FFT implementation too; here is how we can obtain the convolution with the horizontal Sobel kernel in the frequency domain (by the convolution theorem):</p>
<pre><code>h, w = im.shape
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])  # sobel_filter_x
k = len(kernel) // 2 # assuming odd-length square kernel, here it's 3x3
kernel_padded = np.pad(kernel, [(h//2-k-1, h//2-k), (w//2-k-1, w//2-k)])
im_freq = np.fft.fft2(im) # input image frequency
kernel_freq = np.fft.fft2(kernel_padded) # kernel frequency
out_freq = im_freq * kernel_freq # frequency domain convolution output
out = np.fft.ifftshift(np.fft.ifft2(out_freq)).real # spatial domain output
</code></pre>
<p>The below figure shows the input, kernel and output images in spatial and frequency domain:</p>
<p><a href="https://i.stack.imgur.com/zG366.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zG366.png" alt="enter image description here" /></a></p>
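<p>As a sanity check (my addition, on small random data), the FFT-based circular convolution can be compared against scipy's direct convolution with wrap-around boundaries; rolling the padded kernel so its center lands at index (0, 0) makes the two agree:</p>

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
im = rng.random((8, 8))
k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

# place the kernel at the top-left, then roll so its center sits at (0, 0)
kp = np.zeros_like(im)
kp[:3, :3] = k
kp = np.roll(kp, (-1, -1), axis=(0, 1))

out_fft = np.real(np.fft.ifft2(np.fft.fft2(im) * np.fft.fft2(kp)))
out_direct = signal.convolve2d(im, k, mode='same', boundary='wrap')
print(np.allclose(out_fft, out_direct))  # True
```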
|
python|numpy|image-processing|fft|convolution
| 0
|
2,794
| 28,049,842
|
Python: Get all variable names with their types, and then export to a file
|
<p>I have a data set of 10 variables (columns) and I would like to see all of the variable names with their types. For example, the expected output is:</p>
<pre><code>ID Integer,
Name Integer,
Income Float,
</code></pre>
<p>And then export the output to a text file. I know the function <code>type</code> will return the type, but it is like <code><type 'int'></code>. I don't want the <code><type ></code> part.</p>
<p><strong>Updates:</strong></p>
<p>Thank you all for the help. The solutions are really helpful. I would like to further clarify my question:</p>
<p>I have a CSV file with first row as variable names (500 variables totally). I would like to create a SQL type code using python, and then export the output in a text file.</p>
<p>I am expecting the output is like:</p>
<blockquote>
<p>Create Table foo(</p>
<p>ID Integer,</p>
<p>Name Integer,</p>
<p>Income Float)</p>
</blockquote>
<p>My question is how to write python code to automatically generate code like that from a large CSV data file.</p>
<p><strong>Updates 2:</strong></p>
<p>My data looks like:</p>
<blockquote>
<p>ID,Name,Income</p>
<p>1,John,20.0</p>
<p>2,Tom,34.5</p>
</blockquote>
<p>By using module <em>pandas</em>, I read the data like <em>dat=pandas.read_csv('foo.csv')</em>. And then using <em>dat.dtypes</em> gives me:</p>
<blockquote>
<p>ID float64</p>
<p>Name object</p>
<p>Income float64</p>
<p>dtype:object</p>
</blockquote>
<p>I then replaced float64 and object with float and string. But the problem is the last line (dtype:object) does not change. How can I remove that line?</p>
<p>Thank you all so much. My heart felt so warm to see all the help!</p>
<p>Sincerely,
Lincoln</p>
|
<p>Not really clear from your question where those variables are coming from. I'll just assume you've got that covered and the actual problem is to pretty-print the type.</p>
<p>There are (at least) two ways: Either, you could use the <code>type</code>'s <code>__name__</code> attribute to get, e.g., <code>'int'</code> instead of <code><type 'int'></code>, or you could create a dictionary, mapping <code>type</code>s to more expressive strings.</p>
<p>Example:</p>
<pre><code>variables = {"ID": 42, "Name": "someone", "Income": 100.0}
types = {int: "Integer", float: "Floating Point", str: "Text"}
for var, val in variables.items():
t = type(val)
print(var, t.__name__, types[t], sep="\t")
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>Income float Floating Point
ID int Integer
Name str Text
</code></pre>
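<p>For the updated question (generating a SQL <code>CREATE TABLE</code> statement from a CSV), a minimal sketch using pandas dtype kinds; the dtype-to-SQL-type mapping here is an assumption you would adjust for your target database:</p>

```python
import io
import pandas as pd

# inline stand-in for reading foo.csv
csv_text = "ID,Name,Income\n1,John,20.0\n2,Tom,34.5\n"
dat = pd.read_csv(io.StringIO(csv_text))

# assumed mapping from numpy dtype 'kind' codes to SQL type names
sql_types = {'i': 'Integer', 'f': 'Float', 'O': 'Text'}
columns = ',\n'.join('    {} {}'.format(col, sql_types.get(dat[col].dtype.kind, 'Text'))
                     for col in dat.columns)
ddl = 'Create Table foo(\n{}\n)'.format(columns)
print(ddl)
```

<p>Working from <code>dat.dtypes.items()</code> this way also sidesteps the stray <code>dtype: object</code> line, which is only part of the Series' printed representation, not the data.</p>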
|
python|csv|pandas|type-inference
| 2
|
2,795
| 73,364,868
|
AttributeError creating line plot using matplotlib
|
<p>I need to plot in a loop structure each unique "plant_name" in the data below so that the values in "Adj_Prod" are plotted over each other by month for each site. My data in df1 looks like this:</p>
<pre><code> plant_name month Adj_Prod Adj_Prod Adj_Prod Adj_Prod Adj_Prod
0 BIRCH BAY 1 64268.0 64268.0 64268.0 64268.0 64268.0
1 BIRCH BAY 2 131415.5 131415.5 131415.5 131415.5 131415.5
2 BIRCH BAY 3 210202.2 210202.2 210202.2 210202.2 210202.2
3 BIRCH BAY 4 317149.1 317149.1 317149.1 317149.1 317149.1
4 BIRCH BAY 5 432973.8 432973.8 432973.8 432973.8 432973.8
5 BIRCH BAY 6 512809.3 512809.3 512809.3 512809.3 512809.3
6 BIRCH BAY 7 607973.6 607973.6 607973.6 607973.6 607973.6
7 BIRCH BAY 8 687322.8 667062.4 682211.8 680210.6 672797.5
8 BIRCH BAY 9 724324.4 692311.0 726442.7 723927.7 720320.0
9 BIRCH BAY 10 778997.5 764772.8 792752.3 792855.2 788970.0
10 BIRCH BAY 11 833594.9 843887.2 871843.0 874795.6 843567.4
11 BIRCH BAY 12 893822.2 916116.3 927657.8 942680.9 917816.7
12 BARON CHAPEL 1 34218.1 34218.1 34218.1 34218.1 34218.1
13 BARON CHAPEL 2 70853.1 70853.1 70853.1 70853.1 70853.1
14 BARON CHAPEL 3 111367.1 111367.1 111367.1 111367.1 111367.1
15 BARON CHAPEL 4 161482.2 161482.2 161482.2 161482.2 161482.2
16 BARON CHAPEL 5 209338.5 209338.5 209338.5 209338.5 209338.5
17 BARON CHAPEL 6 241771.9 241771.9 241771.9 241771.9 241771.9
18 BARON CHAPEL 7 267183.3 267183.3 267183.3 267183.3 267183.3
19 BARON CHAPEL 8 291989.0 290321.0 294038.6 281854.5 288645.7
20 BARON CHAPEL 9 314834.0 322328.5 318351.3 304571.9 310119.9
21 BARON CHAPEL 10 348497.7 358994.8 349025.9 340384.3 343494.6
22 BARON CHAPEL 11 381433.1 391930.2 383084.3 377981.7 375497.5
23 BARON CHAPEL 12 416721.8 435259.4 415697.2 412921.0 404511.5
</code></pre>
<p>I need to make 2 plots in this case; each should plot "Adj_Prod" by month for the corresponding site. Thank you, since I am still learning Python coming from Matlab.</p>
<pre><code>import matplotlib.patches
levels, categories = pd.factorize(df1['month'])
colors = [plt.cm.prism(i) for i in levels]
handles = [matplotlib.patches.Patch(color=plt.cm.prism(i), label=c) for i, c in enumerate(categories)]
sites = (df1.plant_name.unique())
sites = sites.tolist()
fig, ax = plt.subplots(figsize=(10,4))
for i in range(len(sites)):
plt.figure()
plt.plot(df1.loc[df1['plant_name']==sites[i]].Adj_Prod, df1.loc[df1['plant_name']==sites[i]].month,edgecolors=colors,marker='o',facecolors='none')
site = str(sites[i])
plt.title(site + (' ') + ('Region') + (' Wind Production ') + str(df1.columns[0]) )
plt.xlabel('Month'); plt.ylabel('Estimated Production')
plt.legend(handles=handles, title="Year",loc='center left', bbox_to_anchor=(1,0.5),edgecolor='black')
ax.legend()
plt.show()
</code></pre>
<p>I have tried this but I keep getting an attribute error:</p>
<pre class="lang-py prettyprint-override"><code>AttributeError: 'Line2D' object has no property 'edgecolors'
</code></pre>
|
<p>The error you are seeing is being raised by this line (I've reformatted for clarity but it's the same code as yours):</p>
<pre class="lang-py prettyprint-override"><code> for i in range(len(sites)):
plt.figure()
--> plt.plot(
df1.loc[df1['plant_name']==sites[i]].Adj_Prod,
df1.loc[df1['plant_name']==sites[i]].month,
edgecolors=colors,
marker='o',
facecolors='none',
)
</code></pre>
<p>The error is this:</p>
<pre class="lang-py prettyprint-override"><code>AttributeError: 'Line2D' object has no property 'edgecolors'
</code></pre>
<p>What this is saying is that the function <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.plot.html" rel="nofollow noreferrer"><code>matplotlib.pyplot.plot</code></a> can't accept the argument <code>edgecolors</code>. This type of error comes up any time you call a matplotlib plotting function with the wrong arguments. Matplotlib error tracebacks can be a bit tricky because there are tons of internal calls which are made as part of creating and rendering a figure; checking to make sure your arguments are all allowed is a great first step in plotting.</p>
<p>When you call <code>plt.plot</code>, you're creating a line plot. If you look down the list of arguments, there are only a couple of color-related arguments you could pass in. Subsetting from the docs I linked to above:</p>
<blockquote>
<p><strong><code>**kwargs</code></strong>: <strong><em><code>Line2D</code></em> properties, optional</strong></p>
<p>kwargs are used to specify properties like a line label (for auto legends), linewidth, antialiasing, marker face color. Example:</p>
<pre class="lang-py prettyprint-override"><code>plot([1, 2, 3], [1, 2, 3], 'go-', label='line 1', linewidth=2)
plot([1, 2, 3], [1, 4, 9], 'rs', label='line 2')
</code></pre>
<p>If you specify multiple lines with one plot call, the kwargs apply to all those lines. In case the label object is iterable, each element is used as labels for each set of data.</p>
<p>Here is a list of available Line2D properties:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Property</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>color or c</td>
<td>color</td>
</tr>
<tr>
<td>markeredgecolor or mec</td>
<td>color</td>
</tr>
<tr>
<td>markerfacecolor or mfc</td>
<td>color</td>
</tr>
<tr>
<td>markerfacecoloralt or mfcalt</td>
<td>color</td>
</tr>
</tbody>
</table>
</div></blockquote>
<p>So you can only use the above keyword arguments to specify colors for lines, marker edges, or marker face colors.</p>
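<p>If outlined markers are the goal, <code>plt.scatter</code> does accept <code>edgecolors=</code>/<code>facecolors=</code>. A minimal sketch with toy data standing in for <code>df1</code> (assuming a single <code>Adj_Prod</code> column per site):</p>

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen
import matplotlib.pyplot as plt
import pandas as pd

# toy stand-in for one site's rows of df1
df1 = pd.DataFrame({'plant_name': ['BIRCH BAY'] * 3,
                    'month': [1, 2, 3],
                    'Adj_Prod': [64268.0, 131415.5, 210202.2]})

fig, ax = plt.subplots()
sub = df1[df1['plant_name'] == 'BIRCH BAY']
# scatter (unlike plot) supports edgecolors= and facecolors=
ax.scatter(sub['month'], sub['Adj_Prod'],
           edgecolors='tab:blue', facecolors='none', marker='o')
ax.set_xlabel('Month')
ax.set_ylabel('Estimated Production')
```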
|
pandas|loops|plot
| 1
|
2,796
| 73,360,749
|
Reading an excel sheet containing hyperlinks using pythons pandas.read_excel
|
<p>I made an excel sheet using pandas dataframe to generate texts with clickable urls using the following code</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'link':['=HYPERLINK("https://ar.wikipedia.org/wiki/","wikipidia")',
'=HYPERLINK("https://www.google.com", "google")']})
df.to_excel('links.xlsx')
</code></pre>
<p>But currently i need to read the generated excel sheet (links.xlsx) using pandas.read_excel so i tried the following code:</p>
<pre><code>import pandas as pd
excelDf=pd.read_excel('links.xlsx')
print(excelDf)
</code></pre>
<p>but this generates a dataframe with all zeroes in the link column.
Is there another way I can read the excel file i created, or another way to create an excel sheet containing clickable links on text using pandas dataframe that is readable?</p>
|
<p>You can do the same thing with a CSV, which is cleaner (it avoids Excel quirks) and keeps the formula text readable:</p>
<pre class="lang-py prettyprint-override"><code># %% write the data
import pandas as pd
df = pd.DataFrame({'link':['=HYPERLINK("https://ar.wikipedia.org/wiki/","wikipidia")',
             '=HYPERLINK("https://www.google.com", "google")']})
df.to_csv('links.csv')
# %% read the data
import pandas as pd
excelDf = pd.read_csv('links.csv')
print(excelDf)
</code></pre>
</code></pre>
<p>result:</p>
<pre><code> Unnamed: 0 link
0 0 =HYPERLINK("https://ar.wikipedia.org/wiki/","w...
1 1 =HYPERLINK("https://www.google.com", "google")
</code></pre>
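<p>Alternatively, if the file must stay a real <code>.xlsx</code>, the formula strings can be read back with openpyxl (the engine pandas uses for xlsx); with the default <code>data_only=False</code>, cells holding formulas return the formula text rather than a cached value:</p>

```python
import os
import tempfile

import pandas as pd
from openpyxl import load_workbook

df = pd.DataFrame({'link': ['=HYPERLINK("https://www.google.com", "google")']})
path = os.path.join(tempfile.mkdtemp(), 'links.xlsx')
df.to_excel(path, index=False)

# row 1 is the header written by to_excel, so the first data cell is A2
ws = load_workbook(path).active
print(ws['A2'].value)  # the =HYPERLINK(...) formula string
```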
|
python|excel|pandas|dataframe|hyperlink
| 0
|
2,797
| 73,457,069
|
Why does my LSTM model predict wrong values although the loss is decreasing?
|
<p>I am trying to build a machine learning model which predicts a single number from a series of numbers. I am using an LSTM model with Tensorflow.</p>
<p>You can imagine my dataset to look something like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>x data</th>
<th>y data</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td><code>np.array(shape (10000,1) )</code></td>
<td><code>numpy.float32</code></td>
</tr>
<tr>
<td>1</td>
<td><code>np.array(shape (10000,1) )</code></td>
<td><code>numpy.float32</code></td>
</tr>
<tr>
<td>2</td>
<td><code>np.array(shape (10000,1) )</code></td>
<td><code>numpy.float32</code></td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>56</td>
<td><code>np.array(shape (10000,1) )</code></td>
<td><code>numpy.float32</code></td>
</tr>
</tbody>
</table>
</div>
<p>Simply put, I just want my model to predict a number (y data) from a sequence of numbers (x data).</p>
<p>For example like this:</p>
<ul>
<li>array([3.59280851, 3.60459062, 3.60459062, ...]) => 2.8989773</li>
<li>array([3.54752101, 3.56740332, 3.56740332, ...]) => 3.0893357</li>
<li>...</li>
</ul>
<p><strong>x and y data</strong></p>
<p>From my x data I created a numpy array <code>x_train</code> which I want to use to train the network.
Because I am using an LSTM network, x_train should be of shape <em>(samples, time_steps, features)</em>.
I reshaped my x_train array to be shaped like this: (57, 10000, 1), because I have 57 samples, which each are of length 10000 and contain a single number.</p>
<p>The y data was created similarly and is of shape (57,1) because, once again, I have 57 samples which each contain a single number as the desired y output.</p>
<p><strong>Current model attempt</strong></p>
<p>My model summary looks like this:
<a href="https://i.stack.imgur.com/sUGZf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sUGZf.png" alt="current model" /></a></p>
<p>The model was compiled with <code>model.compile(loss="mse", optimizer="adam")</code> so my loss function is simply the mean squared error and as an optimizer I'm using Adam.</p>
<p><strong>Current results</strong></p>
<p>Training of the model works fine and I can see that the loss and validation loss decreases after some epochs.
The actual problem occurs when I want to predict some data y_verify from some data x_verify.
I do this after the training is finished to determine how well the model is trained.
In the following example I simply used the data I used for training to determine how well the model is trained (I know about overfitting and that verifying with the training set is not the right way of doing it, but that is not the problem I want to demonstrate right now).</p>
<p>In the following graph you can see the y data I provided to the model in blue.
The orange line is the result of calling <code>model.predict(x_verify)</code> where x_verify is of the same shape as x_train.</p>
<p><a href="https://i.stack.imgur.com/VQjdv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VQjdv.png" alt="current results" /></a></p>
<p>I also calculated the mean absolute percentage error (MAPE) of my prediction and the actual data and it came out to be around 4% which is not bad, because I only trained for 40 epochs. But this result still is not helpful at all because as you can see in the graph above the curves do not match at all.</p>
<p><strong>Question:</strong></p>
<p>What is going on here?</p>
<p>Am I using an incorrect loss function?</p>
<p>Why does it seem like the model tries to predict a <em>single</em> value for all samples rather than predicting a different value for all samples like it's supposed to be?</p>
<p>Ideally the prediction should be the y data which I provided so the curves should look the same (more or less).</p>
<p>Do you have any ideas?</p>
<p>Thanks! :)</p>
|
<p>From the notebook it seems you are not scaling your data. You should normalize or standardize your data before training your model.</p>
<p><a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/" rel="nofollow noreferrer">https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/</a></p>
<p>You can add a normalization layer in Keras: <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/Normalization" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/layers/Normalization</a></p>
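<p>As an illustration (toy arrays standing in for the question's <code>x_train</code>/<code>y_train</code>; a sketch of the scaling step only, not the full training pipeline), standardization with plain numpy:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.random((57, 10000, 1)) * 5 + 3  # stand-in for the question's data
y_train = rng.random((57, 1)) * 3

# fit scaling statistics on the training set only
x_mean, x_std = x_train.mean(), x_train.std()
y_mean, y_std = y_train.mean(), y_train.std()

x_scaled = (x_train - x_mean) / x_std
y_scaled = (y_train - y_mean) / y_std
# after training, invert the target scaling:
# y_pred = model.predict(x_scaled) * y_std + y_mean
```

<p>Scaling the targets matters here too: an unscaled <code>y</code> encourages the network to collapse toward predicting the mean, which is exactly the flat orange line in the plot.</p>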
|
python|tensorflow|machine-learning|keras|lstm
| 2
|
2,798
| 35,094,743
|
How to covert np.ndarray into astropy.coordinates.Angle class?
|
<p>What is the quickest/most efficient way to convert a np.ndarray (importing numpy as np) into the astropy.coordinates.Angle class? I am having trouble keeping it as np.ndarray because the .wrap_at() operation will not work.</p>
|
<p>What exactly is your intention? The question is a bit ambiguous. If you are dealing with an <code>np.ndarray</code>, it is quite easy:</p>
<pre><code>from astropy.coordinates import Angle
import astropy.units as u
import numpy as np
angles = np.array([100,200,300,400])
angles_quantity = angles * u.degree  # Could also be u.radian, u.arcmin, etc.
Angle(angles_quantity).wrap_at('360d')
</code></pre>
<p>But I'm not really sure if that solves your problem.</p>
<p>Converting such an <code>Angle</code> object back to a simple <code>np.ndarray</code> can be done with the <code>.value</code> attribute:</p>
<pre><code>Angle(angles_quantity).wrap_at('360d').value # This returns a simple ndarray again.
</code></pre>
|
python|numpy|astropy
| 4
|
2,799
| 67,563,194
|
Calling Class Method in a pandas Dataframe
|
<p>I have a simple class with one method. How can I create 2 additional columns in a pandas dataframe where 1 column is a column of class objects and column 2 calls the class method. I've tried the below but it returns "Wrong number of items passed 4, placement implies 1"</p>
<pre><code>class test1:
def __init__(self, x, y):
self.x = x
self.y = y
def mult(self):
return self.x*self.y
data = {'x': [3, 2, 1, 0], 'y': [5, 6, 1, 2]}
fpd = pd.DataFrame.from_dict(data)
fpd['class'] = test1(fpd['x'], fpd['y'])
fpd['method'] = fpd.apply(lambda x: x['class'].mult(), axis=1)
</code></pre>
<p>what I'd like it to return:</p>
<pre><code> x y class method
0 3 5 <__main__.test1 object at 0x000001E84ED6C388> 15
1 2 6 <__main__.test1 object at 0x000001E84ED6C388> 12
2 1 1 <__main__.test1 object at 0x000001E84ED6C388> 1
3 0 2 <__main__.test1 object at 0x000001E84ED6C388> 0
</code></pre>
|
<pre><code>fpd['class'] = test1(fpd['x'], fpd['y'])
fpd['method'] = fpd['class'].apply(lambda x: x.mult()).iloc[0]
fpd
</code></pre>
<p><strong>Output</strong></p>
<pre><code> x y class method
0 3 5 <__main__.test1 object at 0x1205bd3a0> 15
1 2 6 <__main__.test1 object at 0x1205bd3a0> 12
2 1 1 <__main__.test1 object at 0x1205bd3a0> 1
3 0 2 <__main__.test1 object at 0x1205bd3a0> 0
</code></pre>
<p><strong>Explanation</strong><br>
One way is to apply the <code>mult</code> method on the <code>class</code> column; <code>transform</code> can also be used instead of <code>apply</code>.</p>
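<p>A variation worth noting: building one object per row (rather than one shared object holding whole Series) makes each cell's <code>mult()</code> return a scalar directly, with no index tricks needed:</p>

```python
import pandas as pd

class test1:
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def mult(self):
        return self.x * self.y

fpd = pd.DataFrame({'x': [3, 2, 1, 0], 'y': [5, 6, 1, 2]})
# one instance per row instead of one instance holding whole Series
fpd['class'] = [test1(x, y) for x, y in zip(fpd['x'], fpd['y'])]
fpd['method'] = fpd['class'].apply(lambda obj: obj.mult())
print(fpd['method'].tolist())  # [15, 12, 1, 0]
```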
|
python|pandas
| 0
|