| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
374,500
| 48,248,816
|
how to create and save a pycharm project from an existing python program?
|
<p>I recently installed TensorFlow on an Ubuntu 14.04 machine using a virtual environment and installed PyCharm to analyze TensorFlow Python programs. </p>
<p>I downloaded <code>mnist_softmax.py</code>, the first tutorial program, under <code>~/TF</code>. I opened it with PyCharm and set the Python interpreter to the one in the virtual environment. I can run it, set breakpoints, and step through it. </p>
<p>OK, then I exit PyCharm. When I start PyCharm again, the recent project list is shown, but the location is <code>/tmp/mnist_softmax.py</code>, not <code>~/TF/mnist_softmax.py</code>, and of course if I try to open it (<code>/tmp/mnist_softmax.py</code>), it complains that the file is not there. </p>
<p>How can I save <code>mnist_softmax.py</code> as a PyCharm project? I couldn't find the submenu in the <code>File</code> menu. I tried <code>Save All</code> before exiting, but the result was the same and no <code>.idea</code> folder is under <code>~/TF</code>. </p>
<p>How can I do it?</p>
|
<p>I found that creating a project and then adding the source file to it makes PyCharm remember it as a previous working project in the recent project list.</p>
|
python|tensorflow|pycharm
| 0
|
374,501
| 48,370,603
|
drop columns in pandas dataframe based on mask
|
<p>I have a dataframe with a varying number of values in each column. Using the following code from another post, I created a mask that tells me how many non-null values are in each column, and I get the results below.</p>
<pre><code>count_year_mask = df_mth_return.notnull().sum()
results in series like this
AAPL US Equity 312
GOOGL US Equity 161
GOOG US Equity 45
MSFT US Equity 312
AMZN US Equity 248
FB US Equity 68
</code></pre>
<p>I then want to delete all the columns in df_mth_return that are LESS THAN 180 from the above series. I want the DF to only have columns with > 180 numbers. So GOOGL, GOOG and FB would be eliminated. I tried this code and got the following error</p>
<pre><code>df_mth_return.drop(np.where(count_year_mask<180))
ValueError: Buffer has wrong number of dimensions (expected 1, got 3)
</code></pre>
<p>This seems like a simple mask, so I am not sure what I am doing wrong. Please help if you can.</p>
|
<p>You can filter columns with <code>loc</code>:</p>
<pre><code>df_mth_return.loc[:, count_year_mask>=180]
</code></pre>
<p>Or:</p>
<pre><code>df_mth_return.loc[:, ~(count_year_mask<180)]
</code></pre>
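<p>For completeness, here is a hedged sketch of a <code>drop</code>-based fix as well. The original call failed because <code>np.where</code> returns a tuple of positional indices rather than column labels, while <code>drop</code> expects labels; the stand-in data below is illustrative:</p>
<pre><code>import numpy as np
import pandas as pd

# illustrative stand-in for the real monthly-returns frame
df_mth_return = pd.DataFrame({'AAPL US Equity': np.random.randn(312),
                              'GOOG US Equity': np.r_[np.random.randn(45), [np.nan] * 267],
                              'MSFT US Equity': np.random.randn(312)})

count_year_mask = df_mth_return.notnull().sum()

# pass the column labels (an Index), not the positions returned by np.where
cols_to_drop = count_year_mask[count_year_mask < 180].index
df_mth_return = df_mth_return.drop(cols_to_drop, axis=1)
</code></pre>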
|
python|pandas
| 3
|
374,502
| 48,226,221
|
What is the function in TensorFlow that is equivalent to expand() in PyTorch?
|
<p>Let's say I have a 2 x 3 matrix and I want to create a 6 x 2 x 3 matrix where each element in the first dimension is the original 2 x 3 matrix.</p>
<p>In PyTorch, I can do this:</p>
<pre><code>import torch
from torch.autograd import Variable
import numpy as np
x = np.array([[1, 2, 3], [4, 5, 6]])
x = Variable(torch.from_numpy(x))
# y is the desired result
y = x.unsqueeze(0).expand(6, 2, 3)
</code></pre>
<p>What is the equivalent way to do this in TensorFlow? I know <code>unsqueeze()</code> is equivalent to <code>tf.expand_dims()</code>, but I don't think TensorFlow has anything equivalent to <code>expand()</code>. I'm thinking of using <code>tf.concat</code> on a list of the 1 x 2 x 3 tensors but am not sure if this is the best way to do it.</p>
|
<p>The equivalent of <strong>PyTorch</strong>'s <code>expand</code> in <strong>TensorFlow</strong> is <code>tf.broadcast_to</code>.</p>
<p>Docs: <a href="https://www.tensorflow.org/api_docs/python/tf/broadcast_to" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/broadcast_to</a></p>
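<p>A minimal sketch mirroring the PyTorch snippet above, assuming a TensorFlow version that provides <code>tf.broadcast_to</code> (on older versions, <code>tf.tile</code> after <code>tf.expand_dims</code> gives the same result):</p>
<pre><code>import numpy as np
import tensorflow as tf

x = tf.constant(np.array([[1, 2, 3], [4, 5, 6]]))      # shape (2, 3)
y = tf.broadcast_to(tf.expand_dims(x, 0), [6, 2, 3])   # shape (6, 2, 3)

with tf.Session() as sess:
    print(sess.run(y).shape)   # (6, 2, 3)
</code></pre>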
|
python|tensorflow|pytorch
| 7
|
374,503
| 48,414,408
|
Find accuracy of DNNRegressor with tensorboard
|
<p>Is there a way to show the accuracy of this DNNRegression model after each iteration in tensorboard? The only way I have seen it is using the "session" method, not using tf.estimator. Also, is there is a way to find the final accuracy of the model without resorting to doing it by hand? I tried the evaluation method, but the dictionary it returns doesn't have an "accuracy" key.</p>
<pre><code>import numpy as np
import tensorflow as tf
import _pickle as cPickle
with open("var_x.txt", "rb") as fp: # Unpickling
var_x = cPickle.load(fp)
with open("var_y.txt", "rb") as fp: # Unpickling
var_y = cPickle.load(fp)
with open("var_x_test.txt", "rb") as fp: # Unpickling
var_x_test = cPickle.load(fp)
with open("var_y_test.txt", "rb") as fp: # Unpickling
var_y_test = cPickle.load(fp)
test_set = tf.contrib.learn.datasets.base.load_csv_with_header(
filename="test.csv",
target_dtype=np.float64,
features_dtype=np.float64)
feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]
estimator = tf.estimator.DNNRegressor(feature_columns=feature_columns, hidden_units=[1024, 512, 256])
# define our data sets
x_train = np.array(var_x)
y_train = np.array(var_y)
x_test = np.array(var_x_test)
y_test = np.array(var_y_test)
input_fn = tf.estimator.inputs.numpy_input_fn(
{"x": x_train}, y_train, batch_size=4, num_epochs=60, shuffle=True)
# train
estimator.train(input_fn=input_fn, steps=1000)
#TESTING
prediction_input_fn= tf.estimator.inputs.numpy_input_fn(
x ={"x":x_test},
num_epochs=1,
shuffle=False
)
predictions = list(estimator.predict(input_fn=prediction_input_fn))
s=0
for i in range(len(predictions)):
print(str(int(abs(round(predictions[i]['predictions'][0]))))+"\n")
if (int(abs(round(predictions[i]['predictions'][0]))) == y_test[i]):
s+=1
print(s)
</code></pre>
|
<p>To see the final metrics you need to call <code>estimator.evaluate(...)</code>, which returns a dict of evaluation metrics (loss, average loss, and any custom metrics you add).</p>
<p>Check this link:</p>
<p><a href="https://www.tensorflow.org/versions/master/api_docs/python/tf/estimator/DNNRegressor" rel="nofollow noreferrer">https://www.tensorflow.org/versions/master/api_docs/python/tf/estimator/DNNRegressor</a></p>
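<p>A minimal sketch of the evaluation call, assuming the <code>estimator</code>, <code>x_test</code> and <code>y_test</code> from the question are already defined. Note that <code>DNNRegressor</code> reports losses rather than accuracy, so the dict will contain keys like <code>loss</code> and <code>average_loss</code>; an accuracy-style number still has to be computed manually or added as a custom metric:</p>
<pre><code>eval_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": x_test}, y=y_test, num_epochs=1, shuffle=False)

metrics = estimator.evaluate(input_fn=eval_input_fn)
print(metrics)   # e.g. {'loss': ..., 'average_loss': ..., 'global_step': ...}
</code></pre>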
|
tensorflow|tensorboard|tensorflow-estimator
| 0
|
374,504
| 48,115,731
|
How to compare names with and without orthographic accent in pandas?
|
<p>In Python 3 and pandas I have a dataframe with full names. My default encoding is utf-8. The names are in Portuguese and therefore contain accented characters.</p>
<pre><code>perfis_deputados.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 513 entries, 0 to 512
Data columns (total 10 columns):
data_nascimento 513 non-null object
e_mail 513 non-null object
link_api 513 non-null object
link_foto 513 non-null object
nome_completo 513 non-null object
nome_eleitoral 513 non-null object
partido 513 non-null object
sexo 513 non-null object
telefone 513 non-null object
uf 513 non-null object
dtypes: object(10)
memory usage: 40.2+ KB
</code></pre>
<p>The columns "nome_completo" and "nome_eleitoral" have cases like:</p>
<pre><code>AELTON JOSÉ DE FREITAS
JOÃO ALBERTO FRAGA SILVA
ALTINEU CÔRTES
</code></pre>
<p>I need to compare this dataframe with another one - comparing the names. But the second dataframe has names without any accents, so the names appear like this, for example:</p>
<pre><code>AELTON JOSE DE FREITAS
JOAO ALBERTO FRAGA SILVA
ALTINEU CORTES
</code></pre>
<p>Is there a way to compare them while ignoring the accents? Or to remove the accents from the column I'm analyzing?</p>
|
<p>You can define a function and apply it to your DataFrame like this:</p>
<pre><code>import unidecode

def f(s):
    return unidecode.unidecode(s)

perfis_deputados["nome_completo"] = perfis_deputados["nome_completo"].apply(f)
</code></pre>
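<p>If installing <code>unidecode</code> is not an option, a standard-library sketch using <code>unicodedata</code> NFKD normalization handles the accented Latin characters shown above:</p>
<pre><code>import unicodedata
import pandas as pd

def strip_accents(s):
    # decompose accented characters and drop the combining marks
    return ''.join(c for c in unicodedata.normalize('NFKD', s)
                   if not unicodedata.combining(c))

nomes = pd.Series(['AELTON JOSÉ DE FREITAS', 'JOÃO ALBERTO FRAGA SILVA', 'ALTINEU CÔRTES'])
print(nomes.apply(strip_accents))
</code></pre>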
|
python|pandas|spelling
| 1
|
374,505
| 48,022,794
|
Tensorflow: difference get_tensor_by_name vs get_operation_by_name?
|
<p>The answer <a href="https://stackoverflow.com/questions/38673771/tensorflow-difference-of-get-collection-get-tensor-by-name-and-get-operation-b">here</a> says that one returns an operation while the other returns a tensor. That is pretty obvious from the name and from the documentation. However, suppose I do the following:</p>
<pre><code>logits = tf.add(tf.matmul(inputs, weights), biases, name='logits')
</code></pre>
<p>I am following the pattern described in <a href="https://www.tensorflow.org/get_started/mnist/mechanics" rel="nofollow noreferrer">Tensorflow Mechanics 101</a>. Should I restore it as an operation or as a tensor? I am afraid that if I restore it as a tensor I will only get the last computed values for the logits; nonetheless, the post <a href="http://cv-tricks.com/tensorflow-tutorial/save-restore-tensorflow-models-quick-complete-tutorial/" rel="nofollow noreferrer">here</a> seems to suggest that there is no difference, or that I should just use <code>get_tensor_by_name</code>. The idea is to compute the logits for a new set of inputs and then make predictions accordingly.</p>
|
<p>Short answer: you can use both, <code>get_operation_by_name()</code> and <code>get_tensor_by_name()</code>. Long answer:</p>
<h2><code>tf.Operation</code></h2>
<p>When you call</p>
<pre><code>op = graph.get_operation_by_name('logits')
</code></pre>
<p>... it returns an instance of type <a href="https://www.tensorflow.org/api_docs/python/tf/Operation" rel="noreferrer"><code>tf.Operation</code></a>, which is a node in the computational graph, which performs some op on its inputs and produces one or more outputs. In this case, it's a <code>plus</code> op.</p>
<p>One can always evaluate an op in a session, and if this op needs some placeholder values to be fed in, the engine will <em>force</em> you to provide them. Some ops, e.g. reading a variable, don't have any dependencies and can be executed without placeholders.</p>
<p>In your case, (I assume) <code>logits</code> are computed from the input placeholder <code>x</code>, so <code>logits</code> doesn't have any value without a particular <code>x</code>.</p>
<h2><code>tf.Tensor</code></h2>
<p>On the other hand, calling</p>
<pre><code>tensor = graph.get_tensor_by_name('logits:0')
</code></pre>
<p>... returns an object <code>tensor</code>, which has the type <a href="https://www.tensorflow.org/api_docs/python/tf/Tensor" rel="noreferrer"><code>tf.Tensor</code></a>:</p>
<blockquote>
<p>Represents one of the outputs of an <code>Operation</code>.</p>
<p>A <code>Tensor</code> is a symbolic handle to one of the outputs of an <code>Operation</code>.
It does not hold the values of that operation's output, but instead
provides a means of computing those values in a TensorFlow <code>tf.Session</code>.</p>
</blockquote>
<p>So, in other words, tensor evaluation is the same as operation execution, and all the restrictions described above apply as well.</p>
<p>Why is <code>Tensor</code> useful? A <code>Tensor</code> can be passed as an input to another <code>Operation</code>, thus forming the graph. But in your case, you can assume that both entities mean the same.</p>
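<p>A minimal sketch of restoring a saved graph and running the tensor by name; the checkpoint path and the placeholder name <code>inputs:0</code> are illustrative, not taken from the question:</p>
<pre><code>import tensorflow as tf

with tf.Session() as sess:
    # restore the graph definition and the trained weights (paths are placeholders)
    saver = tf.train.import_meta_graph('model.ckpt.meta')
    saver.restore(sess, 'model.ckpt')

    graph = tf.get_default_graph()
    logits = graph.get_tensor_by_name('logits:0')   # first output of the 'logits' op
    x = graph.get_tensor_by_name('inputs:0')        # hypothetical input placeholder

    new_logits = sess.run(logits, feed_dict={x: new_inputs})  # new_inputs: your new batch
</code></pre>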
|
tensorflow|deep-learning|tensorflow-serving
| 5
|
374,506
| 48,118,436
|
Numpy attributes not recognized in Numba
|
<p>Numba offers JIT for Python. In its documentation it says "One objective of Numba is having a seamless integration with NumPy."</p>
<p>So why is using even some of the simplest features from NumPy not possible:</p>
<pre><code>import numpy as np
from numba import *
@jit(nopython=True)
def testfun(x):
    y = np.size(x)
    return y

x = np.array([1, 2, 3], dtype=float)
testfun(x)
</code></pre>
<p>When I run this code, I get the error "Unknown attribute 'size' of type Module," which means attribute 'size' is not recognized. </p>
<p>Numba understands calls to NumPy ufuncs. I assumed simple NumPy functions such as size, shape, sum, reshape, etc. are ufuncs. Of course, removing <code>(nopython=True)</code> works, but that falls back to the slow object-mode run with pyobjects.</p>
|
<p>The following works:</p>
<pre><code>import numba as nb

@nb.jit(nopython=True)
def testfun(x):
    y = x.size
    return y
</code></pre>
<p>Certain attributes are supported; check the documentation to see whether the corresponding function or attribute is supported in nopython mode:</p>
<p><a href="http://numba.pydata.org/numba-doc/latest/reference/numpysupported.html#attributes" rel="nofollow noreferrer">http://numba.pydata.org/numba-doc/latest/reference/numpysupported.html#attributes</a></p>
<p>The documentation is pretty complete regarding what parts of numpy numba supports. </p>
|
python|numpy|scipy|jit|numba
| 4
|
374,507
| 48,044,405
|
split pandas column prepending with actual column name
|
<blockquote>
<pre><code>>>>table1
col1 col2
row1 A A
row2 B A
row3 A B
row4 B A
</code></pre>
<p>I want to convert only one column in the above dataframe into the following DataFrame, using one-hot encoding or any other method.</p>
<pre><code>>>>table1
col1_A col1_B col2
row1 1 0 A
row2 0 1 A
row3 1 0 B
row4 0 1 A
</code></pre>
<p>Thank you in advance</p>
</blockquote>
|
<p>Use <code>pd.get_dummies</code></p>
<pre><code>In [211]: pd.get_dummies(table1)
Out[211]:
col1_A col1_B col2_A col2_B
row1 1 0 1 0
row2 0 1 1 0
row3 1 0 0 1
row4 0 1 1 0
</code></pre>
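<p>To one-hot encode only <code>col1</code> and keep <code>col2</code> as-is, as asked, the <code>columns</code> argument can be used. A short sketch with the same example data:</p>
<pre><code>import pandas as pd

table1 = pd.DataFrame({'col1': list('ABAB'), 'col2': list('AABA')},
                      index=['row1', 'row2', 'row3', 'row4'])

print(pd.get_dummies(table1, columns=['col1']))
#       col2  col1_A  col1_B
# row1     A       1       0
# row2     A       0       1
# row3     B       1       0
# row4     A       0       1
</code></pre>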
|
python|python-3.x|pandas|dataframe
| 1
|
374,508
| 48,196,567
|
How to find negative imaginary parts of values in an array then turning them to positive?
|
<p>I have a function <code>a=x*V</code> where <code>x</code> assumes thousands of values as <code>x = arange(1,1000,0.1)</code> and <code>V</code> is a combination of other constants. These make <code>a</code> always complex (has nonzero real and imaginary parts). However, because <code>a</code> depends on other values, the <code>imag(a)</code> can be negative for some <code>x</code>'s. </p>
<p>For what I am doing, however, I need <code>imag(a)</code> to be always positive, so I need to take the negative values and turn them into positive. </p>
<p>I have tried doing</p>
<pre><code>if imag(a) < 0:
    imag(a) = -1*imag(a)
</code></pre>
<p>That didn't seem to work because it gives me the error: <code>SyntaxError: Can't assign to function call</code>. I thought it was because it's an array so I tried <code>any()</code> and <code>all()</code>, but that didn't work either. </p>
<p>I'm out of options now.</p>
|
<p>IIUC:</p>
<pre><code>In [35]: a = np.array([1+1j, 2-2j, 3+3j, 4-4j])
In [36]: a.imag *= np.where(a.imag < 0, -1, 1)
In [37]: a
Out[37]: array([ 1.+1.j, 2.+2.j, 3.+3.j, 4.+4.j])
</code></pre>
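<p>An even shorter alternative is to overwrite the imaginary part with its absolute value; NumPy lets you assign to the <code>.imag</code> view of a complex array (same example data as above):</p>
<pre><code>import numpy as np

a = np.array([1+1j, 2-2j, 3+3j, 4-4j])
a.imag = np.abs(a.imag)   # flip any negative imaginary parts to positive
print(a)                  # [1.+1.j 2.+2.j 3.+3.j 4.+4.j]
</code></pre>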
|
python|numpy|math
| 2
|
374,509
| 48,383,962
|
Python: eliminate extra comma (Error tokenizing data. C error: Expected 3 fields in line 29, saw 4)
|
<p>The error is caused by 'Food, Beverage & Tobacco', which has an extra comma that makes pandas unable to read the csv file.
It causes this error:</p>
<blockquote>
<p>Error tokenizing data. C error: Expected 3 fields in line 29, saw 4</p>
</blockquote>
<p>How can I elegantly eliminate the extra comma in the csv file for 'GICS industry group' (not only the case where the comma comes right after 'Food')?</p>
<p>Here is my code:</p>
<pre><code>#!/usr/bin/env python2.7
print "hello from python 2"
import pandas as pd
from lxml import html
import requests
import urllib2
import os
url = 'http://www.asx.com.au/asx/research/ASXListedCompanies.csv'
response = urllib2.urlopen(url)
html = response.read()
#html = html.replace('"','')
with open('asxtest.csv', 'wb') as f:
    f.write(html)
with open("asxtest.csv", 'r') as f:
    with open("asx.csv", 'w') as f1:
        f.next()  # skip header line
        f.next()  # skip 2nd line
        for line in f:
            if line.count(',') > 2:
                line[2] = 'Food Beverage & Tobacco'
            f1.write(line)
os.remove('asxtest.csv')
df_api = pd.read_csv('asx.csv')
df_api.rename(columns={'Company name': 'Company', 'ASX code': 'Stock','GICS industry group': 'Industry'}, inplace=True)
</code></pre>
|
<p>The file from the URL in your post contains additional commas for some items in the <code>GICS industry group</code> column. The first occurs at line 31 in the file:</p>
<pre><code>ABUNDANT PRODUCE LIMITED,ABT,Food, Beverage & Tobacco
</code></pre>
<p>Normally, the 3rd item should be surrounded by quotes to escape breaking on the comma, such as:</p>
<pre><code>ABUNDANT PRODUCE LIMITED,ABT,"Food, Beverage & Tobacco"
</code></pre>
<p>For this situation, because the first 2 columns appear to be clean, you can merge any additional text into the 3rd field. After this cleaning, load it into a data frame.</p>
<p>You can do this with a generator that will pull out and clean each line one at a time. The <code>pd.DataFrame</code> constructor will read in the data and create a data frame.</p>
<pre><code>import pandas as pd
def merge_last(file_name, skip_lines=0):
    with open(file_name, 'r') as fp:
        for i, line in enumerate(fp):
            if i < skip_lines:
                continue
            x, y, *z = line.strip().split(',')
            yield (x, y, ','.join(z))
# create a generator to clean the lines, skipping the first 2
gen = merge_last('ASXListedCompanies.csv', 2)
# get the column names
header = next(gen)
# create the data frame
df = pd.DataFrame(gen, columns=header)
df.head()
</code></pre>
<p>returns:</p>
<pre><code> Company name ASX code GICS industry group
0 MOQ LIMITED MOQ Software & Services
1 1-PAGE LIMITED 1PG Software & Services
2 1300 SMILES LIMITED ONT Health Care Equipment & Services
3 1ST GROUP LIMITED 1ST Health Care Equipment & Services
4 333D LIMITED T3D Commercial & Professional Services
</code></pre>
<p>And the rows with the extra commas are preserved:</p>
<pre><code>df.loc[27:30]
# returns:
Company name ASX code GICS industry group
27 ABUNDANT PRODUCE LIMITED ABT Food, Beverage & Tobacco
28 ACACIA COAL LIMITED AJC Energy
29 ACADEMIES AUSTRALASIA GROUP LIMITED AKG Consumer Services
30 ACCELERATE RESOURCES LIMITED AX8 Class Pend
</code></pre>
<hr>
<p>Here is a more generalized generator that will merge after a given number of columns:</p>
<pre><code>def merge_last(file_name, merge_after_col=2, skip_lines=0):
    with open(file_name, 'r') as fp:
        for i, line in enumerate(fp):
            if i < skip_lines:
                continue
            spl = line.strip().split(',')
            yield (*spl[:merge_after_col], ','.join(spl[merge_after_col:]))
</code></pre>
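<p>Usage is the same as before; a short sketch, assuming the same downloaded file and the <code>pandas</code> import from above:</p>
<pre><code>gen = merge_last('ASXListedCompanies.csv', merge_after_col=2, skip_lines=2)
header = next(gen)                      # the first yielded row holds the column names
df = pd.DataFrame(gen, columns=header)
</code></pre>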
|
python|pandas|csv
| 2
|
374,510
| 48,181,613
|
How can I find the equation of a line passing through 2 points, and a point at a given distance along that line - python
|
<p>I have two points:</p>
<pre><code>(283,240,302)
(150,150, 50)
</code></pre>
<p>I want to know the equation of the line through the two points, and I want to find the point (x,y,z) at distance R from the point (150,150,50).</p>
<p><img src="https://i.stack.imgur.com/NCrtj.png" alt="enter image description here"></p>
|
<p>The easiest way would be using vectors: compute the <code>AB</code> vector, then use proportionality to compute the <code>AC</code> vector. Then compute C's position from <code>AC</code>:</p>
<pre><code>A = (150, 150, 50)
B = (283, 240, 302)

from math import sqrt

AB = [A[i] - B[i] for i in range(3)]
length_AB = sqrt(sum([u*u for u in AB]))
AC = [u*600/length_AB for u in AB]   # 600 is the desired distance R along the line
C = [AC[i]+A[i] for i in range(3)]
</code></pre>
|
python|arrays|algorithm|numpy|jupyter-notebook
| 0
|
374,511
| 48,321,437
|
Get minimum value of a column by comparing previous n rows in Pandas
|
<p>I want to get the min value of a column by comparing the value in the current row with the values in the previous 2 rows. I know this can be done by creating 2 columns with <code>shift(-1)</code> and <code>shift(-2)</code> and returning the min value of the row, but I would like to know if there is a better way, especially if I extend the range from 2 rows to n rows.</p>
<p>For example, in the dataset below:</p>
<pre><code> df= pd.DataFrame([12,11,4,15,6,],columns=['score'])
>>> df
score
0 12
1 11
2 4
3 15
4 6
</code></pre>
<p>Create new columns prv_score_1 and prv_score_2 for the shifted values:</p>
<pre><code>>>> df['prv_score_1'] = df['score'].shift(-1)
>>> df['prv_score_2'] = df['score'].shift(-2)
>>> df
score prv_score_1 prv_score_2
0 12 11.0 4.0
1 11 4.0 15.0
2 4 15.0 6.0
3 15 6.0 NaN
4 6 NaN NaN
</code></pre>
<p>Create a Minimum column and take the minimum value of each row:</p>
<pre><code>>>> df['Minimum'] = df.min(1)
>>> df
score prv_score_1 prv_score_2 Minimum
0 12 11.0 4.0 4.0
1 11 4.0 15.0 4.0
2 4 15.0 6.0 4.0
3 15 6.0 NaN 6.0
4 6 NaN NaN 6.0
</code></pre>
<p>Any way to do this better?</p>
|
<p>You need a rolling min with window 3, i.e.:</p>
<pre><code>df['new'] = df['score'][::-1].rolling(3,min_periods=1).min()[::-1]
score new
0 12.0 4.0
1 11.0 4.0
2 4.0 4.0
3 15.0 6.0
4 6.0 6.0
</code></pre>
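<p>On newer pandas versions (1.0 and later, so not available when this question was asked) the double reversal can be avoided with a forward-looking window indexer; a sketch with the same data:</p>
<pre><code>import pandas as pd

df = pd.DataFrame([12, 11, 4, 15, 6], columns=['score'])
indexer = pd.api.indexers.FixedForwardWindowIndexer(window_size=3)
df['new'] = df.rolling(window=indexer, min_periods=1).min()['score']
print(df)
</code></pre>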
|
python|pandas
| 3
|
374,512
| 48,373,962
|
pandas groupby not working as expected
|
<p>I have a dataframe:</p>
<pre><code> >>> d6
Out[57]:
Date sym Last M1 M2 dist code
52735 2017-11-23 C 0.10 4.72 -9.27 677.93 4250 - 12/15/2017
52736 2017-11-23 P 684.20 1.43 -106.09 677.93 4250 - 12/15/2017
53144 2017-11-23 C 0.10 4.49 -9.37 727.93 4300 - 12/15/2017
53145 2017-11-23 P 734.20 0.69 -105.02 727.93 4300 - 12/15/2017
52738 2017-11-23 P 784.20 nan nan 777.93 4350 - 12/15/2017
52737 2017-11-23 C 0.10 4.29 -9.46 777.93 4350 - 12/15/2017
53081 2017-11-23 P 834.20 nan nan 827.93 4400 - 12/15/2017
53019 2017-11-23 C 0.10 4.12 -9.55 827.93 4400 - 12/15/2017
52747 2017-11-23 C 0.10 3.96 -9.64 877.93 4450 - 12/15/2017
52748 2017-11-23 P 884.20 nan nan 877.93 4450 - 12/15/2017
52605 2017-11-23 C 0.10 3.81 -9.71 927.93 4500 - 12/15/2017
52606 2017-11-23 P 934.20 nan nan 927.93 4500 - 12/15/2017
52753 2017-11-23 C 0.10 3.68 -9.79 977.93 4550 - 12/15/2017
52754 2017-11-23 P 984.30 2.04 -109.96 977.93 4550 - 12/15/2017
53020 2017-11-23 C 0.10 3.56 -9.86 1027.93 4600 - 12/15/2017
53082 2017-11-23 P 1034.30 1.55 -108.99 1027.93 4600 - 12/15/2017
54698 2017-11-23 P 1134.30 0.53 -106.79 1127.93 4700 - 12/15/2017
54687 2017-11-23 C 0.10 3.35 -9.99 1127.93 4700 - 12/15/2017
52337 2017-11-23 C 0.10 3.17 -10.11 1227.93 4800 - 12/15/2017
52338 2017-11-23 P 1234.30 nan nan 1227.93 4800 - 12/15/2017
54699 2017-11-23 P 1334.30 nan nan 1327.93 4900 - 12/15/2017
54688 2017-11-23 C 0.10 3.01 -10.22 1327.93 4900 - 12/15/2017
52191 2017-11-23 P 0.10 0.55 -11.15 -3072.07 500 - 12/15/2017
52190 2017-11-23 C 3066.80 0.29 82.60 -3072.07 500 - 12/15/2017
52339 2017-11-23 C 0.10 2.87 -10.32 1427.93 5000 - 12/15/2017
52340 2017-11-23 P 1434.40 1.26 -110.86 1427.93 5000 - 12/15/2017
54689 2017-11-23 C 0.10 2.75 -10.41 1527.93 5100 - 12/15/2017
54700 2017-11-23 P 1534.40 0.45 -108.55 1527.93 5100 - 12/15/2017
52341 2017-11-23 C 0.10 2.65 -10.50 1627.93 5200 - 12/15/2017
52342 2017-11-23 P 1634.40 nan nan 1627.93 5200 - 12/15/2017
52439 2017-11-23 C 0.10 2.55 -10.58 1727.93 5300 - 12/15/2017
52440 2017-11-23 P 1734.50 1.72 -114.79 1727.93 5300 - 12/15/2017
52343 2017-11-23 C 0.10 2.46 -10.66 1827.93 5400 - 12/15/2017
52344 2017-11-23 P 1834.50 1.08 -112.69 1827.93 5400 - 12/15/2017
54701 2017-11-23 P 1934.50 0.40 -110.30 1927.93 5500 - 12/15/2017
54690 2017-11-23 C 0.10 2.38 -10.73 1927.93 5500 - 12/15/2017
52346 2017-11-23 P 2034.50 nan nan 2027.93 5600 - 12/15/2017
52345 2017-11-23 C 0.10 2.31 -10.80 2027.93 5600 - 12/15/2017
54691 2017-11-23 C 0.10 2.24 -10.87 2127.93 5700 - 12/15/2017
54702 2017-11-23 P 2134.60 1.52 -116.68 2127.93 5700 - 12/15/2017
52348 2017-11-23 P 2234.60 0.97 -114.51 2227.93 5800 - 12/15/2017
52347 2017-11-23 C 0.10 2.18 -10.93 2227.93 5800 - 12/15/2017
54703 2017-11-23 P 2334.60 0.37 -112.06 2327.93 5900 - 12/15/2017
54692 2017-11-23 C 0.10 2.13 -10.99 2327.93 5900 - 12/15/2017
52192 2017-11-23 C 2966.80 0.46 80.38 -2972.07 600 - 12/15/2017
52193 2017-11-23 P 0.10 0.61 -11.16 -2972.07 600 - 12/15/2017
52349 2017-11-23 C 0.10 2.08 -11.05 2427.93 6000 - 12/15/2017
52350 2017-11-23 P 2434.60 nan nan 2427.93 6000 - 12/15/2017
52194 2017-11-23 C 2866.70 nan nan -2872.07 700 - 12/15/2017
52195 2017-11-23 P 0.10 0.67 -11.16 -2872.07 700 - 12/15/2017
54449 2017-11-23 C 0.10 1.71 -11.52 3427.93 7000 - 12/15/2017
54479 2017-11-23 P 3434.90 0.77 -119.84 3427.93 7000 - 12/15/2017
57740 2017-11-24 C 787.75 nan nan -781.23 2800 - 11/24/2017
57742 2017-11-24 P 0.01 nan nan -781.23 2800 - 11/24/2017
57741 2017-11-24 C 737.75 nan nan -731.23 2850 - 11/24/2017
57743 2017-11-24 P 0.01 nan nan -731.23 2850 - 11/24/2017
57730 2017-11-24 C 687.75 nan nan -681.23 2900 - 11/24/2017
57735 2017-11-24 P 0.01 nan nan -681.23 2900 - 11/24/2017
57731 2017-11-24 C 637.75 nan nan -631.23 2950 - 11/24/2017
57736 2017-11-24 P 0.01 nan nan -631.23 2950 - 11/24/2017
57732 2017-11-24 C 587.75 nan nan -581.23 3000 - 11/24/2017
57737 2017-11-24 P 0.01 nan nan -581.23 3000 - 11/24/2017
57733 2017-11-24 C 537.75 nan nan -531.23 3050 - 11/24/2017
57738 2017-11-24 P 0.01 nan nan -531.23 3050 - 11/24/2017
57727 2017-11-24 P 0.20 7.77 -25.05 -431.23 3150 - 12/08/2017
57728 2017-11-24 P 0.30 11.49 -34.45 -381.23 3200 - 12/08/2017
57734 2017-11-24 C 362.75 nan nan -356.23 3225 - 11/24/2017
57739 2017-11-24 P 0.01 nan nan -356.23 3225 - 11/24/2017
57729 2017-11-24 P 0.40 14.84 -43.17 -356.23 3225 - 12/08/2017
57826 2017-11-24 C 234.50 140.14 -124.53 -231.23 3350 - 12/22/2017
57845 2017-11-24 P 5.70 140.19 -156.23 -231.23 3350 - 12/22/2017
57827 2017-11-24 C 210.50 160.38 -138.61 -206.23 3375 - 12/22/2017
57846 2017-11-24 P 6.70 160.34 -170.27 -206.23 3375 - 12/22/2017
</code></pre>
<p>Although I have shown only 2 dates above, the dataframe has a lot of dates. Each date has an entry for a few "codes". Each code on a given date has 2 entries - one for symbol C and one for P. If I have M1/M2 entries for either C or P, I want to fill the "nan" with that value for that code/day. If both C and P are nan for a given code+day, I leave it as nan. </p>
<p>I am currently doing it as follows:</p>
<pre><code>for code in d1.code:
    x_df = d1[d1.code == code]
    x_df = x_df.groupby(['Date'], as_index=False).ffill().bfill()
    d1[d1.code == code] = x_df
</code></pre>
<p>This works but takes a long time. Here is the output for the df above:</p>
<pre><code>Out[62]:
Date sym Last M1 M2 dist code
52735 2017-11-23 C 0.10 4.72 -9.27 677.93 4250 - 12/15/2017
52736 2017-11-23 P 684.20 1.43 -106.09 677.93 4250 - 12/15/2017
53144 2017-11-23 C 0.10 4.49 -9.37 727.93 4300 - 12/15/2017
53145 2017-11-23 P 734.20 0.69 -105.02 727.93 4300 - 12/15/2017
52738 2017-11-23 P 784.20 4.29 -9.46 777.93 4350 - 12/15/2017
52737 2017-11-23 C 0.10 4.29 -9.46 777.93 4350 - 12/15/2017
53081 2017-11-23 P 834.20 4.12 -9.55 827.93 4400 - 12/15/2017
53019 2017-11-23 C 0.10 4.12 -9.55 827.93 4400 - 12/15/2017
52747 2017-11-23 C 0.10 3.96 -9.64 877.93 4450 - 12/15/2017
52748 2017-11-23 P 884.20 3.96 -9.64 877.93 4450 - 12/15/2017
52605 2017-11-23 C 0.10 3.81 -9.71 927.93 4500 - 12/15/2017
52606 2017-11-23 P 934.20 3.81 -9.71 927.93 4500 - 12/15/2017
52753 2017-11-23 C 0.10 3.68 -9.79 977.93 4550 - 12/15/2017
52754 2017-11-23 P 984.30 2.04 -109.96 977.93 4550 - 12/15/2017
53020 2017-11-23 C 0.10 3.56 -9.86 1027.93 4600 - 12/15/2017
53082 2017-11-23 P 1034.30 1.55 -108.99 1027.93 4600 - 12/15/2017
54698 2017-11-23 P 1134.30 0.53 -106.79 1127.93 4700 - 12/15/2017
54687 2017-11-23 C 0.10 3.35 -9.99 1127.93 4700 - 12/15/2017
52337 2017-11-23 C 0.10 3.17 -10.11 1227.93 4800 - 12/15/2017
52338 2017-11-23 P 1234.30 3.17 -10.11 1227.93 4800 - 12/15/2017
54699 2017-11-23 P 1334.30 3.01 -10.22 1327.93 4900 - 12/15/2017
54688 2017-11-23 C 0.10 3.01 -10.22 1327.93 4900 - 12/15/2017
52191 2017-11-23 P 0.10 0.55 -11.15 -3072.07 500 - 12/15/2017
52190 2017-11-23 C 3066.80 0.29 82.60 -3072.07 500 - 12/15/2017
52339 2017-11-23 C 0.10 2.87 -10.32 1427.93 5000 - 12/15/2017
52340 2017-11-23 P 1434.40 1.26 -110.86 1427.93 5000 - 12/15/2017
54689 2017-11-23 C 0.10 2.75 -10.41 1527.93 5100 - 12/15/2017
54700 2017-11-23 P 1534.40 0.45 -108.55 1527.93 5100 - 12/15/2017
52341 2017-11-23 C 0.10 2.65 -10.50 1627.93 5200 - 12/15/2017
52342 2017-11-23 P 1634.40 2.65 -10.50 1627.93 5200 - 12/15/2017
52439 2017-11-23 C 0.10 2.55 -10.58 1727.93 5300 - 12/15/2017
52440 2017-11-23 P 1734.50 1.72 -114.79 1727.93 5300 - 12/15/2017
52343 2017-11-23 C 0.10 2.46 -10.66 1827.93 5400 - 12/15/2017
52344 2017-11-23 P 1834.50 1.08 -112.69 1827.93 5400 - 12/15/2017
54701 2017-11-23 P 1934.50 0.40 -110.30 1927.93 5500 - 12/15/2017
54690 2017-11-23 C 0.10 2.38 -10.73 1927.93 5500 - 12/15/2017
52346 2017-11-23 P 2034.50 2.31 -10.80 2027.93 5600 - 12/15/2017
52345 2017-11-23 C 0.10 2.31 -10.80 2027.93 5600 - 12/15/2017
54691 2017-11-23 C 0.10 2.24 -10.87 2127.93 5700 - 12/15/2017
54702 2017-11-23 P 2134.60 1.52 -116.68 2127.93 5700 - 12/15/2017
52348 2017-11-23 P 2234.60 0.97 -114.51 2227.93 5800 - 12/15/2017
52347 2017-11-23 C 0.10 2.18 -10.93 2227.93 5800 - 12/15/2017
54703 2017-11-23 P 2334.60 0.37 -112.06 2327.93 5900 - 12/15/2017
54692 2017-11-23 C 0.10 2.13 -10.99 2327.93 5900 - 12/15/2017
52192 2017-11-23 C 2966.80 0.46 80.38 -2972.07 600 - 12/15/2017
52193 2017-11-23 P 0.10 0.61 -11.16 -2972.07 600 - 12/15/2017
52349 2017-11-23 C 0.10 2.08 -11.05 2427.93 6000 - 12/15/2017
52350 2017-11-23 P 2434.60 2.08 -11.05 2427.93 6000 - 12/15/2017
52194 2017-11-23 C 2866.70 0.67 -11.16 -2872.07 700 - 12/15/2017
52195 2017-11-23 P 0.10 0.67 -11.16 -2872.07 700 - 12/15/2017
54449 2017-11-23 C 0.10 1.71 -11.52 3427.93 7000 - 12/15/2017
54479 2017-11-23 P 3434.90 0.77 -119.84 3427.93 7000 - 12/15/2017
57740 2017-11-24 C 787.75 nan nan -781.23 2800 - 11/24/2017
57742 2017-11-24 P 0.01 nan nan -781.23 2800 - 11/24/2017
57741 2017-11-24 C 737.75 nan nan -731.23 2850 - 11/24/2017
57743 2017-11-24 P 0.01 nan nan -731.23 2850 - 11/24/2017
57730 2017-11-24 C 687.75 nan nan -681.23 2900 - 11/24/2017
57735 2017-11-24 P 0.01 nan nan -681.23 2900 - 11/24/2017
57731 2017-11-24 C 637.75 nan nan -631.23 2950 - 11/24/2017
57736 2017-11-24 P 0.01 nan nan -631.23 2950 - 11/24/2017
57732 2017-11-24 C 587.75 nan nan -581.23 3000 - 11/24/2017
57737 2017-11-24 P 0.01 nan nan -581.23 3000 - 11/24/2017
57733 2017-11-24 C 537.75 nan nan -531.23 3050 - 11/24/2017
57738 2017-11-24 P 0.01 nan nan -531.23 3050 - 11/24/2017
57727 2017-11-24 P 0.20 7.77 -25.05 -431.23 3150 - 12/08/2017
57728 2017-11-24 P 0.30 11.49 -34.45 -381.23 3200 - 12/08/2017
57734 2017-11-24 C 362.75 nan nan -356.23 3225 - 11/24/2017
57739 2017-11-24 P 0.01 nan nan -356.23 3225 - 11/24/2017
57729 2017-11-24 P 0.40 14.84 -43.17 -356.23 3225 - 12/08/2017
57826 2017-11-24 C 234.50 140.14 -124.53 -231.23 3350 - 12/22/2017
57845 2017-11-24 P 5.70 140.19 -156.23 -231.23 3350 - 12/22/2017
57827 2017-11-24 C 210.50 160.38 -138.61 -206.23 3375 - 12/22/2017
57846 2017-11-24 P 6.70 160.34 -170.27 -206.23 3375 - 12/22/2017
57828 2017-11-24 C 186.80 184.35 -154.72 -181.23 3400 - 12/22/2017
57847 2017-11-24 P 8.10 185.20 -187.99 -181.23 3400 - 12/22/2017
57829 2017-11-24 C 163.60 213.17 -174.17 -156.23 3425 - 12/22/2017
57848 2017-11-24 P 9.80 213.01 -205.82 -156.23 3425 - 12/22/2017
</code></pre>
<p>To try to make it faster, I tried the following:</p>
<pre><code>new_d1= d1.groupby(['code','Date'], as_index=False).ffill().bfill()
</code></pre>
<p>This doesn't work as expected (unlike the code above, which works). It seems to behave as if we are only grouping by Date and not "code". Here is the output:</p>
<pre><code>>>> new_d1
Out[59]:
Date sym Last M1 M2 dist code
52735 2017-11-23 C 0.10 4.72 -9.27 677.93 4250 - 12/15/2017
52736 2017-11-23 P 684.20 1.43 -106.09 677.93 4250 - 12/15/2017
53144 2017-11-23 C 0.10 4.49 -9.37 727.93 4300 - 12/15/2017
53145 2017-11-23 P 734.20 0.69 -105.02 727.93 4300 - 12/15/2017
52738 2017-11-23 P 784.20 4.29 -9.46 777.93 4350 - 12/15/2017
52737 2017-11-23 C 0.10 4.29 -9.46 777.93 4350 - 12/15/2017
53081 2017-11-23 P 834.20 4.12 -9.55 827.93 4400 - 12/15/2017
53019 2017-11-23 C 0.10 4.12 -9.55 827.93 4400 - 12/15/2017
52747 2017-11-23 C 0.10 3.96 -9.64 877.93 4450 - 12/15/2017
52748 2017-11-23 P 884.20 3.96 -9.64 877.93 4450 - 12/15/2017
52605 2017-11-23 C 0.10 3.81 -9.71 927.93 4500 - 12/15/2017
52606 2017-11-23 P 934.20 3.81 -9.71 927.93 4500 - 12/15/2017
52753 2017-11-23 C 0.10 3.68 -9.79 977.93 4550 - 12/15/2017
52754 2017-11-23 P 984.30 2.04 -109.96 977.93 4550 - 12/15/2017
53020 2017-11-23 C 0.10 3.56 -9.86 1027.93 4600 - 12/15/2017
53082 2017-11-23 P 1034.30 1.55 -108.99 1027.93 4600 - 12/15/2017
54698 2017-11-23 P 1134.30 0.53 -106.79 1127.93 4700 - 12/15/2017
54687 2017-11-23 C 0.10 3.35 -9.99 1127.93 4700 - 12/15/2017
52337 2017-11-23 C 0.10 3.17 -10.11 1227.93 4800 - 12/15/2017
52338 2017-11-23 P 1234.30 3.17 -10.11 1227.93 4800 - 12/15/2017
54699 2017-11-23 P 1334.30 3.01 -10.22 1327.93 4900 - 12/15/2017
54688 2017-11-23 C 0.10 3.01 -10.22 1327.93 4900 - 12/15/2017
52191 2017-11-23 P 0.10 0.55 -11.15 -3072.07 500 - 12/15/2017
52190 2017-11-23 C 3066.80 0.29 82.60 -3072.07 500 - 12/15/2017
52339 2017-11-23 C 0.10 2.87 -10.32 1427.93 5000 - 12/15/2017
52340 2017-11-23 P 1434.40 1.26 -110.86 1427.93 5000 - 12/15/2017
54689 2017-11-23 C 0.10 2.75 -10.41 1527.93 5100 - 12/15/2017
54700 2017-11-23 P 1534.40 0.45 -108.55 1527.93 5100 - 12/15/2017
52341 2017-11-23 C 0.10 2.65 -10.50 1627.93 5200 - 12/15/2017
52342 2017-11-23 P 1634.40 2.65 -10.50 1627.93 5200 - 12/15/2017
52439 2017-11-23 C 0.10 2.55 -10.58 1727.93 5300 - 12/15/2017
52440 2017-11-23 P 1734.50 1.72 -114.79 1727.93 5300 - 12/15/2017
52343 2017-11-23 C 0.10 2.46 -10.66 1827.93 5400 - 12/15/2017
52344 2017-11-23 P 1834.50 1.08 -112.69 1827.93 5400 - 12/15/2017
54701 2017-11-23 P 1934.50 0.40 -110.30 1927.93 5500 - 12/15/2017
54690 2017-11-23 C 0.10 2.38 -10.73 1927.93 5500 - 12/15/2017
52346 2017-11-23 P 2034.50 2.31 -10.80 2027.93 5600 - 12/15/2017
52345 2017-11-23 C 0.10 2.31 -10.80 2027.93 5600 - 12/15/2017
54691 2017-11-23 C 0.10 2.24 -10.87 2127.93 5700 - 12/15/2017
54702 2017-11-23 P 2134.60 1.52 -116.68 2127.93 5700 - 12/15/2017
52348 2017-11-23 P 2234.60 0.97 -114.51 2227.93 5800 - 12/15/2017
52347 2017-11-23 C 0.10 2.18 -10.93 2227.93 5800 - 12/15/2017
54703 2017-11-23 P 2334.60 0.37 -112.06 2327.93 5900 - 12/15/2017
54692 2017-11-23 C 0.10 2.13 -10.99 2327.93 5900 - 12/15/2017
52192 2017-11-23 C 2966.80 0.46 80.38 -2972.07 600 - 12/15/2017
52193 2017-11-23 P 0.10 0.61 -11.16 -2972.07 600 - 12/15/2017
52349 2017-11-23 C 0.10 2.08 -11.05 2427.93 6000 - 12/15/2017
52350 2017-11-23 P 2434.60 2.08 -11.05 2427.93 6000 - 12/15/2017
52194 2017-11-23 C 2866.70 0.67 -11.16 -2872.07 700 - 12/15/2017
52195 2017-11-23 P 0.10 0.67 -11.16 -2872.07 700 - 12/15/2017
54449 2017-11-23 C 0.10 1.71 -11.52 3427.93 7000 - 12/15/2017
54479 2017-11-23 P 3434.90 0.77 -119.84 3427.93 7000 - 12/15/2017
57740 2017-11-24 C 787.75 7.77 -25.05 -781.23 2800 - 11/24/2017
57742 2017-11-24 P 0.01 7.77 -25.05 -781.23 2800 - 11/24/2017
57741 2017-11-24 C 737.75 7.77 -25.05 -731.23 2850 - 11/24/2017
57743 2017-11-24 P 0.01 7.77 -25.05 -731.23 2850 - 11/24/2017
57730 2017-11-24 C 687.75 7.77 -25.05 -681.23 2900 - 11/24/2017
57735 2017-11-24 P 0.01 7.77 -25.05 -681.23 2900 - 11/24/2017
57731 2017-11-24 C 637.75 7.77 -25.05 -631.23 2950 - 11/24/2017
57736 2017-11-24 P 0.01 7.77 -25.05 -631.23 2950 - 11/24/2017
57732 2017-11-24 C 587.75 7.77 -25.05 -581.23 3000 - 11/24/2017
57737 2017-11-24 P 0.01 7.77 -25.05 -581.23 3000 - 11/24/2017
57733 2017-11-24 C 537.75 7.77 -25.05 -531.23 3050 - 11/24/2017
57738 2017-11-24 P 0.01 7.77 -25.05 -531.23 3050 - 11/24/2017
57727 2017-11-24 P 0.20 7.77 -25.05 -431.23 3150 - 12/08/2017
57728 2017-11-24 P 0.30 11.49 -34.45 -381.23 3200 - 12/08/2017
57734 2017-11-24 C 362.75 14.84 -43.17 -356.23 3225 - 11/24/2017
57739 2017-11-24 P 0.01 14.84 -43.17 -356.23 3225 - 11/24/2017
57729 2017-11-24 P 0.40 14.84 -43.17 -356.23 3225 - 12/08/2017
57826 2017-11-24 C 234.50 140.14 -124.53 -231.23 3350 - 12/22/2017
57845 2017-11-24 P 5.70 140.19 -156.23 -231.23 3350 - 12/22/2017
57827 2017-11-24 C 210.50 160.38 -138.61 -206.23 3375 - 12/22/2017
57846 2017-11-24 P 6.70 160.34 -170.27 -206.23 3375 - 12/22/2017
57828 2017-11-24 C 186.80 184.35 -154.72 -181.23 3400 - 12/22/2017
57847 2017-11-24 P 8.10 185.20 -187.99 -181.23 3400 - 12/22/2017
57829 2017-11-24 C 163.60 213.17 -174.17 -156.23 3425 - 12/22/2017
57848 2017-11-24 P 9.80 213.01 -205.82 -156.23 3425 - 12/22/2017
</code></pre>
<p>Is there any way to speed up the code above, or any insight into why the second approach doesn't work?</p>
|
<p>The problem happens at the second <code>bfill</code> (it back-fills nan over the whole dataframe, rather than within each subgroup). The following will work for you:</p>
<pre><code>df.groupby(['code','Date']).apply(lambda x : x.ffill().bfill())
</code></pre>
<p>For example, we might expect this to return the sum of sums for each group, but it returns a single number:</p>
<pre><code>df=pd.DataFrame({'A':[1,1,3,4],'B':[2,3,4,5]})
df.groupby('A').sum().sum()
Out[958]:
B 14
dtype: int64
</code></pre>
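<p>If only the numeric columns need filling, a <code>transform</code>-based variant may also be worth trying, since it avoids re-assembling whole sub-frames in <code>apply</code>. This is a sketch that assumes <code>M1</code> and <code>M2</code> are the only columns with gaps:</p>
<pre><code>cols = ['M1', 'M2']
d1[cols] = d1.groupby(['code', 'Date'])[cols].transform(lambda g: g.ffill().bfill())
</code></pre>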
|
python|pandas|group-by|fillna
| 3
|
374,513
| 48,302,876
|
dtype changes after set_value/at
|
<p>I'm facing a weird issue with Pandas now; I'm not sure if it's a pandas pitfall or just something I'm missing...</p>
<p>My pd.Series is just</p>
<pre><code>foo
False
False
False
</code></pre>
<pre><code>> a.foo.dtype
dtype('bool')
</code></pre>
<p>When I use a <code>dataframe.set_value(index, col, None)</code>, my whole Series is converted to <code>dtype('float64')</code> (same thing applies to <code>a.at[index, col] = None</code>).</p>
<p>Now my Series is:</p>
<pre><code>foo
NaN
NaN
NaN
</code></pre>
<p>Do you have any idea on how this happens and how to fix it?</p>
<p>Thanks in advance. :)</p>
<p>Edit:
Using 0.20.1.</p>
|
<p>I think the problem is related to the fact that I was trying to assign <code>None</code> to a <code>bool</code> Series; pandas then converts it to a different type (why not object?).</p>
<p>Fixed changing the dtype to <code>object</code> first: <code>dataframe.foo = dataframe.foo.astype(object)</code>.</p>
<p>Works like a charm now.</p>
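<p>A minimal sketch of the workaround with illustrative data:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'foo': [False, False, False]})
df['foo'] = df['foo'].astype(object)   # object dtype can hold None alongside bools
df.at[0, 'foo'] = None
print(df['foo'])   # None, False, False - dtype stays object
</code></pre>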
|
python|pandas
| 1
|
374,514
| 48,157,575
|
What to do when pip & conda overlap?
|
<p>I have a reasonable understanding of the difference between <code>conda install</code> &amp; <code>pip install</code>: <code>pip</code> installs Python-only packages, while <code>conda</code> can also install non-Python binaries. However, there is some overlap between these two. Which leads me to ask:</p>
<p><strong>What's the rule of thumb for whether to use <code>conda</code> or <code>pip</code> when both offer a package?</strong></p>
<p>For example, <code>TensorFlow</code> is available on both repositories but from the <a href="https://www.tensorflow.org/install/install_linux" rel="noreferrer">tensorflow docs</a>:</p>
<blockquote>
<p>within Anaconda, we recommend installing TensorFlow with the
<code>pip install</code> command, not with the <code>conda install</code> command.</p>
</blockquote>
<p>But, there are many other packages that overlap, like <code>numpy</code>, <code>scipy</code> etc.</p>
<p><br>
However, <a href="https://stackoverflow.com/a/33626087/7607701">this Stackoverflow answer</a> suggests that <code>conda install</code> should be the default & <code>pip</code> should only be used if a package is unavailable from <code>conda</code>. Is this true even for <code>TensorFlow</code> or other python-only packages?</p>
|
<p>The TensorFlow maintainers actually publish the wheels of TensorFlow on PyPI; that's why it's the recommended <em>official</em> way. The <code>conda</code> packages are created by the Anaconda staff and/or the community. That doesn't mean the conda packages are bad, it just means that the TensorFlow maintainers don't participate there (officially). Basically they are just saying: "If you have trouble installing it with <code>pip</code>, the TensorFlow devs will try to help you. But we don't officially support the <code>conda</code> packages, so if something goes wrong with the conda package you need to ask the conda-package maintainers. You've been warned."</p>
<hr>
<p>In the more general case:</p>
<p>For Python-only packages you should always use <code>conda install</code>. There might be exceptions, for example if there is no conda-package at all or the conda package is out-of-date (and nobody is releasing a new version of that package) and you really need that package/version.</p>
<p>However it's different for packages that require compilation (e.g. C-Extensions, etc.). It's different because with <code>pip</code> you can install a package either:</p>
<ul>
<li>as pre-compiled wheel</li>
<li>as package, compiled on your computer</li>
</ul>
<p>While conda just provides the</p>
<ul>
<li>compiled conda package</li>
</ul>
<p>With compiled packages you have to be careful with binary compatibility. That means that a package is compiled against specific binary interface of another library - which could depend on the version of the libraries or the compilation flags, etc.</p>
<p>With conda you have to take the package as-is, which means that you have to assume that the packages are binary-compatible. If they aren't it won't work (segfault or linking errors or whatever).</p>
<p>If you use <code>pip</code> you can choose which wheel (if any) to install, or compile it against the <strong>available</strong> libraries on your computer. That means it's less likely that you get a binary incompatibility. That is (or was) a big problem if you install conda packages from different conda channels, because they might simply be binary-incompatible (e.g. conda-forge and the anaconda channel have or had a few problems there).</p>
<p>However it should probably be decided on a case-by-case basis. I had no problems with my <code>tensorflow</code> conda environment where I installed <strong>all</strong> packages from the <code>conda-forge</code> channel, including tensorflow. However I have heard that several people had trouble with tensorflow in <strong>mixed</strong> <code>conda-forge</code> and <code>anaconda</code> channel environments. For example NumPy from the main channel and TensorFlow from the conda-forge channel might just be binary-incompatible.</p>
<p>My rule of thumb is:</p>
<ul>
<li>If it's a Python-only package just install it (it's unlikely to make trouble). Use the conda package when possible but it won't make (much) trouble if you use pip. If you install it using pip it's not managed by conda so it's possible it won't be recognized as available dependency and you have to update it yourself, but that's about all the difference.</li>
<li>If it's a compiled package (like a C extension or a wrapper around a C library or such like) it becomes a bit more complicated. If you want to be "careful" or you have reason to expect problems:</li>
<li>Always create a new environment if you need to test compiled packages from different channels and/or conda and pip. It's easy to discard a messed up conda environment but it's a lot more annoying to fix an environment that you depend on.</li>
<li>If possible install all compiled packages using <code>conda install</code> from <strong>one and only one</strong> channel (if possible the main anaconda channel).</li>
<li>If not possible try to mix main anaconda channel compiled packages with conda packages from a different channel.</li>
<li>If that doesn't work try to mix conda compiled packages and pip compiled-packages (pre-compiled wheels or self-compiled installers).</li>
</ul>
<hr>
<p>You asked why you cannot install packages from PyPI with <code>conda</code>. I don't know the exact reasons, but <code>pip</code> mostly provides the package and you have to install it yourself. With conda you get an already compiled and installed package that is just "copied" without installation. That requires that the package be built for different operating systems (Mac, Windows, Linux) and platforms (32-bit, 64-bit), against different Python versions (2.7, 3.5, 3.6) and possibly against different NumPy versions. That means conda has to provide several packages instead of just one, which takes resources (space for the final installed packages and time for the installation) that probably aren't available or feasible. Besides that, there is probably no automatic converter from a PyPI package to a conda recipe, given all the specifics you have to know about a package (compilation, installation) to make it work. That's just my guess though.</p>
|
python|numpy|pip|conda
| 15
|
374,515
| 48,017,236
|
np where statement gets: ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()
|
<p>I am writing the following line of code:</p>
<pre><code>holiday['real_or_not'] = np.where((holiday['transferred']=='False',1,0))
holiday
</code></pre>
<p>Minimum reproducible example: </p>
<pre><code>date type locale locale_name description transferred
2012-03-02 False locale Manta Fundacion de Manta False
2012-03-02 False Regional Regional Gunanta True
</code></pre>
<p>I am getting:</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty,a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>Any ideas why? I write a fairly similar <code>np.where</code> statement on a different pandas data frame in my code and it works perfectly fine. No idea why it would work there but not here.</p>
|
<p>First, you need to remove the extra parentheses, because they create a tuple and you give <code>np.where</code> one argument, the tuple, instead of three arguments.
This means the tuple is interpreted as the condition, because the second and third arguments are optional:</p>
<pre><code>where(condition, [x, y])
</code></pre>
<blockquote>
<p>Return elements, either from <code>x</code> or <code>y</code>, depending on <code>condition</code>.
If only <code>condition</code> is given, return <code>condition.nonzero()</code>.</p>
</blockquote>
<p>When calling a function with just one argument, you can add as many extra parentheses as you like. As soon as you add a comma, you create a tuple, and you cannot do this anymore without changing how the arguments are given to the function.</p>
<p>Assuming the column <code>transferred</code> is bool, you can reverse your logic:</p>
<pre><code>holiday['real_or_not'] = np.where(holiday['transferred'], 0, 1)
</code></pre>
<p>Result:</p>
<pre><code> type locale locale_name description transferred real_or_not
date
2012-03-02 False locale Manta Fundacion False 1
2012-03-02 False Regional Regional Gunanta True 0
</code></pre>
<p>An alternative solution without <code>np.where</code>:</p>
<pre><code>holiday['real_or_not'] = (~holiday.transferred).astype(int)
</code></pre>
|
python|python-2.7|pandas|numpy|where
| 1
|
374,516
| 48,478,780
|
pandas DataFrame.groupby and apply custom function
|
<p>I have a DataFrame with many duplicates (I need the Type/StrikePrice pair to be unique), like this:</p>
<pre><code> Pos AskPrice
Type StrikePrice
C 1500.0 10 281.6
C 1500.0 11 281.9
C 1500.0 12 281.7 <- I need this one
P 1400.0 30 1200.5
P 1400.0 31 1250.2 <- I need this one
</code></pre>
<p>How can I group by <code>Type + StrikePrice</code> and apply some logic (my own function) to decide which row from the group to choose (let's say the one with the greatest <code>Pos</code>)?</p>
<p>The expected result is</p>
<pre><code> Pos AskPrice
Type StrikePrice
C 1500.0 12 281.7
P 1400.0 31 1250.2
</code></pre>
<p>Thanks a lot!</p>
|
<p>First <code>reset_index</code> for unique indices, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.idxmax.html" rel="nofollow noreferrer"><code>idxmax</code></a> for indices of max values per groups and select rows by <code>loc</code>, last <code>set_index</code> for <code>MultiIndex</code>:</p>
<pre><code>df = df.reset_index()
df = (df.loc[df.groupby(['Type','StrikePrice'])['Pos'].idxmax()]
        .set_index(['Type','StrikePrice']))
</code></pre>
<p>Or use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>sort_values</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>drop_duplicates</code></a>:</p>
<pre><code>df = (df.reset_index()
.sort_values(['Type','StrikePrice', 'Pos'])
.drop_duplicates(['Type','StrikePrice'], keep='last')
.set_index(['Type','StrikePrice']))
print (df)
Pos AskPrice
Type StrikePrice
C 1500.0 12 281.7
P 1400.0 31 1250.2
</code></pre>
<p>But if need custom function use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.apply.html" rel="nofollow noreferrer"><code>GroupBy.apply</code></a>:</p>
<pre><code>def f(x):
    return x[x['Pos'] == x['Pos'].max()]

df = df.groupby(level=[0,1], group_keys=False).apply(f)
print (df)
Pos AskPrice
Type StrikePrice
C 1500.0 12 281.7
P 1400.0 31 1250.2
</code></pre>
|
python|pandas|pandas-groupby
| 3
|
374,517
| 48,558,107
|
Converting float to string in pandas dataframe
|
<p>I have a dataframe in pandas containing datetime and float data.</p>
<pre><code>time price1 price2
2018-02-01T00:00:00.000Z 1.4526547885 1.654775563
</code></pre>
<p>I need to convert the columns to string format such that the price1 and price2 columns show numbers up to 4 decimal places and the time is displayed as: 01,02,2018 00:00:00</p>
<p>Any leads on this are appreciated. Thanks. </p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.strftime.html" rel="noreferrer"><code>dt.strftime</code></a> for formatting <code>datetime</code>s and then a custom format for the <code>float</code>s:</p>
<pre><code>df['time'] = df['time'].dt.strftime('%Y,%m,%d %H:%M:%S')
cols = ['price1','price2']
df[cols] = df[cols].applymap(lambda x: '{0:.4f}'.format(x))
print (df)
time price1 price2
0 2018,02,01 00:00:00 1.4527 1.6548
</code></pre>
|
python|python-3.x|pandas
| 13
|
374,518
| 48,828,962
|
Reducing GPU memory consumption of tensor flow model
|
<p>I am trying to get the code in this git repo working: <a href="https://github.com/cvikasreddy/skt" rel="nofollow noreferrer">https://github.com/cvikasreddy/skt</a>.
The training data is a 7 MB text file.
I have an Nvidia GTX 750 Ti with 1 GB of memory. When I try to train on this machine, the trainer crashes because it runs out of memory (the model size being 2.5 GB according to the error message). Of course I understand that this cannot fit into 1 GB of graphics memory.
The default settings are: </p>
<pre><code>num_layers = 3 # Number of layers of RNN
num_hidden = 128 # Hidden size of RNN cell
batch_size = 128 # Number of sentences in a batch
seq_length = 35 # Length of sequence
</code></pre>
<p>I already tried changing them to: </p>
<pre><code>num_layers = 3 # Number of layers of RNN
num_hidden = 128 # Hidden size of RNN cell
batch_size = 1 # Number of sentences in a batch
seq_length = 35 # Length of sequence
</code></pre>
<p>I also tried changing seq_length but this also doesn't work. What suggestions do you have to solve this problem? Of course buying a bigger graphics card would work, but I wonder if anything can be done in the code itself. Maybe splitting the input data? The computer itself has 16 GB of RAM, so that should be OK. </p>
|
<p>You can run everything on the CPU by adding </p>
<pre><code>with tf.device('/cpu:0'):
</code></pre>
<p>and the correct indentation before the definition of your graph. It'll be slower but you should have more than enough memory. You could also put most of your ops on the CPU, and choose a few to put on the GPU. You'll find some help for that <a href="https://www.tensorflow.org/programmers_guide/using_gpu" rel="nofollow noreferrer">here</a>. </p>
<p>You can also reduce the network size (<code>num_hidden</code> and <code>num_layers</code>), but your performance will decrease. If the RNN ops are done in 64 bits, maybe you can change to 32 bits, I don't know if it's possible. </p>
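<p>For illustration, a minimal sketch of pinning the graph to the CPU; the layer sizes match the settings in the question, but the variable names and placeholder shape are illustrative rather than taken from the linked repository:</p>
<pre><code>import tensorflow as tf

num_layers, num_hidden, seq_length = 3, 128, 35

with tf.device('/cpu:0'):
    inputs = tf.placeholder(tf.float32, [None, seq_length, num_hidden], name='inputs')
    cell = tf.nn.rnn_cell.MultiRNNCell(
        [tf.nn.rnn_cell.BasicLSTMCell(num_hidden) for _ in range(num_layers)])
    outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
</code></pre>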
|
python|tensorflow
| 1
|
374,519
| 48,833,889
|
when does input pipeline returns a new data batch?
|
<p>I am using an input pipeline with queues and <code>TFRecordReader</code> to read a tfrecord file and feed the data directly into a <code>dynamic_rnn</code> function.
So, for example, if I have this as the last step of the input pipeline:</p>
<pre><code>xb, yb = tf.train.shuffle_batch([x, y], batch_size, capacity, min_after_dequeue, num_threads=1)
</code></pre>
<p>and I feed xb directly to the <code>dynamic_rnn</code> function, I understand that it fetches a new xb every time it is run. But when exactly is that? The <code>dynamic_rnn</code> function is set up once when I build the model, so does it fetch the new xb data internally?</p>
<p>And if, for example, I specify this as the sequence_length for <code>dynamic_rnn</code>, <code>sequence_length=xb.shape[1]</code>, or if I do something like <code>print xb</code>, does this mean that xb will be evaluated and return new data two times?</p>
|
<p><code>.shape</code> on a <code>Tensor</code> does not run it when graph building, nor does <code>print</code>; those just fetch Python metadata. Even then, referencing a <code>Tensor</code> multiple times gives you one value per <code>session.run</code> call (unless it's in a <code>tf.while_loop</code> or other control-flow construct). <code>Tensor</code>s derived from queues are included.</p>
<p>See <a href="https://www.tensorflow.org/api_guides/python/threading_and_queues" rel="nofollow noreferrer">https://www.tensorflow.org/api_guides/python/threading_and_queues</a> for details of TensorFlow queues (which is how <code>shuffle_batch</code> is implemented).</p>
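<p>As a small illustration (TF1-style queue input), each <code>session.run</code> that touches <code>xb</code> dequeues exactly one new batch, no matter how many Python references to <code>xb</code> exist; this sketch assumes <code>xb</code> comes from the <code>shuffle_batch</code> call above:</p>
<pre><code>with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)

    batch1 = sess.run(xb)   # dequeues the first batch
    batch2 = sess.run(xb)   # a second run call dequeues the next batch

    coord.request_stop()
    coord.join(threads)
</code></pre>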
|
tensorflow|input|pipeline|rnn
| 0
|
374,520
| 48,511,766
|
Conditional Function to a pandas dataframe, if vs else change the value
|
<p>I've been trying to get my code to work but I am having some trouble here. It would be great if someone could assist me.</p>
<p>DF</p>
<pre><code> Col1 Col2
2017-01-01 Coffee
2017-01-01 Muffin
2017-01-01 Donut
2017-01-01 Toast
</code></pre>
<p>How can I change Col2 so that every value that isn't Coffee or Muffin becomes 'Other'?</p>
<pre><code> Col1 Col2
2017-01-01 Coffee
2017-01-01 Muffin
2017-01-01 Other
2017-01-01 Other
</code></pre>
|
<pre><code>In [265]: df.loc[~df.Col2.isin(['Coffee','Muffin']), 'Col2'] = 'Other'
In [266]: df
Out[266]:
Col1 Col2
0 2017-01-01 Coffee
1 2017-01-01 Muffin
2 2017-01-01 Other
3 2017-01-01 Other
</code></pre>
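<p>An equivalent one-liner with <code>Series.where</code>, shown here as a self-contained sketch:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Col1': ['2017-01-01'] * 4,
                   'Col2': ['Coffee', 'Muffin', 'Donut', 'Toast']})
df['Col2'] = df['Col2'].where(df['Col2'].isin(['Coffee', 'Muffin']), 'Other')
print(df)
</code></pre>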
|
python|pandas|conditional
| 1
|
374,521
| 48,838,171
|
Intuition behind Neural Network Results?
|
<p>I am attempting to build a neural network to classify poisonous mushrooms; however, the results do not look right. The model compiles successfully, but can someone provide intuition as to why the training results are so seemingly accurate after only a few epochs? This does not seem correct. Was an error made in the data preprocessing?</p>
<p>Dataset can be found here: <a href="https://archive.ics.uci.edu/ml/machine-learning-databases/mushroom/agaricus-lepiota.data" rel="nofollow noreferrer">https://archive.ics.uci.edu/ml/machine-learning-databases/mushroom/agaricus-lepiota.data</a></p>
<p><a href="https://i.stack.imgur.com/fXU3Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fXU3Q.png" alt="Network Results"></a></p>
<p>Here is the code: </p>
<pre><code>import keras.utils
from keras.models import Sequential
from keras.layers import Dense
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
import numpy as np
import pandas as pd
# seed weights
np.random.seed(3)
# import dataset
data = pd.read_csv('agaricus-lepiota.csv', delimiter=',')
# encode labels as integers so they can be one-hot-encoded, which takes an int matrix
le = preprocessing.LabelEncoder()
data = data.apply(le.fit_transform)
# one-hot-encode string data (now type int)
ohe = preprocessing.OneHotEncoder(sparse=False)
data = ohe.fit_transform(data)
X = data[:, 1:23]
Y = data[:, 0:1]
# split into test and train set
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=.2, random_state=5)
# create model
model = Sequential()
model.add(Dense(500, input_dim=22, activation='relu'))
model.add(Dense(300, activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(25, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=1000, batch_size=25)
</code></pre>
|
<p>One epoch is a lot of iterations (n = training_set_size / batch_size). Considering that you have so many layers and no regularization, I would suspect overfitting.</p>
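<p>As an illustration only (not a claim about this particular dataset), a hedged sketch of adding dropout regularization and a smaller stack to the same Keras setup:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(100, input_dim=22, activation='relu'))
model.add(Dropout(0.5))   # randomly drop half the activations to curb overfitting
model.add(Dense(25, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
</code></pre>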
|
tensorflow|machine-learning|neural-network|keras|classification
| 0
|
374,522
| 48,518,306
|
Converting objects into pandas dataframe?
|
<p>I have a data frame with columns</p>
<ul>
<li>created_at</li>
<li>id</li>
<li>data (I am having trouble parsing through this column)</li>
</ul>
<p>Each object in the data column is a dictionary. I want each entry in the dictionary to become a standalone column. Any help or direction to a package would be appreciated.</p>
<p>Below is how one object in the data frame looks like.</p>
<pre><code>pd.Series(data.data[1])
backers_count 37
blurb Nano Art will make and market customized piece...
category {'id': 21, 'name': 'Digital Art', 'slug': 'art...
converted_pledged_amount 1974
country US
created_at 1332823105
creator {'id': 300795038, 'name': 'Sameer Walavalkar',...
currency USD
currency_symbol $
currency_trailing_code True
current_currency USD
deadline 1337287105
disable_communication False
fx_rate 1
goal 5000
id 120596924
is_starrable False
launched_at 1333399105
location {'id': 2468964, 'name': 'Pasadena', 'slug': 'p...
name Nano Art: Reloaded
photo {'ed': 'https://ksr-ugc.imgix.net/assets/011/3...
pledged 1974
profile {'id': 118685, 'name': None, 'blurb': None, 's...
slug nano-art-reloaded
source_url https://www.kickstarter.com/discover/categorie...
spotlight False
staff_pick True
state failed
state_changed_at 1337287105
static_usd_rate 1
urls {'web': {'project': 'https://www.kickstarter.c...
usd_pledged 1974.0
usd_type domestic
</code></pre>
<p>or</p>
<pre><code>data.data[1]
Out[61]:
{'backers_count': 37,
'blurb': 'Nano Art will make and market customized pieces, in a variety of materials, featuring etchings smaller than an eyelash.',
'category': {'color': 16760235,
'id': 21,
'name': 'Digital Art',
'parent_id': 1,
'position': 3,
'slug': 'art/digital art',
'urls': {'web': {'discover': 'http://www.kickstarter.com/discover/categories/art/digital%20art'}}},
'converted_pledged_amount': 1974,
'country': 'US',
'created_at': 1332823105,
'creator': {'avatar': {'medium': 'https://ksr-ugc.imgix.net/assets/006/332/754/bd9efceb28856e93ff226c9b853773a9_original.jpg?w=160&h=160&fit=crop&v=1461381464&auto=format&q=92&s=24a8cda7b064a8610c1334200a306a2d',
'small': 'https://ksr-ugc.imgix.net/assets/006/332/754/bd9efceb28856e93ff226c9b853773a9_original.jpg?w=160&h=160&fit=crop&v=1461381464&auto=format&q=92&s=24a8cda7b064a8610c1334200a306a2d',
'thumb': 'https://ksr-ugc.imgix.net/assets/006/332/754/bd9efceb28856e93ff226c9b853773a9_original.jpg?w=40&h=40&fit=crop&v=1461381464&auto=format&q=92&s=a4881fe982e2d57b041e2a591cd3e04e'},
'chosen_currency': None,
'id': 300795038,
'is_registered': True,
'name': 'Sameer Walavalkar',
'urls': {'api': {'user': 'https://api.kickstarter.com/v1/users/300795038?signature=1515881698.259ca61a8b86731ffbea53f82f4a05e8d1d9f965'},
'web': {'user': 'https://www.kickstarter.com/profile/300795038'}}},
'currency': 'USD',
'currency_symbol': '$',
'currency_trailing_code': True,
'current_currency': 'USD',
'deadline': 1337287105,
'disable_communication': False,
'fx_rate': 1,
'goal': 5000,
'id': 120596924,
'is_starrable': False,
'launched_at': 1333399105,
'location': {'country': 'US',
'displayable_name': 'Pasadena, CA',
'id': 2468964,
'is_root': False,
'localized_name': 'Pasadena',
'name': 'Pasadena',
'short_name': 'Pasadena, CA',
'slug': 'pasadena-ca-us',
'state': 'CA',
'type': 'Town',
'urls': {'api': {'nearby_projects': 'https://api.kickstarter.com/v1/discover?signature=1515876022.17e1296a181b009b97c854cfded1e99beeefd9fa&woe_id=2468964'},
'web': {'discover': 'https://www.kickstarter.com/discover/places/pasadena-ca-us',
'location': 'https://www.kickstarter.com/locations/pasadena-ca-us'}}},
'name': 'Nano Art: Reloaded',
'photo': {'1024x576': 'https://ksr-ugc.imgix.net/assets/011/336/473/c6f18bfa9e9933bce11272db919a0bb9_original.jpg?crop=faces&w=1024&h=576&fit=crop&v=1463681196&auto=format&q=92&s=4fe4bf0d8a75fa43bbf253f8c1eb5710',
'1536x864': 'https://ksr-ugc.imgix.net/assets/011/336/473/c6f18bfa9e9933bce11272db919a0bb9_original.jpg?crop=faces&w=1552&h=873&fit=crop&v=1463681196&auto=format&q=92&s=76cdc06919b3b8df0f3daae78ab57301',
'ed': 'https://ksr-ugc.imgix.net/assets/011/336/473/c6f18bfa9e9933bce11272db919a0bb9_original.jpg?crop=faces&w=352&h=198&fit=crop&v=1463681196&auto=format&q=92&s=4446e512c7a07794efcf131e35eb0111',
'full': 'https://ksr-ugc.imgix.net/assets/011/336/473/c6f18bfa9e9933bce11272db919a0bb9_original.jpg?crop=faces&w=560&h=315&fit=crop&v=1463681196&auto=format&q=92&s=1d312059cce791c9058255f83c123f47',
'key': 'assets/011/336/473/c6f18bfa9e9933bce11272db919a0bb9_original.jpg',
'little': 'https://ksr-ugc.imgix.net/assets/011/336/473/c6f18bfa9e9933bce11272db919a0bb9_original.jpg?crop=faces&w=208&h=117&fit=crop&v=1463681196&auto=format&q=92&s=c386ed08b0c603e1912b1620f9bb58d6',
'med': 'https://ksr-ugc.imgix.net/assets/011/336/473/c6f18bfa9e9933bce11272db919a0bb9_original.jpg?crop=faces&w=272&h=153&fit=crop&v=1463681196&auto=format&q=92&s=51a7ae3eadcb187a8dd58c14396b0d8c',
'small': 'https://ksr-ugc.imgix.net/assets/011/336/473/c6f18bfa9e9933bce11272db919a0bb9_original.jpg?crop=faces&w=160&h=90&fit=crop&v=1463681196&auto=format&q=92&s=9b1a141c6269c9940f44e273f416b73e',
'thumb': 'https://ksr-ugc.imgix.net/assets/011/336/473/c6f18bfa9e9933bce11272db919a0bb9_original.jpg?crop=faces&w=48&h=27&fit=crop&v=1463681196&auto=format&q=92&s=73ef45a6f4df337a942b3a75fec87996'},
'pledged': 1974,
'profile': {'background_color': None,
'background_image_opacity': 0.8,
'blurb': None,
'feature_image_attributes': {'image_urls': {'baseball_card': 'https://ksr-ugc.imgix.net/assets/011/336/473/c6f18bfa9e9933bce11272db919a0bb9_original.jpg?crop=faces&w=560&h=315&fit=crop&v=1463681196&auto=format&q=92&s=1d312059cce791c9058255f83c123f47',
'default': 'https://ksr-ugc.imgix.net/assets/011/336/473/c6f18bfa9e9933bce11272db919a0bb9_original.jpg?crop=faces&w=1552&h=873&fit=crop&v=1463681196&auto=format&q=92&s=76cdc06919b3b8df0f3daae78ab57301'}},
'id': 118685,
'link_background_color': None,
'link_text': None,
'link_text_color': None,
'link_url': None,
'name': None,
'project_id': 118685,
'should_show_feature_image_section': True,
'show_feature_image': False,
'state': 'inactive',
'state_changed_at': 1425915807,
'text_color': None},
'slug': 'nano-art-reloaded',
'source_url': 'https://www.kickstarter.com/discover/categories/art/digital%20art',
'spotlight': False,
'staff_pick': True,
'state': 'failed',
'state_changed_at': 1337287105,
'static_usd_rate': 1,
'urls': {'web': {'project': 'https://www.kickstarter.com/projects/300795038/nano-art-reloaded?ref=category_newest',
'rewards': 'https://www.kickstarter.com/projects/300795038/nano-art-reloaded/rewards'}},
'usd_pledged': '1974.0',
'usd_type': 'domestic'}
</code></pre>
<p>I've tried transposing the dataframe and the using a for loop to stack the second column generated by the pd.Series. But it's not working.</p>
|
<p>Here is a subset of your dictionary as an example:</p>
<pre><code>d = {
'backers_count':
37,
'blurb':
'Nano Art will make and market customized pieces, in a variety of materials, featuring etchings smaller than an eyelash.',
'category': {
'color': 16760235,
'id': 21,
'name': 'Digital Art',
'parent_id': 1,
'position': 3,
'slug': 'art/digital art',
'urls': {
'web': {
'discover':
'http://www.kickstarter.com/discover/categories/art/digital%20art'
}
}
},
'converted_pledged_amount':
1974,
'country':
'US',
'created_at':
1332823105,
'creator': {
'avatar': {
'medium':
'https://ksr-ugc.imgix.net/assets/006/332/754/bd9efceb28856e93ff226c9b853773a9_original.jpg?w=160&h=160&fit=crop&v=1461381464&auto=format&q=92&s=24a8cda7b064a8610c1334200a306a2d',
'small':
'https://ksr-ugc.imgix.net/assets/006/332/754/bd9efceb28856e93ff226c9b853773a9_original.jpg?w=160&h=160&fit=crop&v=1461381464&auto=format&q=92&s=24a8cda7b064a8610c1334200a306a2d',
'thumb':
'https://ksr-ugc.imgix.net/assets/006/332/754/bd9efceb28856e93ff226c9b853773a9_original.jpg?w=40&h=40&fit=crop&v=1461381464&auto=format&q=92&s=a4881fe982e2d57b041e2a591cd3e04e'
},
'chosen_currency': None,
'id': 300795038,
'is_registered': True,
'name': 'Sameer Walavalkar',
'urls': {
'api': {
'user':
'https://api.kickstarter.com/v1/users/300795038?signature=1515881698.259ca61a8b86731ffbea53f82f4a05e8d1d9f965'
},
'web': {
'user': 'https://www.kickstarter.com/profile/300795038'
}
}
}
}
</code></pre>
<p>I create a minimal example :</p>
<pre><code>df = pd.DataFrame({"data":[d, d]})
</code></pre>
<p>If you want to apply the conversion from dictionary to DataFrame you can use the <code>map</code> function:</p>
<pre><code>list_df = df.data.map(lambda d : pd.DataFrame.from_dict(d, orient="index").transpose()).tolist()
</code></pre>
<p>Then, you can concatenate the result :</p>
<pre><code>df_concat = pd.concat(list_df)
</code></pre>
<p>After this operation you can concatenate your original DataFrame <code>data</code> and <code>df_concat</code>.</p>
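<p>Depending on your pandas version, <code>json_normalize</code> is a shorter route. A sketch, assuming every entry in the <code>data</code> column is a dict as in the example above:</p>
<pre><code>import pandas as pd
from pandas.io.json import json_normalize

# one row per dict; nested dicts become dotted column names such as 'category.name'
flat = json_normalize(df['data'].tolist())
result = pd.concat([df.drop('data', axis=1).reset_index(drop=True), flat], axis=1)
</code></pre>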
|
python|pandas|parsing|dictionary|dataframe
| 1
|
374,523
| 48,793,413
|
How to connect tensorflow to jupyter notebook?
|
<p>I have installed <code>Anaconda</code> with <code>jupyter notebook</code> in <code>/home/serg/anaconda/bin</code> and installed <code>tensorflow</code> in <code>./.local/lib/python3.5/site-packages/tensorflow</code>. My operating system is <code>Ubuntu 16.04</code>.</p>
<p>Is it possible to use <code>tensorflow</code> in <code>jupyter notebook</code> by changing some configuration in <code>anaconda</code> or <code>jupyter</code>?</p>
<p>P.S.: I know it is possible for most python IDE, but I need to do it with <code>jupyter notebook</code> in <code>anaconda</code>.</p>
|
<p>Looks like you may have installed things in the wrong order.</p>
<p>I run TensorFlow in Jupyter notebook all the time. If Anaconda is installed and active (i.e. typing <code>python</code> gives you the Anaconda interpreter), you just install TensorFlow with pip (following the instructions on the TensorFlow site), fire up Jupyter notebook, do your imports including TensorFlow, and you should be good to go; nothing special is required. If you did it in the wrong order, e.g. TensorFlow first and then Anaconda, make sure Anaconda is your default interpreter as above and re-install TensorFlow with pip. If that does not fix the issue, post more info, in particular the notebook output when you try to import TensorFlow.</p>
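<p>A quick sanity check you can run in a notebook cell to see which interpreter the kernel uses and whether TensorFlow is importable from it:</p>
<pre><code>import sys
print(sys.executable)      # should point into your Anaconda installation

import tensorflow as tf
print(tf.__version__)      # an ImportError here means TF is not installed for this interpreter
</code></pre>
<p>If the import fails, install TensorFlow with the pip that belongs to that interpreter, e.g. <code>/home/serg/anaconda/bin/pip install tensorflow</code>.</p>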
|
tensorflow|anaconda|jupyter-notebook
| 0
|
374,524
| 48,868,660
|
TensorFlow - predicting next word - loss function logit and target shape
|
<p>I'm trying to create a language model. I have <code>logit</code> and target of size: <code>[32, 312, 512]</code></p>
<p>Where: </p>
<ul>
<li><code>.shape[0]</code> is <code>batch_size</code></li>
<li><code>.shape[1]</code> is <code>sequence_max_len</code></li>
<li><code>.shape[2]</code> is <code>vocabulary size</code></li>
</ul>
<p>The question is - when I pass <code>logit</code> and <code>target</code> to the loss function as follows:</p>
<pre><code>self.loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(
logits=self.logit, labels=self.y))
</code></pre>
<p>Does it compute appropriate loss for the current batch? Or should I reshape <code>logit</code> and <code>target</code> to express the following shape: <code>[32, 312*512]</code>?</p>
<p>Thanks in advance for your help!</p>
|
<p>The api documentation says about labels,</p>
<blockquote>
<p>labels: Each row labels[i] must be a valid probability distribution</p>
</blockquote>
<p>If you are predicting each character at a time, you would have a probability distribution (probability of being each character sum up to 1) over your vocab size 512. Given that, your labels and unscaled logits of shape [32, 312, 512], you should reshape it into [32*312, 512] before calling the function. In this way each row of your labels have a valid probability distribution and your unscaled logits will be converted to prob distribution by the function itself and then loss will be calculated.</p>
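<p>A minimal sketch of that reshape, assuming <code>vocab_size</code> holds 512 and <code>self.logit</code>/<code>self.y</code> are the tensors from the question:</p>
<pre><code>logits_flat = tf.reshape(self.logit, [-1, vocab_size])   # [32*312, 512]
labels_flat = tf.reshape(self.y, [-1, vocab_size])

self.loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits_flat, labels=labels_flat))
</code></pre>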
|
tensorflow|neural-network|recurrent-neural-network|seq|language-model
| 1
|
374,525
| 48,874,483
|
Python Pandas read_sql - call previously specified date
|
<p>I have a sql query and I want to specify the date outside of the where query so it can be changed as needed. The following code isn't working and I'm not sure what else to try.</p>
<pre><code>startdate='2018-01-01'
test=pd.read_sql("""select * from database.table where date > :startdate ; """ , connection)
</code></pre>
|
<p>How about this</p>
<pre><code>test=pd.read_sql("select * from database.table where date > '{}'".format(startdate), connection)
</code></pre>
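<p>String formatting works but is easy to get wrong (quoting) and open to SQL injection. Most drivers let you pass the value separately through the <code>params</code> argument instead; a sketch, where the placeholder style (<code>%(name)s</code>, <code>?</code> or <code>:name</code>) depends on your database driver:</p>
<pre><code>startdate = '2018-01-01'
query = "select * from database.table where date > %(startdate)s"
test = pd.read_sql(query, connection, params={'startdate': startdate})
</code></pre>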
|
python|pandas
| 0
|
374,526
| 48,583,002
|
How to open a .tsv file in Jupyter? Jupyter.Notebook tried suggestions, but it doesn't work
|
<p>How can I open a <code>.tsv</code> file in Jupyter?<br>
The data is stored under <code>C:/User/anna/</code>. </p>
<p>This is my code:</p>
<pre><code>import pandas as pd
df=pd.read_csv('C:/User/anna/train')
</code></pre>
<p>But I get this error message:</p>
<blockquote>
<p>FileNotFoundError: File b'C:/Users/anna/train.txt' does not exist</p>
</blockquote>
|
<p>It's actually to do with pandas: by default the separator is a comma, not a tab, so you need to pass <code>sep='\t'</code> for a .tsv file. Also double-check that the path and extension match the actual file; the traceback refers to <code>C:/Users/anna/train.txt</code> while the code reads <code>C:/User/anna/train</code>.
Try the code below:</p>
<pre><code>df=pd.read_csv('C:/User/anna/train', sep='\t')
</code></pre>
|
pandas|csv|jupyter-notebook|jupyter
| 3
|
374,527
| 48,630,060
|
Select N rows above and below a specific row in pandas
|
<p>I have this data frame and I want to select 10 rows before and after on a specific column. I have reached up to this point but I was wondering how to make it more elegant in a lambda python expression as I need to run this on a loop 10 thousand times.</p>
<pre><code>import pandas as pd
df = pd.DataFrame(data=np.random.rand(90),
index=pd.date_range('2015-01-01','2015-03-31'),columns=['A'])
</code></pre>
<p>I have reached to this as an solution in progress:</p>
<p>10 observations before and after:</p>
<pre><code>df.loc['2015-01-17':].head(11)[1:11].transpose() ! before
df.loc[:'2015-01-17'].tail(11)[0:10].transpose() ! after
</code></pre>
<p>So, how can I make this is in a loop with a lambda expression and having not only one <code>index</code> but two <code>indexes</code>?</p>
|
<p>Really simple using <code>index.get_loc</code>. Get the index of the label, and slice accordingly. </p>
<pre><code>idx = df.index.get_loc('2015-01-17')
df.iloc[idx - 10 : idx + 10]
A
2015-01-07 0.262086
2015-01-08 0.836742
2015-01-09 0.094763
2015-01-10 0.133500
2015-01-11 0.285372
2015-01-12 0.338112
2015-01-13 0.451852
2015-01-14 0.163001
2015-01-15 0.247186
2015-01-16 0.227053
2015-01-17 0.837647
2015-01-18 0.918334
2015-01-19 0.514731
2015-01-20 0.207688
2015-01-21 0.700314
2015-01-22 0.363784
2015-01-23 0.811346
2015-01-24 0.079030
2015-01-25 0.051900
2015-01-26 0.520310
</code></pre>
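<p>Since this has to run many thousands of times, it can be wrapped in a small helper and called in a loop. A sketch; the <code>max</code> guard avoids a negative start position when the label sits near the top of the frame:</p>
<pre><code>def window(frame, label, n=10):
    # n rows before and after the row with the given index label
    idx = frame.index.get_loc(label)
    return frame.iloc[max(idx - n, 0) : idx + n]

for label in ['2015-01-17', '2015-02-10']:
    sub = window(df, label)
</code></pre>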
|
python|pandas|dataframe|indexing
| 20
|
374,528
| 48,453,143
|
Using sequences of images for as an input for time distributed conv2d
|
<p>I'm currently attempting to build a model that uses image sequences and classifies each item in the sequence (not retaining state between sequences) in Keras with a TF backend; however, I'm running into an issue with the input shape for the first layer.</p>
<p>the model looks like this: </p>
<pre><code>model.add(TimeDistributed(Conv2D(64, (3, 3), activation='relu'), input_shape=(10, 1, 224, 224, 3)))
model.add(TimeDistributed(MaxPooling2D((2, 2), strides=(1, 1))))
model.add(TimeDistributed(Conv2D(128, (4,4), activation='relu')))
model.add(TimeDistributed(MaxPooling2D((2, 2), strides=(2, 2))))
model.add(TimeDistributed(Conv2D(256, (4,4), activation='relu')))
model.add(TimeDistributed(MaxPooling2D((2, 2), strides=(2, 2))))
model.add(TimeDistributed(Flatten()))
model.add(Dropout(0.5))
model.add(LSTM(256, return_sequences=False, dropout=0.5))
model.add(Dense(num_classes, activation='sigmoid'))
</code></pre>
<p>I think my issue comes from a lack of understanding regarding array shapes, and I'm most likely making an amateur mistake here. Each individual sequence gets loaded into a numpy array of shape (10, 1, 224, 224, 3) where the first axis is the number of items in the sequence(they are padded to be the same length), the second is the batch size, and the others are simply an RGB image. In my understanding from reading the available information on the docs (and my errors) a conv2D inside the time distributed wrapper takes a 5D array, essentially being formatted as the normal conv2D input of (batch size, rows, cols, channels) plus an added dimension for time (each item in the sequence). Is this anywhere close to accurate?</p>
<p>Moving on to the issue I'm facing...
If I try to input an image sequence of shape (10, 1, 224, 224, 3), I get the error
"ValueError: number of input channels does not match corresponding dimension of filter, 224 != 3". Can anyone shed some light on this? I'm fairly certain I'm not shaping the input array and the input_shape properly. </p>
<p>Another question I have that someone might be able to answer is:
Once this is resolved and I can use the sequence as an input, how do I go about feeding the model not just a single image sequence like shown above, but instead use an array of sequences? My dataset is made up of thousands of these sequences. All my previous experience in this stuff has been using Caffe and certainly not anything using recurrent models so not only am I unfamiliar with actually implementing recurrent networks, I am unfamiliar with Keras overall (although I'm beginning to learn, albeit slowly lol). From my experience and observations, Caffe provides a heavy layer of abstraction between you and the data inputs so I've never had to deal with this before. </p>
<p>I hope this question and my problem is clear and concise, and any information provided is greatly appreciated! </p>
|
<p>According to the Keras documentation, you have to update the <code>input_shape</code> (it excludes the batch dimension) to be</p>
<pre><code>model.add(TimeDistributed(Conv2D(64, (3, 3), activation='relu'), input_shape=(10, 224, 224, 3)))</code></pre>
<p>and then use an image generator that fits the new shape of data.
you can use the tweaked one in this <a href="https://gist.github.com/Emadeldeen24/736c33ac2af0c00cc48810ad62e1f54a" rel="nofollow noreferrer">gist</a>.</p>
<p>It's the same as the main Keras ImageDataGenerator but I added an option to take more than one image/frame on each iteration. This is by changing the parameter <strong>frames_per_step</strong> to specify the number of frames/images you want to include in each iteration.</p>
<p>This is how to use it:</p>
<pre><code>from tweaked_ImageGenerator_v2 import ImageDataGenerator
datagen = ImageDataGenerator()
train_data=datagen.flow_from_directory('path/to/data', target_size=(x, y), batch_size=32, frames_per_step=4)</code></pre>
|
python|tensorflow|machine-learning|neural-network|keras
| 1
|
374,529
| 48,788,708
|
Tensorflow, delete element from matrixs
|
<p>I want know how to delete an element from a matrix in a tensor. </p>
<pre><code>a = tf.Variable(initial_value=[[0, 0, 0, 0],[2, 2, 2, 2],[1, 1, 1, 1]])
b = tf....(a)
#desired output of b, if I want to remove the second element in dim 1 of "a"
[[0, 0, 0, 0],[1, 1, 1, 1]]
#desired output of b, if I want to remove the last element in dim 1 of "a"
[[0, 0, 0, 0],[2, 2, 2, 2]]
</code></pre>
|
<p>Slice and then Stack</p>
<pre><code>a = tf.Variable(initial_value=[[0, 0, 0, 0],[2, 2, 2, 2],[1, 1, 1, 1]])
b = tf.stack([a[0], a[2]]) #[[0, 0, 0, 0],[1, 1, 1, 1]]
c = tf.stack([a[0], a[1]]) #[[0, 0, 0, 0],[2, 2, 2, 2]]
</code></pre>
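<p>If the row to drop is not known in advance, a more general sketch builds the indices to keep and uses <code>tf.gather</code>; here <code>drop_idx</code> is a hypothetical Python int naming the row to remove:</p>
<pre><code>drop_idx = 1
keep = tf.concat([tf.range(drop_idx), tf.range(drop_idx + 1, tf.shape(a)[0])], axis=0)
b = tf.gather(a, keep)   # same as a with row drop_idx removed
</code></pre>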
|
python|tensorflow
| 0
|
374,530
| 48,689,113
|
Dynamic - Automated multiplication - Pandas dataframes
|
<p>After spending quite a while searching and reading on Stack Overflow and around the web, I am desperate...</p>
<p>I have a Pandas DataFrame with some imported data (spectra). The first column is the wavelength while the others are the various spectra (the data). The names of the columns are imported from a list that reads the filenames from a path and keeps just the names.</p>
<p>What I would like to achieve, and can't quite figure out how, is to multiply each of the columns by the wavelength column and either overwrite the existing ones or create a new dataframe (it doesn't matter much which).</p>
<p>This is the code I have so far that does the job (even if not the most elegant, it get's the job done):</p>
<pre><code>path = r'"thePathToData\PL_calc\Data_NIR'
idx = 0
#Create the DataFrame with all the data from the path above, use the filenames as column names
all_files = glob.glob(os.path.join(path, "*.asc"))
df = pd.concat((pd.read_csv(f, usecols=[1], sep='\t') for f in all_files), axis=1) #usecol=1 for the spectrum only
fileNames = [] # create a list for the filenames
for i in range(0,len(all_files)):
fileNames.append(all_files[i][71:-4])
df.columns = fileNames # assign the filenames as columns
wavelengths = pd.read_csv(all_files[0], usecols=[0], sep='\t') # add the wavelength column as first column of the dataframe
df.insert(loc=idx, column='Wavelength', value=wavelengths)
</code></pre>
<p>If I plot just the head of the DF it looks like this:</p>
<pre><code>Wavelength F8BT_Pure_Batch1_px1_spectra_4V \ ...
0 478.0708 -3.384101
1 478.3917 -1.580399
2 478.7126 -0.323580
3 479.0334 -1.131425
4 479.3542 1.202728
</code></pre>
<p>The complete DF is:</p>
<pre><code>1599 rows × 46 columns
</code></pre>
<p><strong>Question 1:</strong></p>
<p>I can't quite find an automated (dynamic) way of multiplying each col with the first one, essentially this:</p>
<pre><code>for i in range(1, len(df.columns)):
df[[i]] = df[[0]] * df[[i]]
</code></pre>
<p><strong>Question 2:</strong></p>
<p>Why does this work:</p>
<pre><code>df['F8BT_Pure_Batch1_px1_spectra_4V'] = df['Wavelength']*df['F8BT_Pure_Batch1_px1_spectra_4V']
</code></pre>
<p>while this doesn't and gives me an <code>"IndexError: indices are out-of-bounds"</code></p>
<pre><code>df[[1]] = df[[0]]*df[[1]]
</code></pre>
<p>But when I <code>print(df[['Wavelength']]) Name: Wavelength, dtype: float64</code> and <code>print(df[[0]]) [1599 rows x 1 columns]</code> I get the same numbers..</p>
<p><strong>Question 3:</strong></p>
<p>Why does this <code>df[fileNames] = df[fileNames].multiply(df.Wavelength)</code> give me a <code>ValueError: Columns must be same length as key</code>? All the columns are of the same length (1599 rows long, 0-1598 and a total of 46 columns in this case). <code>fileNames</code> contains the names of the imported files and the names of the columns of the dataframe.</p>
<p><strong>Many many thanks</strong> in advance for your help...</p>
<p>Alex</p>
|
<p><strong>Question 1</strong></p>
<p>To multiply your wavelength column by every other column in your DataFrame, you can use:</p>
<pre><code>df.iloc[:, 1:] = df.iloc[:, 1:].mul(df['Wavelength'], axis=0)
</code></pre>
<p>This assumes your wavelength column is the first column.</p>
<p><strong>Question 2</strong></p>
<p>Selecting columns like that using an integer is asking for columns of your DataFrame that are named 0, 1, etc., as ints. There are none in your DataFrame. To select columns by index number look into the documentation for <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow noreferrer">pandas' iloc method.</a></p>
<p><strong>Question 3</strong></p>
<p>When you call <code>df[fileNames]</code>, you are getting a DataFrame with the same number of columns as the length of your list <code>fileNames</code>. Your code <code>df[fileNames].multiply(df.Wavelength)</code> is not giving you a DataFrame with the same number of columns as <code>df[fileNames]</code>, hence you cannot assign the values. Using the <code>axis=0</code> parameter in the multiply function is working for me.</p>
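<p>For completeness, the assignment from question 3 then works once the axis is supplied, e.g.:</p>
<pre><code>df[fileNames] = df[fileNames].multiply(df['Wavelength'], axis=0)
</code></pre>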
|
python|pandas|dataframe|multiplication
| 1
|
374,531
| 48,852,577
|
Tensorflow fileio reading from GCS bucket via Dataflow: SSL no alternative certificate subject name matches target host name
|
<p>I am running a slightly modified version of the <a href="https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/flowers" rel="nofollow noreferrer">cloudml flowers sample</a> to classify my own images where I encounter a problem in the preprocess part. It seems when pointing to my own images which are in another project they can't be reached:</p>
<blockquote>
<blockquote>
<p>return pywrap_tensorflow.ReadFromStream(self._read_buf, length, status): File "/usr/lib/python2.7/contextlib.py", line 24, in <strong>exit</strong> self.gen.next() File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status pywrap_tensorflow.TF_GetCode(status)) UnavailableError: Error executing an HTTP request (HTTP response code 0, error code 51, error message 'SSL: no alternative certificate subject name matches target host name '$BUCKET.com.storage.googleapis.com'') when reading $BUCKET.com/image.jpg</p>
</blockquote>
</blockquote>
<p>(I've replaced the actual bucket name with $BUCKET).</p>
<p>I run the scripts from a vm where I've installed the required packages from the requirements.txt file:</p>
<blockquote>
<blockquote>
<p>apache-beam[gcp]==0.6.0</p>
<p>pillow==4.0.0</p>
<p>tensorflow==1.4.1</p>
</blockquote>
</blockquote>
<p>What I've tried/done so far:</p>
<ul>
<li>verified that I can run the original flowers preprocessing from the sample.sh file with no modifications</li>
<li>changed access for the compute engine service account of the default project ("default-project-id"-compute@developer.gserviceaccount.com) to "Storage Object Viewer" for the project holding the bucket. The same has been done for the corresponding @cloudservices.gserviceaccount.com </li>
<li>verified that the path names to the images in bucket owned by the other project are right</li>
<li><p>when reading from a bucket that <em>does not</em> have a .com name but still is in another project (with access to the compute engine service account of the other project also set to "Storage Object Viewer") a similar error is thrown, only it's now "PermissionDenied" instead of "Unavailable".</p></li>
<li><p>with my default gcloud auth I am able to run the preprocess locally with no errors.</p>
<blockquote>
<blockquote>
<p>python trainer/preprocess.py \
--input_dict "$DICT_FILE" \
--input_path $INPUT_PATH_EVAL \
--output_path $OUTPUT_PATH_EVAL \</p>
</blockquote>
</blockquote></li>
<li><p>I've have looked at the solution <a href="https://stackoverflow.com/questions/43044566/ssl-no-alternative-certificate-subject-name-matches-target-host-name-name-stor/43082418#43082418">here</a>, only that was an older version of tensorflow that should not be an issue with 1.4 and if that was the case I would probably still be able to reach the regular non-domain bucket which I am not.</p></li>
</ul>
<p>So what am I missing with access between projects in running this example?</p>
|
<p>Google(TensorFlow) APIs do not support double wildcard format, e.g., <code>*.*.storage.googleapis.com</code>. They just support one wildcard certificate e.g., <code>*.storage.googleapis.com</code> . In your case, when you use "$BUCKET.com.storage.googleapis.com”, more than one identity of a given type is present in the certificate (e.g., more than one DNS name). For more details please refer to <a href="http://www.ietf.org/rfc/rfc2818.txt" rel="nofollow noreferrer">RFC standard</a>. </p>
<p>The Permission Denied Error was thrown due to the permission issue. So, in this case, you do not have enough access to the buckets of other projects but the service is available. For more information please refer to <a href="https://cloud.google.com/storage/docs/request-endpoints" rel="nofollow noreferrer">Request Endpoints</a>
and <a href="https://cloud.google.com/storage/docs/access-control/" rel="nofollow noreferrer">Access Control Options</a> documents. </p>
<p>The easiest solution is to copy your files into Google Storage via Google Cloud API and get the images from there. </p>
|
python|tensorflow|ssl-certificate|google-cloud-storage|google-cloud-dataflow
| 2
|
374,532
| 48,472,331
|
Evaluating a classifier in TensorFlow
|
<p>I was walking through <a href="https://pythonprogramming.net/convolutional-neural-network-kats-vs-dogs-machine-learning-tutorial/" rel="nofollow noreferrer">this tutorial</a>.</p>
<p>I couldn't figure out how to evaluate the classifier, especially finding its sensitivity, specificity, AUC, ...etc.</p>
<p>I found those in the TensorFlow documentation (<a href="https://www.tensorflow.org/api_docs/python/tf/metrics/specificity_at_sensitivity" rel="nofollow noreferrer">this</a>, and <a href="https://www.tensorflow.org/api_docs/python/tf/metrics/auc" rel="nofollow noreferrer">this</a>), but couldn't figure how to use it with the code shown in the article.</p>
<p>How can I find such measures for the code shown in the article?</p>
|
<p>As I see it, the tutorial is about classifying pictures as either a dog or a cat. After training is complete, the model is evaluated on a test data set (pictures that were not used in training): for each test data point it predicts a score for the two classes (for example cat: 0.08, dog: 0.92) and the class with the highest value is chosen. You then compare whether the predicted class matches the label. Doing this for, say, 100 test images gives you x wrongly and y correctly classified examples, from which you can compute the accuracy of your model.</p>
<p>In Tensorflow you can find out if a picture has been correctly classified with something like this:</p>
<pre><code>corr_pred = tf.equal(tf.argmax(logits, 1), label)
</code></pre>
<p>To evaluate several images at once: <a href="https://www.tensorflow.org/api_docs/python/tf/nn/in_top_k" rel="nofollow noreferrer">here</a></p>
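<p>For the AUC and sensitivity/specificity metrics the question links to, <code>tf.metrics</code> can be used once you have per-example labels and predicted probabilities. A self-contained TF 1.x sketch with made-up values; the metric state lives in local variables, hence the extra initializer:</p>
<pre><code>import numpy as np
import tensorflow as tf

labels = tf.placeholder(tf.int64, [None])         # 0/1 ground truth
predictions = tf.placeholder(tf.float32, [None])  # predicted probability of class 1

auc, auc_update = tf.metrics.auc(labels=labels, predictions=predictions)
spec, spec_update = tf.metrics.specificity_at_sensitivity(
    labels=labels, predictions=predictions, sensitivity=0.9)

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())
    feed = {labels: np.array([0, 1, 1, 0, 1]),
            predictions: np.array([0.1, 0.8, 0.6, 0.3, 0.9], dtype=np.float32)}
    sess.run([auc_update, spec_update], feed_dict=feed)  # call once per test batch
    print(sess.run([auc, spec]))
</code></pre>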
|
python|tensorflow|statistics|evaluation|auc
| 0
|
374,533
| 48,456,459
|
What is the correct way to read txt file using command line in Pandas
|
<p>I am new to Python and I am getting the error below with my code, which scans an IP address list and shows only the malware IPs:</p>
<pre><code>import os
from datetime import datetime, date, timedelta
import subprocess
import sys
import pyjq
import pandas as pd

# Initializes the variables for the directories
HomeDir = "/Users/mani/Downloads/"
ScriptDir = HomeDir + "/panpython"
ResultDir = "/Users/mani/Desktop/result"
# Create the dates
ToDay = datetime.now().strftime('%Y%m%d')
# checkDATE = (date.today() - timedelta(1)).strfttime('%Y%m%d')
ResultFile = "Test"
CheckDATE = "2015-10-01"
NOWDATE = "2015-10-02"
secretkey = 'secret key'
progToRun = 'python ' + ScriptDir + '/bin/panafapi.py -K ' + secretkey + ' --samples -j -r "{\\"query\\":{\\"operator\\":\\"all\\",\\"children\\":[{\\"field\\":\\"alias.ip_address\\",\\"operator\\":\\"contains\\",\\"value\\":\\"' + ResultFile + '\\"},{\\"operator\\":\\"any\\",\\"children\\":[{\\"field\\":\\"sample.update_date\\",\\"operator\\":\\"is in the range\\",\\"value\\":[\\"' + CheckDATE + 'T00:00:00\\",\\"' + NOWDATE + 'T23:59:59\\"]},{\\"field\\":\\"sample.create_date\\",\\"operator\\":\\"is in the range\\",\\"value\\":[\\"' + CheckDATE + 'T00:00:00\\",\\"' + NOWDATE + 'T23:59:59\\"]},{\\"operator\\":\\"any\\",\\"children\\":[{\\"field\\":\\"sample.malware\\",\\"operator\\":\\"is\\",\\"value\\":1},{\\"field\\":\\"sample.malware\\",\\"operator\\":\\"is\\",\\"value\\":4}]}]}]},\\"scope\\":\\"global\\",\\"size\\":1,\\"from\\":0,\\"sort\\":{\\"create_date\\":{\\"order\\":\\"desc\\"}}}" > ' + ResultDir + 'srciplist-' + ToDay + '.json'
# Run the panafpi
subprocess.check_output(progToRun, shell=True)
# Using pyjq to filter
filteredResultData = pyjq.all('.hits[]._source | .create_date + "," + .sha256')
file_to_open=sys.argv[1]
df=pd.read_csv(file_to_open)
df.to_csv(ResultDir + "/srciplist-" + ToDay + ".csv", sep=',')
</code></pre>
<p>I did this in command line, then it scans and show this error</p>
<pre><code>python finalauto.py xyz.txt
samples_search: 200 OK 339 0%
.......
samples_results: 200 OK 100% hits=1 total=674415 time=0:08:57.082 "complete"
</code></pre>
<p>error:</p>
<pre><code> Traceback (most recent call last):
File "finalauto.py", line 36, in <module>
df=pd.read_csv(file_to_open)
File "/usr/local/lib/python3.6/site-packages/pandas/io/parsers.py", line 709, in parser_f
return _read(filepath_or_buffer, kwds)
File "/usr/local/lib/python3.6/site-packages/pandas/io/parsers.py", line 455, in _read
data = parser.read(nrows)
File "/usr/local/lib/python3.6/site-packages/pandas/io/parsers.py", line 1069, in read
ret = self._engine.read(nrows)
File "/usr/local/lib/python3.6/site-packages/pandas/io/parsers.py", line 1839, in read
data = self._reader.read(nrows)
File "pandas/_libs/parsers.pyx", line 902, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 924, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas/_libs/parsers.pyx", line 978, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 965, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 2208, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Expected 1 fields in line 778, saw 3
</code></pre>
|
<p><strong>1)</strong> There is a module in the standard library called <code>csv</code>. It is probably better to use that when creating CSVs. Note that the output path goes into <code>open()</code> and <code>writerow()</code> expects a sequence of field values, one call per row:</p>
<pre><code>import csv

with open(ResultDir + "/srciplist-" + ToDay + ".csv", 'w') as f:
    writer = csv.writer(f, delimiter=',')
    writer.writerow(['first_field', 'second_field'])  # one list of field values per row
</code></pre>
<p><strong>2)</strong> Here is some code for opening a file in the command line:</p>
<pre><code>import sys
with open(sys.argv[1], 'r') as f:
contents = f.read()
# Continue code below
</code></pre>
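<p><strong>3)</strong> As for the <code>ParserError</code> itself: pandas expected one field per line but found three on line 778, so the input is not uniformly delimited. A hedged sketch, assuming you want to state the delimiter explicitly and skip the malformed rows rather than fail:</p>
<pre><code>import sys
import pandas as pd

# adjust sep to the file's real delimiter; bad rows are reported and skipped
df = pd.read_csv(sys.argv[1], sep=',', error_bad_lines=False, warn_bad_lines=True)
</code></pre>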
|
python|pandas
| 0
|
374,534
| 48,646,485
|
Append a "layer" to 3D-Array with Numpy
|
<p>I have a numpy array with dimensions <code>12 x 12 x 4</code>. Now I'm trying to add an extra layer to this cube resulting in a <code>12 x 13 x 4</code> array. This 13th layer should contain the corresponding indices from the first axis, so for example addressing <code>[7, 12, :]</code> results in <code>[7, 7, 7, 7]</code>.</p>
<p>Hard to explain but maybe someone has some advice on how to achieve this with numpy?</p>
<p>EDIT:
I've found a solution, though it seems a little overcomplicated:</p>
<pre><code># Generate extra layer
layer = np.repeat(np.arange(0, 12)[:, np.newaxis], data.shape[2], axis=1)
# Get dimensions right...
layer = np.expand_dims(layer, axis=1)
# ... and finally append to data
result = np.append(data, layer, axis=1)
</code></pre>
<p>Still open for better suggestions.</p>
|
<p>You have the right idea. A slight simplification:</p>
<pre><code>layer = np.repeat(np.arange(data.shape[0])[:,None,None], data.shape[2], axis=2)
result = np.concatenate((data, layer), axis=1)
</code></pre>
|
numpy|numpy-ndarray
| 2
|
374,535
| 48,629,632
|
Merging a pandas dataframe with a pivot table
|
<p>I have 2 pieces of data that I want to merge. <br><br>
<code>df1</code> is a pandas dataframe that contains a list of contracts, where <code>year</code> is the year the contract was was executed, and <code>o_id</code> refers to the id of the organization that this contract is from.<br><br>
<code>df2</code> is a pivot table comprised of an organization's problems over the years (where year is the year in which the audit was competed to check for organization problems). <code>P_1</code> and <code>P_2</code> refer to problem 1 and problem 2.</p>
<pre><code>df1
c_id | o_id | year |
====================
101 | 10 | 2013 |
102 | 10 | 2014 |
103 | 10 | 2015 |
103 | 10 | 2016 |
121 | 12 | 2013 |
122 | 12 | 2014 |
123 | 12 | 2015 |
123 | 12 | 2016 |
df2
P_1 | P_2
year | 2013 | 2014 | 2015 | 2013 | 2014 | 2015 |
id |
================================================
10 | 1 | 0 | 0 | 0 | 0 | 0 |
12 | 0 | 1 | 0 | 1 | 1 | 0 |
</code></pre>
<p>The aim is to merge these two data sets in order to capture the 'history' of problems for each contract relative to the year in which that contract was executed (merging on <code>df1['o_id'] = df2['id']</code>). </p>
<p>Note that I cannot include history for the year in which the contract was executed (e.g. a 2015 contract can only use history from 2014 and before). </p>
<p>I'm looking to make the final output look like this:</p>
<pre><code>id | year | 2013_P_1 | 2014_P_1 | 2015_P_1 | 2013_P_2 | 2014_P_2 | 2015_P_2
===============================================================================
10 | 2013 | NA | NA | NA | NA | NA | NA
10 | 2014 | 1 | NA | NA | 0 | NA | NA
10 | 2015 | 1 | 0 | NA | 0 | 0 | NA
10 | 2016 | 1 | 0 | 0 | 0 | 0 | 0
12 | 2013 | NA | NA | NA | NA | NA | NA
12 | 2014 | 0 | NA | NA | 1 | NA | NA
12 | 2015 | 0 | 1 | NA | 1 | 1 | NA
12 | 2016 | 0 | 1 | 0 | 1 | 1 | 0
</code></pre>
|
<p>First reshape <code>df2</code> by <code>stack</code> and <code>join</code> <code>df1</code>, then replace values by <code>NaN</code>s by custom function:</p>
<pre><code>import numpy as np
import pandas as pd

df = (df1.drop('c_id', 1)
.join(df2.stack(0).reset_index(level=1), on='o_id')
.set_index(['o_id','year', 'level_1']))
def f(x):
il1 = np.triu_indices(len(x.columns))
a = x.values.astype(float)
a[il1] = np.nan
x = pd.DataFrame(a, columns=x.columns, index=x.index)
return (x)
df = df.groupby(['o_id','level_1']).apply(f).unstack().sort_index(axis=1, level=1)
df.columns = ['{}_{}'.format(a,b) for a,b in df.columns]
df = df.reset_index()
print (df)
o_id year 2013_P_1 2014_P_1 2015_P_1 2013_P_2 2014_P_2 2015_P_2
0 10 2013 NaN NaN NaN NaN NaN NaN
1 10 2014 1.0 NaN NaN 0.0 NaN NaN
2 10 2015 1.0 0.0 NaN 0.0 0.0 NaN
3 10 2016 1.0 0.0 0.0 0.0 0.0 0.0
4 12 2013 NaN NaN NaN NaN NaN NaN
5 12 2014 0.0 NaN NaN 1.0 NaN NaN
6 12 2015 0.0 1.0 NaN 1.0 1.0 NaN
7 12 2016 0.0 1.0 0.0 1.0 1.0 0.0
</code></pre>
|
python|pandas|merge
| 2
|
374,536
| 48,600,521
|
Recovering parameters for wald distribution: from numpy to scipy
|
<p>Could someone please help with a question about the parametrization of scipy distributions and how to transform them?</p>
<p>I basically would like to recover distribution parameters of data that I simulate with numpy... </p>
<pre><code>some_data = np.random.normal(loc=81, scale=7, size=100000)
</code></pre>
<p>...by fitting a distribution with scipy</p>
<pre><code>recovered_parms = scipy.stats.norm.fit(some_data)
</code></pre>
<p>For the normal distribution, this works. recovered_parms ~= (81,7) </p>
<p>However, for e.g. a wald distribution it does not. </p>
<pre><code>some_data = np.random.wald(mean=4, scale=41, size=100000)
recovered_parms = scipy.stats.wald.fit(some_data)
</code></pre>
<p>Result: recovered_parms ~= (1.28,3.66) </p>
<p>I understand that they need to be transformed but just can't figure out how. Any help appreciated. </p>
|
<p><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.wald.html" rel="nofollow noreferrer"><code>numpy.random.wald</code></a> has two parameters, <code>mean</code> and <code>scale</code>.
<code>scale</code> is, as the name suggests, a <em>scale parameter</em>, in the sense
of a <a href="https://en.wikipedia.org/wiki/Location%E2%80%93scale_family" rel="nofollow noreferrer">location-scale family</a>.
<code>mean</code> is a shape parameter; it is <em>not</em> a location parameter.</p>
<p>If you look at the docstring for <code>numpy.random.wald</code>, it says
"Draw samples from a Wald, or inverse Gaussian, distribution."
The docstring for <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.wald.html" rel="nofollow noreferrer"><code>scipy.stats.wald</code></a>, however, says that it is
"a special case of <code>invgauss</code> with <code>mu == 1</code>", where <code>mu</code> is a
shape parameter of <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.invgauss.html" rel="nofollow noreferrer"><code>scipy.stats.invgauss</code></a>. <code>scipy.stats.wald</code> has
only two parameters, <code>loc</code> and <code>scale</code>. (All the continuous
distributions in <code>scipy.stats</code> have these parameters.) So the parameters
of <code>numpy.random.wald</code> and <code>scipy.stats.wald</code> don't match up:
<code>numpy.random.wald</code> has a shape and a scale parameter, but <code>scipy.stats.wald</code> has a location and a scale parameter.</p>
<p>Instead of <code>scipy.stats.wald</code>, you must use <code>scipy.stats.invgauss</code> to fit data generated with <code>numpy.random.wald</code>.
<code>scipy.stats.invgauss</code> is an implementation of the inverse Gaussian distribution that is
mentioned in the docstring of <code>numpy.random.wald</code>. <code>scipy.stats.invgauss</code>
has three parameters: one shape parameter called <code>mu</code>, along with the
standard location (<code>loc</code>) and scale parameters.</p>
<p>The shape parameter <code>mu</code> of <code>scipy.stats.invgauss</code> is not the same as the
shape parameter <code>mean</code> of <code>numpy.random.wald</code>. If you do a little algebra with the PDFs of the two functions, you'll find that the relation is</p>
<pre><code>mean = mu * scale
</code></pre>
<p>where <code>mu</code> is the <code>invgauss</code> shape parameter, <code>mean</code> is the shape parameter used in <code>numpy.random.wald</code>, and <code>scale</code> has the same meaning in both functions.</p>
<p>If you generate a sample using <code>numpy.random.wald</code> and you then want to
recover the parameters by fitting the inverse Gaussian distribution to
it, you must use the above relation to convert the result of the fit to
the <code>mean</code> used by <code>numpy.random.wald</code>. Also, <code>numpy.random.wald</code> doesn't
have a location parameter, so you must restrict the location of <code>scipy.stats.invgauss</code> to be 0 by using the argument <code>floc=0</code> in <code>scipy.stats.invgauss.fit()</code>.</p>
<p>Here's an example. First, generate some data using <code>numpy.random.wald</code>:</p>
<pre><code>In [55]: m = 4
In [56]: s = 41
In [57]: some_data = np.random.wald(mean=m, scale=s, size=100000)
</code></pre>
<p>Now fit <code>scipy.stats.invgauss</code> to that data, with the restriction that the
location parameter is 0:</p>
<pre><code>In [58]: from scipy.stats import invgauss
In [59]: mu, loc, scale = invgauss.fit(some_data, floc=0)
In [60]: mu, loc, scale
Out[60]: (0.097186409353576975, 0, 41.155034600558793)
</code></pre>
<p>As expected, the <code>scale</code> parameter is close to the parameter that was used to generate the data. To get the estimate of the shape parameter that was used, multiply <code>mu</code> and <code>scale</code>:</p>
<pre><code>In [61]: mu*scale
Out[61]: 3.9997100396505312
</code></pre>
<p>It is approximately 4, as expected.</p>
<p>A plot is always useful for visualizing the fit. In the plot, the blue bars show the normalized histogram of the data, and the black curve is the PDF of the fitted inverse Gaussian distribution.</p>
<pre><code>In [86]: import matplotlib.pyplot as plt
In [87]: _ = plt.hist(some_data, bins=40, normed=True, alpha=0.6)
In [88]: xx = np.linspace(some_data.min(), some_data.max(), 500)
In [89]: yy = invgauss.pdf(xx, mu, loc, scale)
In [90]: plt.plot(xx, yy, 'k')
Out[90]: [<matplotlib.lines.Line2D at 0x11b6d64e0>]
</code></pre>
<p><a href="https://i.stack.imgur.com/js1vJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/js1vJ.png" alt="plot"></a></p>
|
python|numpy|scipy|statistics|distribution
| 2
|
374,537
| 48,786,388
|
numpy ravel function return issue
|
<p>While ravel() returns a contiguous 1D array of all the elements in an nD array, I observed that it seemingly returned only the unique elements of X1, as shown below. Am I missing anything?</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>tst
Out[44]:
array([[1, 2, 3, 4],
[1, 2, 3, 4]])
tst.ravel()
Out[45]: array([1, 2, 3, 4, 1, 2, 3, 4])
X1
Out[46]:
array([[-2.99318916, -2.98318916, -2.97318916, ..., 3.13681084,
3.14681084, 3.15681084],
[-2.99318916, -2.98318916, -2.97318916, ..., 3.13681084,
3.14681084, 3.15681084],
[-2.99318916, -2.98318916, -2.97318916, ..., 3.13681084,
3.14681084, 3.15681084],
...,
[-2.99318916, -2.98318916, -2.97318916, ..., 3.13681084,
3.14681084, 3.15681084],
[-2.99318916, -2.98318916, -2.97318916, ..., 3.13681084,
3.14681084, 3.15681084],
[-2.99318916, -2.98318916, -2.97318916, ..., 3.13681084,
3.14681084, 3.15681084]])
X1.ravel()
Out[47]:
array([-2.99318916, -2.98318916, -2.97318916, ..., 3.13681084,
        3.14681084,  3.15681084])</code></pre>
|
<p>This is a printing issue.
You can tune this behavior with <code>numpy.set_printoptions</code>:</p>
<pre><code>In [404]: np.array([arange(10)]*10).ravel()
Out[404]:
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2,
3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5,
6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8,
9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1,
2, 3, 4, 5, 6, 7, 8, 9])
In [405]: np.set_printoptions(threshold=90)
In [406]: np.array([arange(10)]*10).ravel()
Out[406]: array([0, 1, 2, ..., 7, 8, 9])
</code></pre>
|
python|numpy
| 0
|
374,538
| 48,688,382
|
Pandas module in SPSS Modeler
|
<p>I need to put a certain code developed in Python 3 into a SPSS Modeler node (using the Extension Transform node). This code uses pandas and the default installation of Modeler doesn't include this module.</p>
<p>I tried to make SPSS to point to my own Python installation (which includes pandas module) by modifying the 'options.cfg' file following these instructions:</p>
<p><a href="https://www.ibm.com/support/knowledgecenter/en/SS3RA7_sub/modeler_r_nodes_ddita/clementine/r_pyspark_api.html" rel="nofollow noreferrer">https://www.ibm.com/support/knowledgecenter/en/SS3RA7_sub/modeler_r_nodes_ddita/clementine/r_pyspark_api.html</a></p>
<p>However, when I try to import pandas inside SPSS Modeler, it isn't able to load the module. In fact I am not able to load pyspark neither by writing:</p>
<p><code>import spss.pyspark</code></p>
<p>Also when I try to see the directory of the python executable: </p>
<p><code>import sys
print sys.executable</code></p>
<p>SPSS gives back a 'None' value.</p>
<p>How can I get pandas to work in SPSS Modeler? It seems that I am not able to import any module in Modeler. I am a beginner in SPSS, so any help would be appreciated.</p>
|
<p>You can install new packages into your existing SPSS Modeler 18.1 installation by going to its installation path, e.g. "C:\Program Files\IBM\SPSS\Modeler\18.1", and then into the <code>python</code> folder. There, open a Windows command shell in admin mode and enter</p>
<blockquote>
<p>python.exe -m pip install pandas</p>
</blockquote>
<p>and it will install the library for SPSS to use.</p>
|
python|pandas|pyspark|spss-modeler
| 4
|
374,539
| 48,822,061
|
If statement string data from dataframe does not work for larger years
|
<p>I have a problem with an if statement on data from my dataframe. When filtering for years > 3 years, all values larger than 9Y are not showing up, and it is not clear why. The output looks like the following:</p>
<blockquote>
<pre><code>4Y
5Y
6Y
7Y
8Y
9Y
4Y
5Y
6Y
7Y
8Y
9Y
</code></pre>
</blockquote>
<p>My code looks like the following:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([
['2015-02-09', '1Y', 2.241],
['2015-02-09', '1Y', 2.413],
['2015-02-09', '2Y', 2.228],
['2015-02-09', '2Y', 2.289],
['2015-02-09', '3Y', 2.263],
['2015-02-09', '3Y', 2.371],
['2015-02-09', '4Y', 2.413],
['2015-02-09', '5Y', 2.487],
['2015-02-09', '6Y', 2.578],
['2015-02-09', '7Y', 2.655],
['2015-02-09', '8Y', 2.74959],
['2015-02-09', '9Y', 2.81729],
['2015-02-09', '10Y', 2.853],
['2015-02-09', '12Y', 2.942],
['2015-02-09', '15Y', 3.047],
['2015-02-09', '20Y', 3.165],
['2015-02-09', '25Y', 3.225],
['2015-02-09','30Y', 3.225],
['2015-02-09', '1Y', 9.5],
['2015-02-09', '2Y', 8.75],
['2015-02-09', '3Y', 8.5],
['2015-02-09', '4Y', 8.13],
['2015-02-09', '5Y', 7.75],
['2015-02-09', '6Y', 7.63],
['2015-02-09', '7Y', 7.5],
['2015-02-09', '8Y', 7.45],
['2015-02-09','9Y', 7.25],
['2015-02-09', '10Y', 7.125],
['2015-02-09', '12Y', 7.08],
['2015-02-09', '15Y', 7.04],
['2015-02-09', '20Y', 6.435],
['2015-02-09', '25Y', 5.83],
['2015-02-09', '30Y', 5.45]
], columns=['date', 'year', 'values'])
for index, row in df.iterrows():
if row['year'] > '3Y':
print(row['year'])
</code></pre>
|
<p>The problem is that you compare strings lexicographically, so <code>10Y < 3Y</code>. The solution is to convert the values to integers.</p>
<pre><code>df['mask'] = df['year'].str.extract('(\d+)', expand=False).astype(int) > 3
</code></pre>
<hr>
<pre><code>print (df)
date year values mask
0 2015-02-09 1Y 2.24100 False
1 2015-02-09 1Y 2.41300 False
2 2015-02-09 2Y 2.22800 False
3 2015-02-09 2Y 2.28900 False
4 2015-02-09 3Y 2.26300 False
5 2015-02-09 3Y 2.37100 False
6 2015-02-09 4Y 2.41300 True
7 2015-02-09 5Y 2.48700 True
8 2015-02-09 6Y 2.57800 True
9 2015-02-09 7Y 2.65500 True
10 2015-02-09 8Y 2.74959 True
11 2015-02-09 9Y 2.81729 True
12 2015-02-09 10Y 2.85300 True
13 2015-02-09 12Y 2.94200 True
14 2015-02-09 15Y 3.04700 True
15 2015-02-09 20Y 3.16500 True
16 2015-02-09 25Y 3.22500 True
17 2015-02-09 30Y 3.22500 True
18 2015-02-09 1Y 9.50000 False
19 2015-02-09 2Y 8.75000 False
20 2015-02-09 3Y 8.50000 False
21 2015-02-09 4Y 8.13000 True
22 2015-02-09 5Y 7.75000 True
23 2015-02-09 6Y 7.63000 True
24 2015-02-09 7Y 7.50000 True
25 2015-02-09 8Y 7.45000 True
26 2015-02-09 9Y 7.25000 True
27 2015-02-09 10Y 7.12500 True
28 2015-02-09 12Y 7.08000 True
29 2015-02-09 15Y 7.04000 True
30 2015-02-09 20Y 6.43500 True
31 2015-02-09 25Y 5.83000 True
32 2015-02-09 30Y 5.45000 True
</code></pre>
<p>Loop solution from comment of @CristiFati:</p>
<pre><code>for index, row in df.iterrows():
if int(row["year"][:-1]) > 3:
print(row['year'])
</code></pre>
<p>Or with regex:</p>
<pre><code>import re
for index, row in df.iterrows():
if int(re.search(r'\d+', row["year"]).group()) > 3:
print(row['year'])
</code></pre>
<p>Also is possible create integer column first:</p>
<pre><code>df['year-int'] = df['year'].str.extract('(\d+)', expand=False).astype(int)
for index, row in df.iterrows():
if row["year-int"] > 3:
print(row['year'])
</code></pre>
|
python|pandas|loops|dataframe
| 3
|
374,540
| 48,633,293
|
Matplotlib: Drawing contour lines independent of x and y
|
<p>I am trying to draw contour lines (elevation) associated with x and y coordinates. I have read examples <a href="https://matplotlib.org/examples/pylab_examples/contour_demo.html" rel="nofollow noreferrer">here</a> on how you draw contours on Matplotlib when z is defined by x and y but how can I draw contour lines that are independent of x and y?</p>
<p>This is my code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
data = [(0, 200, 140), (100, 430, 260), (800, 340, 320), (250, 110, 430), (290, 40, 100), (590, 35, 180)]
x = np.arange(0, 900, 20)
y = np.arange(0, 500, 20)
X, Y = np.meshgrid(x, y)
Z = [i[2] for i in data]
Z = np.array(Z)
plt.figure()
plt.contour(X, Y, Z)
plt.show()
</code></pre>
<p>I get an error "TypeError: Input z must be a 2D array."</p>
<p>This is the first time I am trying to draw contour lines and I appreciate any help on this.</p>
|
<p>Given that there are only 6 data points, a contour plot drawn from those may not be very informative. Still, the concept would be the same for more points. </p>
<p>Of course one cannot draw contour lines where x,y and z are independent. If you have 6 z points, you need 6 x points and 6 y points - which you have. So the solution may be rather trivial - just use <code>tricontour</code> instead of <code>contour</code>:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
data = [(0, 200, 140), (100, 430, 260), (800, 340, 320),
(250, 110, 430), (290, 40, 100), (590, 35, 180)]
x,y,z = zip(*data)
plt.figure()
plt.tricontour(x,y,z)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/D6YzY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D6YzY.png" alt="enter image description here"></a></p>
<p>More generally, you may also interpolate your data. See <a href="https://stackoverflow.com/questions/18764814/make-contour-of-scatter">Make contour of scatter</a>.</p>
|
python|arrays|numpy|matplotlib|contour
| 2
|
374,541
| 48,599,207
|
Python: How to get Dataframes with get_groups in for loops
|
<p>I have grouped a DataFrame using <code>data.groupby('column)</code> and now I want to create a dataframe from each group:</p>
<pre><code>for i in data_group.indices:
i = data_group.get_group(i)
</code></pre>
<p>I can print the dataframes out within the for-loop, but I can't access them otherwise... Somehow the naming of the <code>DataFrame</code> with a variable is not working. Does anyone have a solution?</p>
|
<p>You can store them in a list </p>
<pre><code>g=data.groupby('column')
l=[]
for x,df in g :
l.append(df)
</code></pre>
<p>Or using <code>get_group</code></p>
<pre><code>g.get_group('groupkey')
</code></pre>
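<p>If you would rather look the groups up by their key than by list position, a dict comprehension works just as well:</p>
<pre><code>g = data.groupby('column')
frames = {name: group for name, group in g}
frames['groupkey']   # the DataFrame for that group
</code></pre>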
|
python|pandas|dataframe
| 1
|
374,542
| 48,471,688
|
Using tensorflow's Dataset pipeline, how do I *name* the results of a `map` operation?
|
<p>I have the map function below (runnable example), which inputs a <code>string</code> and outputs a <code>string</code> and an <code>integer</code>.</p>
<p>In <code>tf.data.Dataset.from_tensor_slices</code> I named the original input <code>'filenames'</code>. But when I return the values from the map function <code>map_element_counts</code> I can only return a tuple (returning a dictionary generates an exception).</p>
<p>Is there a way to name the 2 elements returned from my <code>map_element_counts</code> function?</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
filelist = ['fileA_6', 'fileB_10', 'fileC_7']
def map_element_counts(fname):
# perform operations outside of tensorflow
return 'test', 10
ds = tf.data.Dataset.from_tensor_slices({'filenames': filelist})
ds = ds.map(map_func=lambda x: tf.py_func(
func=map_element_counts, inp=[x['filenames']], Tout=[tf.string, tf.int64]
))
element = ds.make_one_shot_iterator().get_next()
with tf.Session() as sess:
print(sess.run(element))
</code></pre>
<p>Result:</p>
<pre><code>(b'test', 10)
</code></pre>
<p>Desired Result:</p>
<pre><code>{'elementA': b'test', 'elementB': 10}
</code></pre>
<hr>
<p><strong>Added detail:</strong></p>
<p>When I do <code>return {'elementA': 'test', 'elementB': 10}</code> I get this exception:</p>
<pre><code>tensorflow.python.framework.errors_impl.UnimplementedError: Unsupported object type dict
</code></pre>
|
<p>I'm posing a final solution to this question for posterity sake. The code below is a copy/paste example that works under the most complex conditions this question addresses (note that the other two answers aren't copy/pastable code samples):</p>
<p>The goal of the code is:</p>
<ul>
<li>Take a list of (big) files and split it into chunks (filename/index pairs)</li>
<li>Process each chunk using a map operation (generators aren't a workable solution here, see: <a href="https://github.com/tensorflow/tensorflow/issues/16343" rel="noreferrer">https://github.com/tensorflow/tensorflow/issues/16343</a>)</li>
<li>Output multiple samples from a map operation that takes only 1 file/chunk as input.</li>
<li>Maintain element naming throughout the process</li>
</ul>
<p>Copy/pastable working sample for Tensorflow 1.5 / Python 3.x</p>
<pre><code>import tensorflow as tf
import numpy as np
files = [b'testA', b'testB', b'testC']
def mymap1(x):
result_tensors = tf.py_func(func=mymap2, inp=[x], Tout=[tf.string, tf.int64])
return {'filename': result_tensors[0], 'value': result_tensors[1]}
def mymap2(x):
return np.array([x, x, x]), np.array([10, 20, 30])
def myflatmap(named_elements):
return tf.data.Dataset.zip({
'filename': tf.data.Dataset.from_tensor_slices(named_elements['filename']),
'value': tf.data.Dataset.from_tensor_slices(named_elements['value'])
})
ds = tf.data.Dataset.from_tensor_slices(files)
ds = ds.map(map_func=mymap1)
ds = ds.flat_map(map_func=myflatmap)
element = ds.make_one_shot_iterator().get_next()
with tf.Session() as sess:
for _ in range(9):
print(sess.run(element))
</code></pre>
<p>Output:</p>
<pre><code>{'filename': b'testA', 'value': 10}
{'filename': b'testA', 'value': 20}
{'filename': b'testA', 'value': 30}
{'filename': b'testB', 'value': 10}
{'filename': b'testB', 'value': 20}
{'filename': b'testB', 'value': 30}
{'filename': b'testC', 'value': 10}
{'filename': b'testC', 'value': 20}
{'filename': b'testC', 'value': 30}
</code></pre>
|
python|dictionary|tensorflow|mapping|tensorflow-datasets
| 6
|
374,543
| 48,736,176
|
Cross product between columns of two matrices
|
<p>Given two matrices <a href="https://i.stack.imgur.com/Gngz8.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gngz8.gif" alt="enter image description here"></a> and <a href="https://i.stack.imgur.com/zck2c.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zck2c.gif" alt="enter image description here"></a> of dimensions <code>D x N</code>, I want to compute <a href="https://i.stack.imgur.com/RgLNV.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RgLNV.gif" alt="enter image description here"></a>. Could you please suggest a vectorized way to do this calculation?</p>
<p>Thank you! </p>
|
<p>Here is another way to do it:</p>
<pre><code>np.sum(x*y, axis=0)
</code></pre>
<p>Efficiency: </p>
<pre><code>x = np.random.randint(0, 10, size=(30, 400))
y = np.random.randint(0, 10, size=(30, 400))
%timeit np.sum(x*y, axis=0)
# 38.4 µs ± 942 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit for i in range(len(x.T)): x[:, i].dot(y[:, i].T)
# 1.01 ms ± 19 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit np.diag(x.T.dot(y))
# 6.57 ms ± 248 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
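<p>As a side note, <code>np.einsum</code> expresses the same column-wise dot product directly (a small sketch, equivalent to the <code>np.sum(x*y, axis=0)</code> approach above):</p>
<pre><code>import numpy as np

x = np.random.randint(0, 10, size=(30, 400))
y = np.random.randint(0, 10, size=(30, 400))

# sum over the rows (index i) for every column j -> one dot product per column
col_dots = np.einsum('ij,ij->j', x, y)

assert np.allclose(col_dots, np.sum(x * y, axis=0))
</code></pre>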
|
python|numpy|matrix
| 3
|
374,544
| 48,797,147
|
Drop string that is all zeros - pandas python
|
<p>I have a dataframe of strings that looks like this: </p>
<pre><code>In [58]: d['upin'].head()
Out[58]:
0 'H8409
1 'H8409
2 .31961
3 .31961
4 000000
Name: upin, dtype: object
</code></pre>
<p>I want to drop the rows that have only zeros, i.e. the last row in this example. I haven't found a nice regexp way of doing this, and I'm not sure what else to try. </p>
|
<p>This should work if I understood your problem correctly:</p>
<pre><code>import pandas as pd

df = pd.DataFrame()
df['pin'] = ["00000","F4923","'222R","0","00001"]
ndf = df[~(df['pin'].str.contains('^0+$'))]
</code></pre>
|
python|regex|pandas
| 3
|
374,545
| 48,453,509
|
Transform all elements positionally below 0 into 0 in a matrix (Python)
|
<p>This is a matrix :</p>
<pre><code>matrix = [[1, 1, 1, 0],
[0, 5, 0, 1],
[2, 1, 3, 10]]
</code></pre>
<p>I want to change all the element <em>positionally</em> below 0 into 0 (on the same column).</p>
<p>The resulting matrix will be :</p>
<pre><code>matrix = [[1, 1, 1, 0],
[0, 5, 0, 0],
[0, 1, 0, 0]]
</code></pre>
<p>I tried this so far. The return is empty</p>
<pre class="lang-python prettyprint-override"><code>import numpy as np
def transform(matrix):
newmatrix = np.asarray(matrix)
i = 0
j = 0
for j in range(0,len(matrix[0])-1):
while i < int(len(matrix))-1 and j < int(len(matrix[0]))-1:
if newmatrix[i][j] == 0:
np.put(newmatrix,newmatrix[i+1][j], 0 )
i +=1
return print (newmatrix)
</code></pre>
|
<h1>Method 1 (Original)</h1>
<pre><code>import numpy as np
def transform(matrix):
mat = np.asarray(matrix)
mat[np.logical_not(np.not_equal(mat, 0).cumprod(axis=0))] = 0
# Alternatively:
# mat[~(mat != 0).cumprod(axis=0, dtype=np.bool)] = 0
# or,
# mat[~((mat != 0).cumprod(axis=0, dtype=np.bool))] = 0
return mat
</code></pre>
<p>Then with your sample data, I get the following <code>mat</code>:</p>
<pre><code>In [195]: matrix = [[1, 1, 1, 0],
...: [0, 5, 0, 1],
...: [2, 1, 3, 10]]
In [196]: transform(matrix)
Out[196]:
array([[1, 1, 1, 0],
[0, 5, 0, 0],
[0, 1, 0, 0]])
</code></pre>
<h1>Method 2 (further optimized)</h1>
<pre><code>def transform2(matrix):
mat = np.asarray(matrix)
mat *= (mat != 0).cumprod(axis=0, dtype=np.bool)
return mat
</code></pre>
<h1>Method 3 (even more optimized)</h1>
<pre><code>def transform3(matrix):
mat = np.asarray(matrix)
mat *= mat.cumprod(axis=0, dtype=np.bool)
return mat
</code></pre>
<h1>Explanation</h1>
<p>Let's look at the main statement (in Method 1):</p>
<pre><code>mat[np.logical_not(np.not_equal(mat, 0).cumprod(axis=0))] = 0
</code></pre>
<p>We can split it into several "elementary" operations:</p>
<ol>
<li><p>Create a boolean mask containing <code>False</code> (numerically <code>0</code>) where elements of <code>mat</code> are <code>0</code> and <code>True</code> (numerically <code>1</code>) where they are non-zero:</p>
<pre><code>mask1 = np.not_equal(mat, 0)
</code></pre></li>
<li><p>Using the fact that numerically <code>False</code> is 0, use <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.cumprod.html" rel="nofollow noreferrer"><code>cumprod()</code></a> function (a good explanation can be found here: <a href="https://www.mathworks.com/help/matlab/ref/cumprod.html" rel="nofollow noreferrer">https://www.mathworks.com/help/matlab/ref/cumprod.html</a>) </p>
<pre><code>mask2 = mask1.cumprod(axis=0)
</code></pre>
<p>Since <code>1*1==1</code> and <code>0*0</code> or <code>0*1</code> is <code>0</code>, all elements of this "mask" will be either <code>0</code> or <code>1</code>. They will be <code>0</code> only in locations for which <code>mask1</code> is zero <em>and below</em> (!) because of the "cumulative nature" of the product <em>along the columns</em> (hence <code>axis=0</code>).</p></li>
<li><p>Now, we want set those elements of <code>mat</code> that correspond to <code>0</code> in <code>mask2</code> to <code>0</code>. To do this we create a boolean mask that is <code>True</code> where <code>mask2</code> is <code>0</code> and <code>False</code> elsewhere. This can be easily achieved by applying logical (or binary) NOT to <code>mask2</code>:</p>
<pre><code>mask3 = np.logical_not(mask2)
</code></pre>
<p>Using "logical" NOT here creates a boolean array thus we avoid explicit type conversion.</p></li>
<li><p>Finally we use <a href="https://docs.scipy.org/doc/numpy-1.13.0/user/basics.indexing.html#boolean-or-mask-index-arrays" rel="nofollow noreferrer">Boolean Indexing</a> to select those elements of <code>mat</code> that need to be set to <code>0</code> and set them to <code>0</code>:</p>
<pre><code>mat[mask3] = 0
</code></pre></li>
</ol>
<hr>
<h2>OPTIONAL OPTIMIZATION</h2>
<p>If you think of it, we can get rid of steps 3 and 4 if we do the following:</p>
<pre><code>mask2 = mask1.cumprod(axis=0, dtype=np.bool) #convert result to boolean type
mat *= mask2 # combined step 3&4
</code></pre>
<p>See "Method 2" section above for a complete implementation.</p>
<h2>PERFORMANCE</h2>
<p>There have been several additional answers that use <code>numpy.ufunc.accumulate()</code>. Fundamentally, all these methods revolve around the idea that <code>0</code> is a "special" value in the sense that <code>0*anything==0</code> or, in the case of @DSM's answer, that <code>False=0 < True=1</code>, AND letting <code>numpy</code> perform a "cumulative" operation on arrays.</p>
<p>There are some variations in performance but most are minimal except for my method #1 that is slower than other methods.</p>
<p>Here are some timing tests for more functions. NOTE: in order to perform tests correctly, we need to use large arrays. Small array tests would measure overhead, caching, etc.</p>
<pre><code>In [1]: import sys
...: import numpy as np
...:
In [2]: print(sys.version)
...:
3.6.2 |Continuum Analytics, Inc.| (default, Jul 20 2017, 13:14:59)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)]
In [3]: print(np.__version__)
...:
1.12.1
In [4]: # Method 1 (Original)
...: def transform1(matrix):
...: mat = np.asarray(matrix)
...: mat[np.logical_not(np.not_equal(mat, 0).cumprod(axis=0))] = 0
...: return mat
...:
In [5]: # Method 2:
...: def transform2(matrix):
...: mat = np.asarray(matrix)
...: mat *= (mat != 0).cumprod(axis=0, dtype=np.bool)
...: return mat
...:
In [6]: # @DSM method:
...: def transform_DSM(matrix):
...: mat = np.asarray(matrix)
...: mat *= np.minimum.accumulate(mat != 0)
...: return mat
...:
In [7]: # @DanielF method:
...: def transform_DanielF(matrix):
...: mat = np.asarray(matrix)
...: mat[~np.logical_and.accumulate(mat, axis = 0)] = 0
...: return mat
...:
In [8]: # Optimized @DanielF method:
...: def transform_DanielF_optimized(matrix):
...: mat = np.asarray(matrix)
...: mat *= np.logical_and.accumulate(mat, dtype=np.bool)
...: return mat
...:
In [9]: matrix = np.random.randint(0, 20000, (20000, 20000))
In [10]: %timeit -n1 transform1(matrix)
22.1 s ± 241 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [11]: %timeit -n1 transform2(matrix)
9.29 s ± 185 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [12]: %timeit -n1 transform3(matrix)
9.23 s ± 180 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [13]: %timeit -n1 transform_DSM(matrix)
9.24 s ± 195 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [14]: %timeit -n1 transform_DanielF(matrix)
10.3 s ± 219 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [15]: %timeit -n1 transform_DanielF_optimized(matrix)
9.27 s ± 187 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
<p>My initial solution (Method 1) is slowest while other methods are much faster. @DanielF original method is somewhat slower due to use of boolean indexing (but the optimized variant is as fast as other optimized methods).</p>
|
python|python-3.x|numpy|linear-algebra
| 2
|
374,546
| 48,860,117
|
Why PerformanceWarning when indexed lookup on sorted index?
|
<p>Does anyone know why this gives a PerformanceWarning?</p>
<pre><code>d=pd.DataFrame(
[
[1,2,3],
[1,2,4],
[1,None,5],
[2,3,5],
],
columns=['i','j','k']
)
print d.dtypes
d = d.set_index(['i','j'])['k']
d = d.sort_index()
print d.loc[(2,3)] # PerformanceWarning: indexing past lexsort depth may impact performance.
</code></pre>
<p>My understanding from the docs is that the PerformanceWarning follows from not sorting the index (the index was sorted).</p>
|
<p>It turns out this is an open bug:</p>
<ul>
<li><a href="https://github.com/pandas-dev/pandas/issues/19771" rel="noreferrer">https://github.com/pandas-dev/pandas/issues/19771</a></li>
<li><a href="https://github.com/pandas-dev/pandas/issues/17931" rel="noreferrer">https://github.com/pandas-dev/pandas/issues/17931</a></li>
</ul>
<p>@ayhan's comment provides workaround:</p>
<pre><code>d = d.sort_index(level=d.index.names)
</code></pre>
<p>which should be the default behavior of:</p>
<pre><code>d = d.sort_index()
</code></pre>
|
python|pandas
| 5
|
374,547
| 48,442,775
|
Comparing computational speed of these two short codes
|
<p>In the two versions of the code, both v1 and v2 are large vectors (length ranging from 1,000 to 1,000,000, with len(v1)=len(v2)). I expected <strong>code 2</strong> to be much faster than <strong>code 1</strong>, but it turns out <strong>code 1</strong> is much faster and I do not know why. Could you please explain why <strong>code 2</strong> is slow? Thank you. </p>
<p><strong>Code 1:</strong></p>
<pre><code>norm1=math.sqrt(np.dot(v1,v1))
norm2=math.sqrt(np.dot(v2,v2))
kern=np.dot(v1,v2)/(norm1*norm2)
</code></pre>
<p><strong>Code 2:</strong></p>
<pre><code>kern=0
for i in range(0, len(v1)):
kern+=min(v1[i], v2[i])
</code></pre>
|
<p>The <code>np.dot()</code> calls also require loops through the vectors, but these loops are implemented (typically) natively / in C++. Loops implemented explicitly in python (as in your code 2) are notoriously slow in comparison to such C++-based loops.</p>
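<p>As an illustration, the computation in code 2 can itself be pushed into NumPy (a sketch; this computes the same sum of element-wise minima without a Python-level loop):</p>
<pre><code>import numpy as np

v1 = np.random.rand(1_000_000)
v2 = np.random.rand(1_000_000)

# element-wise minimum followed by a single reduction -- both run in C
kern = np.minimum(v1, v2).sum()
</code></pre>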
|
python|algorithm|performance|numpy
| 5
|
374,548
| 70,939,265
|
Duplicated rows when merging on pandas
|
<p>I have a list that contains multiple pandas dataframes.</p>
<p>Each dataframe has columns 'Trading Day' and Maturity.
However the name of the column Maturity changes depending on the maturity, for example the first dataframe column names are: 'Trading Day', 'Y_2021','Y_2022'.</p>
<p>The second dataframe has 'Trading Day',Y_2022','Y_2023','Y_2024'.</p>
<p>The column 'Trading day' has all unique np.datetime64 dates for every dataframe
And the maturity columns have either floats or nans</p>
<p>My goal is to merge all the dataframes into one and have something like:
'Trading Day','Y_2021,'Y_2022','Y_2023',...'Y_2030'</p>
<p>In my code gh is the list that contains all the dataframes and original is a dataframe that contains all the dates from 5 years ago through today.</p>
<p>gt is the final dataframe.</p>
<p>So far what I have done is:</p>
<pre><code>original = pd.DataFrame()
original['Trading Day'] = np.arange(np.datetime64(str(year_now-5)+('-01-01')), np.datetime64(date.today())+1)
for i in range(len(gh)):
gh[i]['Trading Day']=gh[i]['Trading Day'].astype('datetime64[ns]')
gt = pd.merge(original,gh[0],on='Trading Day',how = 'left')
for i in range (1,len(gh)):
gt=pd.merge(gt,gh[i],how='outer')
</code></pre>
<p>The code works more or less; the problem is that when there is a change of years, I get the following example results:</p>
<pre><code> Y_2021 Y_2023 Y_2024
2020-06-05 45
2020-06-05 54
2020-06-05 43
2020-06-06 34
2020-06-06 23
2020-06-06 34
#While what I want is:
Y_2021 Y_2023 Y_2024
2020-06-05 45 54 43
2020-06-06 34 23 34
</code></pre>
|
<p>Given your actual output and what you want, you should be able to just:</p>
<pre><code>output.ffill().bfill().drop_duplicates()
</code></pre>
<p>To get the output you want.</p>
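<p>A hedged alternative, assuming <code>gt</code> is the merged frame built in the question: group on <code>'Trading Day'</code> and keep the first non-null value of each maturity column (GroupBy's <code>first()</code> skips NaN by default).</p>
<pre><code>gt = gt.groupby('Trading Day', as_index=False).first()
</code></pre>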
|
pandas|merge|rows
| 0
|
374,549
| 70,952,419
|
Avoiding iterating through a dataframe to get a total column, using a second dataframe as data
|
<p>Considering the two dataframes</p>
<pre><code>>>> df1
Dr Cr Opening Balance
0 B2 B2 0.0
1 B1 B1 100.0
2 D1 D1 0.0
3 F1 F1 -100.0
>>> df2
Date Amount Dr Cr
0 2021-12-01 452.25 B1 D1
1 2022-01-01 100.00 B1 D1
2 2022-01-02 100.00 B1 D1
</code></pre>
<p>I am trying to add a column to <code>df1</code> which gives the total of <code>Opening Balance</code> + sum of <code>Dr</code> from <code>df2</code> - sum of <code>Cr</code> from <code>df2</code></p>
<p>In this case, df1 would become:</p>
<pre><code>>>> df1
Dr Cr Opening Balance Total
0 B2 B2 0.0 0.0
1 B1 B1 100.0 752.25
2 D1 D1 0.0 -652.25
3 F1 F1 -100.0 -100.0
</code></pre>
<p>Thanks</p>
|
<p>One of the more straightforward and relatively debug-friendly approaches is to group <code>df2</code> by <code>Dr</code> and <code>Cr</code>, <code>join</code> the results to <code>df1</code>, and add/subtract the values:</p>
<pre><code>dr = df2.groupby('Dr')['Amount'].sum().rename('Dr Amount')
cr = df2.groupby('Cr')['Amount'].sum().rename('Cr Amount')
df3 = df1.join(dr, 'Dr').join(cr, 'Cr').fillna(0)
df3['Total'] = df3['Opening Balance'] + df3['Dr Amount'] - df3['Cr Amount']
# Dr Cr Opening Balance Dr Amount Cr Amount Total
# 0 B2 B2 0.0 0.00 0.00 0.00
# 1 B1 B1 100.0 652.25 0.00 752.25
# 2 D1 D1 0.0 0.00 652.25 -652.25
# 3 F1 F1 -100.0 0.00 0.00 -100.00
# Optional clean-up:
df3.drop(columns=['Dr Amount', 'Cr Amount'], inplace=True)
</code></pre>
<p>Note that the <code>fillna(0)</code> on third row is needed only for rows where there is no corresponding <code>Dr</code> or <code>Cr</code> in <code>df2</code></p>
|
python|pandas|dataframe
| 2
|
374,550
| 70,742,726
|
Pandas selection of rows not working properly
|
<p>I am trying to delete rows of a df whose material numbers do not appear in a column of another table. For further explanation: I have a table with transactions including material numbers and another table with production information also including material numbers. I want to delete every row whose material number is not in the other table.</p>
<p>My full code is not working, although the code does what I expect when used on a small sample. See below.</p>
<pre><code>import pandas as pd
import numpy as np
import os
file_path = os.path.realpath(__file__)
dic_path = os.path.dirname(file_path)
os.chdir(dic_path)
df_V = pd.read_excel("V.xlsx", dtype ='str')
mn = df_V.MAT
print(mn.dtype)
mn = mn.drop_duplicates()
print(mn)
df_L = pd.read_excel("L.xlsx", sheet_name = "Sheet1", dtype ='str')
df_LH = df_L.head()
print(df_LH)
df_LH = df_LH[df_LH.MAT.isin(mn) == True]
print(df_LH)
</code></pre>
<p>Works as predicted</p>
<pre><code>df_L = df_L[df_L.MAT.isin(mn) == True]
df_L.to_excel("correct_L.xlsx")
print("done")
</code></pre>
<p>Both files, new_L as well as L, contain the same values, though in the head() part some rows get removed.</p>
<p>The Tables can be seen as following:</p>
<pre><code>Table V
index MAT Value
1. 1 any
2. 2 any
3. 2 any
4. 3 any
Table L
index MAT value
1. 1 any
2. 1 any
3. 2 any
4. 3 any
5. 4 any
predicted outcome
index MAT value
1. 1 any
2. 1 any
3. 2 any
4. 3 any
</code></pre>
<p>Many Thanks in advance</p>
|
<p>You probably want to be using the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge.html" rel="nofollow noreferrer">merge</a> function in pandas opposed to isin.</p>
<p>The code below is a simple demonstration of how to use the function</p>
<p>We use <code>how='left'</code> so that only 'materials' that are in the left dataframe are included. The <code>on='MAT'</code> is used to tell pandas to look at this column to decide what should be merged.</p>
<pre><code> import pandas as pd
v = pd.DataFrame([[1,9],[2,8],[2,7],[3,6]], columns=['MAT', 'V_vals'])
l = pd.DataFrame([[1,5],[1,4],[2,3],[3,2],[4,1]], columns=['MAT', 'M_vals'])
print('Table V:\n', v)
print('Table M:\n', l)
output = pd.merge(v,l, how='left', on='MAT')
print('Merged table:\n', output)
</code></pre>
<p>This produces the output shown below.</p>
<pre><code>Table V:
MAT V_vals
0 1 9
1 2 8
2 2 7
3 3 6
Table M:
MAT M_vals
0 1 5
1 1 4
2 2 3
3 3 2
4 4 1
Merged table:
MAT V_vals M_vals
0 1 9 5
1 1 9 4
2 2 8 3
3 2 7 3
4 3 6 2
</code></pre>
|
python|pandas|dataframe|data-manipulation
| 0
|
374,551
| 70,816,859
|
How can we retrieve the epoch number from Keras ModelCheckpoint?
|
<p>While training my ML model, I applied CSVlogger and the ModelCheckpoint callbacks so basically all epochs metric results are logged by the CSVlogger and only best model saved.</p>
<p>However, with <em>save_best_only</em> for ModelCheckpoint, how can we get/log the epoch number at which the model was last updated by ModelCheckpoint, just as <strong>an integer number</strong>?</p>
<pre><code>csv_logger = CSVLogger('history.csv', append=True, separator=";")
callbacks = [csv_logger,
             tf.keras.callbacks.ModelCheckpoint('model.h5', verbose=1, save_best_only=True)]
results = model.fit(X,Y, callbacks=callbacks, epochs=30)
# how can we have something like lastSavedEpoch = results.last_saved_epoch?
# e.g. it's lastly updated in 27th epoch although total training epochs is 30.
</code></pre>
|
<blockquote>
  <p>Hi, I also use another approach via custom callback methods; with it I can easily extract information like the values below:</p>
</blockquote>
<pre><code>lossvalue: 1.1057155132293701
accvalue: 0.6833927035331726
val_loss: None
val_acc: None
epoch: 1
step: 64
step: 10009601
best acc: 0.0
</code></pre>
<blockquote>
  <p>Anyway, since you log with CSVLogger during model fitting, try to find where the best accuracy occurs, for example by plotting it:</p>
</blockquote>
<pre><code>plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
</code></pre>
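<p>A minimal sketch of one way to recover the saved epoch afterwards, assuming validation data was passed to <code>fit</code> so that a <code>val_loss</code> column (the default quantity monitored by <code>ModelCheckpoint</code>) ends up in the CSVLogger file from the question:</p>
<pre><code>import pandas as pd

log = pd.read_csv('history.csv', sep=';')
# save_best_only=True keeps the epoch with the lowest monitored value
best_epoch = int(log.loc[log['val_loss'].idxmin(), 'epoch'])
print(best_epoch)
</code></pre>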
|
tensorflow|keras|deep-learning|callback|tensorflow2.0
| 0
|
374,552
| 71,052,394
|
replace values by different conditions in a dataframe
|
<p>I have a dataframe like this:</p>
<pre><code>df_test = pd.DataFrame({'ID1':['A','B','C','BA','BA','AB','>','>','>','>'],
'ID2':['','','','','','','mh','mh','nn','nn']})
df_test
</code></pre>
<p><a href="https://i.stack.imgur.com/rAZk9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rAZk9.png" alt="df_test" /></a></p>
<p>I want to obtain a dataframe like this based on the column 'ID1'(1. <code>if len(ID1)>2: then ID1=ID1[-1]</code>(for example 'BA', 'AB' will be replaced with 'A', 'B', respectively); 2. <code>if ID1='>': then ID1=ID2</code>(for example: '>' will be replaced with 'mh','nn',respectively)):</p>
<pre><code>df_result = pd.DataFrame({'ID1':['A','B','C','A','A','B','mh','mh','nn','nn']})
df_result
</code></pre>
<p><a href="https://i.stack.imgur.com/6QETM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6QETM.png" alt="df_result" /></a></p>
|
<p>You can use <code>.str[-1]</code> regardless of the length of the strings in the column to select the last character, and use <code><column>.where(cond, other_col)</code> to fill in values that don't match <code>cond</code> with those values from <code>other_col</code>:</p>
<pre><code>df_test['ID1'] = df_test.assign(ID1=df_test['ID1'].str[-1]).pipe(lambda x: x['ID1'].where(x['ID1'] != '>', x['ID2']))
</code></pre>
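<p>If the chained one-liner above is hard to read, a two-step sketch with <code>numpy.where</code> gives the same result (assuming <code>numpy</code> is imported as <code>np</code>):</p>
<pre><code>last_char = df_test['ID1'].str[-1]   # 'BA' -> 'A', 'AB' -> 'B', single letters unchanged
df_test['ID1'] = np.where(df_test['ID1'] == '>', df_test['ID2'], last_char)
</code></pre>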
|
pandas|dataframe
| 2
|
374,553
| 70,970,506
|
Creating a new dataframe column based on operations applied to nested arrays in another column?
|
<p>Let me start off by saying this unfortunately cannot be solved by doing something as simple as df[A] = df[B] - df[C].</p>
<p>I have a column containing arrays (let's call it df[A]). I want to z-score the items in each array (with respect to only the values in that array), then store this new array of z-scored values in the corresponding row of a new column.</p>
<p>To hopefully make it a bit clearer, each entry in df[A] looks like [[1, 2, 3, ..., 4170945]] and is of length 4170945. (The nesting is due to how the arrays are loaded into the dataframe, and not important.) I have 69 rows of such entries (example image below).</p>
<p>I then want each row of df['zscores'] to contain a corresponding array of <code>(row[A][0] - row[A][0].mean()) / row[A][0].std()</code>.</p>
<p><a href="https://i.stack.imgur.com/QW8Dn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QW8Dn.png" alt="enter image description here" /></a></p>
<p>I have tried the following:</p>
<p><strong>1.</strong></p>
<p><code>df['zscores'] = (df['A'] - df['A'].mean()) / df['A'].std()</code></p>
<p>This gives the following error:</p>
<pre><code>ValueError: operands could not be broadcast together with shapes (69,) (1,4170945)
</code></pre>
<p>My suspicion is that it's returning a single series where the first item of each row of df[A] is z-scored, then the second, etc., essentially iterating item-wise through each row.</p>
<p><strong>2.</strong></p>
<pre><code>for idx, row in df.iterrows():
if idx == 1:
_series = pd.Series((row['A'][0] - row['A'][0].mean()) / row['A'][0].std())
else:
_ = pd.Series((row['A'][0] - row['A'][0].mean()) / row['A'][0].std())
_series.append(_)
</code></pre>
<p>My aim was to extract each array, operate on it, and append it to a series. I then wanted to something like <code>df['zscores'] = _series</code>.</p>
<p>My ideal result looks like this:</p>
<pre><code> A zscores
0 [[43.7916, 10.7261, 30.9748, ... [[2.5077, 2.1846, 2.2108, ...
1 [[53.8916, 16.7261, 3.5668, ... [[1.0177, 5.1846, 0.2108, ...
...
</code></pre>
|
<p>You need an apply function for sure. This might either solve it or give you an insight:</p>
<pre><code>df['zscores'] = df.apply(lambda x: (x['A'][0] - x['A'][0].mean()) / x['A'][0].std(), axis=1)
</code></pre>
|
python|arrays|pandas|dataframe
| 0
|
374,554
| 70,838,701
|
Output of vgg16 layer doesn't make sense
|
<p>I have a vgg16 network without the last max pooling, fully connected and softmax layers. The network summary says that the last layer's output is going to have a size of <code>(batchsize, 512, 14, 14)</code>. Putting an image into the network gives me an output of <code>(batchsize, 512, 15, 15)</code>. How do I fix this?</p>
<pre><code>import torch
import torch.nn as nn
from torchsummary import summary
vgg16 = torch.hub.load('pytorch/vision:v0.10.0', 'vgg16', pretrained=True)
vgg16withoutLastFewLayers = nn.Sequential(*list(vgg16.children())[:-2][0][0:30]).cuda()
image = torch.zeros((1,3,244,244)).cuda()
output = vgg16withoutLastFewLayers(image)
summary(vgg16withoutLastFewLayers, (3,224,224))
print(output.shape)
</code></pre>
<pre><code>----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 64, 224, 224] 1,792
ReLU-2 [-1, 64, 224, 224] 0
Conv2d-3 [-1, 64, 224, 224] 36,928
ReLU-4 [-1, 64, 224, 224] 0
MaxPool2d-5 [-1, 64, 112, 112] 0
Conv2d-6 [-1, 128, 112, 112] 73,856
ReLU-7 [-1, 128, 112, 112] 0
Conv2d-8 [-1, 128, 112, 112] 147,584
ReLU-9 [-1, 128, 112, 112] 0
MaxPool2d-10 [-1, 128, 56, 56] 0
Conv2d-11 [-1, 256, 56, 56] 295,168
ReLU-12 [-1, 256, 56, 56] 0
Conv2d-13 [-1, 256, 56, 56] 590,080
ReLU-14 [-1, 256, 56, 56] 0
Conv2d-15 [-1, 256, 56, 56] 590,080
ReLU-16 [-1, 256, 56, 56] 0
MaxPool2d-17 [-1, 256, 28, 28] 0
Conv2d-18 [-1, 512, 28, 28] 1,180,160
ReLU-19 [-1, 512, 28, 28] 0
Conv2d-20 [-1, 512, 28, 28] 2,359,808
ReLU-21 [-1, 512, 28, 28] 0
Conv2d-22 [-1, 512, 28, 28] 2,359,808
ReLU-23 [-1, 512, 28, 28] 0
MaxPool2d-24 [-1, 512, 14, 14] 0
Conv2d-25 [-1, 512, 14, 14] 2,359,808
ReLU-26 [-1, 512, 14, 14] 0
Conv2d-27 [-1, 512, 14, 14] 2,359,808
ReLU-28 [-1, 512, 14, 14] 0
Conv2d-29 [-1, 512, 14, 14] 2,359,808
ReLU-30 [-1, 512, 14, 14] 0
================================================================
torch.Size([1, 512, 15, 15])
</code></pre>
|
<p>The output shape should be <code>[512, 14, 14]</code>, assuming that the input image is <code>[3, 224, 224]</code>. Your input image size is <code>[3, 244, 244]</code>. For example,</p>
<pre class="lang-py prettyprint-override"><code>image = torch.zeros((1,3,224,224))
# torch.Size([1, 512, 14, 14])
output = vgg16withoutLastFewLayers(image)
</code></pre>
<p>Therefore, by increasing the image size, the spatial size <code>[W, H]</code> of your output tensor also increases.</p>
|
machine-learning|pytorch|computer-vision
| 2
|
374,555
| 70,787,375
|
Pandas: aggregate and join if different string
|
<p>I have the following table:</p>
<pre><code>data = [['abc', 'bin_1', "bin_2"], ['abc', 'bin_1', "bin_1"]]
data = pd.DataFrame(data, columns = ['name', 'bin1', 'bin2'])
</code></pre>
<p>And I want to merge the columns <code>bin1</code> and <code>bin2</code>.
As you see, there can be the same cell value in these two columns.
I want to combine the two columns by <code>|</code> if the values differ, otherwise just put a single unique value.</p>
<pre><code>data["bin"] = data[['bin1', 'bin2']].agg(' | '.join, axis=1)
</code></pre>
<p>Unfortunately gives me:</p>
<pre><code>name bin1 bin2
abc bin_1 bin_2
abc bin_1 bin_1
</code></pre>
<p>if I want:</p>
<pre><code>name bin1 bin2 bin
abc bin_1 bin_2 bin_1 | bin_2
abc bin_1 bin_1 bin_1
</code></pre>
|
<p>Use <code>set</code>s if order is not important:</p>
<pre><code>data["bin"] = data[['bin1', 'bin2']].agg(lambda x: ' | '.join(set(x)), axis=1)
print (data)
name bin1 bin2 bin
0 abc bin_1 bin_2 bin_1 | bin_2
1 abc bin_1 bin_1 bin_1
</code></pre>
<p>Or <code>dict.fromkeys</code> if ordering is important:</p>
<pre><code>data["bin"] = data[['bin1', 'bin2']].agg(lambda x: ' | '.join(dict.fromkeys(x)), axis=1)
</code></pre>
|
python|pandas
| 0
|
374,556
| 70,863,943
|
How to add title to each subplot
|
<p>I want to add title for each subplot. I want to assign a separate title to each subplot from a list of title in same sequence.</p>
<p>title_list = ['Table1', 'Table2', 'Table3', 'Table4', 'Table5', 'Table6']</p>
<p>Hence, assign the title for df1 as 'Table1', df2 as 'Table2', and so on.</p>
<p>My Code as below:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# dataframe sample data
df1 = pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
df2 = pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
df3 = pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
df4 = pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
df5 = pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
df6 = pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
#define number of rows and columns for subplots
nrow=3
ncol=2
# make a list of all dataframes
df_list = [df1 ,df2, df3, df4, df5, df6]
fig, axes = plt.subplots(nrow, ncol)
# plot counter
count=0
for r in range(nrow):
for c in range(ncol):
df_list[count].plot(ax=axes[r,c])
count+=1
</code></pre>
|
<p>You can use the method <code>set_title()</code> on the axis object:</p>
<pre><code>axes[r, c].set_title(f"This is row={r} and column={c}")
</code></pre>
<p>I also added a call <code>fig.tight_layout()</code> to fix the spacing between subplots.</p>
<p><a href="https://i.stack.imgur.com/wAeez.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wAeez.png" alt="plot output of sample code" /></a></p>
<p>The complete code:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# dataframe sample data
df1 = pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
df2 = pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
df3 = pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
df4 = pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
df5 = pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
df6 = pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
#define number of rows and columns for subplots
nrow=3
ncol=2
# make a list of all dataframes
df_list = [df1 ,df2, df3, df4, df5, df6]
fig, axes = plt.subplots(nrow, ncol)
# plot counter
count=0
for r in range(nrow):
for c in range(ncol):
df_list[count].plot(ax=axes[r,c])
count+=1
axes[r, c].set_title(f"This is row={r} and column={c}")
fig.tight_layout()
</code></pre>
<p>Note that you can simplify the creation of your sample data:</p>
<pre><code># dataframe sample data
df_list = [pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
for _ in range(nrow * ncol)]
</code></pre>
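<p>To use the <code>title_list</code> from the question instead of the row/column text, a small sketch is to index the list with the same counter, taking the title before incrementing it:</p>
<pre><code>title_list = ['Table1', 'Table2', 'Table3', 'Table4', 'Table5', 'Table6']

count = 0
for r in range(nrow):
    for c in range(ncol):
        df_list[count].plot(ax=axes[r, c])
        axes[r, c].set_title(title_list[count])  # title chosen before the counter moves on
        count += 1
fig.tight_layout()
</code></pre>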
|
python|python-3.x|pandas
| 1
|
374,557
| 70,885,645
|
selecting random elements from each column of numpy array
|
<p>I have an n row, m column numpy array, and would like to create a new k x m array by selecting k random elements from each column of the array. I wrote the following python function to do this, but would like to implement something more efficient and faster:</p>
<pre><code>def sample_array_cols(MyMatrix, nelements):
vmat = []
TempMat = MyMatrix.T
for v in TempMat:
v = np.ndarray.tolist(v)
subv = random.sample(v, nelements)
vmat = vmat + [subv]
return(np.array(vmat).T)
</code></pre>
<p>One question is whether there's a way to loop over each column without transposing the array (and then transposing back). More importantly, is there some way to map the random sample onto each column that would be faster than having a for loop over all columns? I don't have that much experience with numpy objects, but I would guess that there should be something analogous to apply/mapply in R that would work?</p>
|
<p>One alternative is to randomly generate the indices first, and then use <code>take_along_axis</code> to map them to the original array:</p>
<pre><code>arr = np.random.randn(1000, 5000) # arbitrary
k = 10 # arbitrary
n, m = arr.shape
idx = np.random.randint(0, n, (k, m))
new = np.take_along_axis(arr, idx, axis=0)
</code></pre>
<p>Output (shape):</p>
<pre><code>in [215]: new.shape
out[215]: (10, 5000)  # (k x m)
</code></pre>
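<p>One caveat: <code>randint</code> can pick the same row twice within a column, i.e. it samples with replacement. If sampling without replacement is needed (to match <code>random.sample</code> in the original function), a hedged variant is to argsort random keys per column and keep the first <code>k</code> indices:</p>
<pre><code>idx = np.random.rand(n, m).argsort(axis=0)[:k]   # k distinct row indices per column
new = np.take_along_axis(arr, idx, axis=0)
</code></pre>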
|
python|arrays|numpy
| 1
|
374,558
| 70,746,737
|
TypeError: nll_loss_nd(): argument 'input' (position 1) must be Tensor, not tuple
|
<p>So I'm trying to train my BigBird model (BigBirdForSequenceClassification) and I got to the moment of the training, which ends with below error message:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\######", line 189, in <module>
train_loss, _ = train()
File "C:\Users\######", line 152, in train
loss = cross_entropy(preds, labels)
File "C:\Users\#####\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\######\venv\lib\site-packages\torch\nn\modules\loss.py", line 211, in forward
return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
File "C:\Users\######\venv\lib\site-packages\torch\nn\functional.py", line 2532, in nll_loss
return torch._C._nn.nll_loss_nd(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
TypeError: nll_loss_nd(): argument 'input' (position 1) must be Tensor, not tuple
</code></pre>
<p>From what I understand, the problem happens because the train() function returns a tuple. Now my question is: how should I approach such an issue? How do I change the output of the train() function to return a tensor instead of a tuple?
I have seen similar issues posted here, but none of the solutions seems to be helpful in my case, not even</p>
<pre><code>model = BigBirdForSequenceClassification(config).from_pretrained(checkpoint, return_dict=False)
</code></pre>
<p>(When I don't add return_dict=False I got similiar error message but it says "<code>TypeError: nll_loss_nd(): argument 'input' (position 1) must be Tensor, not SequenceClassifierOutput</code>"
Please see my train code below:</p>
<pre><code>def train():
model.train()
total_loss = 0
total_preds = []
for step, batch in enumerate(train_dataloader):
if step % 10 == 0 and not step == 0:
print('Batch {:>5,} of {:>5,}.'.format(step, len(train_dataloader)))
batch = [r.to(device) for r in batch]
sent_id, mask, labels = batch
preds = model(sent_id, mask)
loss = cross_entropy(preds, labels)
total_loss = total_loss + loss.item()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
optimizer.zero_grad()
preds = preds.detach().cpu().numpy()
total_preds.append(preds)
avg_loss = total_loss / len(train_dataloader)
total_preds = np.concatenate(total_preds, axis=0)
return avg_loss, total_preds
</code></pre>
<p>and then:</p>
<pre><code>for epoch in range(epochs):
print('\n Epoch {:} / {:}'.format(epoch + 1, epochs))
train_loss, _ = train()
train_losses.append(train_loss)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
</code></pre>
<p>I will really appreciate any help on this case and thank you in advance!</p>
|
<p>Ok, so it seems like I should have used BigBirdModel instead of BigBirdForSequenceClassification - issue solved</p>
|
python|pytorch|huggingface-transformers|bert-language-model
| 0
|
374,559
| 71,030,620
|
split a workbook into different workbooks with worksheets using python pandas
|
<p>I have a list of transactions from the last 7 years in one big excel file.
I'm trying to create an excel workbook for each year that includes each month as a worksheet.</p>
<p>I'm using a column called 'date' that has each transaction recorded as MM/DD/YYYY. I split that column to single out my years and months, but I'm stuck on how I can use them to get multiple workbooks (YYYYmoney.xlsx) that contain worksheets for each month.</p>
<p>Here's what I was able to get to, but I got stuck when it came to nesting my for loop.
Can anyone help?</p>
<pre><code>import pandas as pd
#location of the file you want to work on
file1 = '.\money.xlsx'
#make it a dataframe
df1 = pd.read_excel(file1)
#create 3 columns from splitting the date column
df1[["month", "day", "year"]] = df1["Date"].str.split("/", expand = True)
#list each year and each month to make sure you did it right
each_year = df1['year'].unique()
print(each_year)
each_month = df1['month'].unique()
print(each_month)
#create separate workbook for each year with sheet for each month
for value in each_year:
df00 = df1[df1['year'] == value]
output_file_name = str(value) + 'money' + '.xlsx'
df00.to_excel(output_file_name, index=False)
for monthly in each_month:
writer = pd.ExcelWriter(output_file_name, engine='xlsxwriter')
df00[monthly].to_excel(writer, sheet_name=monthly, index=False)
writer.save()
print('DataFrame written to Excel File successfully.')
</code></pre>
|
<p>I know this is a bit late, but perhaps better late than never...</p>
<p>I'm not sure what issue you ran into because it doesn't really say, but I suspect your issue was that you created a new writer for each sheet instead of each workbook. You also tried to write all months for all years and didn't create a new DF for each year.</p>
<p>Without testing, I can't say this is 100% working code, but I'd rearrange what you have to something like below. This should get you close.</p>
<pre><code>for value in each_year:
dfyear = df1[df1['year'] == value]
output_file_name = str(value)+'money.xlsx'
writer = pd.ExcelWriter(output_file_name, engine='xlsxwriter')
each_month = dfyear['month'].unique()
for month in each_month:
        dfyear[dfyear['month'] == month].to_excel(writer, sheet_name=str(month), index=False)
writer.save()
print('DataFrame written to Excel File successfully.')
</code></pre>
|
python|excel|pandas|dataframe
| 0
|
374,560
| 70,762,615
|
'DataFrame' object has no attribute 'to_delta'
|
<p>My code used to work. Why does my code not work anymore? I updated to the newer Databricks runtime 10.2 so I had to change some earlier code to use pandas on pyspark.</p>
<pre><code># Drop customer ID for AutoML
automlDF = churn_features_df.drop(key_id)
# Write out silver-level data to autoML Delta lake
automlDF.to_delta(mode='overwrite', path=automl_silver_tbl_path)
</code></pre>
<p>The error I am getting is <code>'DataFrame' object has no attribute 'to_delta'</code></p>
|
<p>I was able to get it to work as expected using <code>to_pandas_on_spark()</code>. My working code looks like this:</p>
<pre><code># Drop customer ID for AutoML
automlDF = churn_features_df.drop(key_id).to_pandas_on_spark()
# Write out silver-level data to autoML Delta lake
automlDF.to_delta(mode='overwrite', path=automl_silver_tbl_path)
</code></pre>
|
pyspark|databricks|delta-lake|pyspark-pandas
| 1
|
374,561
| 70,937,513
|
RuntimeError: mat1 and mat2 shapes cannot be multiplied (4x73034 and 200x120)
|
<p>I am building neural network layers for a skin detection dataset and got an error. I know I have made some mistake but cannot figure it out. The error I am getting, after taking an image size of 224*224 and 3 channels, is: <em>RuntimeError: mat1 and mat2 shapes cannot be multiplied (4x73034 and 200x120)</em></p>
<pre><code>import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 16, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(16, 26, 5)
self.fc1 = nn.Linear(8 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 86)
self.fc3 = nn.Linear(86, 2)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = torch.flatten(x,1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net().to(device)
print(net)
</code></pre>
<p>These are the layers and Net module</p>
<pre><code><ipython-input-41-8c9bafb31c44> in forward(self, x)
16 x = self.pool(F.relu(self.conv2(x)))
17 x = torch.flatten(x,1)
---> 18 x = F.relu(self.fc1(x))
19 x = F.relu(self.fc2(x))
20 x = self.fc3(x)
</code></pre>
<p>Can anyone help me solve this.</p>
|
<p>As <a href="https://stackoverflow.com/users/3999668">Anant</a> said, you need to match the flattened conv2 dimension (73034) to be the input dimension for the fc1 layer.</p>
<pre class="lang-py prettyprint-override"><code>self.fc1 = nn.Linear(73034, 120)
</code></pre>
<p>The formula to calculate the output of each conv layer:</p>
<pre><code>[(height or width) - kernel size + 2*padding] / stride + 1
</code></pre>
<p>For the following I will use the dimensions (Channels, Height, Width)
Input (3,224,224) -> conv1 -> (16,220,220) -> pool -> (16,110,110) -> conv2 -> (26,106,106) -> pool -> (26,53,53) -> flatten -> (73034)</p>
<p>It seems your batch size is 4, which refers to the "4" in (4x73034). If you print the dimensions of the output of conv1 or conv2 layers, the format will be (Batch, Channels, Height, Width).</p>
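<p>A small sketch (assuming the <code>Net</code> class from the question) of how to avoid hand-computing the flattened size: push a dummy image through the convolutional part and read off the shape.</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn.functional as F

net = Net()
with torch.no_grad():
    x = torch.zeros(1, 3, 224, 224)        # one fake image
    x = net.pool(F.relu(net.conv1(x)))     # same conv/pool path as forward()
    x = net.pool(F.relu(net.conv2(x)))
    flat_features = x.flatten(1).shape[1]

print(flat_features)  # 73034 -> use nn.Linear(flat_features, 120) for fc1
</code></pre>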
|
python|machine-learning|deep-learning|pytorch|conv-neural-network
| 2
|
374,562
| 70,933,814
|
SageMaker custom model output path for tensorflow when creating from s3 artifacts
|
<p>I'm running the following code to create an endpoint with a preexisting model:</p>
<pre><code>from sagemaker.tensorflow import serving
sagemaker_session = sagemaker.Session()
clf_sm_model = serving.Model(model_data='s3://mybucket/mytrainedmodel/model.tar.gz',
entry_point="inference.py",
source_dir="inf_source_dir",
role=get_execution_role(),
framework_version='1.14',
sagemaker_session=sagemaker_session)
</code></pre>
<p>However, this creates a copy of the model in the default SageMaker bucket. How can I pass a custom path? I've tried model_dir and output_path, but neither is accepted as a parameter.</p>
|
<p>The SageMaker Python SDK repackages your model to include your <code>entry_point</code> and <code>source_dir</code> files and uploads this "new" tar ball to the SageMaker default bucket.</p>
<p>You can change this behavior by setting the <code>default_bucket</code> in your <code>sagemaker_session</code> as follows:</p>
<pre><code>sagemaker_session = sagemaker.Session(default_bucket="<mybucket>")
clf_sm_model = serving.Model(model_data='s3://mybucket/mytrainedmodel/model.tar.gz',
                             ...
                             sagemaker_session=sagemaker_session)
</code></pre>
|
tensorflow|amazon-sagemaker
| 3
|
374,563
| 70,950,970
|
How to read large csv from Azure container using Python Azure Function?
|
<p>I need to read a large csv efficiently from a container using a Python Azure Function.</p>
<p>I am using the below code for reading csv, it works fine for small csv but there must be some other way to read larger csv efficiently.</p>
<pre><code># Container Connection.
container_client = ContainerClient.from_connection_string(
conn_str=conn_str,
container_name=container_name
)
# Reading File.
downloaded_blob = container_client.download_blob("file_name.csv")
df = pd.read_csv(StringIO(downloaded_blob.content_as_text()))
</code></pre>
<p>The above code takes too much time to read a ~2 GB file.
I need help to efficiently read the large csv using a Python Azure Function.</p>
|
<p>One of the workaround is to process the file in chunks, resulting in lower memory use while parsing.</p>
<pre class="lang-py prettyprint-override"><code>chunksize = 10 ** 6
for chunk in pd.read_csv(filename, chunksize=chunksize):
process(chunk)
</code></pre>
<p><strong>NOTE:-</strong> <code>chunksize</code> parameter refers to number of rows per chunk.</p>
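<p>To apply this to the blob from the question, one hedged option is to buffer the download and hand the buffer to <code>read_csv</code> with <code>chunksize</code>. Note that this still downloads the whole file into memory; it only avoids building one huge DataFrame in pandas:</p>
<pre class="lang-py prettyprint-override"><code>from io import BytesIO

buffer = BytesIO()
container_client.download_blob("file_name.csv").readinto(buffer)
buffer.seek(0)

for chunk in pd.read_csv(buffer, chunksize=10 ** 6):
    process(chunk)  # process() is a placeholder per-chunk handler, as above
</code></pre>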
<p>Here is a <a href="https://stackoverflow.com/questions/44381097/download-and-split-large-file-into-100-mb-chunks-in-blob-storage">thread</a> that you can refer to.</p>
<p><strong>REFERENCES:</strong>
<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html" rel="nofollow noreferrer">pandas.read_csv</a></p>
|
python|pandas|azure|azure-functions|azure-blob-storage
| 0
|
374,564
| 71,028,228
|
GPT-3 long input posts for Question Answering
|
<p>From my understanding, GPT-3 is "trained" for a specific task by including some labelled examples before the desired/test example. In Question Answering, this includes a context and a question. In this situation, the input prompt can become long. How do people address this?</p>
<p>I am using the Hugging Face GPT-J implementation, and there is an input token limit (of 2000). However, when including multiple QA examples in the prompt (especially with the contexts), it quickly reaches this limit, limiting the number of example prompts that can be inputted. Does anyone know how this issue is handled in a GPT-J setting, especially for QA?</p>
|
<p>Unfortunately GPT-3 and GPT-J both have a 2048 token context limitation, and there's nothing you can do about it.</p>
<p>On my <a href="https://nlpcloud.io" rel="nofollow noreferrer">NLP Cloud</a> API, the solution I suggest in general is to fine-tune GPT-J. Fine-tuning GPT-J is like giving ton of context to the model.</p>
|
deep-learning|nlp|huggingface-transformers|nlp-question-answering|gpt-3
| 0
|
374,565
| 70,918,256
|
Begginer/ numpy where and copy
|
<p>I am trying to copy values from Field2 into Field1 if Field1 is null or NaN.
I have tried the where statement below, as per the documentation, but it cuts out values instead of copying them.</p>
<p><code>dataframe=np.where(dataframe['field1'].isnull(),np.copy(dataframe['field2']),1)</code></p>
<p>I have interpreted it as an if statement, but apparently that is the wrong interpretation, as the results are not correct. Has anyone of you had similar issues?</p>
<p><a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer">np.where source</a> <a href="https://numpy.org/doc/stable/reference/generated/numpy.copy.html" rel="nofollow noreferrer">np.copy source</a></p>
|
<p>You don't need <code>np.copy</code>, nor <code>np.where</code>. Use pandas' <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.mask.html" rel="nofollow noreferrer"><code>Series.mask</code></a> instead</p>
<pre><code>dataframe['field1'] = dataframe['field1'].mask(dataframe['field1'].isnull(),
                                               dataframe['field2'])
</code></pre>
<p>If you really want to use <code>np.where</code> the syntax is:</p>
<pre><code>dataframe['field1'] = np.where(dataframe['field1'].isnull(), # condition
dataframe['field2'], # value if True
dataframe['field1']) # value if False
</code></pre>
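<p>For this particular "fill nulls from another column" case, <code>Series.fillna</code> is arguably the most direct option:</p>
<pre><code>dataframe['field1'] = dataframe['field1'].fillna(dataframe['field2'])
</code></pre>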
|
python|pandas|numpy
| 4
|
374,566
| 70,833,201
|
Iterating over a DataFrame and appending the score into a column
|
<p>When I run the code below, it returns <code>'float' object has no attribute 'encode'</code>.
I'm not sure what I'm doing wrong. I want to get the VADER sentiment values for the Titles (which are in a large dataframe), but I'm not sure where I'm going wrong, or how to convert the variable's type to make the object iterable, and then append the 'compound' scores into the dataframe. I have tried iteration code like:</p>
<p><code>pd.concat([bitcoin,bitcoin['Title'].apply(lambda r : pd.Series(analyzer.polarity_scores(r)))],axis=1) </code>
and
<code>score_compound = bitcoin['Title'].apply(lambda r : analyzer.polarity_scores(r)['compound']) </code></p>
<pre><code>import nltk
import pandas as pd
from nltk.sentiment.vader import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
bitcoin = pd.read_csv("Subreddit_Bitcoin_2021.csv")
score_compound = []
for i in range(0, bitcoin.shape[0]):
score = analyzer.polarity_scores(bitcoin.iloc[i][1])
score1 = score['compound']
    score_compound.append(score1)
</code></pre>
|
<p>Without your data to work on, it is hard to know. I saw you posted the same question elsewhere with some data, so I tested it on:</p>
<pre><code> index text
0 0 I can’t believe Bitcoin is going to hit 100k b...
1 1 What new Bitcoin related project are you the m...
2 2 Yin decline is about to end! Historical data s...
3 3 If you discovered a way to model turning $100 ...
4 4 Happy New Year and some nice Gains !! ...
</code></pre>
<p>and with a completion of your code (please, for future reference, do share which libraries you import):</p>
<pre><code>from nltk import *
import pandas as pd
import vader
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyzer = SentimentIntensityAnalyzer()
bitcoin = df
score_compound = []
for i in range(0, bitcoin.shape[0]):
score = analyzer.polarity_scores(bitcoin.iloc[i][1])
score1 = score['compound']
score_compound.append(score1)
score_compound
</code></pre>
<p>which returns:</p>
<pre><code>[0.0258, 0.4005, 0.0, 0.6199, 0.9421]
</code></pre>
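<p>Regarding the original <code>'float' object has no attribute 'encode'</code> error: that usually means some <code>Title</code> cells are NaN (floats). A hedged fix before scoring is to replace them with empty strings and force the column to string type:</p>
<pre><code>bitcoin['Title'] = bitcoin['Title'].fillna('').astype(str)
</code></pre>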
|
python|pandas|list|dataframe|vader
| 0
|
374,567
| 70,897,989
|
How to use pandas.Series.str.contains to return true value for row after row that contains the given condition
|
<p>I am using the following code to make a mask of a data frame. The mask means that I return TRUE for all values in a row where one cell in that row has a certain condition, for instance where one cell value is exactly 21.</p>
<pre><code> mask_pipe21 = np.column_stack([output[col].str.contains("^21$", regex=True, na=False) for col in output])
</code></pre>
<p>What I want to do now, is to return TRUE not for the row that contains 21, but for the row after the row that contains 21.</p>
<p>To use a simpler example from the pandas instruction:</p>
<pre><code>s1 = pd.Series(['Mouse', 'dog', 'house and parrot', '23', np.NaN])
s1.str.contains('og', regex=False)
0 False
1 True
2 False
3 False
4 NaN
</code></pre>
<p>Instead of this results, with the same logic, I would like to return:</p>
<pre><code>0 False
1 False
2 True
3 False
4 NaN
</code></pre>
<p>Does anybody know how I could achieve this with my code line? Thanks in advance.</p>
|
<p>Try</p>
<pre><code>s1 = pd.Series(['Mouse', 'dog', 'house and parrot', '23', np.NaN])
s1.str.contains('og').shift(1)
>>>
0 NaN
1 False
2 True
3 False
4 False
</code></pre>
<p>This is not 100% your wanted output. Therefore, you may want to change the NaN values afterwards.</p>
|
python|pandas|dataframe|contains
| 1
|
374,568
| 70,881,899
|
Turning secondary keys into primary keys in dataframe
|
<p>Pandas Dataframe: Turning secondary keys into primary keys in Python</p>
<p>I would like to pass the secondary keys of this plot as primary key. Currently, the primary key is 'ustar' but I want 'time', 'latitude' and 'longitude' to be the primary keys. How do I do this?</p>
<pre><code>ustar = ds['ustar'].to_dataframe()
print(ustar)
</code></pre>
<p><img src="https://i.stack.imgur.com/036IQ.png" alt="print(ustar)" /></p>
<p>Thanks for your help!</p>
|
<blockquote>
<p>I would like to put 'ustar' at the same level as 'time', 'longitude' and 'latitude' in the dataframe</p>
</blockquote>
<p>I think you want to <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer">reset your index</a></p>
<pre><code>ds = ds.reset_index()
</code></pre>
|
python|pandas
| 0
|
374,569
| 70,833,621
|
How can I convert all values in a column like '€226.5M' or '€100.1K' (type object) to 226.5 or 0.1001 (type float) while working with Pandas?
|
<p>I have this DataFrame and I know I should use the replace method, but I don't know in which way.
I want all values in the column to be floats in millions of euros, so I would erase the '€' and also the 'M', and if a value has a 'K' instead of an 'M', erase the K and make the number 1000 times smaller.
Thanks!</p>
<p><a href="https://i.stack.imgur.com/lqbvU.png" rel="nofollow noreferrer">https://i.stack.imgur.com/lqbvU.png</a></p>
|
<p>Create a custom function to convert string values to numeric:</p>
<pre><code>mappings = {'M': 1, 'K': 0.001}
def to_numeric(sr):
df = sr.str.extract('([^€KM]+)([KM]?)')
return df[0].astype(float) * df[1].map(mappings).astype(float)
# Convert your columns
df['Value'] = to_numeric(df['Value'])
df['Wage'] = to_numeric(df['Wage'])
df['Release Clause'] = to_numeric(df['Release Clause'])
</code></pre>
|
python|pandas|replace
| -1
|
374,570
| 70,867,360
|
Explode data frame columns into multiple rows
|
<p>I have a large dataframe <code>a</code> that I would like to split or explode to become dataframe <code>b</code> (the real dataframe <code>a</code> contains 90 columns).</p>
<p>I tried to look up for solutions to a problem similar to this but I did not find since it is not related to the values in cells but to the column names.</p>
<p>Any pointer to the solution or to using an existing function in the pandas library would be appreciated.</p>
<p>Thank you in advance.</p>
<pre><code>from pandas import DataFrame
import numpy as np
# current df
a = DataFrame([{'ID': 'ID_1', 'A-1': 'a1', 'B-1':'b1','C-1':'c1', 'A-2': 'a2', 'B-2':'b2','C-2':'c2'}])
# desired df
b = DataFrame([{'ID': 'ID_1', 'A': 'a1', 'B':'b1', 'C':'c1'},
{'ID': 'ID_1','A': 'a2', 'B':'b2','C':'c2'}])
</code></pre>
<p>current df
<a href="https://i.stack.imgur.com/C5Mgv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C5Mgv.png" alt="current df" /></a></p>
<p>desired df
<a href="https://i.stack.imgur.com/8NmAW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8NmAW.png" alt="desired df" /></a></p>
<p>One idea I have is to to split this dataframe into two dataframes (Dataframe 1 will contain columns from A1 to C1 and Dataframe 2 will contain columns from A2 to C2 ) rename the columns to A/B/C and than concatenate both. But I am not sure in terms of efficiency since I have 90 Columns that will grow over time.</p>
|
<p>This approach will generate some intermediate columns which will be removed later on.</p>
<p>First bring down those labels (A-1,...) from the header into a column</p>
<pre><code>df = pd.melt(a, id_vars=['ID'], var_name='label')
</code></pre>
<p>Then split the label into character and number</p>
<pre><code>df[['char', 'num']] = df['label'].str.split('-', expand=True)
</code></pre>
<p>Finally drop the label, <code>set_index</code> before <code>unstack</code>, and take care of the final table formats.</p>
<pre><code>df.drop('label', axis=1)\
.set_index(['ID', 'num', 'char'])\
.unstack()\
.droplevel(0, axis=1)\
.reset_index()\
.drop('num', axis=1)
</code></pre>
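<p>A hedged alternative for this "stubbed" column layout is <code>pandas.wide_to_long</code>, which handles the <code>A-1</code>/<code>A-2</code> naming directly; the stub names are derived from the columns here so the sketch also scales to the 90 real columns:</p>
<pre><code>stubs = sorted({c.split('-')[0] for c in a.columns if c != 'ID'})

b = (pd.wide_to_long(a, stubnames=stubs, i='ID', j='num', sep='-')
       .reset_index()
       .drop(columns='num'))
</code></pre>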
|
python|pandas|dataframe
| 2
|
374,571
| 70,845,169
|
Python numpy.corrcoef() got different result in different float number when two doesn't change vector
|
<pre class="lang-py prettyprint-override"><code>import numpy as np
len = 999
a = np.array([1.0]*len)
b = np.array([3.5]*len)
print(np.corrcoef(a, b))
a = np.array([0.9]*len)
b = np.array([3.4]*len)
print(np.corrcoef(a, b))
</code></pre>
<p>Got result:</p>
<pre><code>[[nan nan]
[nan nan]]
[[ 1. -1.]
[-1. 1.]]
</code></pre>
<p>I think both results should be below:</p>
<pre><code>[[ 1. 1.]
[1. 1.]]
</code></pre>
<p>or</p>
<pre><code>[[nan nan]
[nan nan]]
</code></pre>
<p>Why do I get different results for different constant float values?</p>
|
<p>The correct answer should be NaN for all, because the definition of correlation is (from wiki):</p>
<p><a href="https://i.stack.imgur.com/kp0tZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kp0tZ.png" alt="enter image description here" /></a></p>
<p>If all your observations are the same, the denominator is 0, so you are dividing by zero.</p>
<p>In instances where, due to floating-point round-off, the denominator is not exactly zero, you get a -1 or 1 value, which is not correct.</p>
<p>You can use a higher precision and you get:</p>
<pre><code>np.corrcoef(np.array([0.9]*len),np.array([3.4]*len),dtype = np.float128)
</code></pre>
<p>My suggestion is to check your entries before correlating. There is no point computing the correlation between two vectors with no variation.</p>
|
numpy|statistics|data-analysis|numerical-methods
| 0
|
374,572
| 71,071,440
|
Pandas Drop Rows when a String is Matched to a Longer String in a Column in an Exact Match
|
<p>I'm trying to drop rows in a pandas DataFrame if a substring in a column exactly matches a string in a list. At the moment I can only get it working for partial matches.</p>
<pre><code># list of strings to drop in an exact match
drop_list = ["sock", "shirt"]
# initialize data of lists.
data = {'keyword': ['adidas socks', 'adidas sock', 'adidas shoes', "sock"]}
# Create DataFrame
df = pd.DataFrame(data)
df = df[~df['keyword'].str.contains("|".join(drop_list))]
</code></pre>
<p>Current Output:</p>
<pre><code> keyword
2 adidas shoes
</code></pre>
<p>Desired Output:</p>
<pre><code> keyword
0 adidas socks
1 adidas shoes
</code></pre>
|
<p>You can create a set from <code>drop_list</code> and use <code>set.isdisjoint</code> on the split words in each row to evaluate if the exact match appears.</p>
<pre><code>drop_set = set(drop_list)
msk = df['keyword'].apply(lambda x: drop_set.isdisjoint(x.split()))
df = df[msk]
</code></pre>
<p>Output:</p>
<pre><code> keyword
0 adidas socks
2 adidas shoes
</code></pre>
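<p>A regex-based alternative (a sketch) is to require word boundaries around each term, so that <code>sock</code> no longer matches inside <code>socks</code>:</p>
<pre><code>import re

pattern = r'\b(?:' + '|'.join(map(re.escape, drop_list)) + r')\b'
df = df[~df['keyword'].str.contains(pattern, regex=True)]
</code></pre>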
|
python|pandas|dataframe
| 1
|
374,573
| 71,073,808
|
Advance year problem appear when plotting (pandas && matplotlib)
|
<p>My problem is that when I plot the users joining by day, an extra future year appears; the plot should not show the year 2023. I searched my csv file and there is no row holding the value 2023.</p>
<pre><code>data = pd.read_csv('users-current.csv')
#transform datetime to date
data['dateCreated'] = pd.to_datetime(data['created_on']).dt.date
#date Count Registered
dataCreated = data.groupby('dateCreated').size()
#dataCreatedArray = np.array([dataCreated], dtype = object)
dataCreated.head(50)
dataCreated.plot().invert_xaxis()
plt.title('Users Joining in a Day',pad=20, fontdict={'fontsize':24})
plt.show()
</code></pre>
<p>the output:</p>
<p><a href="https://i.stack.imgur.com/9XsEJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9XsEJ.png" alt="enter image description here" /></a></p>
<p>column in my csv used below:</p>
<p><a href="https://i.stack.imgur.com/2h6Zx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2h6Zx.png" alt="enter image description here" /></a></p>
|
<p>This is because the range of <code>x</code> is automatically generated. Instead, you can explicitly limit a range of <code>x</code> using <code>plt.xlim()</code>, as follows:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import datetime
data = pd.read_csv('users-current.csv')
#transform datetime to date
data['dateCreated'] = pd.to_datetime(data['created_on']).dt.date
#date Count Registered
dataCreated = data.groupby('dateCreated').size()
#dataCreatedArray = np.array([dataCreated], dtype = object)
dataCreated.head(50)
dataCreated.plot().invert_xaxis()
# import datetime, and use this code to set a period as you want.
plt.xlim([datetime.date(2021, 1, 1), datetime.date(2022, 12, 31)])
plt.title('Users Joining in a Day', pad=20, fontdict={'fontsize':24})
plt.show()
</code></pre>
|
python|pandas
| 0
|
374,574
| 70,885,170
|
efficiently vectorize scipy.stats.beta function
|
<p>I wanted to fit a curve (with <code>scipy.curve_fit</code>) that contains a <code>beta</code> distribution in the formula</p>
<pre><code>from scipy.stats import beta
import numpy as np

def f(X,a,b):
# X.shape == (2, 100)
# X[0] is the column 0 of the matrix X,
# the following line doesn't work because a must be a float not an array
    beta_cdf = beta.cdf([0,0.5,1], a=a*X[0], b=b)
beta_dif = np.diff(beta_cdf)
return beta_dif*X[1]
</code></pre>
<p>the problem is that <code>scipy.stats.beta.cdf</code> only accepts a float as the <code>a</code> parameter, and I need that the shape of the <code>beta</code> function depends on the <code>X[0]</code> column, I know that I can solve that with a loop, but because I'm using the <code>scipy.curve_fit</code> method/solver, I need that the <code>f(X,a,b)</code> is evaluated very fast. How can I "vectorize" the <code>scipy.stats.beta.cdf</code> so it can receive a NumPy array instead of a float ? or is there a way to "create" a vectorized <code>beta</code>?</p>
<p>Thanks !</p>
|
<p>It turns out that <code>scipy.stats.beta.cdf</code> can receive a column vector of parameters (thanks @Michael Szczesny for pointing that out).</p>
<pre><code>import numpy as np
from scipy.stats import beta

def f(X: np.ndarray, a: float, b: float):
    # a*X[0].reshape((-1, 1)) is a column vector, e.g. [[1], [2], ...],
    # so beta.cdf is evaluated for every row of X[0] at once
    beta_cdf = beta.cdf([0, 0.5, 1],
                        a=a*X[0].reshape((-1, 1)),
                        b=b)
    beta_dif = np.diff(beta_cdf)
    return beta_dif*X[1]
</code></pre>
|
python|numpy|scipy
| 0
|
374,575
| 71,015,610
|
Can I load two large csv files in pandas and perform Upsert ( Update / Insert )
|
<p>I have two 5GB CSV files with 10 columns each. I need to perform update/insert logic and generate a final CSV by comparing both CSV files.</p>
<p>How to do it in Python Pandas?</p>
<p>Ex:</p>
<p><a href="https://i.stack.imgur.com/K57sh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K57sh.png" alt="enter image description here" /></a></p>
<p><strong>If you have any alternatives solutions to do the job, let me know</strong></p>
|
<p>Try using the isin() method or the merge() method to compare the 2 csv files.</p>
<pre><code>import pandas as pd
csv1 = pd.read_csv("file1.csv")
csv2 = pd.read_csv("file2.csv")
#comparing the data using isin()
result = csv1[csv1.apply(tuple,1).isin(csv2.apply(tuple,1))]
print(result)
#comparing the data using merge()
result2 = csv1.merge(csv2, indicator=True, how='outer').loc[lambda v: v['_merge'] != 'both']
print(result2)
</code></pre>
<p>To update or insert into csv files, check out the following link.
<a href="https://www.geeksforgeeks.org/update-column-value-of-csv-in-python/" rel="nofollow noreferrer">Updating Values in csv files</a></p>
|
python|python-3.x|pandas|dataframe|upsert
| 1
|
374,576
| 71,081,720
|
TypeError: incompatible index of inserted column with frame index when grouping 2 columns
|
<p>I have a dataset that looks like this (+ some other cols):</p>
<pre><code>Value Theme Country
-1.975767 Weather China
-0.540979 Fruits China
-2.359127 Fruits China
-2.815604 Corona Brazil
-0.929755 Weather UK
-0.929755 Weather UK
</code></pre>
<p>I want to find standard deviations for the values after grouping by Themes and Countries (as explained here: <a href="https://stackoverflow.com/questions/71080234/calculate-standard-deviation-by-grouping-two-columns/71080528#71080528">calculate standard deviation by grouping two columns</a>).</p>
<pre><code>df = pd.read_csv('./Brazil.csv')
df['std'] = df.groupby(['themes', 'country'])['value'].std()
</code></pre>
<p>However, currently, I get this error:</p>
<pre><code>File /usr/local/Cellar/ipython/8.0.1/libexec/lib/python3.10/site-packages/pandas/core/frame.py:3656, in DataFrame.__setitem__(self, key, value)
3653 self._setitem_array([key], value)
3654 else:
3655 # set column
-> 3656 self._set_item(key, value)
File /usr/local/Cellar/ipython/8.0.1/libexec/lib/python3.10/site-packages/pandas/core/frame.py:3833, in DataFrame._set_item(self, key, value)
3823 def _set_item(self, key, value) -> None:
3824 """
3825 Add series to DataFrame in specified column.
3826
(...)
3831 ensure homogeneity.
3832 """
-> 3833 value = self._sanitize_column(value)
3835 if (
3836 key in self.columns
3837 and value.ndim == 1
3838 and not is_extension_array_dtype(value)
3839 ):
3840 # broadcast across multiple columns if necessary
3841 if not self.columns.is_unique or isinstance(self.columns, MultiIndex):
File /usr/local/Cellar/ipython/8.0.1/libexec/lib/python3.10/site-packages/pandas/core/frame.py:4534, in DataFrame._sanitize_column(self, value)
4532 # We should never get here with DataFrame value
4533 if isinstance(value, Series):
-> 4534 return _reindex_for_setitem(value, self.index)
4536 if is_list_like(value):
4537 com.require_length_match(value, self.index)
File /usr/local/Cellar/ipython/8.0.1/libexec/lib/python3.10/site-packages/pandas/core/frame.py:10985, in _reindex_for_setitem(value, index)
10981 if not value.index.is_unique:
10982 # duplicate axis
10983 raise err
> 10985 raise TypeError(
10986 "incompatible index of inserted column with frame index"
10987 ) from err
10988 return reindexed_value
TypeError: incompatible index of inserted column with frame index
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.expanding.html" rel="nofollow noreferrer"><code>DataFrame.expanding</code></a> with remove first level for new column by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.droplevel.html" rel="nofollow noreferrer"><code>DataFrame.droplevel</code></a> should be simplier solution:</p>
<pre><code>df['std'] = (df.groupby(['Theme', 'Country'])['Value']
.expanding()
.std()
.droplevel([0,1]))
print (df)
Value Theme Country std
0 -1.975767 Weather China NaN
1 -0.540979 Fruits China NaN
2 -2.359127 Fruits China 1.285625
3 -2.815604 Corona Brazil NaN
4 -0.929755 Weather UK NaN
5 -0.929755 Weather UK 0.000000
</code></pre>
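<p>If the goal is the plain group standard deviation broadcast to every row (rather than an expanding one), <code>groupby.transform</code> is another option — a short sketch on the same frame:</p>
<pre><code>df['std'] = df.groupby(['Theme', 'Country'])['Value'].transform('std')
</code></pre>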
|
python|pandas|dataframe|numpy|standard-deviation
| 2
|
374,577
| 70,861,551
|
Find Pandas column largest/smallest values where dates don't overlap
|
<p>I have a DataFrame like:</p>
<pre><code>df = pd.DataFrame(index = [0,1,2,3,4,5])
df['XYZ'] = [2, 8, 6, 5, 9, 10]
df['Date2'] = ["2005-01-06", "2005-01-07", "2005-01-08", "1994-06-08", "1999-06-15", "2005-01-09"]
df['Date1'] = ["2005-01-02", "2005-01-03", "2005-01-04", "1994-06-04", "1999-06-12", "2005-01-05"]
df['Date1'] = pd.to_datetime(df['Date1'])
df['Date2'] = pd.to_datetime(df['Date2'])
</code></pre>
<p>I need to find the largest values of XYZ with dates that do not overlap. The expected output would be:</p>
<pre><code>XYZ Date1 Date2
10 2005-01-05 2005-01-09
9 1999-06-12 1999-06-15
5 1994-06-04 1994-06-08
</code></pre>
<p>I tried to sort by "XYZ":</p>
<pre><code>df.sort_values(by="XYZ", ascending=False, inplace=True)
</code></pre>
<p>And then compare dates:</p>
<pre><code>df['overlap'] = (df['Date1'] <= df['Date2'].shift()) & (df['Date2'] >= df['Date1'].shift())
</code></pre>
<p>And then drop any <code>True</code> values in df['overlap'] and take the <code>nlargest()</code> values, however that results in cases that do overlap.</p>
<p>Any help would be much appreciated.</p>
|
<p>This is somewhat involved but hopefully will work for you. We introduce a <code>mask</code> indexed by every date between the min and the max date in your df, mark each date as 'used' once it falls inside an accepted row's range, and then use that to reject overlapping rows.</p>
<p>First we get the min and the max date (while also sorting the original df by 'XYZ')</p>
<pre><code>df1 = df.sort_values('XYZ', ascending = False)
dmin, dmax = df1[['Date1', 'Date2']].unstack().agg([min,max])
</code></pre>
<p>then we create a mask populated with 0s initially</p>
<pre><code>mask = pd.Series(index = pd.date_range(dmin,dmax), data = 0)
</code></pre>
<p>Then we iterate over rows marking those we want in the 'include' column</p>
<pre><code>for idx,row in df1.iterrows():
if sum(mask[row['Date1']:row['Date2']]) > 0:
df1.loc[idx, 'include'] = False
continue
mask[row['Date1']:row['Date2']] = 1
df1.loc[idx, 'include'] = True
</code></pre>
<p>finally filter on 'include'</p>
<pre><code>df1[df1['include']].drop(columns = 'include')
</code></pre>
<p>output</p>
<pre><code> XYZ Date1 Date2
5 10 2005-01-05 2005-01-09
4 9 1999-06-12 1999-06-15
3 5 1994-06-04 1994-06-08
</code></pre>
|
python|python-3.x|pandas
| 1
|
374,578
| 70,890,680
|
How can I get the symbolic gradient [Tensorflow 2.x]
|
<p>I want to get the symbolic expression for gradient estimation. When I see the output it's quite difficult to understand what's going on.</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
@tf.function
def f_k(input_dat):
y = tf.matmul(tf.sin(input_dat[0]), input_dat[1])
grads = tf.gradients([y], input_dat)
# grads = tape.gradient([y], input_dat)
tf.print('tf >>', grads)
print('print >>', grads)
return y, grads
a = tf.Variable([[1., 3.0], [2., 6.0]])
b = tf.Variable([[1.], [2.]])
input_data = [a, b]
y, z = f_k(input_data)
print(y, z)
</code></pre>
<p>Output: inside the function</p>
<pre><code>print >> [<tf.Tensor 'gradients/Sin_grad/mul:0' shape=(2, 2) dtype=float32>, <tf.Tensor 'gradients/MatMul_grad/MatMul_1:0' shape=(2, 1) dtype=float32>]
tf >> [[[0.540302277 -1.979985]
[-0.416146845 1.92034054]], [[1.75076842]
[-0.138295487]]
</code></pre>
<p>As the output, I want which is shown with print:</p>
<pre><code>[<tf.Tensor 'gradients/Sin_grad/mul:0' shape=(2, 2) dtype=float32>, <tf.Tensor 'gradients/MatMul_grad/MatMul_1:0' shape=(2, 1) dtype=float32>]
</code></pre>
<p>However, the function always returns the numerical result. Could someone help me to get this symbolic representation of the gradient?</p>
|
<p>The symbolic representation you want will only work in <code>graph</code> mode. Outside of <code>graph</code> mode, eager execution is enabled by default. What you can do is create a new function to print the values and wrap it with the <code>@tf.function</code> decorator like you are already doing for <code>f_k</code>:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
@tf.function
def f_k(input_dat):
y = tf.matmul(tf.sin(input_dat[0]), input_dat[1])
grads = tf.gradients([y], input_dat)
# grads = tape.gradient([y], input_dat)
tf.print('tf >>', grads)
print('print >>', grads)
return y, grads
a = tf.Variable([[1., 3.0], [2., 6.0]])
b = tf.Variable([[1.], [2.]])
input_data = [a, b]
y, z = f_k(input_data)
@tf.function
def print_symbolic(y, z):
print(y,z)
return y, z
y, z = print_symbolic(y, z)
</code></pre>
<pre><code>print >> [<tf.Tensor 'gradients/Sin_grad/mul:0' shape=(2, 2) dtype=float32>, <tf.Tensor 'gradients/MatMul_grad/MatMul_1:0' shape=(2, 1) dtype=float32>]
tf >> [[[0.540302277 -1.979985]
[-0.416146845 1.92034054]], [[1.75076842]
[-0.138295487]]]
Tensor("y:0", shape=(2, 1), dtype=float32) [<tf.Tensor 'z:0' shape=(2, 2) dtype=float32>, <tf.Tensor 'z_1:0' shape=(2, 1) dtype=float32>]
</code></pre>
<p>You could also just access the tensors of your graph:</p>
<pre class="lang-py prettyprint-override"><code>graph = f_k.get_concrete_function(input_data).graph
print(*[tensor for op in graph.get_operations() for tensor in op.values()], sep="\n")
</code></pre>
<pre><code>Tensor("input_dat:0", shape=(), dtype=resource)
Tensor("input_dat_1:0", shape=(), dtype=resource)
Tensor("Sin/ReadVariableOp:0", shape=(2, 2), dtype=float32)
Tensor("Sin:0", shape=(2, 2), dtype=float32)
Tensor("MatMul/ReadVariableOp:0", shape=(2, 1), dtype=float32)
Tensor("MatMul:0", shape=(2, 1), dtype=float32)
Tensor("gradients/Shape:0", shape=(2,), dtype=int32)
Tensor("gradients/grad_ys_0/Const:0", shape=(), dtype=float32)
Tensor("gradients/grad_ys_0:0", shape=(2, 1), dtype=float32)
Tensor("gradients/MatMul_grad/MatMul:0", shape=(2, 2), dtype=float32)
Tensor("gradients/MatMul_grad/MatMul_1:0", shape=(2, 1), dtype=float32)
Tensor("gradients/Sin_grad/Cos:0", shape=(2, 2), dtype=float32)
Tensor("gradients/Sin_grad/mul:0", shape=(2, 2), dtype=float32)
Tensor("StringFormat:0", shape=(), dtype=string)
Tensor("Identity:0", shape=(2, 1), dtype=float32)
Tensor("Identity_1:0", shape=(2, 2), dtype=float32)
Tensor("Identity_2:0", shape=(2, 1), dtype=float32)
</code></pre>
<p>Check the <a href="https://www.tensorflow.org/guide/intro_to_graphs#graph_execution_vs_eager_execution" rel="nofollow noreferrer">docs</a> for more information.</p>
|
python|tensorflow|tensorflow2.0|tensorflow2.x
| 1
|
374,579
| 70,825,977
|
Identify varying rows in pandas dataframe
|
<p>I have a dataframe:</p>
<pre><code>ColA ColB ColC
a 0 1
b 3 3
c 1 1
a 0 1
a 1 2
b 3 3
</code></pre>
<p>I need to identify every row which has different values while filtering based on a value in a column. Example: when I filter the dataframe for value 'a' in ColA, the 5th row has different values in ColB and ColC.</p>
<p>I tried with</p>
<p>df['result']=df['ColA'].ne(df['ColA'].shift().bfill()).astype(int)</p>
<p>which resulted in:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">ColA</th>
<th style="text-align: center;">ColB</th>
<th style="text-align: center;">ColC</th>
<th style="text-align: right;">result</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">a</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: left;">b</td>
<td style="text-align: center;">3</td>
<td style="text-align: center;">3</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">c</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">a</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">a</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">2</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">b</td>
<td style="text-align: center;">3</td>
<td style="text-align: center;">3</td>
<td style="text-align: right;">1</td>
</tr>
</tbody>
</table>
</div>
<p>What I need is(Filtering for the value 'a' should identify the row with different values in other columns):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">ColA</th>
<th style="text-align: center;">ColB</th>
<th style="text-align: center;">ColC</th>
<th style="text-align: right;">result</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">a</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: left;">b</td>
<td style="text-align: center;">3</td>
<td style="text-align: center;">3</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">c</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">a</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: left;">a</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">2</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">b</td>
<td style="text-align: center;">3</td>
<td style="text-align: center;">3</td>
<td style="text-align: right;">1</td>
</tr>
</tbody>
</table>
</div>
<p>If I use groupby method:</p>
<p>df.groupby(df.columns.tolist())['ColA'].nunique()</p>
<p>it works only with small dataframes with a few data types.</p>
|
<p>If I understand correctly, you can <code>drop_duplicates</code> and then create the result column with <code>groupby</code> and <code>cumcount</code> to get an identifier per unique row per group.</p>
<pre><code>print(df.drop_duplicates(subset=['ColA','ColB','ColC'])
.assign(result=lambda x: x.groupby('ColA').cumcount()))
# ColA ColB ColC result
# 0 a 0 1 0
# 1 b 3 3 0
# 2 c 1 1 0
# 4 a 1 2 1
</code></pre>
<p>As you can see, you are "missing rows" from the original df, so <code>merge</code> it back to df.</p>
<pre><code>df = (
df.merge(df.drop_duplicates(subset=['ColA','ColB','ColC'])
.assign(result=lambda x: x.groupby('ColA').cumcount()),
how='left')
)
print(df)
# ColA ColB ColC result
# 0 a 0 1 0
# 1 b 3 3 0
# 2 c 1 1 0
# 3 a 0 1 0
# 4 a 1 2 1
# 5 b 3 3 0
</code></pre>
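<p>A vectorized alternative that avoids the merge is to mark the first occurrence of each unique row on the original frame and take a cumulative count per group — a sketch that gives the same result as the merge above:</p>
<pre><code>first_seen = ~df.duplicated(subset=['ColA', 'ColB', 'ColC'])
df['result'] = first_seen.groupby(df['ColA']).cumsum() - 1
</code></pre>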
|
python|pandas|dataframe|rows
| 0
|
374,580
| 70,949,823
|
Python pandas concatenate all tsv files from directory to new file
|
<p>I'm trying to do a concatenate a couple hundred files in one directory and write that into a new file in a separate directory. The underlying files each have a header row. The headers in each file are expected to have the same number, name, and position based upon how the data is generated. This is the code I'm using:</p>
<pre><code>import os
import glob
import pandas as pd
#set working directory
os.chdir("C:\\Users\\user.name\\Desktop\\CombineTSV\\source\\")
#find all tsv files in the working directory
#using glob pattern matching -> extension = 'tsv'
#save result in list -> all_filenames
extension = 'tsv'
all_filenames = [i for i in glob.glob('*.{}'.format(extension))]
#combine all tsv files in the list
combined_file = pd.concat([pd.read_csv(f) for f in all_filenames], sep='\t')
#set output directories, output file name
outdir = "C:\\Users\\user.name\\Desktop\\CombineTSV\\output\\"
outfile = "combined_output.tsv"
#export to tsv
combined_file.to_csv( "outdir"+"outfile", sep='\t', index=False, encoding='utf-8-sig')
</code></pre>
<p>and instead of my desired output file, I'm getting this error about tokenizing data:</p>
<pre><code>Traceback (most recent call last):
File "combinetsv.py", line 16, in <module>
combined_file = pd.concat([pd.read_csv(f) for f in all_filenames], sep='\t')
File "combinetsv.py", line 16, in <listcomp>
combined_file = pd.concat([pd.read_csv(f) for f in all_filenames], sep='\t')
File "C:\Users\user.name\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\io\parsers.py", line 676, in parser_f
return _read(filepath_or_buffer, kwds)
File "C:\Users\user.name\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\io\parsers.py", line 454, in _read
data = parser.read(nrows)
File "C:\Users\user.name\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\io\parsers.py", line 1133, in read
ret = self._engine.read(nrows)
File "C:\Users\user.name\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\io\parsers.py", line 2037, in read
data = self._reader.read(nrows)
File "pandas\_libs\parsers.pyx", line 860, in pandas._libs.parsers.TextReader.read
File "pandas\_libs\parsers.pyx", line 875, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas\_libs\parsers.pyx", line 929, in pandas._libs.parsers.TextReader._read_rows
File "pandas\_libs\parsers.pyx", line 916, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas\_libs\parsers.pyx", line 2071, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Expected 4 fields in line 4, saw 5
</code></pre>
<p>Disclaimer: I'm relatively new to using pandas. What am I missing here?</p>
|
<p>In one of your tsv files, some row has the wrong format: it contains 5 values where 4 are expected, which is exactly what the error message reports.<br />
If you only want the first 4 columns and want to ignore the extra value, you can pass the <code>usecols</code> parameter to <code>read_csv</code> to select the columns you want (note that <code>usecols</code> and <code>sep</code> belong to <code>read_csv</code>, not <code>concat</code>):</p>
<pre><code>combined_file = pd.concat([pd.read_csv(f, usecols=range(0, 4), sep='\t') for f in all_filenames])
</code></pre>
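<p>For completeness, a sketch of the whole pipeline with the tab separator passed to <code>read_csv</code> and the output path built from the two variables (directory and file names taken from the question):</p>
<pre><code>import glob
import os
import pandas as pd

os.chdir("C:\\Users\\user.name\\Desktop\\CombineTSV\\source\\")
all_filenames = glob.glob('*.tsv')

# read each tab-separated file, then concatenate the resulting frames
combined_file = pd.concat((pd.read_csv(f, sep='\t') for f in all_filenames),
                          ignore_index=True)

outdir = "C:\\Users\\user.name\\Desktop\\CombineTSV\\output\\"
outfile = "combined_output.tsv"
combined_file.to_csv(os.path.join(outdir, outfile), sep='\t',
                     index=False, encoding='utf-8-sig')
</code></pre>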
|
python|python-3.x|pandas
| 0
|
374,581
| 70,923,153
|
Key error while plotting a bar graph using Matplotlib
|
<p>I have been facing one issue while I am trying to plot a bar graph using the matplotlib library.</p>
<p>Please find the sample data below</p>
<p><a href="https://i.stack.imgur.com/3De8U.png" rel="nofollow noreferrer">Sample Data Image</a></p>
<pre><code>count_movies_year = n_db.groupby('release_year').agg({'title':'count'}).rename(columns={'title':'no_of_titles'})
count_movies_year.reset_index()
</code></pre>
<p>In the above code I do the group_by and rename the column in the resulting dataframe. After this I wanted to plot a bar graph of the result using matplotlib, so I wrote the code below:</p>
<pre><code>plt.bar(count_movies_year['release_year'],count_movies_year['no_of_titles'])
plt.xlabel('release_year')
plt.ylabel('no_of_titles')
plt.show()
</code></pre>
<p>But when I do this I get an error, and the KeyError shows me 'release_year'. Can someone tell me what is wrong here? I am new to Python and Matplotlib, so any guidance on where exactly things are going wrong would help me correct it next time.</p>
|
<p>After the <code>groupby</code>, the column "release_year" no longer exists in your DataFrame, since it is now the index.</p>
<p>You have multiple solutions:</p>
<hr />
<p>Use <code>reset_index</code> as you did, but reassign the result to your variable</p>
<pre><code>count_movies_year = count_movies_year.reset_index()
</code></pre>
<hr />
<p>or use the <code>inplace</code> parameter</p>
<pre><code>count_movies_year.reset_index(inplace=True)
</code></pre>
<hr />
<p>use the <code>.index</code> directly in your plot</p>
<pre><code>plt.bar(count_movies_year.index, count_movies_year['no_of_titles'])
</code></pre>
|
python|pandas|matplotlib
| 0
|
374,582
| 51,685,660
|
fromstring() when converting Windows string to numpy under Linux
|
<p>A Pyro4 server running on 32bit Windows machine is serving numpy image data as a string using <code>img.tostring()</code>, the <code>dtype</code> reported before conversion is <code>int32</code>.</p>
<p>The server code looks like:</p>
<pre><code>def getLastPhase(self):
print("Sending the data now: ")
print( self.lastPhase.dtype )
return self.lastPhase.tostring()
</code></pre>
<p>The client code looks like:</p>
<pre><code>data = getLastPhase()
</code></pre>
<p>The data is received on a Linux machine with <code>len( data )</code> = 4177920 or precisely the size of the image in bytes (1024x1020 x4).</p>
<p>However, using <code>fromstring( data, dtype='int32' )</code> results in exception:</p>
<pre><code>ValueError: string size must be a multiple of element size
</code></pre>
<p>If <code>int16</code> is used instead of <code>int32</code> no exception is raised but the data is nonsense.</p>
<p>Why is this exception raised in the case where string size matches my data size and not raised in the int16 case? </p>
<p>Is there a difference between <code>string</code> in Python under Windows and Linux?</p>
<p>Any ideas on how to overcome this problem will be much appreciated. </p>
<p><strong>Edit:</strong> the python version on the Windows machine is 2.7, whereas on the Linux it is 3.6</p>
|
<p>The key point is that in Python 2.x, the <code>str</code> type is (sometimes!) a series of bytes and so not interpreted any further unless you explicitly ask it to be so.</p>
<p>In Python 3.x, the <code>str</code> type <em>is</em> interpreted: it is a sequence of Unicode code points rather than raw bytes.</p>
<p>Therefore, on Python 3.x, you want to work with the <code>bytes</code> type.</p>
<p>The natural way to do this is to <code>encode</code> the string to a series of bytes:</p>
<pre><code>fromstring( data.encode('raw_unicode_escape'), dtype='int32' )
</code></pre>
<p>As others mention,
<a href="https://stackoverflow.com/questions/48367128/string-to-bytes-python-without-change-in-encoding">String to Bytes Python without change in encoding</a>
&
<a href="https://stackoverflow.com/questions/51457418/python-convert-raw-string-to-bytes-string-without-adding-escape-chraracters">Python: Convert Raw String to Bytes String without adding escape chraracters</a></p>
<p>you need to be <em>careful</em>, but in this case it is just binary data carried through a string, so we do not expect any Unicode characters outside of the range that <code>raw_unicode_escape</code> can <em>successfully</em> encode and decode.</p>
<p>So it is okay.</p>
|
python|linux|windows|numpy|tostring
| 2
|
374,583
| 51,926,303
|
how to insert flag for column match and non-match using read_sql
|
<p>I am trying to insert a flag (match/non-match) after comparing columns from different tables. I am able to compare the two mysql table columns, <strong><em>but I do not see how to add a flag column holding the status (match/non-match)</em></strong>.</p>
<p>The below is an example, consider 2 mysql tables:</p>
<p>tab1:</p>
<pre><code>email
abc@gamil.com
xyz@email.com
ijk@gmail.com
ghi@gmail.com
pqr@gmail.com
yup@gmail.com
</code></pre>
<p>tab2:</p>
<pre><code>email
ijk@gmail.com
yup@gmail.com
</code></pre>
<p>tab3:</p>
<pre><code>email
xyz@email.com
pqr@gmail.com
</code></pre>
<p>Now I want output like this:</p>
<pre><code>email valid
abc@gamil.com non-match
xyz@email.com match
ijk@gmail.com match
ghi@gmail.com non-match
pqr@gmail.com match
yup@gmail.com match
</code></pre>
<p><em>Tried with pandas:</em></p>
<pre><code>data_2=pd.read_sql("select tab1.*,if(tab2.email is not null ,'MATCH','NONMATCH') stataus from tab1 left join tab2 on tab1.email=tab2.email ",con=engine)
</code></pre>
<p><strong>Getting incorrect syntax for the multiple-table comparison</strong></p>
<p>But how can I do it for 2 lookup tables? I tried this way:</p>
<pre><code>data_2=pd.read_sql("select tab1.*,if(tab2.email and tab3.email is not null ,'MATCH','NONMATCH') stataus from tab1 left join tab2 on tab1.email=tab2.email left join tab3 on tab1.email=tab3.email",con=engine)
</code></pre>
|
<p>Can be solved purely in SQL.</p>
<pre><code>SELECT tab1.email,
       CASE WHEN tab2.email IS NULL THEN 'non-match' ELSE 'match' END AS valid
FROM tab1 LEFT JOIN tab2 ON tab1.email = tab2.email
</code></pre>
<p>CASE / WHEN is how you assign a value conditionally in MySQL.</p>
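<p>A sketch extending the same pattern to both lookup tables, run through <code>pd.read_sql</code> as in the question (table and column names are taken from the question; <code>engine</code> is the existing connection):</p>
<pre><code>import pandas as pd

query = """
SELECT tab1.email,
       CASE WHEN tab2.email IS NULL AND tab3.email IS NULL
            THEN 'non-match' ELSE 'match' END AS valid
FROM tab1
LEFT JOIN tab2 ON tab1.email = tab2.email
LEFT JOIN tab3 ON tab1.email = tab3.email
"""
data_2 = pd.read_sql(query, con=engine)
</code></pre>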
|
python|mysql|pandas
| 1
|
374,584
| 51,987,651
|
Pandas Filtering Data Based on what appears at the start
|
<p>I have a dataframe that looks like this:</p>
<pre><code>df4 = pd.DataFrame({'Q':['chair', 'desk', '-----monitor', 'chair'], 'R':['red', '-- use blue or dark blue', 'yellow', 'purple'], 'S': ['-- is english spoken?', 'german', 'spanish', 'english']})
Q R S
0 chair Red -- is english spoken?
1 desk -- blue or dark blue german
2 -----monitor yellow spanish
3 chair purple english
</code></pre>
<p>what I want to be returned:</p>
<pre><code> Q R S
3 chair purple english
</code></pre>
<p>I want to filter out the entire row if any column has a "-" value that appears 2 or more times at the beginning.</p>
<p>I found a thread for filtering numerical values, but is there any way to filter out special characters? Particularly with regex?</p>
<p><strong>Edit #1:</strong></p>
<p>I am only looking to remove rows if "-" appears 2 or more times at the very beginning. If that value appears in the middle of some text, that's fine.</p>
<p>Let's say my dataframe looks like this:</p>
<pre><code> Q R S
0 chair Red -- is english spoken?
1 desk blue or dark blue ger--man
2 -----monitor yellow spanish
3 chair purple english
</code></pre>
<p>I would have this returned:</p>
<pre><code> Q R S
1 desk blue or dark blue ger--man
3 chair purple english
</code></pre>
<p><strong>Edit #2:</strong></p>
<p>I have tried this:</p>
<pre><code>df4[~df4.Q.str.startswith(('--'))]
</code></pre>
<p>But this only works on 1 column, not all.</p>
|
<p>Using <code>applymap</code> with <code>in</code> and <code>any</code></p>
<pre><code>df4[~df4.applymap(lambda x : '--' in x).any(1)]
Out[287]:
Q R S
3 chair purple english
</code></pre>
<p>Update: to exclude only rows where the match is at the very beginning.</p>
<pre><code>df4[~df4.applymap(lambda x : str.startswith(x,'--')).any(1)]
</code></pre>
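<p>Extending the <code>startswith</code> attempt from the question from one column to every column at once is also possible — a small sketch:</p>
<pre><code>df4[~df4.apply(lambda col: col.str.startswith('--')).any(axis=1)]
</code></pre>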
|
python|python-3.x|pandas
| 6
|
374,585
| 51,816,647
|
For unsupervised learning, how to generate image set
|
<p>I've got an unlabeled set of 500 RGB color images (200x300 pixels) for unsupervised learning (CNN, GAN, autoencoder).
I want to import my own image set into tensorflow instead of the MNIST example.
Do I need to transform them into a CSV file?</p>
<pre><code>import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("./", one_hot=True)
</code></pre>
|
<p>This is part of my code that trains a GAN. The files are read like this. That is one way to do this.</p>
<pre><code>filenames = tf.train.string_input_producer(
tf.train.match_filenames_once("D:/TensorFlow/resizedimages/*.png"))
</code></pre>
<p>Code for reference is this.</p>
<pre><code>def train():
filenames = tf.train.string_input_producer(
tf.train.match_filenames_once("D:/TensorFlow/resizedimages/*.png"))
reader = tf.WholeFileReader()
_, input = reader.read(filenames)
input = tf.image.decode_png(input, channels=3)
input.set_shape([299, 299, 3])
batch = tf.train.batch([input],
batch_size=30)
init = (tf.global_variables_initializer(), tf.local_variables_initializer())
with tf.Session() as sess:
sess.run(init)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
train_writer = tf.summary.FileWriter('D:/TensorFlow/logs/1/train', sess.graph)
tf.summary.image("Image", GeneratedImage)
merge = tf.summary.merge_all()
for it in range(100):
_, X_batch = sess.run([input,batch])
summary,_ = sess.run([merge,D_optimizer], feed_dict={Z : samplefromuniformdistribution(20,100), X: X_batch, keep_prob: keep_prob_value})
summary,_ = sess.run([merge,G_optimizer],feed_dict={ Z : samplefromuniformdistribution(20,100), X: X_batch, keep_prob: keep_prob_value})
train_writer.add_summary(summary, it)
train_writer.flush()
train_writer.close()
coord.request_stop()
coord.join(threads)
</code></pre>
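<p>As an alternative to the queue-based readers above, the <code>tf.data</code> API (TensorFlow 2.x) can build the same kind of unlabeled image input pipeline — a minimal sketch, with the glob pattern, image size and batch size as placeholders rather than values from the question:</p>
<pre><code>import tensorflow as tf

def load_png(path):
    img = tf.io.read_file(path)
    img = tf.image.decode_png(img, channels=3)
    img = tf.image.convert_image_dtype(img, tf.float32)  # scale to [0, 1]
    return tf.image.resize(img, [200, 300])

dataset = (tf.data.Dataset.list_files("resizedimages/*.png")  # placeholder pattern
           .map(load_png)
           .shuffle(500)
           .batch(32)
           .prefetch(1))
</code></pre>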
|
tensorflow|image-processing
| 0
|
374,586
| 51,909,757
|
Insert rows based on values pandas dataframe
|
<p>I have this:</p>
<p><a href="https://i.stack.imgur.com/nhogL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nhogL.png" alt="enter image description here"></a></p>
<p>I would like to achieve this using pandas:</p>
<p><a href="https://i.stack.imgur.com/uyacb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uyacb.png" alt="enter image description here"></a></p>
|
<p>I may be unsure about what you want, but I believe you are trying to add rows of one data frame to another data frame:</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/merging.html" rel="nofollow noreferrer">Use pd.concat()</a></p>
|
python|pandas|insert|append|rows
| 0
|
374,587
| 51,939,022
|
Correct way to create Pytorch dataset that returns sequence of data for RNN?
|
<p>I am attempting to train an RNN on time series data, and while there are plenty of tutorials out there on how to build an RNN model, I am having some trouble with building the dataloader object for this task. The data is all going to be the same length, so no padding is needed. The approach I have taken so far is to return a range of data in the <code>__getitem__</code> function of the dataset class and define the length as</p>
<pre><code>len(data) - seq_len + 1
</code></pre>
<p>However, I feel that this is a bit "hacky" and that there should be a more proper way to do this. This method seems confusing and I feel it would cause problems if collaborating with a group. More specifically, I think that somehow overriding the sampler function in the PyTorch Dataset constructor is the correct way, but I am having trouble understanding how to implement that. Below is the current dataset class I have built; can anyone point me in the right direction on how to fix it? Thank you in advance.</p>
<pre><code>class CustomDataset(Dataset):
def __init__(self, df, cats, y, seq_l):
self.n, self.seq_l = len(df), seq_l
self.cats = np.array(np.stack([c.values for n,c in df[cats].items()], 1).astype(np.int64))
self.conts = np.array(np.stack([c.values for n,c in df[[i for i in df.columns if i not in cats]].items()], 1).astype(np.float32))
self.y = np.array(y)
def __len__(self): return len(self.y) - self.seq_l + 1
def __getitem__(self, idx):
return [
(torch.from_numpy(self.cats[idx:idx+self.seq_l]),
torch.from_numpy(self.conts[idx:idx+self.seq_l])),
self.y[idx+self.seq_l-1]
]
</code></pre>
|
<p>If I understood correctly, you have time series data and you want to create batches of the same length by sampling from it?
I think you can use the <strong>Dataset</strong> for returning just one sample of data, as it was originally intended by the PyTorch developers. You can stack the samples into a batch with your own <strong>collate_fn</strong> function and pass it to the DataLoader class (<code>collate_fn</code> is a callable that takes a list of samples and returns a batch; padding, for example, is usually done there). That way your dataset class no longer depends on the sequence length (= batch size in your dataset class). Since I assume you want to preserve the sequential order of your samples when you form a batch (given that you work with time series), you can write your own <strong>Sampler</strong> class (or use the SequentialSampler already available in PyTorch).
As a result, you decouple the sample representation, the batching (<code>collate_fn</code> in DataLoader) and the sampling (Sampler class). Hope this helps.</p>
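<p>A minimal sketch of that decoupling — the dataset returns one timestep per index, a <code>SequentialSampler</code> preserves order, and <code>collate_fn</code> stacks consecutive timesteps into a sequence (feature names and shapes are illustrative, not taken from the question):</p>
<pre><code>import torch
from torch.utils.data import Dataset, DataLoader, SequentialSampler

class SequenceDataset(Dataset):
    """Return a single timestep per index; sequence building happens in collate_fn."""
    def __init__(self, cats, conts, y):
        self.cats, self.conts, self.y = cats, conts, y

    def __len__(self):
        return len(self.y)

    def __getitem__(self, idx):
        return self.cats[idx], self.conts[idx], self.y[idx]

def collate_fn(samples):
    # samples are consecutive timesteps (guaranteed by SequentialSampler);
    # stack them into one (seq_len, ...) sequence and predict the last target
    cats, conts, ys = zip(*samples)
    return (torch.stack(cats), torch.stack(conts)), ys[-1]

cats = torch.randint(0, 10, (100, 3))   # dummy categorical features
conts = torch.randn(100, 4)             # dummy continuous features
y = torch.randn(100)                    # dummy targets

ds = SequenceDataset(cats, conts, y)
loader = DataLoader(ds, batch_size=16, sampler=SequentialSampler(ds),
                    collate_fn=collate_fn, drop_last=True)

(x_cat, x_cont), target = next(iter(loader))
print(x_cat.shape, x_cont.shape)        # torch.Size([16, 3]) torch.Size([16, 4])
</code></pre>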
|
python|deep-learning|dataset|pytorch|rnn
| 3
|
374,588
| 51,643,755
|
Numpy reshape "reversal"
|
<p>I read a 4D array from a file which is given in a 2D form i, j, k, x, y, z.
<a href="https://i.stack.imgur.com/Aft8s.png" rel="nofollow noreferrer">Input file header and shape</a>
I use numpy.reshape to reshape the 2D array into its multi-dimensional form. After making changes to it, I wish to write the file in exactly the same order / format as I read it.
I do not understand how to "reverse" numpy.reshape to put it back in the same format.</p>
<pre><code>import numpy as np
import pandas as pd
from pandas import read_csv
header = read_csv("Input/Grid1_test.csv", nrows=1,skipinitialspace=True)
print header.head()
imax=header['IMAX'].iloc[0]
jmax=header['JMAX'].iloc[0]
kmax=header['KMAX '].iloc[0]
print(imax,jmax,kmax)
#read grid data with numpy array .. it is slow but probably worth it
grid_data = np.loadtxt('Input/Grid1_test.csv', delimiter=',', skiprows=3)
size_arr = np.array(grid_data).reshape(kmax, jmax, imax, 6)
#do something
#write the grid back into the file
</code></pre>
|
<p>To "reverse" a reshape, you can just call <code>reshape</code> again on the array to reshape it into the original dimensions.</p>
<p>If you have an array <code>x</code> with dimensions (<code>n</code>, <code>m</code>) then:</p>
<pre><code>x.reshape(kmax, jmax, imax, 6).reshape(n, m) == x
</code></pre>
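<p>Applied to the question, writing the grid back just means flattening the trailing reshape again before saving — a sketch (the output path is hypothetical):</p>
<pre><code>import numpy as np

# size_arr has shape (kmax, jmax, imax, 6) as in the question;
# -1 lets numpy recover the original number of rows
flat = size_arr.reshape(-1, 6)
np.savetxt('Output/Grid1_out.csv', flat, delimiter=',')
</code></pre>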
|
python|numpy|multidimensional-array|reshape
| 5
|
374,589
| 51,608,346
|
Bitwise not True (including NaN) in pandas DataFrame
|
<p>I have a data frame which looks like:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({"id": range(5), "is_bar": [np.nan, np.nan, False, True, False], "is_foo": [True, False, True, True, False]})
</code></pre>
<p>Now I want rows of <code>df</code> where are foo, but not bar or bar is missing. In other words, this is the desired result:</p>
<pre><code> id is_bar is_foo
0 0 NaN True
2 2 False True
</code></pre>
<p>I expected <code>df.loc[df["is_foo"] & ~df["is_bar"]]</code> to work, but obviously the <code>np.nan</code>s are causing <code>TypeError</code>.</p>
<p>How can it be achieved?</p>
|
<p>I think need <code>fillna</code>:</p>
<pre><code>df = df.loc[df["is_foo"] & ~df["is_bar"].fillna(False)]
print (df)
id is_bar is_foo
0 0 NaN True
2 2 False True
</code></pre>
|
python|pandas
| 1
|
374,590
| 51,575,584
|
Can someone explain to me reduce sum in tensorFlow.js
|
<p>When I use the reduce sum of tensorFlow.js as : <a href="https://js.tensorflow.org/api/0.12.0/#sum" rel="nofollow noreferrer">https://js.tensorflow.org/api/0.12.0/#sum</a> , I was thinking that it would simply add all the element of an array to get a sum. But apparently it's something more complicated than that.</p>
<pre><code>const x = tf.tensor([12, 12, 24]).sum().print();
// result : 60
</code></pre>
<p>I would be expecting 48</p>
|
<p>There was a problem with Ubuntu and tfjs 0.12.0.</p>
|
tensorflow.js
| 0
|
374,591
| 51,827,536
|
How to set the ticks of log scale for x&y axis?
|
<p>I want to plot a log scale graph without scientific notation.</p>
<pre><code> import matplotlib as mpl
import matplotlib.pyplot as plt
plt.plot(np.arange(0,10,0.1))
plt.xscale('log')
plt.yscale('log')
plt.xlim(0.1,100)
plt.ylim(1,10)
plt.gca().xaxis.set_major_formatter(mpl.ticker.ScalarFormatter())
plt.gca().yaxis.set_major_formatter(mpl.ticker.ScalarFormatter())
plt.show()
</code></pre>
<p>Question:</p>
<ol>
<li><p>Y axis still shows the format of scientific notation. How to change it?</p></li>
<li><p>How to make specific ticks for y axis? I tried <code>plt.yticks([1,10])</code>, but it doesn't work.</p></li>
<li><p>How to get rid of the decimal point of ticks for both x and y axis?</p>
<p><a href="https://i.stack.imgur.com/NAZAt.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NAZAt.png" alt="enter image description here"></a></p></li>
</ol>
|
<p><strong>1. Get rid of Scientific notation.</strong></p>
<p>The ticks are major and minor ticks, hence you would need to set the minor formatter as well:</p>
<pre><code>plt.gca().yaxis.set_major_formatter(mpl.ticker.ScalarFormatter())
plt.gca().yaxis.set_minor_formatter(mpl.ticker.ScalarFormatter())
</code></pre>
<p><strong>2. Show ticks at specific custom locations</strong></p>
<p>Getting rid of the minor ticklabels allows <code>yticks</code> to work as expected.</p>
<pre><code>plt.yticks([1,10])
plt.gca().yaxis.set_minor_formatter(mpl.ticker.NullFormatter())
</code></pre>
<p><strong>3. Getting rid of the decimal points</strong></p>
<p>I suppose it does not make sense to get rid of the decimal points for a label like <code>0.1</code>. Hence one would probably choose a <code>StrMethodFormatter</code> with the <strong>g</strong>eneral purpose numeric format <code>g</code>.</p>
<pre><code>plt.gca().yaxis.set_major_formatter(mpl.ticker.StrMethodFormatter("{x:g}"))
</code></pre>
|
python|python-3.x|numpy|matplotlib|plot
| 3
|
374,592
| 51,749,235
|
Pandas: filling placeholders in string column
|
<p>I am working with a pandas DataFrame looking as follows:</p>
<pre><code>df = pd.DataFrame(
[['There are # people', '3', np.nan], ['# out of # people are there', 'Five', 'eight'],
['Only # are here', '2', np.nan], ['The rest is at home', np.nan, np.nan]])
</code></pre>
<p>resulting in:</p>
<pre><code> 0 1 2
0 There are # people 3 NaN
1 # out of # people are there Five eight
2 Only # are here 2 NaN
3 The rest is at home NaN NaN
</code></pre>
<p>I would like to replace the <code>#</code> placeholders with the varying strings in columns 1 and 2, resulting in:</p>
<pre><code>0 There are 3 people
1 Five out of eight people are there
2 Only 2 are here
3 The rest is at home
</code></pre>
<p>How could I achieve this?</p>
|
<p>Using string formatting:</p>
<pre><code>df=df.replace({'#':'%s',np.nan:'NaN'},regex=True)
l=[]
for x , y in df.iterrows():
if y[2]=='NaN' and y[1]=='NaN':
l.append(y[0])
elif y[2]=='NaN':
l.append(y[0] % (y[1]))
else:
l.append(y[0] % (y[1], y[2]))
l
Out[339]:
['There are 3 people',
'Five out of eight people are there',
'Only 2 are here',
'The rest is at home']
</code></pre>
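<p>A row-wise variant on the original frame from the question (before any replacements), which skips the manual NaN bookkeeping by filling one placeholder per value — a sketch:</p>
<pre><code>import pandas as pd

def fill_placeholders(row):
    text = row.iloc[0]
    for val in row.iloc[1:]:
        if pd.notna(val):
            text = text.replace('#', str(val), 1)  # fill one '#' per value
    return text

result = df.apply(fill_placeholders, axis=1)
print(result.tolist())
# ['There are 3 people', 'Five out of eight people are there',
#  'Only 2 are here', 'The rest is at home']
</code></pre>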
|
python|pandas|string-formatting
| 2
|
374,593
| 51,651,476
|
tqdm progress bar with json string stuck
|
<p>I have a list of json strings, and I'm converting them to a list of dicts. </p>
<p>I do that to combine them in one final json string, to be converted later to a Pandas Dataframe:</p>
<pre><code>s1 = '{ "id": 11, "label": "REF", "claim": "Lorelai Gilmore", "ce": [[[1,2, "Gilmore", 3]]]}'
s2 = '{ "id": 0, "label": "REF", "claim": "named Robert s.", "ce": [[[1,2, "Lorelai", 3]]]}'
s = [s1, s2]
combine = [json.loads(item) for item in s]
r = json.dumps(combine, indent=2)
s = pandas.read_json(r)
print(s)
</code></pre>
<p>The list of the json strings that I have is very large, therefore I tried to use Tqdm progress bar to monitor the progress:</p>
<pre><code>combine = tqdm([json.loads(item) for item in s])
</code></pre>
<p>but I got this error:</p>
<pre><code> 0%| | 0/2 [00:00<?, ?it/s]Traceback (most recent call last):
File "D:/OneDrive/PhD/fever_challenge/test.py", line 11, in <module>
r = json.dumps(combine, indent=2)
File "C:\Python35\lib\json\__init__.py", line 237, in dumps
**kw).encode(obj)
File "C:\Python35\lib\json\encoder.py", line 200, in encode
chunks = list(chunks)
File "C:\Python35\lib\json\encoder.py", line 436, in _iterencode
o = _default(o)
File "C:\Python35\lib\json\encoder.py", line 179, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: 0%| | 0/2 [00:00<?, ?it/s] is not JSON serializable
</code></pre>
<p>I removed the last three lines in my code when I was tracing the error, and I observed that the loop is stuck. This is what appeared to me:</p>
<pre><code> 0%| | 0/2 [00:00<?, ?it/s]
</code></pre>
<p>What is the problem in my code?</p>
|
<p>Try this:</p>
<pre><code>combine = []
for i in tqdm([json.loads(item) for item in s]):
combine.append(i)
</code></pre>
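<p>If the goal is to track the <code>json.loads</code> calls themselves rather than the appends, wrapping the input list works too and keeps <code>combine</code> a plain list — a sketch:</p>
<pre><code>combine = [json.loads(item) for item in tqdm(s)]
r = json.dumps(combine, indent=2)  # combine is a regular list again, so dumps works
</code></pre>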
|
json|python-3.x|pandas|tqdm
| 1
|
374,594
| 51,730,651
|
Pandas: merging dataframes using a loop - MemoryError
|
<p>I have a few <strong>dataframes</strong> stored inside a <strong>dict</strong> called <strong>my_dict</strong>. The keys of the dict are stored inside a list called <strong>filter_list</strong>.</p>
<pre><code>filter_list = ["A", "B", "C", ...]
</code></pre>
<p><strong>my_dict[A]</strong> gives me the following result:</p>
<pre><code> links A
0 Q11@8.jpg 1
1 Q11@11.jpg 1
2 Q11@4.2.jpg 1
3 Q11@4.3.jpg 1
</code></pre>
<p><strong>my_dict[B]</strong> gives me the following result:</p>
<pre><code> links B
0 Q11@8.jpg 1
1 A11@21.jpg 1
2 Q11@42.jpg 1
3 C11@4.jpg 1
</code></pre>
<p>and so on...</p>
<p>Now I want to <strong>merge</strong> all the dataframes together. I am using an outer-join logic since I want my final dataframe to include all possible links that are present across all dataframes inside the "links" column. </p>
<p>As such, I use a loop to merge them iteratively but I keep getting an error message telling me </p>
<blockquote>
<p>MemoryError:</p>
</blockquote>
<p>with no further info. In order to release RAM during my loop I am saving the results to a pickle file, but this doesn't seem to help either. Still I get the same error.</p>
<p>This is the code I am using: </p>
<pre><code>for index in tqdm(range(2,len(filter_list))):
try:
result = pd.read_pickle("result.pkl")
except:
pass
if index == 2:
result = pd.merge(my_data[filter_list[0]], my_data[filter_list[1]], on="links", how="outer")
result = pd.merge(result , my_data[filter_list[index]], on="links", how="outer")
result.fillna(0, inplace=True)
result[result.columns[1:]] = result[result.columns[1:]].astype(int)
result.to_pickle("result.pkl")
del result
</code></pre>
|
<p>I think what you try to achieve can be done with <code>pd.concat</code>:</p>
<pre><code>result = (pd.concat([my_dict[key].set_index('links') for key in filter_list],
axis=1,sort=False)
.fillna(0).reset_index())
result[result.columns[1:]] = result[result.columns[1:]].astype(int)
</code></pre>
<p>with your two dataframes A and B, it gives:</p>
<pre><code> index A B
0 Q11@8.jpg 1 1
1 Q11@11.jpg 1 0
2 Q11@4.2.jpg 1 0
3 Q11@4.3.jpg 1 0
4 A11@21.jpg 0 1
5 Q11@42.jpg 0 1
6 C11@4.jpg 0 1
</code></pre>
|
python|pandas
| 1
|
374,595
| 51,849,186
|
When using Pandas .groupby, why use .agg versus directly using the function eg .sum()
|
<p>In Python, to obtain summaries by group, I use <code>groupby().agg(fx())</code>; eg <code>groupby('variable').agg('sum')</code>. What is the difference between that and directly using the function, eg; <code>groupby('variable').sum()</code> ?</p>
|
<p><strong><em>Setup</em></strong></p>
<pre><code>df = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6]})
</code></pre>
<p>The primary benefit of using <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.agg.html" rel="nofollow noreferrer"><code>agg</code></a> is stated in <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.agg.html" rel="nofollow noreferrer">the docs</a>:</p>
<blockquote>
<p>Aggregate using <strong>one or more operations</strong> over the specified axis.</p>
</blockquote>
<p>If you have separate operations that need to be applied to each individual column, <code>agg</code> takes a dictionary (or a function, string, or list of strings/functions) that allows you to create that mapping in a single statement. So if you'd like the <code>sum</code> of column <code>a</code>, and the <code>mean</code> of column <code>b</code>:</p>
<pre><code>df.agg({'a': 'sum', 'b': 'mean'})
a 6.0
b 5.0
dtype: float64
</code></pre>
<p>It also allows you to apply multiple operations to a single column in a single statement. For example, to find the <code>sum</code>, <code>mean</code>, and <code>std</code> of column <code>a</code>:</p>
<pre><code>df.agg({'a': ['sum', 'mean', 'std']})
a
sum 6.0
mean 2.0
std 1.0
</code></pre>
<p>There's no difference in outcome when you use <code>agg</code> with a single operation. I'd argue that <code>df.agg('sum')</code> is less clear than <code>df.sum()</code>, but the results will be the same:</p>
<pre><code>df.agg('sum')
a 6
b 15
dtype: int64
df.sum()
a 6
b 15
dtype: int64
</code></pre>
<p>The main benefit <code>agg</code> provides is the convenience of applying multiple operations.</p>
|
python|pandas|pandas-groupby
| 6
|
374,596
| 51,593,871
|
calculating mean with a condition on python pandas Group by on two columns. And print only the mean for each category?
|
<p>Input </p>
<pre><code>Fruit Count Price tag
Apple 55 35 red
Orange 60 40 orange
Apple 60 36 red
Apple 70 41 red
</code></pre>
<p>Output 1</p>
<pre><code>Fruit Mean tag
Apple 35.5 red
Orange 40 orange
</code></pre>
<p>I need <strong>mean</strong> on condition price between 31 and 40 </p>
<p>Output 2</p>
<pre><code> Fruit Count tag
Apple 2 red
Orange 1 orange
</code></pre>
<p>I need <strong>count</strong> on condition price between 31 and 40</p>
<p>pls help </p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.between.html" rel="nofollow noreferrer"><code>between</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> for filtering:</p>
<pre><code>df1 = df[df['Price'].between(31, 40)]
print (df1)
Fruit Count Price tag
0 Apple 55 35 red
1 Orange 60 40 orange
2 Apple 60 36 red
</code></pre>
<p>If possible multiple columns by aggregated functions:</p>
<pre><code>df2 = df1.groupby(['Fruit', 'tag'])['Price'].agg(['mean','size']).reset_index()
print (df2)
Fruit tag mean size
0 Apple red 35.5 2
1 Orange orange 40.0 1
</code></pre>
<p>Or 2 separately DataFrames:</p>
<pre><code>df3 = df1.groupby(['Fruit', 'tag'], as_index=False)['Price'].mean()
print (df3)
Fruit tag Price
0 Apple red 35.5
1 Orange orange 40.0
df4 = df1.groupby(['Fruit', 'tag'])['Price'].size().reset_index()
print (df4)
Fruit tag Price
0 Apple red 2
1 Orange orange 1
</code></pre>
|
python-3.x|pandas
| 1
|
374,597
| 51,689,035
|
Spyder: Check failed: PyBfloat16_Type.tp_base != nullptr error while starting kernel
|
<p>I am getting some weird error when starting Spyder 3.6. The error reads:</p>
<pre><code>An error ocurred while starting the kernel
2018 15:37:39.970424: F T:\src\github\tensorflow\tensorflow\python\lib\core\bfloat16.cc:664]
Check failed: PyBfloat16_Type.tp_base != nullptr
</code></pre>
<p>I Googled for a solution and found this:</p>
<pre><code>conda update setuptools
</code></pre>
<p>I ran that in the Anaconda console; still dealing with the same issue. This has been working fine for more than 1 year, and I don't know what changed in the past day or so. The only thing that I have done recently is run Jupyter Notebook; I tried a few sample scripts in that environment and everything ran fine. I can't think of anything that has changed in the past 1-2 days. Any idea what could be wrong here? Thanks!</p>
|
<p>Run:</p>
<pre><code>conda install tensorflow
</code></pre>
<p>This will solve the problem</p>
|
python|python-3.x|tensorflow|anaconda|spyder
| 0
|
374,598
| 51,890,425
|
Plot categorical data with matplotlib - transposed pandas dataframe
|
<p>I have data for two groups in a pandas dataframe, with for each group the mean of 3 different items of a scale:</p>
<pre><code> item1 item2 item3
group
1 2.807692 3.115385 3.923077
2 2.909091 2.454545 3.909091
</code></pre>
<p>I would like to plot the means for both groups in a bar plot. I found some code for doing just that <a href="https://www.datascience.com/blog/learn-data-science-intro-to-data-visualization-in-matplotlib" rel="nofollow noreferrer">here</a> with the following function:</p>
<pre><code>def groupedbarplot(x_data, y_data_list, y_data_names, colors, x_label, y_label, title):
_, ax = plt.subplots()
# Total width for all bars at one x location
total_width = 0.5
# Width of each individual bar
ind_width = total_width / len(y_data_list)
# This centers each cluster of bars about the x tick mark
alteration = np.arange(-(total_width/2), total_width/2, ind_width)
# Draw bars, one category at a time
for i in range(0, len(y_data_list)):
# Move the bar to the right on the x-axis so it doesn't
# overlap with previously drawn ones
ax.bar(x_data + alteration[i], y_data_list[i], color = colors[i], label = y_data_names[i], width = ind_width)
ax.set_ylabel(y_label)
ax.set_xlabel(x_label)
ax.set_title(title)
ax.legend(loc = 'upper right')
</code></pre>
<p>This works perfectly fine with the dataframe above. However, instead of having the bars for all items grouped together per group, I wanted the bars grouped per item so I can see, for each item, the difference between the groups. Therefore, I've transposed the dataframe, but this gives me errors when plotting:</p>
<pre><code>groupedbarplot(x_data = data.index.values
, y_data_list = [data[1],data[2]]
, y_data_names = ['group1', 'group2']
, colors = ['blue', 'orange']
, x_label = 'Scale'
, y_label = 'Score'
, title = 'title')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-111-b910d304a19e> in <module>()
5 , x_label = 'Verloning scale'
6 , y_label = 'Score'
----> 7 , title = 'Score op elke item voor werknemer en oud-werknemers')
<ipython-input-66-9fa4a515d5e9> in groupedbarplot(x_data, y_data_list, y_data_names, colors, x_label, y_label, title)
11 # Move the bar to the right on the x-axis so it doesn't
12 # overlap with previously drawn ones
---> 13 ax.bar(x_data + alteration[i], y_data_list[i], color = colors[i], label = y_data_names[i], width = ind_width)
14 ax.set_ylabel(y_label)
15 ax.set_xlabel(x_label)
TypeError: Can't convert 'float' object to str implicitly
</code></pre>
<p>I have checked all variables to see where the difference is, but can't seem to find it. Any ideas?</p>
|
<p>When I transpose the dataframe you provide the result looks like this</p>
<pre><code> 1 2
item1 2.807692 2.909091
item2 3.115385 2.454545
item3 3.923077 3.909091
</code></pre>
<p>therefore <code>data.index.values</code> returns <code>array(['item1', 'item2', 'item3'], dtype=object)</code>, and I suspect that your error results from <code>x_data + alteration[i]</code> which tries to add a float to your array which contains strings.</p>
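<p>One way to keep the transposed layout is therefore to pass numeric positions and put the item names back as tick labels afterwards — a sketch reusing the function and names from the question:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

x_pos = np.arange(len(data.index))   # numeric positions for item1/item2/item3
groupedbarplot(x_data = x_pos
               , y_data_list = [data[1], data[2]]
               , y_data_names = ['group1', 'group2']
               , colors = ['blue', 'orange']
               , x_label = 'Scale'
               , y_label = 'Score'
               , title = 'title')
plt.xticks(x_pos, data.index)        # restore the item names on the x axis
plt.show()
</code></pre>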
|
python|pandas|matplotlib|transpose
| 1
|
374,599
| 51,859,857
|
Search through a dataframe for a partial string match and put the rows into a new dataframe with only their IDs
|
<p>I have a dataframe of publications that have the following rows:</p>
<pre><code>publication_ID, title, author_name, date
12344, Design style, Jake Kreath, 20071208
12334, Power of Why, Samantha Finn, 20150704
</code></pre>
<p>I ask the user for a string and use that string to search through the titles.</p>
<p><strong>The goal:</strong> Search through the dataframe to see if the title contains the word the user provides and return the rows in a new dataframe with just the title and publication_ID. </p>
<p>This is my code so far:</p>
<pre><code>import pandas as pd
from pandas import DataFrame
publications = pd.read_csv(filepath, sep= "|")
search_term = input('Enter the term you are looking for: ')
def stringDataFrame(publications, title, regex):
newdf = pd.DataFrame()
for idx, search_term in publications['title'].iteritems():
if re.search(regex, search_term):
newdf = concat([publications[publications['title'] == search_term], newdf], ignore_index=True)
return newdf
print(newdf.stringDataFrame)
</code></pre>
|
<p>Use a combination of <code>.str.contains</code> and <code>.loc</code></p>
<pre><code>publications.loc[publications.title.str.contains(search_term), ['title', 'publication_ID']]
</code></pre>
<p>Just be careful, because if your title is <code>'nightlife'</code> and someone searches for <code>'night'</code> this will return a match. If that's not your desired behavior then you may need <code>.str.split</code> instead. </p>
<hr>
<p>As jpp points out, <code>str.contains</code> is case sensitive. One simple fix is to just ensure everything is lowercase. </p>
<pre><code>title_mask = publications.title.str.lower().str.contains(search_term.lower())
pmids = publications.loc[title_mask, ['title', 'publication_ID']]
</code></pre>
<p>now <code>Lord</code>, <code>LoRD</code>, <code>lord</code> and all other permutations will return a valid match, and your original <code>DataFrame</code> has the capitalization unchanged.</p>
|
python|string|python-3.x|pandas|dataframe
| 1
|