| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
1,400
| 25,197,052
|
Is there a numpy function to convert from log(e) to log(10)?
|
<p>Is there a single numpy function to convert log(e) numbers to log(10) numbers? If so, would that single function be faster than this code?</p>
<pre><code>y = np.array([1, 2, 3, 4])
np.log10(np.exp(y))
</code></pre>
|
<p>I do not think there is a <code>numpy</code> function (and I hope to be corrected), but the standard mathematical way is faster than what you have shown.</p>
<p>That is, you would simply divide <code>log_e(n)</code> by the constant <code>log_e(10)</code> to obtain <code>log_10(n)</code>.</p>
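<p>A minimal sketch of the change-of-base approach described above (assuming the values in <code>y</code> are already natural logarithms):</p>
<pre><code>import numpy as np

y_ln = np.array([1.0, 2.0, 3.0, 4.0])   # values already in log base e
y_log10 = y_ln / np.log(10)             # change of base: log10(x) = ln(x) / ln(10)
</code></pre>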
|
python|numpy
| 2
|
1,401
| 39,275,533
|
Select rows from a DataFrame based on the type of the object (i.e. str)
|
<p>So there's a DataFrame say:</p>
<pre><code>>>> df = pd.DataFrame({
... 'A':[1,2,'Three',4],
... 'B':[1,'Two',3,4]})
>>> df
A B
0 1 1
1 2 Two
2 Three 3
3 4 4
</code></pre>
<p>I want to select the rows where the value in a particular column is of type <code>str</code>.</p>
<p>For example, I want to select the rows where the <code>type</code> of the data in column <code>A</code> is <code>str</code>, so it should print something like:</p>
<pre><code> A B
2 Three 3
</code></pre>
<p>Whose intuitive code would be like:</p>
<pre><code>df[type(df.A) == str]
</code></pre>
<p>Which obviously doesn't work!</p>
<p>Thanks, please help!</p>
|
<p>This works:</p>
<pre><code>df[df['A'].apply(lambda x: isinstance(x, str))]
</code></pre>
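<p>An equivalent sketch using <code>map</code> (my own variant, not part of the original answer):</p>
<pre><code>df[df['A'].map(type) == str]
</code></pre>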
|
python|pandas
| 47
|
1,402
| 39,018,107
|
Convert non-strict JSON Amazon SNAP metadata to Pandas DataFrame
|
<p>I am trying to convert <a href="http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/meta_Grocery_and_Gourmet_Food.json.gz" rel="nofollow">this Amazon sample Snap grocery JSON data</a> to a Pandas dataframe in IBM Bluemix (using Python 2.x) and then analyze it with Apache Spark.</p>
<p>I have unzipped the JSON file and uploaded it to an Apache Spark Container.</p>
<p>Here is my container connection:</p>
<pre><code># In[ ]:
credentials_1 = {
'auth_uri':'',
'global_account_auth_uri':'',
'username':'myUname',
'password':"myPw",
'auth_url':'https://identity.open.softlayer.com',
'project':'object_storage_988dfce6_5b93_48fc_9575_198bbed3abfc',
'project_id':'2c05de8a36d74d32bdbe0eeec7e5a372',
'region':'dallas',
'user_id':'4976489bab7d489f8d2eba681adacb78',
'domain_id':'8b6bc3e989d644858d7b74f24119447a',
'domain_name':'1079761',
'filename':'meta_Grocery_and_Gourmet_Food.json',
'container':'grocery',
'tenantId':'s31d-8e24c13d9c36f4-43b43b7b993d'
}
</code></pre>
<p>I then used Apache Spark's sample code for importing data from the container into a StringIO object:</p>
<pre><code># In[ ]:
import requests, StringIO, pandas as pd, json, re
# In[ ]:
def get_file_content(credentials):
    """For given credentials, this function returns a StringIO object containing the file content."""
    url1 = ''.join([credentials['auth_url'], '/v3/auth/tokens'])
    data = {'auth': {'identity': {'methods': ['password'],
                                  'password': {'user': {'name': credentials['username'],
                                                        'domain': {'id': credentials['domain_id']},
                                                        'password': credentials['password']}}}}}
    headers1 = {'Content-Type': 'application/json'}
    resp1 = requests.post(url=url1, data=json.dumps(data), headers=headers1)
    resp1_body = resp1.json()
    for e1 in resp1_body['token']['catalog']:
        if(e1['type']=='object-store'):
            for e2 in e1['endpoints']:
                if(e2['interface']=='public'and e2['region']==credentials['region']):
                    url2 = ''.join([e2['url'],'/', credentials['container'], '/', credentials['filename']])
                    s_subject_token = resp1.headers['x-subject-token']
                    headers2 = {'X-Auth-Token': s_subject_token, 'accept': 'application/json'}
                    resp2 = requests.get(url=url2, headers=headers2)
                    return StringIO.StringIO(resp2.content)
</code></pre>
<p>I then converted the string content to strict JSON by appending [ at the beginning and ] at the end, and by separating the records with commas.</p>
<pre><code>print('----------------------\n')
import json
myDf=[];
def parse(data):
    for l in data:
        yield json.dumps(eval(l))

def getDF(data):
    st='['
    i = 0
    df =[]
    for d in parse(data):
        if i<100:
            i += 1
            #print(str(d))
            st=st+str(d)+','
    #print('----------------\n')
    st=st[:-1]
    st=st+']'
    #js=json.loads(st)
    #print(json.dumps(js))
    return pd.read_json(st)
content_string = get_file_content(credentials_1)
df = getDF(content_string)
df.head()
</code></pre>
<p>I am getting a perfectly desirable result.
<a href="http://i.stack.imgur.com/wivZh.png" rel="nofollow">Output of the code</a></p>
<p>The problem is that when I remove the i < 100 condition, it just never completes and the kernel remains busy for over an hour.</p>
<p>Is there any other, more elegant way to convert the data into a dataframe?</p>
<p>Also, ijson is not available in the Bluemix notebook.</p>
|
<p>Let me answer this in two parts:</p>
<ol>
<li><p>You can install ijson in your Bluemix Spark service using the command below, and then use <code>import ijson</code> to work with it as needed.</p>
<p><code>!pip install --user ijson</code></p></li>
<li><p>You can use <code>sqlContext.jsonFile</code> to read the JSON from object storage rather than defining your schema by hand.
This will even infer the schema for you, and then you can run spark-sql queries to do whatever you want with the dataframe.</p>
<p><code>df = sqlContext.jsonFile("swift://" + objectStorageCreds['container'] + "." + objectStorageCreds['name'] + "/" + objectStorageCreds['filename'])</code></p></li>
</ol>
<p>Here is the link to complete <a href="https://github.com/charles2588/bluemixsparknotebooks/blob/master/Python/json_parser.ipynb" rel="nofollow">notebook</a>.</p>
<p>If you have to work with a pandas dataframe, you can simply convert it with</p>
<pre><code>df.toPandas().head()
</code></pre>
<p>But this will pull everything onto the driver node (use carefully).</p>
<p>Thanks,
Charles.</p>
|
python-2.7|pandas|apache-spark|ibm-cloud
| 0
|
1,403
| 39,200,644
|
numpy 3 dimension array middle indexing bug
|
<p>I seem to have found a bug while using Python 2.7 with the numpy module:</p>
<pre><code>import numpy as np
x=np.arange(3*4*5).reshape(3,4,5)
x
</code></pre>
<p>Here I got the full 'x' array as follows:</p>
<pre><code>array([[[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]],
[[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29],
[30, 31, 32, 33, 34],
[35, 36, 37, 38, 39]],
[[40, 41, 42, 43, 44],
[45, 46, 47, 48, 49],
[50, 51, 52, 53, 54],
[55, 56, 57, 58, 59]]])
</code></pre>
<p>Then I try indexing a single row in sheet [1]:</p>
<pre><code>x[1][0][:]
</code></pre>
<p>Result:</p>
<pre><code>array([20, 21, 22, 23, 24])
</code></pre>
<p>But something goes wrong when I try to index a single column in sheet [1]:</p>
<pre><code>x[1][:][0]
</code></pre>
<p>The result is still the same as before:</p>
<pre><code>array([20, 21, 22, 23, 24])
</code></pre>
<p>Shouldn't it be array([20, 25, 30, 35])?</p>
<p>It seems something goes wrong when indexing the middle axis with a range?</p>
|
<p>No, it's not a bug.</p>
<p>When you use <code>[:]</code> you are using slicing notation, and it takes the whole list:</p>
<pre><code>l = ["a", "b", "c"]
l[:]
#output:
["a", "b", "c"]
</code></pre>
<p>and in your case:</p>
<pre><code>x[1][:]
#output:
array([[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29],
[30, 31, 32, 33, 34],
[35, 36, 37, 38, 39]])
</code></pre>
<p>What you really want is numpy's multidimensional <code>indexing</code> notation:</p>
<pre><code>x[1, : ,0]
#output:
array([20, 25, 30, 35])
</code></pre>
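<p>To make the difference concrete (a small sketch added here, not part of the original answer): <code>x[1][:]</code> is simply the whole 2-D sheet again, so chaining <code>[0]</code> after it selects a row once more, whereas a single indexing expression can mix an integer and a slice per axis:</p>
<pre><code>(x[1][:][0] == x[1][0]).all()   # True: the [:] changes nothing
x[1][:, 0]                      # also gives array([20, 25, 30, 35])
</code></pre>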
|
python|arrays|numpy
| 3
|
1,404
| 39,218,768
|
Find numpy vectors in a set quickly
|
<p>I have a numpy array, for example:</p>
<pre><code>a = np.array([[1,2],
[3,4],
[6,4],
[5,3],
[3,5]])
</code></pre>
<p>and I also have a set</p>
<pre><code>b = {(1,2), (6,4), (9,9)}
</code></pre>
<p>I want to find the indices of the rows that exist in set b, which here would be</p>
<pre><code>[0, 2]
</code></pre>
<p>but I used a for loop to implement this. Is there a convenient way to do this job while avoiding the for loop?
The for loop method I used:</p>
<pre><code>record = []
for i in range(a.shape[0]):
    if (a[i, 0], a[i, 1]) in b:
        record.append(i)
</code></pre>
|
<p>You can use filter:</p>
<pre><code>In [8]: a = np.array([[1,2],
[3,4],
[6,4],
[5,3],
[3,5]])
In [9]: b = {(1,2),(6,4)}
In [10]: filter(lambda x: tuple(a[x]) in b, range(len(a)))
Out[10]: [0, 2]
</code></pre>
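<p>A list-comprehension equivalent (a sketch on my part; note that in Python 3 <code>filter</code> returns an iterator, so you would need to wrap it in <code>list</code>):</p>
<pre><code>record = [i for i, row in enumerate(a) if tuple(row) in b]
# record == [0, 2]
</code></pre>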
|
python|numpy|set|vectorization|lookup
| 1
|
1,405
| 19,456,239
|
Convert python list with None values to numpy array with nan values
|
<p>I am trying to convert a list that contains numeric values and <code>None</code> values to <code>numpy.array</code>, such that <code>None</code> is replaced with <code>numpy.nan</code>.</p>
<p>For example:</p>
<pre><code>my_list = [3,5,6,None,6,None]
# My desired result:
my_array = numpy.array([3,5,6,np.nan,6,np.nan])
</code></pre>
<p>Naive approach fails:</p>
<pre><code>>>> my_list
[3, 5, 6, None, 6, None]
>>> np.array(my_list)
array([3, 5, 6, None, 6, None], dtype=object) # very limited
>>> _ * 2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
>>> my_array # normal array can handle these operations
array([ 3., 5., 6., nan, 6., nan])
>>> my_array * 2
array([ 6., 10., 12., nan, 12., nan])
</code></pre>
<p>What is the best way to solve this problem?</p>
|
<p>You simply have to explicitly declare the data type:</p>
<pre><code>>>> my_list = [3, 5, 6, None, 6, None]
>>> np.array(my_list, dtype=np.float)
array([ 3., 5., 6., nan, 6., nan])
</code></pre>
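<p>A small note added here (not part of the original answer): in recent NumPy versions the <code>np.float</code> alias has been removed, but the plain built-in <code>float</code> or <code>np.float64</code> behaves the same way:</p>
<pre><code>import numpy as np

my_list = [3, 5, 6, None, 6, None]
my_array = np.array(my_list, dtype=float)   # None becomes nan
</code></pre>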
|
python|numpy
| 55
|
1,406
| 12,957,593
|
Pandas: reshape data with duplicate row names to columns
|
<p>I have a data set that's sort of like this (first lines shown):</p>
<pre><code>Sample Detector Cq
P_1 106 23.53152
P_1 106 23.152458
P_1 106 23.685083
P_1 135 24.465698
P_1 135 23.86892
P_1 135 23.723469
P_1 17 22.524242
P_1 17 20.658733
P_1 17 21.146122
</code></pre>
<p>Both "Sample" and "Detector" columns contain duplicated values ("Cq" is unique): to be precise, each "Detector" appears 3 times for each sample, because it's a replicate in the data.</p>
<p>What I need to do is to:</p>
<ul>
<li>Reshape the table so that the columns contain Samples and rows Detectors</li>
<li>Rename the duplicate columns so that I know which replicate is it</li>
</ul>
<p>I thought that <code>DataFrame.pivot</code> would do the trick, but it fails because of the duplicate data. What would be the best approach? Rename the duplicates, then reshape, or is there a better option?</p>
<p>EDIT: I thought it over and I think it's better to state the purpose. I need to store, for each "Sample", the mean and standard deviation of the "Cq" values of each "Detector". </p>
|
<p>It looks like what you may be looking for is a hierarchical indexed dataframe<a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex"> [link]</a>.</p>
<p>Would something like this work?</p>
<pre><code>#build a sample dataframe
import numpy as np
import pandas
a = ['P_1']*9
b = [106,106,106,135,135,135,17,17,17]
c = np.random.randint(1,100,9)
df = pandas.DataFrame(data=list(zip(a,b,c)), columns=['sample','detector','cq'])
#add a repetition number column
df['rep_num'] = [1,2,3]*(len(df)//3)
#Convert to a multi-indexed DF
df_multi = df.set_index(['sample','detector','rep_num'])
#--------------Resulting Dataframe---------------------
cq
sample detector rep_num
P_1 106 1 97
2 83
3 81
135 1 46
2 92
3 89
17 1 58
2 26
3 75
</code></pre>
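<p>For the stated purpose in the question (mean and standard deviation per Sample/Detector), a possible follow-up sketch, assuming the <code>df</code> built above:</p>
<pre><code>stats = df.groupby(['sample', 'detector'])['cq'].agg(['mean', 'std'])
</code></pre>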
|
python|pandas
| 5
|
1,407
| 13,215,525
|
How to extend an array in-place in Numpy?
|
<p>Currently, I have some code like this</p>
<pre><code>import numpy as np
ret = np.array([])
for i in range(100000):
    tmp = get_input(i)
    ret = np.append(ret, np.zeros(len(tmp)))
    ret = np.append(ret, np.ones(fixed_length))
</code></pre>
<p>I think this code is <strong>not efficient</strong>, as <code>np.append</code> needs to return a copy of the array instead of <strong>modifying ret in-place</strong>.</p>
<p>I was wondering whether I can use the <code>extend</code> for a numpy array like this:</p>
<pre><code>import numpy as np
from somewhere import np_extend
ret = np.array([])
for i in range(100000):
    tmp = get_input(i)
    np_extend(ret, np.zeros(len(tmp)))
    np_extend(ret, np.ones(fixed_length))
</code></pre>
<p>So that the <code>extend</code> would be much more efficient.
Does anyone have ideas about this?
Thanks!</p>
|
<p>Imagine a numpy array as occupying one contiguous block of memory. Now imagine other objects, say other numpy arrays, which are occupying the memory just to the left and right of our numpy array. There would be no room to append to or extend our numpy array. The underlying data in a numpy array always occupies a <em>contiguous</em> block of memory.</p>
<p>So any request to append to or extend our numpy array can only be satisfied by allocating a whole new larger block of memory, copying the old data into the new block and then appending or extending.</p>
<p>So:</p>
<ol>
<li>It will not occur in-place.</li>
<li>It will not be efficient.</li>
</ol>
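<p>A common workaround (a sketch of my own, not part of the answer above): accumulate the pieces in a plain Python list and concatenate once at the end, so only one large allocation and copy happens:</p>
<pre><code>import numpy as np

chunks = []
for i in range(100000):
    tmp = get_input(i)                    # as in the question
    chunks.append(np.zeros(len(tmp)))
    chunks.append(np.ones(fixed_length))
ret = np.concatenate(chunks)              # single allocation and copy at the end
</code></pre>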
|
python|arrays|numpy|scipy
| 69
|
1,408
| 29,287,943
|
pandas groupby for multiple data frames/files at once
|
<p>I have multiple huge tsv files that I'm trying to process using pandas. I want to group by 'col3' and 'col5'. I've tried this:</p>
<pre><code>import pandas as pd
df = pd.read_csv('filename.txt', sep = "\t")
g2 = df.drop_duplicates(['col3', 'col5'])
g3 = g2.groupby(['col3', 'col5']).size().sum(level=0)
print g3
</code></pre>
<p>It works fine so far and prints an output like this:</p>
<pre><code>yes 2
no 2
</code></pre>
<p>I'd like to be able to aggregate the output from multiple files, i.e., to be able to group by these two columns in all the files at once and print one common output with total number of occurrences of 'yes' or 'no' or whatever that attribute could be. In other words, I'd now like to use groupby on multiple files at once. And if a file doesn't have one of these columns, it should be skipped and should go to the next file.</p>
|
<p>This is a nice use case for <a href="http://blaze.pydata.org" rel="noreferrer"><code>blaze</code></a>.</p>
<p>Here's an example using a couple of reduced files from the <a href="http://www.andresmh.com/nyctaxitrips/" rel="noreferrer">nyctaxi dataset</a>. I've purposely split a single large file into two files of 1,000,000 lines each:</p>
<pre class="lang-python prettyprint-override"><code>In [16]: from blaze import Data, compute, by
In [17]: ls
trip10.csv trip11.csv
In [18]: d = Data('*.csv')
In [19]: expr = by(d[['passenger_count', 'medallion']], avg_time=d.trip_time_in_secs.mean())
In [20]: %time result = compute(expr)
CPU times: user 3.22 s, sys: 393 ms, total: 3.61 s
Wall time: 3.6 s
In [21]: !du -h *
194M trip10.csv
192M trip11.csv
In [22]: len(d)
Out[22]: 2000000
In [23]: result.head()
Out[23]:
passenger_count medallion avg_time
0 0 08538606A68B9A44756733917323CE4B 0
1 0 0BB9A21E40969D85C11E68A12FAD8DDA 15
2 0 9280082BB6EC79247F47EB181181D1A4 0
3 0 9F4C63E44A6C97DE0EF88E537954FC33 0
4 0 B9182BF4BE3E50250D3EAB3FD790D1C9 14
</code></pre>
<p><strong>Note:</strong> This will perform the computation with pandas, using pandas' own chunked CSV reader. If your files are in the GB range you're better off converting to a format such as <a href="http://bcolz.blosc.org/" rel="noreferrer">bcolz</a> or <a href="https://pytables.github.io/" rel="noreferrer">PyTables</a>, as these are binary formats and designed for data analysis on huge files. CSVs are just blobs of text with conventions.</p>
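<p>If you want to stay within plain pandas, a sketch (my own suggestion, not part of the answer above) that loops over the files, skips those missing the required columns, and aggregates once; this loads each file fully, so it is only practical if they fit in memory:</p>
<pre><code>import glob
import pandas as pd

frames = []
for path in glob.glob('*.txt'):
    df = pd.read_csv(path, sep='\t')
    if not {'col3', 'col5'}.issubset(df.columns):
        continue                              # skip files without the needed columns
    frames.append(df[['col3', 'col5']])

combined = pd.concat(frames, ignore_index=True).drop_duplicates(['col3', 'col5'])
print(combined.groupby(['col3', 'col5']).size().groupby(level=0).sum())
</code></pre>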
|
python|csv|pandas|group-by
| 8
|
1,409
| 29,291,279
|
Group and average NumPy matrix
|
<p>Say I have an arbitrary numpy matrix that looks like this:</p>
<pre><code>arr = [[ 6.0 12.0 1.0]
[ 7.0 9.0 1.0]
[ 8.0 7.0 1.0]
[ 4.0 3.0 2.0]
[ 6.0 1.0 2.0]
[ 2.0 5.0 2.0]
[ 9.0 4.0 3.0]
[ 2.0 1.0 4.0]
[ 8.0 4.0 4.0]
[ 3.0 5.0 4.0]]
</code></pre>
<p>What would be an efficient way of averaging rows that are grouped by their third column number?</p>
<p>The expected output would be:</p>
<pre><code>result = [[ 7.0 9.33 1.0]
[ 4.0 3.0 2.0]
[ 9.0 4.0 3.0]
[ 4.33 3.33 4.0]]
</code></pre>
|
<p>A compact solution is to use <a href="https://github.com/EelcoHoogendoorn/Numpy_arraysetops_EP" rel="noreferrer">numpy_indexed</a> (disclaimer: I am its author), which implements a fully vectorized solution:</p>
<pre><code>import numpy_indexed as npi
npi.group_by(arr[:, 2]).mean(arr)
</code></pre>
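<p>A NumPy-only sketch (an alternative I am adding here, not part of the answer above), using <code>np.unique</code> to label the groups and <code>np.add.at</code> to accumulate the row sums:</p>
<pre><code>import numpy as np

arr = np.asarray(arr)
keys, inv = np.unique(arr[:, 2], return_inverse=True)
sums = np.zeros((keys.size, arr.shape[1]))
np.add.at(sums, inv, arr)                     # accumulate rows per group
result = sums / np.bincount(inv)[:, None]     # divide by the group sizes
</code></pre>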
|
python|numpy|matrix|grouping|average
| 9
|
1,410
| 33,728,831
|
Remove brackets from an array in Python
|
<p>Here I want to ask how to remove the brackets from an array in Python. This is my code:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.read_csv('data.csv', index_col=0, header=0)
X = np.array(df.ix[:,0:29])
Y = np.array(df.ix[:,29:30])
Y
Out[55]:
array([[ 1],
[ 2],
[ 3],
...,
[35],
[36],
[37]], dtype=int64)
</code></pre>
<p>The desired output is following below:</p>
<pre><code>Y
Out[55]:
array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10,....])
</code></pre>
<p>I already tried to use <code>np.array</code>, however it did not work. </p>
|
<p>Check if it works</p>
<pre><code>X = np.array(df.ix[:,0:29])
Y = np.array(df.ix[:,29:30])
Y = Y.ravel()  # flatten the (n, 1) column into a 1-D array
</code></pre>
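<p>An alternative sketch (a note added here; <code>.ix</code> is removed in modern pandas, where <code>.iloc</code> is the positional equivalent): selecting the column as a Series gives a one-dimensional result directly:</p>
<pre><code>Y = df.iloc[:, 29].values   # 1-D array, no nested brackets
</code></pre>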
|
python|numpy|pandas
| 4
|
1,411
| 33,622,481
|
"The truth value of a Series is ambiguous. " Series vs Element Fuction
|
<p>I have a dataframe and I have written the following function to populate a new column:</p>
<pre><code>df = pd.DataFrame(np.random.randn(10, 2), columns=['a', 'b'])
def perc(a,b):
if a/b < 0:
n = 0
elif a/b > 1:
n = 1
else:
n = a/b
return n
df['c']=perc(df['a'],df['b'])
df[1:10]
</code></pre>
<p>It's supposed to calculate a percent column. Here is the error I am getting:</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>I understand that it has to do with <code>a</code> and <code>b</code> being Series instead of individual elements. But how do I fix it?</p>
|
<p>What you're actually asking for is a bit hard to describe in words, but the following example captures it:</p>
<blockquote>
<p>If <code>a</code> is the series <code>[-1, 1, 3, 5]</code> and <code>b</code> is <code>[2, 2, 3, 3]</code>, then <code>a/b</code> will be a series like <code>[-0.5, 0.5, 1, 1.6666667]</code>, and what you ultimately want to return is <code>[0, 0.5, 1, 1]</code>. </p>
</blockquote>
<p>You can "cap values at 1" for a series by taking the minimum of that series with the series of all ones. Similar, you can ensure nothing is below 0 by taking the maximum of a series with the series of all zeroes. <code>numpy</code> lets you do this easily:</p>
<pre><code>def perc(a,b):
    length = len(a)
    return np.maximum(np.minimum(np.ones(length), a/b), np.zeros(length))
</code></pre>
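<p>A more compact equivalent (my own sketch, not from the original answer), using element-wise clipping to the [0, 1] range:</p>
<pre><code>def perc(a, b):
    return (a / b).clip(0, 1)   # cap each ratio between 0 and 1

df['c'] = perc(df['a'], df['b'])
</code></pre>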
|
python|pandas|series
| 0
|
1,412
| 33,576,758
|
Creating a pandas DataFrame with counts of categorical data
|
<p>I have a bunch of survey data broken down by number of responses for each choice for each question (multiple-choice questions). I have one of these summaries for each of several different courses, semesters, sections, etc. Unfortunately, all of my data was given to me in PDF printouts and I cannot get the digital data. On the bright side, that means I have free reign to format my data file however I need to so that I can import it into Pandas.</p>
<p>How do I import my data into Pandas, preferably without needing to reproduce it line-by-line (one line for each entry represented by my summary).</p>
<h1>The data</h1>
<p>My survey comprises several multiple-choice questions. I have the number of respondents who chose each option for each question. Something like:</p>
<pre class="lang-py prettyprint-override"><code>Course Number: 100
Semester: Spring
Section: 01
Question 1
----------
Option A: 27
Option B: 30
Option C: 0
Option D: 2
Question 2
----------
Option X: 20
Option Y: 10
</code></pre>
<p>So essentially I have the <code>.value_counts()</code> results if my data was already in Pandas. Note that the questions do not always have the same number of options (categories), and they do not always have the same number of respondents. I will have similar results for multiple course numbers, semesters, and sections.</p>
<p>The categories <code>A</code>, <code>B</code>, <code>C</code>, etc. are just placeholders here to represent the labels for each response category in my actual data.</p>
<p>Also, I have to manually input all of this into something, so I am not worried about reading the specific file format above, it just represents what I have on the actual printouts in front of me.</p>
<h1>The goal</h1>
<p>I would like to recreate the response data in Pandas by telling Pandas how many of each response category I have for each question. Basically I want an Excel file or CSV that looks like the response data above, and a Pandas DataFrame that looks like:</p>
<pre class="lang-py prettyprint-override"><code>Course Number Semester Section Q1 Q2
100 Spring 01 A X
100 Spring 01 A X
... (20 identical entries)
100 Spring 01 A Y
100 Spring 01 A Y
... (7 of these)
100 Spring 01 B Y
100 Spring 01 B Y
100 Spring 01 B Y
100 Spring 01 B N/A (out of Q2 responses)
...
100 Spring 01 D N/A
100 Spring 01 D N/A
</code></pre>
<p>I should note that I am not reproducing the <em>actual</em> response data here, because I have no way of knowing that someone who chose option <code>D</code> for question 1 didn't also choose option <code>X</code> for question 2. I just want the number of each result to show up the same, and for my <code>df.value_counts()</code> output to basically give me what my summary already says.</p>
<h1>Attempts so far</h1>
<p>So far the best I can come up with is actually reproducing each response as its own row in an excel file, and then importing this file and converting to categories:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.read_excel("filename")
df["Q1"] = df["Q1"].astype("category")
df["Q2"] = df["Q2"].astype("category")
</code></pre>
<p>There are a couple of problems with this. First, I have thousands of responses, so creating all of those rows is going to take way too long. I would much prefer the compact approach of just recording directly how many of each response I have and then importing that into Pandas.</p>
<p>Second, this becomes a bit awkward when I do not have the same number of responses for every question. At first, to save time on entering every response, I was only putting a value in a column when that value was different than the previous row, and then using <code>.ffill()</code> to forward-fill the values in the Pandas DataFrame. The issue with that is that all <code>NaN</code> values get filled, so I cannot have different numbers of responses for different questions.</p>
<p>I am not married to the idea of recording the data in Excel first, so if there is an easier way using something else I am all ears.</p>
<p>If there is some other way of looking at this problem that makes more sense than what I am attempting here, I am open to hearing about that as well.</p>
<h1>Edit: kind of working</h1>
<p>I switched gears a bit and made an Excel file where each sheet is a single survey summary, the first few columns identify the <code>Course</code>, <code>Semester</code>, <code>Section</code>, <code>Year</code>, etc., and then I have a column of possible <code>Response</code> categories. The rest of the file comprises a column for each question, and then the number of responses in each row corresponding to the responses that match that question. I then import each sheet and concatenate:</p>
<pre class="lang-py prettyprint-override"><code>df = [pd.read_excel("filename", sheetname=i, index_col=range(0,7)) for i in range(1,3)]
df = pd.concat(df)
</code></pre>
<p>This seems to work, but I end up with a really ugly table (lots of NaN's for all of the responses that don't actually correspond to each question). I can kind of get around this for plotting the results for any one question with something like:</p>
<pre class="lang-py prettyprint-override"><code>df_grouped = df.groupby("Response", sort=False).aggregate(sum) # group according to response
df_grouped["Q1"][np.isfinite(df_grouped["Q1"])].plot(kind="bar") # only plot responses that have values
</code></pre>
<p>I feel like there must be a better way to do this, maybe with multiple indices or some kind of 3D data structure...</p>
|
<p>One hacky way to get the information out is to first split by ----- and then use regex.</p>
<p>For each course, do something like the following:</p>
<pre><code>In [11]: s
Out[11]: 'Semester: Spring\nSection: 01\nQuestion 1\n----------\nOption A: 27\nOption B: 30\nOption C: 0\nOption D: 2\n\nQuestion 2\n----------\nOption A: 20\nOption B: 10'
In [12]: blocks = s.split("----------")
</code></pre>
<p>Parse out the information from the first block, use regex or just split:</p>
<pre><code>In [13]: semester = re.match("Semester: (.*)", blocks[0]).groups()[0]
In [14]: semester
Out[14]: 'Spring'
</code></pre>
<p>To parse the option info from each block:</p>
<pre><code>def parse_block(lines):
    d = {}
    for line in lines:
        m = re.match("Option ([^:]+): (\d+)", line)
        if m:
            d[m.groups()[0]] = int(m.groups()[1])
    return d

In [20]: ds = [parse_block(x.splitlines()) for x in blocks[1:]]

In [21]: ds
Out[21]: [{'A': 27, 'B': 30, 'C': 0, 'D': 2}, {'A': 20, 'B': 10}]
</code></pre>
<p>You can similarly pull out the question number (if you don't know they're sequential):</p>
<pre><code>In [22]: questions = [int(re.match(".*Question (\d+)", x, re.DOTALL).groups()[0]) for x in blocks[:-1]]
In [23]: questions
Out[23]: [1, 2]
</code></pre>
<p>and zip these together:</p>
<pre><code>In [31]: dict(zip(questions, ds))
Out[31]: {1: {'A': 27, 'B': 30, 'C': 0, 'D': 2}, 2: {'A': 20, 'B': 10}}
In [32]: pd.DataFrame(dict(zip(questions, ds)))
Out[32]:
1 2
A 27 20
B 30 10
C 0 NaN
D 2 NaN
</code></pre>
<p><em>I'd put these in another dict of (course, semester, section) -> DataFrame and then concat and work out where to go from the big MultiIndex dataFrame...</em></p>
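<p>A sketch of that final assembly step (the keys and level names here are my assumptions, not part of the original answer):</p>
<pre><code>frames = {('100', 'Spring', '01'): pd.DataFrame(dict(zip(questions, ds)))}
big = pd.concat(frames)   # the dict keys become outer levels of a MultiIndex
</code></pre>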
|
python|excel|pandas|categorical-data
| 0
|
1,413
| 23,628,503
|
Finding indices in Python lists efficiently (in comparison to MATLAB)
|
<p>I am having difficulties finding an efficient solution for finding indices in Python lists. All the solutions I have tested so far are slower than the 'find' function in MATLAB. I have only just started to use Python (therefore, I am not very experienced). </p>
<p>In MATLAB I would use the following:</p>
<pre><code>a = linspace(0, 1000, 1000); % monotonically increasing vector
b = 1000 * rand(1, 100); % 100 points I want to find in a
for i = 1 : numel(b)
indices(i) = find(b(i) <= a, 1); % find the first index where b(i) <= a
end
</code></pre>
<p>If I use MATLAB's arrayfun() I can speed this process up a little bit.
In Python I tried several possibilities. I used </p>
<pre><code>for i in xrange(0, len(b)):
    tmp = numpy.where(b[i] <= a)
    indices.append(tmp[0][0])
</code></pre>
<p>which takes a lot of time, especially if a is quite big.
If b is sorted then I can use</p>
<pre><code>for i in xrange(0, len(b)):
    if(b[curr_idx] <= a[i]):
        indices.append(i)
        curr_idx += 1
        if(curr_idx >= len(b)):
            return indices
            break
</code></pre>
<p>This is much quicker than the numpy.where() solution because I only have to search through the list a once, but this is still slower than the MATLAB solution. </p>
<p>Could anyone suggest a better / more efficient solution?
Thanks in advance. </p>
|
<p>Try <code>numpy.searchsorted</code>:</p>
<pre><code>>> a = np.array([0, 1, 2, 3, 4, 5, 6, 7])
>> b = np.array([1, 2, 4, 3, 1, 0, 2, 9])
% sorting b "into" a
>> np.searchsorted(a, b, side='right')-1
array([1, 2, 4, 3, 1, 0, 2, 7])
</code></pre>
<p>You might have to apply a little special treatment for values in b, that are outside the range of a - such as the 9 in the above example.
Despite that, this should be faster than any loop-based method.</p>
<p>As an aside:
Similarly, <code>histc</code> in MATLAB will be much faster than the loop.</p>
<p><strong>EDIT:</strong></p>
<p>If you want to get the index where <code>b</code> is closest to <code>a</code>, you should be able to use the same code, simply with a modified a:</p>
<pre><code>>> a_mod = 0.5*(a[:-1] + a[1:]) % take the centers between the elements in a
>> np.searchsorted(a_mod, np.array([0.9, 2.1, 4.2, 2.9, 1.1]), side='right')
array([1, 2, 4, 3, 1])
</code></pre>
<p>Note that you can drop the <code>-1</code> since <code>a_mod</code> has one element less than <code>a</code>.</p>
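<p>For the exact semantics in the question (the first index where <code>b(i) <= a</code>), a sketch of my own using <code>side='left'</code>:</p>
<pre><code>import numpy as np

a = np.linspace(0, 1000, 1000)
b = 1000 * np.random.rand(100)
indices = np.searchsorted(a, b, side='left')   # first position where a[idx] >= b[i]
</code></pre>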
|
python|matlab|list|numpy
| 5
|
1,414
| 15,194,468
|
How to generate n dimensional random variables in a specific range in python
|
<p>I want to generate uniform random variables in the range <code>[-10,10]</code> of various dimensions in Python: 2, 3, 4, 5, ... dimensions. </p>
<p>I tried random.uniform(-10,10), but that is only one-dimensional. I do not know how to do it for n dimensions.
By 2 dimensions I mean,</p>
<pre><code>[[1 2], [3 4]...]
</code></pre>
|
<p>Since <code>numpy</code> is tagged, you can use the random functions in <code>numpy.random</code>:</p>
<pre><code>>>> import numpy as np
>>> np.random.uniform(-10,10)
7.435802529756465
>>> np.random.uniform(-10,10,size=(2,3))
array([[-0.40137954, -1.01510912, -0.41982265],
[-8.12662965, 6.25365713, -8.093228 ]])
>>> np.random.uniform(-10,10,size=(1,5,1))
array([[[-3.31802611],
[ 4.60814984],
[ 1.82297046],
[-0.47581074],
[-8.1432223 ]]])
</code></pre>
<p>and modify the <code>size</code> parameter to suit your needs.</p>
|
python|numpy|scipy
| 11
|
1,415
| 15,072,626
|
Get group id back into pandas dataframe
|
<p>For dataframe</p>
<pre><code>In [2]: df = pd.DataFrame({'Name': ['foo', 'bar'] * 3,
...: 'Rank': np.random.randint(0,3,6),
...: 'Val': np.random.rand(6)})
...: df
Out[2]:
Name Rank Val
0 foo 0 0.299397
1 bar 0 0.909228
2 foo 0 0.517700
3 bar 0 0.929863
4 foo 1 0.209324
5 bar 2 0.381515
</code></pre>
<p>I'm interested in grouping by Name and Rank and possibly getting aggregate values</p>
<pre><code>In [3]: group = df.groupby(['Name', 'Rank'])
In [4]: agg = group.agg(sum)
In [5]: agg
Out[5]:
Val
Name Rank
bar 0 1.839091
2 0.381515
foo 0 0.817097
1 0.209324
</code></pre>
<p>But I would like to get a field in the original <code>df</code> that contains the group number for that row, like</p>
<pre><code>In [13]: df['Group_id'] = [2, 0, 2, 0, 3, 1]
In [14]: df
Out[14]:
Name Rank Val Group_id
0 foo 0 0.299397 2
1 bar 0 0.909228 0
2 foo 0 0.517700 2
3 bar 0 0.929863 0
4 foo 1 0.209324 3
5 bar 2 0.381515 1
</code></pre>
<p>Is there a good way to do this in pandas?</p>
<p>I can get it with python, </p>
<pre><code>In [16]: from itertools import count
In [17]: c = count()
In [22]: group.transform(lambda x: c.next())
Out[22]:
Val
0 2
1 0
2 2
3 0
4 3
5 1
</code></pre>
<p>but it's pretty slow on a large dataframe, so I figured there may be a better built in pandas way to do this. </p>
|
<p>A lot of handy things are stored in the <code>DataFrameGroupBy.grouper</code> object. For example:</p>
<pre><code>>>> df = pd.DataFrame({'Name': ['foo', 'bar'] * 3,
'Rank': np.random.randint(0,3,6),
'Val': np.random.rand(6)})
>>> grouped = df.groupby(["Name", "Rank"])
>>> grouped.grouper.
grouped.grouper.agg_series grouped.grouper.indices
grouped.grouper.aggregate grouped.grouper.labels
grouped.grouper.apply grouped.grouper.levels
grouped.grouper.axis grouped.grouper.names
grouped.grouper.compressed grouped.grouper.ngroups
grouped.grouper.get_group_levels grouped.grouper.nkeys
grouped.grouper.get_iterator grouped.grouper.result_index
grouped.grouper.group_info grouped.grouper.shape
grouped.grouper.group_keys grouped.grouper.size
grouped.grouper.groupings grouped.grouper.sort
grouped.grouper.groups
</code></pre>
<p>and so:</p>
<pre><code>>>> df["GroupId"] = df.groupby(["Name", "Rank"]).grouper.group_info[0]
>>> df
Name Rank Val GroupId
0 foo 0 0.302482 2
1 bar 0 0.375193 0
2 foo 2 0.965763 4
3 bar 2 0.166417 1
4 foo 1 0.495124 3
5 bar 2 0.728776 1
</code></pre>
<p>There may be a nicer alias for <code>grouper.group_info[0]</code> lurking around somewhere, but this should work, anyway.</p>
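<p>A note added here (not part of the original answer): newer pandas versions expose this directly as <code>GroupBy.ngroup()</code>, which returns the group number for each row:</p>
<pre><code>df["GroupId"] = df.groupby(["Name", "Rank"]).ngroup()
</code></pre>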
|
python|pandas|group-by
| 39
|
1,416
| 29,730,488
|
Setting columns for an empty pandas dataframe
|
<p>This is something that I'm confused about...</p>
<pre><code>import pandas as pd
# this works fine
df1 = pd.DataFrame(columns=['A','B'])
# but let's say I have this
df2 = pd.DataFrame([])
# this doesn't work!
df2.columns = ['A','B']
# ValueError: Length mismatch: Expected axis has 0 elements, new values have 2 elements
</code></pre>
<p>Why doesn't this work? What can I do instead? Is the only way to do something like this?</p>
<pre><code>if len(df2.index) == 0:
    df2 = pd.DataFrame(columns=['A','B'])
else:
    df2.columns = ['A','B']
</code></pre>
<p>There must be a more elegant way.</p>
<p>Thank you for your help!</p>
<h1>Update 4/19/2015</h1>
<p>Someone asked why do this at all:</p>
<pre><code>df2 = pd.DataFrame([])
</code></pre>
<p>The reason is that actually I'm doing something like this:</p>
<pre><code>df2 = pd.DataFrame(data)
</code></pre>
<p>... where data could be empty list of lists, but in most cases it is not. So yes, I could do:</p>
<pre><code>if len(data) > 0:
    df2 = pd.DataFrame(data, columns=['A','B'])
else:
    df2 = pd.DataFrame(columns=['A','B'])
</code></pre>
<p>... but this doesn't seem very DRY (and certainly not concise).</p>
<p>Let me know if you have any questions. Thanks!</p>
|
<p>Update: <a href="https://github.com/pydata/pandas/pull/9939" rel="nofollow">as of Pandas version 0.16.1</a>, passing <code>data = []</code> works:</p>
<pre><code>In [85]: df = pd.DataFrame([], columns=['a', 'b', 'c'])
In [86]: df
Out[86]:
Empty DataFrame
Columns: [a, b, c]
Index: []
</code></pre>
<p>so the best solution is to update your version of Pandas.</p>
<hr>
<p>If <code>data</code> is an empty list of lists, then</p>
<pre><code>data = [[]]
</code></pre>
<p>But then <code>len(data)</code> would equal 1, so <code>len(data) > 0</code> is not the right condition to check to see if <code>data</code> is an empty list of lists.</p>
<p>There are a number of values for <code>data</code> which could make </p>
<pre><code>pd.DataFrame(data, columns=['A','B'])
</code></pre>
<p>raise an Exception. An AssertionError or ValueError is raised if <code>data</code> equals <code>[]</code> (no data), <code>[[]]</code> (no columns), <code>[[0]]</code> (one column) or <code>[[0,1,2]]</code> (too many columns). So instead of trying to check for all of these I think it is safer and easier to use <code>try..except</code> here:</p>
<pre><code>columns = ['A', 'B']
try:
    df2 = pd.DataFrame(data, columns=columns)
except (AssertionError, ValueError):
    df2 = pd.DataFrame(columns=columns)
</code></pre>
<p>It would be nice if there is a DRY-er way to write this, but given that it's the
<a href="https://github.com/pydata/pandas/blob/master/pandas/core/frame.py#L5029" rel="nofollow">caller's responsibility to check for this</a>, I don't see a better way.</p>
|
python|pandas
| 3
|
1,417
| 29,381,271
|
How to make array into array list in python
|
<p>from this array </p>
<pre><code>s = np.array([[35788, 41715, ... 34964],
[5047, 23529, ... 5165],
[12104, 33899, ... 11914],
[3646, 21031, ... 3814],
[8704, 7906, ... 8705]])
</code></pre>
<p>I have a loop like this</p>
<pre><code>end = []
for i in range(len(s)):
    for j in range(i, len(s)):
        out = mahalanobis(s[i], s[j], invcov)
        end.append(out)
print end
</code></pre>
<p>and I get this output:</p>
<pre><code>[0.0, 12.99, 5.85, 10.22, 3.95, 0.0, 5.12, 3.45, 4.10, 0.0, 5.05, 8.10, 0.0, 15.45, 0.0]
</code></pre>
<p>but I want the output to be like this:</p>
<pre><code>[[0.0, 12.99, 5.85, 10.22, 3.95],
[12.99, 0.0, 5.12, 3.45, 4.10],
[5.85, 5.12, 0.0, 5.05, 8.10],
[10.22, 3.45, 5.05, 0.0, 15.45],
[3.95, 4.10, 8.10, 15.45, 0.0]]
</code></pre>
|
<p>You need to loop differently in at least <strong>two</strong> ways:</p>
<pre><code>end = []
for s1 in s:
    end.append([mahalanobis(s1, s2, invcov) for s2 in s])
</code></pre>
<p>The most important thing is that the inner loop needs to be on the whole <code>s</code> again, else you will never get a square but <code>1 + 2 + ... + len(s)</code> items (15 in this case as <code>len(s)</code> is 5).</p>
<p>Next, the inner loop must be enclosed in a list, since you want a list of lists.</p>
<p>Less important but nice: I've changed the inner loop to a list comprehension; and I've changed both loops to be directly on <code>s</code> since there's really no reason to go over the indirection of looping over indices then using those indices to get the <code>s</code> items you <strong>care</strong> about.</p>
<p>So I made four changes in all, but the first two are what you really need to get the result you desire, the other two are just nice improvements:-).</p>
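<p>As an aside (my own addition, not from the answer above), SciPy can build the whole square distance matrix in one call, assuming <code>invcov</code> is the inverse covariance matrix:</p>
<pre><code>from scipy.spatial.distance import cdist

end = cdist(s, s, metric='mahalanobis', VI=invcov).tolist()
</code></pre>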
|
python|arrays|numpy
| 6
|
1,418
| 62,392,798
|
Tensorflow 2.x: How to assign convolution weights manually using numpy
|
<p>In tensorflow 1.x this can be done using a <a href="https://stackoverflow.com/questions/39555256/tensorflow-how-can-i-assign-numpy-pre-trained-weights-to-subsections-of-graph">graph and a session</a>, which is quite tedious.</p>
<p>Is there an easier way to manually assign pretrained weights to a specific convolution in tensorflow 2.x?</p>
|
<p>If you are working with Keras inside Tensorflow 2.x, every layer has a method called <code>set_weights</code> that you can use to substitute weights or assign new ones from Numpy arrays.</p>
<p>Say, for example, that you are doing knowledge distillation. Then you could assign the weights of the teacher to the student by:</p>
<p><code>conv.set_weights(teacher.convx.get_weights())</code></p>
<p>where <code>conv</code> is a particular layer of the student and <code>convx</code> the homologue of the teacher.</p>
<p>You can check the documentation for more details:</p>
<p><a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer#set_weights" rel="nofollow noreferrer">Documentation - set_weights()</a></p>
|
tensorflow|tensorflow2.0|onnx
| 1
|
1,419
| 62,327,529
|
Pandas DataFrame data types change unexpectedly
|
<p>I am working on a script that solves sudoku puzzles. I use a <code>pandas.DataFrame</code> for the sudoku itself and the numbers are integers.</p>
<p>When I check which numbers are possible in a box and multiple numbers fit the requirements, I put the numbers as a <code>list</code> within the box. Because of this, I need the <code>dtype</code> of all of the columns of the <code>DataFrame</code> to be <code>object</code>.</p>
<p>The problem is that at some point in my code, the <code>dtype</code> changes to <code>float64</code> unexpectedly.</p>
<p>Here, I make a copy of the <code>DataFrame</code> and I change the <code>list</code>s to <code>NaN</code>s to check the requirements:</p>
<pre class="lang-py prettyprint-override"><code>sudoku_copy = sudoku
for column in range(sudoku_copy.shape[1]):
sudoku_copy[column] = sudoku_copy[column].apply(
lambda x: x if str(x).isnumeric() else np.nan
)
</code></pre>
<p>I have to do this because later I use <code>isin()</code> to check whether a number is already in a column, row or subgrid, and this raises an error if there are <code>list</code>s in there.</p>
<p>I checked the <code>dtype</code> of <code>sudoku</code> right before and right after that statement and the problem is there. The <code>dtype</code> before is <code>object</code>, but after, it's <code>float64</code>. However, the statement only changes <code>sudoku_copy</code>, not <code>sudoku</code>, so I don't see why <code>sudoku</code> changes at all.</p>
|
<p>I have seen this issue in practice. It happens because you insert NaNs into your DataFrame, e.g.:</p>
<pre><code>df = pd.DataFrame([range(3), range(3)])
df.dtypes
</code></pre>
<p>Output:</p>
<pre><code>0 int64
1 int64
2 int64
dtype: object
</code></pre>
<p>Then:</p>
<pre><code>df.iloc[0,0] = np.nan
df.dtypes
</code></pre>
<p>Output:</p>
<pre><code>0 float64
1 int64
2 int64
dtype: object
</code></pre>
<p>If you want to preserve the original, then you should use the <code>copy()</code> method to create a separate copy:</p>
<pre><code>sudoku_copy = sudoku.copy()
</code></pre>
<p>That is because the <code>copy()</code> method creates a new object and the assignment from the original code creates a reference to the existing object.</p>
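<p>A small sketch of the difference (a hypothetical frame, added here just to illustrate the reference-vs-copy behaviour):</p>
<pre><code>import numpy as np
import pandas as pd

a = pd.DataFrame([[1, 2]])
b = a          # another name for the same object
c = a.copy()   # an independent copy

b.iloc[0, 0] = np.nan          # inserting NaN upcasts the column and also changes a
print(a.dtypes.tolist())       # first column is now float64 -- a was modified through b
print(c.dtypes.tolist())       # the copy still has int64 columns
</code></pre>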
|
python|python-3.x|pandas|dataframe
| 1
|
1,420
| 62,062,539
|
Transform with sum of values of the same column
|
<p>I have the following dataframe:-</p>
<pre><code>traffic_type date unique_visitors region total_views
desktop 01/04/2018 72 aug 50
mobileweb 01/04/2018 1 aug 60
total 01/04/2018 sum(mobileweb+desktop) aug 100
desktop 01/04/2018 75848907.6 world 20
mobileweb 01/04/2018 105737747.4 world 30
total 01/04/2018 sum(mobileweb+desktop) world 40
</code></pre>
<p>This might be a duplicate, so any link to similar questions would also help and I can
build the script along similar lines.
As you can see, the value I need to fill into the unique_visitors column is the sum of desktop and mobileweb,
provided they are in the same region and on the same date. The dataframe I need:</p>
<pre><code>traffic_type date unique_visitors region total_views
desktop 01/04/2018 72 aug 50
mobileweb 01/04/2018 1 aug 60
total 01/04/2018 73 aug 100
desktop 01/04/2018 75848907.6 world 20
mobileweb 01/04/2018 105737747.4 world 30
total 01/04/2018 181,586,655 world 40
</code></pre>
<p>Again, I am sorry if this is a duplicate; I am looking for reference links if not the exact solution.</p>
|
<p>You can go row by row, check, and sum as below: </p>
<pre class="lang-py prettyprint-override"><code>
import pandas as pd
df = pd.DataFrame([["desktop","01/04/2018",72,"aug",50],
["mobileweb","01/04/2018",1,"aug",60],
["total","01/04/2018","","aug",100],
["desktop","01/04/2018",75848907.6 ,"world",20],
["mobileweb","01/04/2018",105737747.4,"world",30],
["total","01/04/2018","","world",40]],
columns=["traffic_type","date","unique_visitors","region","total_views"])
for index, row in df.iterrows():
    if row["unique_visitors"] == "":
        df.at[index,"unique_visitors"] = df.loc[(df['date'] == row["date"]) & (df["region"] == row["region"]) & (df["unique_visitors"] != ""), 'unique_visitors'].sum()
print(df)
</code></pre>
<h2>Output</h2>
<pre class="lang-sh prettyprint-override"><code> traffic_type date unique_visitors region total_views
0 desktop 01/04/2018 72 aug 50
1 mobileweb 01/04/2018 1 aug 60
2 total 01/04/2018 73 aug 100
3 desktop 01/04/2018 7.58489e+07 world 20
4 mobileweb 01/04/2018 1.05738e+08 world 30
5 total 01/04/2018 1.81587e+08 world 40
</code></pre>
<p>For the final answer, you should go row by row and add these values to your original dataset.</p>
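<p>A more vectorized sketch (my own suggestion, not part of the answer above), assuming the missing totals start out as <code>NaN</code> rather than placeholder strings: a grouped <code>transform('sum')</code> gives every row its group's desktop+mobileweb sum, and <code>fillna</code> writes it only into the total rows:</p>
<pre><code>df['unique_visitors'] = df['unique_visitors'].fillna(
    df.groupby(['date', 'region'])['unique_visitors'].transform('sum')
)
</code></pre>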
|
python|python-3.x|pandas|numpy|dataframe
| 3
|
1,421
| 62,162,677
|
Slicing lists based on several parameters
|
<p><strong>I have posted a similar question several times but it was closed or redirected to another post that wasn't answering my question. I hope this time this post stays.</strong></p>
<p>I have a df with US census data. I grouped states with their correspondent counties. There's also another column with population ordered from high to low. The only thing I am trying to do is to slice it so I only get the three first most populous counties for each state. The final result should display the three most populous states based on their three most populous counties.</p>
<p>Here's so far my code:</p>
<pre><code>def answer_six():
    cdf = census_df[census_df['SUMLEV'] == 50]
    columns_to_keep = ['STNAME', 'CTYNAME', 'CENSUS2010POP']
    cdf = cdf[columns_to_keep]
    cdf = cdf.sort_values('CENSUS2010POP', ascending=False)
    cdf = cdf.groupby('STNAME')
    cdf = cdf.apply(pd.DataFrame.sort_values, 'CENSUS2010POP', ascending=False).head(100)
    # cdf = [i for i in cdf['STNAME'][:3] if all(cdf['STNAME']) == all(cdf['STNAME'])]
    return cdf

answer_six()
</code></pre>
<p>Here's a sample of my data:</p>
<pre><code> STNAME CTYNAME CENSUS2010POP
37 Alabama Jefferson County 658466
49 Alabama Mobile County 412992
45 Alabama Madison County 334811
51 Alabama Montgomery County 229363
59 Alabama Shelby County 195085
63 Alabama Tuscaloosa County 194656
2 Alabama Baldwin County 182265
41 Alabama Lee County 140247
52 Alabama Morgan County 119490
8 Alabama Calhoun County 118572
28 Alabama Etowah County 104430
35 Alabama Houston County 101547
48 Alabama Marshall County 93019
39 Alabama Lauderdale County 92709
58 Alabama St. Clair County 83593
42 Alabama Limestone County 82782
61 Alabama Talladega County 82291
22 Alabama Cullman County 80406
26 Alabama Elmore County 79303
25 Alabama DeKalb County 71109
64 Alabama Walker County 67023
5 Alabama Blount County 57322
1 Alabama Autauga County 54571
17 Alabama Colbert County 54428
36 Alabama Jackson County 53227
57 Alabama Russell County 52947
23 Alabama Dale County 50251
16 Alabama Coffee County 49948
24 Alabama Dallas County 43820
11 Alabama Chilton County 43643
... ... ... ... ...
80 Alaska Kenai Peninsula Borough 55400
79 Alaska Juneau City and Borough 31275
72 Alaska Bethel Census Area 17013
</code></pre>
|
<p>I am guessing what you are looking for is <code>cdf.groupby('STNAME').head(3)</code> after you sort the cdf?</p>
<p>P.S. perhaps your questions keep getting closed because of duplicate questions? like:
<a href="https://stackoverflow.com/questions/20069009/pandas-get-topmost-n-records-within-each-group">Pandas get topmost n records within each group
</a></p>
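<p>A sketch of the whole chain described in the question (column names taken from the question; this is my reading of the goal, not a verified solution):</p>
<pre><code>def answer_six():
    cdf = census_df[census_df['SUMLEV'] == 50][['STNAME', 'CTYNAME', 'CENSUS2010POP']]
    top3 = (cdf.sort_values('CENSUS2010POP', ascending=False)
               .groupby('STNAME').head(3))                    # 3 most populous counties per state
    totals = top3.groupby('STNAME')['CENSUS2010POP'].sum()
    return totals.nlargest(3).index.tolist()                  # 3 states with the largest totals
</code></pre>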
|
python|pandas|greatest-n-per-group
| 0
|
1,422
| 62,438,764
|
Adding rows at a specific column
|
<p>I have a dataframe-</p>
<pre><code> DATE CITY_NAME WEEK_NUM
0 2019-12-01 Bangalore 48
1 2019-12-01 Delhi 48
2 2019-12-02 Bangalore 49
3 2019-12-02 Delhi 49
</code></pre>
<p>Now I want to add a new column to the dataframe and fill it row by row in a loop. I did this:</p>
<pre><code>R1['Hi'] = 0
for a, b in zip(R1['DATE'], R1['CITY_NAME']):
    R1['Hi'].loc[-1] = random.randrange(10)
</code></pre>
<p>I am using a default value here first, but it's showing:</p>
<pre><code> DATE CITY_NAME WEEK_NUM Hi
0 2019-12-01 Bangalore 48 0
1 2019-12-01 Delhi 48 0
2 2019-12-02 Bangalore 49 0
3 2019-12-02 Delhi 49 0
</code></pre>
<p>Let's assume the output of the random calls is [2,3,1,6];</p>
<p>then the final output should be:</p>
<pre><code> DATE CITY_NAME WEEK_NUM Hi
0 2019-12-01 Bangalore 48 2
1 2019-12-01 Delhi 48 3
2 2019-12-02 Bangalore 49 1
3 2019-12-02 Delhi 49 6
</code></pre>
|
<pre><code>import random
df['Hi'] = [random.randrange(10) for _ in range(len(df))]
print(df)
</code></pre>
<p>Prints (for example):</p>
<pre><code> DATE CITY_NAME WEEK_NUM Hi
0 2019-12-01 Bangalore 48 2
1 2019-12-01 Delhi 48 9
2 2019-12-02 Bangalore 49 0
3 2019-12-02 Delhi 49 7
</code></pre>
<hr>
<p>Edit: (using indices)</p>
<pre><code>df['Hi'] = 0
for i, a, b in zip(df.index, df.DATE, df.CITY_NAME):
    df.loc[i,['Hi']] = '{} - {} - {}'.format(a, b, random.randrange(10))
</code></pre>
<p>Prints (for example):</p>
<pre><code> DATE CITY_NAME WEEK_NUM Hi
0 2019-12-01 Bangalore 48 2019-12-01 - Bangalore - 5
1 2019-12-01 Delhi 48 2019-12-01 - Delhi - 0
2 2019-12-02 Bangalore 49 2019-12-02 - Bangalore - 8
3 2019-12-02 Delhi 49 2019-12-02 - Delhi - 5
</code></pre>
<hr>
<p>EDIT2:</p>
<pre><code>df['Hi'] = 0
for i in df.index:
    df.loc[i,['Hi']] = random.randrange(10)
</code></pre>
<p>Prints:</p>
<pre><code> DATE CITY_NAME WEEK_NUM Hi
0 2019-12-01 Bangalore 48 7
1 2019-12-01 Delhi 48 1
2 2019-12-02 Bangalore 49 2
3 2019-12-02 Delhi 49 8
</code></pre>
|
python|pandas
| 1
|
1,423
| 62,274,746
|
How to convert mat file to numpy array
|
<p>I want to convert a mat file containing a 600 by 600 matrix to a numpy array, and I got the error "float() argument must be a string or a number, not 'dict'". I am wondering how I can fix it.</p>
<pre><code>import numpy as np
import scipy.io as sio
test = sio.loadmat('Y7.mat')
data=np.zeros((600,600))
data[:,:]=test
</code></pre>
|
<pre><code>In [240]: from scipy.io import loadmat
</code></pre>
<p>Using a test mat file that I have from past SO questions:</p>
<pre><code>In [241]: loadmat('test.mat')
Out[241]:
{'__header__': b'MATLAB 5.0 MAT-file Platform: posix, Created on: Sat Mar 21 20:09:30 2020',
'__version__': '1.0',
'__globals__': [],
'x': array([[ 0, 3, 6, 9],
[ 1, 4, 7, 10],
[ 2, 5, 8, 11]])}
</code></pre>
<p><code>loadmat</code> has given me a dictionary (see the {}?). This happens to have one array, with key 'x'. So I just access it with standard dictionary indexing:</p>
<pre><code>In [242]: _['x']
Out[242]:
array([[ 0, 3, 6, 9],
[ 1, 4, 7, 10],
[ 2, 5, 8, 11]])
</code></pre>
<p><code>x</code> was a variable in the MATLAB session that saved this file.</p>
<p>I don't know anything about what's in your file. <code>print(list(test.keys()))</code> can be used to see those keys/variable names.</p>
<p>I tried to get you to look at the <code>loadmat</code> docs, and see that it:</p>
<pre><code>Returns
-------
mat_dict : dict
    dictionary with variable names as keys, and loaded matrices as values.
</code></pre>
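<p>Applied to the file from the question, a sketch (the variable name stored inside <code>'Y7.mat'</code> is a guess here; use whatever key the print shows):</p>
<pre><code>import numpy as np
from scipy.io import loadmat

test = loadmat('Y7.mat')
print(list(test.keys()))          # find the variable name saved in the file
data = np.asarray(test['Y7'])     # 'Y7' is assumed; replace it with the real key
</code></pre>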
|
arrays|numpy|scipy
| 1
|
1,424
| 62,341,554
|
How to assign a value to a new column based on another column's string?
|
<p>I have trade data for a country with the following column names: </p>
<pre><code>Index(['TNVED', 'Product_Name', 'Export_Value', 'Import_Value', 'Year',
'Country', 'Region', 'Total_Export_XLS', 'Total_Import_XLS',
'Export_Sum', 'Import_Sum', 'Type', 'Nonraw_Type'],
</code></pre>
<p>Now I am trying to fill the 'Type' column with 'Raw' if a string from the raw_materials list matches the string in df['TNVED']. The data frame looks like:</p>
<pre><code> TNVED Product_Name ... Type Nonraw_Type
010190 Лошади, ослы, мулы и лошаки живые прочие ...
010210 Чистопородный племенной крупный рогатый скот ж... ...
010511 Куры домашние (callus domesticus) живые массой... ...
010639 Прочие хищные птицы ...
010690 Прочие животные живые ...
020110 Мясо крупного рогатого скота, свежее или охлаж... ...
020322 Свиные окорока, лопатки и отруба из них, необв... ...
020713 Части тушек и субпродукты домашних кур, свежие... ...
020714 Части тушек и субпродукты домашних кур, мороженые ...
021099 Прочие: мясо и пищевые мясные субпродукты : вк... ...
030351 Рыба мороженная,за искл.рыбн.филе и прочего мя... ...
030379 Прочая рыба за исключением печени, икры и моло... ...
030530 Рыбное филе, сушеное, соленое или в рассоле, н... ...
030559 Прочая рыба сушеная, несоленая, или соленая, н... ...
030749 Прочие каракатицы и кальмары ...
040221 Молоко и сливки сгущенные с содержанием жира б... ...
040221 Молоко и сливки сгущенные с содержанием жира б... ...
040299 Прочее молоко и сливки сгущенные с добавлением... ...
040299 Прочее молоко и сливки сгущенные с добавлением... ...
040510 Сливочное масло ...
040520 Молочные пасты ...
040630 Плавленные сыры, нетертые и не в порошке ...
040690 Прочие сыры ...
050400 Кишки, пузыри и желудки животных (кроме рыбьих... ...
050800 Кораллы и аналог.мат-лы,необработ.или первич.о... ...
051110 Сперма бычья ...
070320 Чеснок свежий или охлажденный ...
071010 Картофель сырой или вареный в воде или на пару... ...
071022 Фасоль (vigna spp., pнaseolus spp.), в стручка... ...
071029 Прочие бобовые овощи в стручках или очищенные,... ...
[30 rows x 13 columns]
</code></pre>
<p>and list</p>
<pre><code>raw_materials = ['010190', '071029', '04', '05', ...]  # strings
</code></pre>
<p>The TNVED column contains str values,
and I want the 'Type' column to take the value 'Raw' if a string value is equal to a string value from the list; otherwise it has to be empty. Furthermore, as you can see, the list contains some values where the length of the string is two; in this case, all string values starting with these prefixes have to change. Finally, according to the given data frame, I expect:</p>
<pre><code> TNVED Product_Name ... Type Nonraw_Type
010190 Лошади, ослы, мулы и лошаки живые прочие ... Raw
010210 Чистопородный племенной крупный рогатый скот ж... ...
010511 Куры домашние (callus domesticus) живые массой... ...
010639 Прочие хищные птицы ...
010690 Прочие животные живые ...
020110 Мясо крупного рогатого скота, свежее или охлаж... ...
020322 Свиные окорока, лопатки и отруба из них, необв... ...
020713 Части тушек и субпродукты домашних кур, свежие... ...
020714 Части тушек и субпродукты домашних кур, мороженые ...
021099 Прочие: мясо и пищевые мясные субпродукты : вк... ...
030351 Рыба мороженная,за искл.рыбн.филе и прочего мя... ...
030379 Прочая рыба за исключением печени, икры и моло... ...
030530 Рыбное филе, сушеное, соленое или в рассоле, н... ...
030559 Прочая рыба сушеная, несоленая, или соленая, н... ...
030749 Прочие каракатицы и кальмары ...
040221 Молоко и сливки сгущенные с содержанием жира б... ... Raw
040221 Молоко и сливки сгущенные с содержанием жира б... ... Raw
040299 Прочее молоко и сливки сгущенные с добавлением... ... Raw
040299 Прочее молоко и сливки сгущенные с добавлением... ... Raw
040510 Сливочное масло ... Raw
040520 Молочные пасты ... Raw
040630 Плавленные сыры, нетертые и не в порошке ... Raw
040690 Прочие сыры ... Raw
050400 Кишки, пузыри и желудки животных (кроме рыбьих... ... Raw
050800 Кораллы и аналог.мат-лы,необработ.или первич.о... ... Raw
051110 Сперма бычья ... Raw
070320 Чеснок свежий или охлажденный ...
071010 Картофель сырой или вареный в воде или на пару... ...
071022 Фасоль (vigna spp., pнaseolus spp.), в стручка... ...
071029 Прочие бобовые овощи в стручках или очищенные,... ... Raw
</code></pre>
<p>I have tried </p>
<pre><code>lens = len(raw_materials)-1
while(0 <= lens):
    str_len = len(raw_materials[lens])
    print(str_len)
    df['Type'] = np.where(df['TNVED'] == raw_materials[lens], 'Raw', '')
    lens -= 1
</code></pre>
|
<p>Use, <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>Series.str.contains</code></a> along with regex pattern to create a boolean mask, then substitute values in <code>Type</code> column using this mask:</p>
<pre><code>pattern = r'^(?:' + '|'.join(raw_materials) + ')'
m = df['TNVED'].str.contains(pattern)
df.loc[m, 'Type'] = 'Raw'
</code></pre>
<p>You can test the regex pattern <a href="https://regex101.com/r/gi5sRY/2" rel="nofollow noreferrer"><code>here</code></a>.</p>
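<p>An alternative sketch without regular expressions (my own suggestion, assuming <code>numpy</code> is imported as <code>np</code> as in the question): plain <code>str.startswith</code> accepts a tuple of prefixes:</p>
<pre><code>prefixes = tuple(raw_materials)
m = df['TNVED'].apply(lambda s: s.startswith(prefixes))
df['Type'] = np.where(m, 'Raw', '')
</code></pre>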
|
python-3.x|pandas
| 0
|
1,425
| 62,341,053
|
validation accuracy not improving
|
<p>No matter how many epochs I use or how I change the learning rate, my validation accuracy only stays in the 50s. I'm using 1 dropout layer right now, and if I use 2 dropout layers my max train accuracy is 40% with 59% validation accuracy. Currently, with 1 dropout layer, here are my results:</p>
<pre><code>2527/2527 [==============================] - 26s 10ms/step - loss: 1.2076 - accuracy: 0.7944 - val_loss: 3.0905 - val_accuracy: 0.5822
Epoch 10/20
2527/2527 [==============================] - 26s 10ms/step - loss: 1.1592 - accuracy: 0.7991 - val_loss: 3.0318 - val_accuracy: 0.5864
Epoch 11/20
2527/2527 [==============================] - 26s 10ms/step - loss: 1.1143 - accuracy: 0.8034 - val_loss: 3.0511 - val_accuracy: 0.5866
Epoch 12/20
2527/2527 [==============================] - 26s 10ms/step - loss: 1.0686 - accuracy: 0.8079 - val_loss: 3.0169 - val_accuracy: 0.5872
Epoch 13/20
2527/2527 [==============================] - 31s 12ms/step - loss: 1.0251 - accuracy: 0.8126 - val_loss: 3.0173 - val_accuracy: 0.5895
Epoch 14/20
2527/2527 [==============================] - 26s 10ms/step - loss: 0.9824 - accuracy: 0.8165 - val_loss: 3.0013 - val_accuracy: 0.5917
Epoch 15/20
2527/2527 [==============================] - 26s 10ms/step - loss: 0.9417 - accuracy: 0.8216 - val_loss: 2.9909 - val_accuracy: 0.5938
Epoch 16/20
2527/2527 [==============================] - 26s 10ms/step - loss: 0.9000 - accuracy: 0.8264 - val_loss: 3.0269 - val_accuracy: 0.5943
Epoch 17/20
2527/2527 [==============================] - 26s 10ms/step - loss: 0.8584 - accuracy: 0.8332 - val_loss: 3.0011 - val_accuracy: 0.5934
Epoch 18/20
2527/2527 [==============================] - 26s 10ms/step - loss: 0.8172 - accuracy: 0.8378 - val_loss: 2.9918 - val_accuracy: 0.5949
Epoch 19/20
2527/2527 [==============================] - 26s 10ms/step - loss: 0.7796 - accuracy: 0.8445 - val_loss: 2.9974 - val_accuracy: 0.5929
Epoch 20/20
2527/2527 [==============================] - 25s 10ms/step - loss: 0.7407 - accuracy: 0.8502 - val_loss: 3.0005 - val_accuracy: 0.5907
</code></pre>
<p>Again, the maximum it can reach is 59%. Here's the graph obtained:</p>
<p><a href="https://i.stack.imgur.com/di9AT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/di9AT.png" alt="enter image description here"></a></p>
<p>No matter how many changes I make, the validation accuracy reaches a maximum of 59%.
Here's my code:</p>
<pre><code>BATCH_SIZE = 64
EPOCHS = 20
LSTM_NODES = 256
NUM_SENTENCES = 3000
MAX_SENTENCE_LENGTH = 50
MAX_NUM_WORDS = 5000
EMBEDDING_SIZE = 100
encoder_inputs_placeholder = Input(shape=(max_input_len,))
x = embedding_layer(encoder_inputs_placeholder)
encoder = LSTM(LSTM_NODES, return_state=True)
encoder_outputs, h, c = encoder(x)
encoder_states = [h, c]
decoder_inputs_placeholder = Input(shape=(max_out_len,))
decoder_embedding = Embedding(num_words_output, LSTM_NODES)
decoder_inputs_x = decoder_embedding(decoder_inputs_placeholder)
decoder_lstm = LSTM(LSTM_NODES, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs_x, initial_state=encoder_states)
decoder_dropout1 = Dropout(0.2)
decoder_outputs = decoder_dropout1(decoder_outputs)
decoder_dense1 = Dense(num_words_output, activation='softmax')
decoder_outputs = decoder_dense1(decoder_outputs)
opt = tf.keras.optimizers.RMSprop()
model = Model([encoder_inputs_placeholder,
decoder_inputs_placeholder],
decoder_outputs)
model.compile(
optimizer=opt,
loss='categorical_crossentropy',
metrics=['accuracy']
)
history = model.fit(
[encoder_input_sequences, decoder_input_sequences],
decoder_targets_one_hot,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
validation_split=0.1,
)
</code></pre>
<p>I'm very confused about why only my training accuracy improves, and not the validation accuracy.</p>
<p>Here's the model summary:</p>
<pre><code>Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 25) 0
__________________________________________________________________________________________________
input_2 (InputLayer) (None, 23) 0
__________________________________________________________________________________________________
embedding_1 (Embedding) (None, 25, 100) 299100 input_1[0][0]
__________________________________________________________________________________________________
embedding_2 (Embedding) (None, 23, 256) 838144 input_2[0][0]
__________________________________________________________________________________________________
lstm_1 (LSTM) [(None, 256), (None, 365568 embedding_1[0][0]
__________________________________________________________________________________________________
lstm_2 (LSTM) [(None, 23, 256), (N 525312 embedding_2[0][0]
lstm_1[0][1]
lstm_1[0][2]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 23, 256) 0 lstm_2[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 23, 3274) 841418 dropout_1[0][0]
==================================================================================================
Total params: 2,869,542
Trainable params: 2,869,542
Non-trainable params: 0
__________________________________________________________________________________________________
None
</code></pre>
|
<p>The size of the training dataset is less than 3K samples, while the number of trainable parameters is around 3 million. The answer to your question is classical overfitting - the model is so large that it simply memorizes the training subset instead of generalizing.</p>
<p>How to improve the current situation:</p>
<ul>
<li>try to generate or find more data;</li>
<li>reduce the complexity of the model:
<ul>
<li>use pre-trained embeddings (<a href="https://nlp.stanford.edu/projects/glove/" rel="nofollow noreferrer">glove</a>, <a href="https://fasttext.cc/docs/en/english-vectors.html" rel="nofollow noreferrer">fasttext</a>, etc.); see the sketch after this list</li>
<li>reduce the number of the LSTM nodes;</li>
</ul>
</li>
</ul>
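<p>For the pre-trained embedding suggestion above, here is a minimal sketch (assuming you have already built an <code>embedding_matrix</code> of shape <code>(vocab_size, EMBEDDING_SIZE)</code> from downloaded GloVe vectors; <code>vocab_size</code> and <code>embedding_matrix</code> are placeholders, not part of your original code):</p>
<pre><code>from tensorflow.keras.layers import Embedding

# embedding_matrix: one row per word index from your tokenizer,
# filled with the corresponding pre-trained vector (zeros for unknown words)
embedding_layer = Embedding(vocab_size,
                            EMBEDDING_SIZE,
                            weights=[embedding_matrix],
                            trainable=False)  # frozen, so these weights no longer count as trainable
</code></pre>
<p>Freezing the embedding layer removes those weights from the trainable parameter count, which helps against the overfitting described above.</p>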
|
python|tensorflow|machine-learning|keras|neural-network
| 10
|
1,426
| 51,291,053
|
How to append to a column in a DataFrame containing time sequences
|
<p><a href="https://i.stack.imgur.com/gC8ak.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gC8ak.png" alt="enter image description here"></a></p>
<p>How do I append another list to the column df_final['time_seq'], which is stored in timeseq=[xyz,xyz...,xyz]?</p>
|
<p>For your particular question:</p>
<pre><code>additional_time_seq = [1234567, 1234568]
# i is the index of the row you want
# the line below gives you the time_seq list
time_seq_to_append = df_final.loc[df_final.index[i], 'time_seq']
# the line below extends the time_seq list with the additional_time_seq
time_seq_to_append.extend(additional_time_seq)
</code></pre>
<p>Generally, <code>df.loc[df.index[i], 'NAME']</code> <a href="https://stackoverflow.com/questions/28754603/indexing-pandas-data-frames-integer-rows-named-columns/45746617#45746617">gives you the element for a named column with integer indices</a>, and since that's a list in your case, you can use the built-in <a href="https://docs.python.org/3/tutorial/datastructures.html#more-on-lists" rel="nofollow noreferrer">extend</a> method for Python lists.</p>
|
python|pandas
| 2
|
1,427
| 51,344,660
|
How to use pandas at?
|
<p>I am often confused about pandas slice operation, for example, </p>
<pre><code>import pandas as pd
raw_data = {'regiment': ['Nighthawks', 'Nighthawks', 'Nighthawks', 'Nighthawks', 'Dragoons', 'Dragoons', 'Dragoons', 'Dragoons', 'Scouts', 'Scouts', 'Scouts', 'Scouts'],
'company': ['1st', '1st', '2nd', '2nd', '1st', '1st', '2nd', '2nd','1st', '1st', '2nd', '2nd'],
'name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze', 'Jacon', 'Ryaner', 'Sone', 'Sloan', 'Piger', 'Riani', 'Ali'],
'preTestScore': [4, 24, 31, 2, 3, 4, 24, 31, 2, 3, 2, 3],
'postTestScore': [25, 94, 57, 62, 70, 25, 94, 57, 62, 70, 62, 70]}
df = pd.DataFrame(raw_data, columns = ['regiment', 'company', 'name', 'preTestScore', 'postTestScore'])
def get_stats(group):
return {'min': group.min(), 'max': group.max(), 'count': group.count(), 'mean': group.mean()}
bins = [0, 25, 50, 75, 100]
group_names = ['Low', 'Okay', 'Good', 'Great']
df['categories'] = pd.cut(df['postTestScore'], bins, labels=group_names)
des = df['postTestScore'].groupby(df['categories']).apply(get_stats).unstack()
des.at['Good','mean']
</code></pre>
<p>and I got:</p>
<blockquote>
<p>TypeError Traceback (most recent call
last) pandas/_libs/index.pyx in
pandas._libs.index.IndexEngine.get_loc()</p>
<p>pandas/_libs/hashtable_class_helper.pxi in
pandas._libs.hashtable.Int64HashTable.get_item()</p>
<p>TypeError: an integer is required</p>
</blockquote>
<p>During handling of the above exception, another exception occurred:</p>
<blockquote>
<p>KeyError Traceback (most recent call
last) in ()
----> 1 des.at['Good','mean']</p>
<p>C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexing.py in
<strong>getitem</strong>(self, key) 1867 1868 key = self._convert_key(key)
-> 1869 return self.obj._get_value(*key, takeable=self._takeable) 1870 1871 def <strong>setitem</strong>(self,
key, value):</p>
<p>C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\frame.py in
_get_value(self, index, col, takeable) 1983 1984 try:
-> 1985 return engine.get_value(series._values, index) 1986 except (TypeError, ValueError): 1987 </p>
<p>pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_value()</p>
<p>pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_value()</p>
<p>pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()</p>
<p>KeyError: 'Good'</p>
</blockquote>
<p>How can I do this? </p>
<p>Thanks in advance.</p>
|
<p>Problem is with the line,</p>
<pre><code>des = df['postTestScore'].groupby(df['categories']).apply(get_stats).unstack()
</code></pre>
<p>after doing a group-by over 'postTestScore' you are getting a <strong>"Series"</strong>, not a <strong>"DataFrame"</strong>, as shown below.</p>
<p><a href="https://i.stack.imgur.com/QtViq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QtViq.png" alt="enter image description here"></a></p>
<p>Now when you try to access the scalar label with <strong>".at"</strong> on <code>des</code>, it doesn't recognize the label 'Good', since that label doesn't exist on the Series.</p>
<pre><code>des.at['Good','mean']
</code></pre>
<p>Just print <strong>des</strong> and you will see the resulting Series.</p>
<pre><code> count max mean min
categories
Low 2.0 25.0 25.00 25.0
Okay 0.0 NaN NaN NaN
Good 8.0 70.0 63.75 57.0
Great 2.0 94.0 94.00 94.0
</code></pre>
|
python|pandas|dataframe
| 0
|
1,428
| 48,285,160
|
Write 1s faster to col-rows based on positions in a list
|
<p>I'm new to pandas. I'm using a dataframe to tally how many times two positions match. </p>
<p>Here is the code in question...right at the start. The "what am I trying to accomplish" below...</p>
<pre><code>def crossovers(df, index):
# Duplicate the dataframe passed in
_dfcopy = df.copy(deep=True)
# Set all values to 0
_dfcopy[:] = 0.0
# change the value of any col/row where there's a shared SNP
for i in index:
for j in index:
if i == j: continue # Don't include self as a shared SNP
_dfcopy[i][j] = 1
# Return the DataFrame.
# Should only contain 0s (no shared SNP) or 1s ( a shared SNP)
return _dfcopy
</code></pre>
<p><strong>QUESTION:</strong>
The code flips all the 0s in a dataframe to 1s, for all the intersections of rows/columns in a list (see details below).</p>
<p>I.e. if the list is </p>
<pre><code>_indices = [0,2,3]
</code></pre>
<p>...all the locations at (0,2); (0,3); (2,0); (2,3); (3,0); and (3,2) get flipped to 1s.</p>
<p>Currently I do this by iterating through the list recursively onto itself. But this is painfully slow...and I'm passing in 16 million lines of data (16 mil indices). </p>
<p><strong><em>How can I speed up this overall process?</em></strong></p>
<p><br>
<br></p>
<p><strong><em>LONGER DESCRIPTION</em></strong></p>
<p>I start with a dataframe called <code>sharedby_BOTH</code> similar to below, except much larger (70 cols x 70 rows)- I'm using it to tally occurrences of shared data intersections. </p>
<p>Rows (index) are labeled 0,1,2,3 & 4...70 - as are the columns. Each location contains a 0.</p>
<p><strong><em>sharedby_BOTH</em></strong></p>
<pre><code> 0 1 2 3 4 (more)
------------------
0 | 0 | 0 | 0 | 0 | 0
1 | 0 | 0 | 0 | 0 | 0
2 | 0 | 0 | 0 | 0 | 0
3 | 0 | 0 | 0 | 0 | 0
4 | 0 | 0 | 0 | 0 | 0
(more)
</code></pre>
<p>Then I have a list, which contains intersecting data. </p>
<pre><code>_indices = [0,2,3 (more)] # for example
</code></pre>
<p>This means that 0, 2, & 3 all contain shared data. So, I pass it to <code>crossovers</code> which returns a dataframe with a "1" at the intersection places, obtaining this...</p>
<pre><code> 0 1 2 3 4 (more)
------------------
0 | 0 | 0 | 1 | 1 | 0
1 | 0 | 0 | 0 | 0 | 0
2 | 1 | 0 | 0 | 1 | 0
3 | 1 | 0 | 1 | 0 | 0
4 | 0 | 0 | 0 | 0 | 0
(more)
</code></pre>
<p>...where the shared data locations are (0,2),(0,3),(2,0),(2,3),(3,0),(3,2).</p>
<p><strong>Notice that self is not recognized: (0,0), (2,2), and (3,3) DO NOT have 1s.</strong></p>
<p>Then I add this to the original dataframe with this code (inside a loop)...</p>
<pre><code>sharedby_BOTH = sharedby_BOTH.add(crossovers(sharedby_BOTH, _indices)
</code></pre>
<p>I repeat this in a loop...</p>
<pre><code>for pos, pos_val in chrom_val.items(): # pos_val is a dict
_indices = [i for i, x in enumerate(pos_val["sharedby"]) if (x == "HET")]
sharedby_BOTH = sharedby_BOTH.add(crossovers(sharedby_BOTH, _indices))
</code></pre>
<p>The end result is that <code>sharedby_BOTH</code> will look like the following, if I added the three example <code>_indices</code></p>
<pre><code>sharedby_BOTH = sharedby_BOTH.add(crossovers(sharedby_BOTH, [0,2,3] ))
sharedby_BOTH = sharedby_BOTH.add(crossovers(sharedby_BOTH, [0,2,4] ))
sharedby_BOTH = sharedby_BOTH.add(crossovers(sharedby_BOTH, [0,2,3] ))
0 1 2 3 4 (more)
------------------
0 | 0 | 0 | 3 | 2 | 1
1 | 0 | 0 | 0 | 0 | 0
2 | 3 | 0 | 0 | 2 | 1
3 | 2 | 0 | 2 | 0 | 0
4 | 1 | 0 | 1 | 0 | 0
(more)
</code></pre>
<p>...where, amongst the three indices passed in...</p>
<p><code>0</code>shared data with <code>2</code> a total of three times so (0,2) and (2,0) totaled three.</p>
<p><code>0</code>shared data with <code>3</code> twice so (0,3) and (3,0) total two.</p>
<p><code>0</code>shared data with <code>4</code> only once, so (0,4) and (4,0) total one.</p>
<p>I hope this makes sense :)</p>
<p><strong><em>EDIT</em></strong></p>
<p>I did try the following...</p>
<pre><code>addit = pd.DataFrame(1, index=_indices, columns=_indices)
sharedby_BOTH = sharedby_BOTH.add(addit)
</code></pre>
<p>BUT...then any locations within <code>sharedby_BOTH</code> that DID NOT HAVE SHARED DATA ended up as <code>NAN</code></p>
<p>I.e...</p>
<pre><code>sharedby_BOTH = pd.DataFrame(0, index=[x for x in range(4)], columns=[x for x in range(4)])
_indices = [0,2,3 (more)] # for example
addit = pd.DataFrame(1, index=_indices, columns=_indices)
sharedby_BOTH = sharedby_BOTH.add(addit)
0 1 2 3 4 (more)
------------------
0 | NAN | NAN | 1 | 1 | NAN
1 | NAN | NAN | NAN | NAN | NAN
2 | 1 | NAN | NAN | 1 | NAN
3 | 1 | NAN | 1 | NAN | NAN
4 | NAN | NAN | NAN | NAN | NAN
(more)
</code></pre>
|
<p>I'd organize it with numpy slice assignment and the handy <code>np.triu_indices</code> function. It returns the row and column indices of the upper triangle. I make sure to pass <code>k=1</code> to ensure I skip the diagonal. When I slice assign, I make sure to use both <code>i, j</code> and <code>j, i</code> to get upper and lower
triangles.</p>
<pre><code>def xover(n, idx):
idx = np.asarray(idx)
a = np.zeros((n, n))
i_, j_ = np.triu_indices(len(idx), 1)
i = idx[i_]
j = idx[j_]
a[i, j] = 1
a[j, i] = 1
return a
pd.DataFrame(xover(len(df), [0, 2, 3]), df.index, df.columns)
0 1 2 3
0 0.0 0.0 1.0 1.0
1 0.0 0.0 0.0 0.0
2 1.0 0.0 0.0 1.0
3 1.0 0.0 1.0 0.0
</code></pre>
<p><strong>Timings</strong></p>
<pre><code>%timeit pd.DataFrame(xover(len(df), [0, 2, 3]), df.index, df.columns)
10000 loops, best of 3: 192 µs per loop
%%timeit
for i,j in product(li,repeat=2):
if i != j:
ndf.loc[i,j] = 1
100 loops, best of 3: 6.8 ms per loop
</code></pre>
|
python-3.x|list|pandas|dataframe
| 3
|
1,429
| 48,269,372
|
Tensorflow serving_input_receiver_fn with arguments
|
<p>I want to add some arguments to the function serving_input_receiver_fn, because the size of the feature array depends on the model. The problem is that the official definition of serving_input_receiver_fn is: </p>
<blockquote>
<p>serving_input_receiver_fn: A function that takes no argument and
returns a ServingInputReceiver. Required for custom models.</p>
</blockquote>
<p>My implementation of this function is:</p>
<pre><code>def serving_input_receiver_fn():
serialized_tf_example = tf.placeholder(dtype=tf.string, shape=[None], name='input_tensors')
receiver_tensors = {'inputs': serialized_tf_example}
feature_spec = {'words': tf.FixedLenFeature([25],tf.int64)}
features = tf.parse_example(serialized_tf_example, feature_spec)
return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
</code></pre>
<p>So, I want the size ([25]), the name of the feature ('words') and the name of the receiver ('inputs') to be variables. Is there a way to pass arguments to this function, or another way to do this?</p>
|
<p>How about using a nested function or closure?</p>
<pre><code>>>> def create_serving_fn(size, feature, inputs):
def serving_input_receiver_fn():
serialized_tf_example = tf.placeholder(dtype=tf.string, shape=[None], name='input_tensors')
receiver_tensors = {inputs: serialized_tf_example}
feature_spec = {feature: tf.FixedLenFeature([size],tf.int64)}
features = tf.parse_example(serialized_tf_example, feature_spec)
return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
return serving_input_receiver_fn
your_serving_fn = create_serving_fn(25, 'words', 'inputs')
print(your_serving_fn)
<function create_serving_fn.<locals>.serving_input_receiver_fn at 0x7f10df77bf28>
</code></pre>
<p>This way, <code>serving_input_receiver_fn</code> has access to arguments passed to <code>create_serving_fn</code>.</p>
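<p>An equivalent alternative, if you prefer not to nest functions, is <code>functools.partial</code> (a sketch using the same illustrative argument names as above):</p>
<pre><code>import functools

def serving_input_receiver_fn(size, feature, inputs):
    serialized_tf_example = tf.placeholder(dtype=tf.string, shape=[None], name='input_tensors')
    receiver_tensors = {inputs: serialized_tf_example}
    feature_spec = {feature: tf.FixedLenFeature([size], tf.int64)}
    features = tf.parse_example(serialized_tf_example, feature_spec)
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

# partial() binds the extra arguments up front, leaving a zero-argument callable
your_serving_fn = functools.partial(serving_input_receiver_fn, 25, 'words', 'inputs')
</code></pre>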
|
python|tensorflow|machine-learning
| 5
|
1,430
| 48,015,191
|
How to extrapolate a function based on x,y values?
|
<p>Ok so I started with Python a few days ago. I mainly use it for DataScience because I am an undergraduate chemistry student. Well, now I got a small problem on my hands, as I have to extrapolate a function. I know how to make simple diagrams and graphs, so please try to explain it as easy to me as possible. I start off with:</p>
<pre><code>from matplotlib import pyplot as plt
from matplotlib import style
style.use('classic')
x = [0.632455532, 0.178885438, 0.050596443, 0.014310835, 0.004047715]
y = [114.75, 127.5, 139.0625, 147.9492188, 153.8085938]
x2 = [0.707, 0.2, 0.057, 0.016, 0.00453]
y2 = [2.086, 7.525, 26.59375,87.03125, 375.9765625]
</code></pre>
<p>so with these values I have to work out a way to extrapolate in order to get a y(or y2) value when my x=0. I know how to do this mathematically, but I would like to know if python can do this and how do I execute it in Python. Is there a simple way? Can you give me maybe an example with my given values?
Thank you</p>
|
<p>Taking a quick look at your data,</p>
<pre><code>from matplotlib import pyplot as plt
from matplotlib import style
style.use('classic')
x1 = [0.632455532, 0.178885438, 0.050596443, 0.014310835, 0.004047715]
y1 = [114.75, 127.5, 139.0625, 147.9492188, 153.8085938]
plt.plot(x1, y1)
</code></pre>
<p><a href="https://i.stack.imgur.com/scM6P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/scM6P.png" alt="enter image description here"></a></p>
<pre><code>x2 = [0.707, 0.2, 0.057, 0.016, 0.00453]
y2 = [2.086, 7.525, 26.59375,87.03125, 375.9765625]
plt.plot(x2, y2)
</code></pre>
<p><a href="https://i.stack.imgur.com/P60Gn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P60Gn.png" alt="enter image description here"></a></p>
<p>This is definitely not linear. If you know what sort of function this follows, you may want to use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html" rel="nofollow noreferrer">scipy's curve fitting</a> to get a best-fit function which you can then use.</p>
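<p>For example, if you assume the second dataset roughly follows a power law <code>y = a * x**b</code> (an assumption on my part, not something given in the question), a small <code>curve_fit</code> sketch would be:</p>
<pre><code>import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b):
    return a * x ** b

# p0 is just an initial guess to help convergence
params, _ = curve_fit(power_law, np.array(x2), np.array(y2), p0=(1.0, -1.0))
a, b = params
print(a, b)                      # fitted coefficients
print(power_law(0.001, a, b))    # extrapolated value close to x = 0
</code></pre>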
<p><strong>Edit:</strong></p>
<p>If we convert the plots to log-log,</p>
<pre><code>import numpy as np
plt.plot(np.log(x1), np.log(y1))
</code></pre>
<p><a href="https://i.stack.imgur.com/hoxYO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hoxYO.png" alt="enter image description here"></a></p>
<pre><code>plt.plot(np.log(x2), np.log(y2))
</code></pre>
<p><a href="https://i.stack.imgur.com/QNU4B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QNU4B.png" alt="enter image description here"></a></p>
<p>they look pretty linear (if you squint a bit). Finding a best-fit line,</p>
<pre><code>np.polyfit(np.log(x1), np.log(y1), 1)
# array([-0.05817402, 4.73809081])
np.polyfit(np.log(x2), np.log(y2), 1)
# array([-1.01664659, 0.36759068])
</code></pre>
<p>we can convert back to functions,</p>
<pre><code># f1:
# log(y) = -0.05817402 * log(x) + 4.73809081
# so
# y = (e ** 4.73809081) * x ** (-0.05817402)
def f1(x):
return np.e ** 4.73809081 * x ** (-0.05817402)
xs = np.linspace(0.01, 0.8, 100)
plt.plot(x1, y1, xs, f1(xs))
</code></pre>
<p><a href="https://i.stack.imgur.com/vSN96.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vSN96.png" alt="enter image description here"></a></p>
<pre><code># f2:
# log(y) = -1.01664659 * log(x) + 0.36759068
# so
# y = (e ** 0.36759068) * x ** (-1.01664659)
def f2(x):
return np.e ** 0.36759068 * x ** (-1.01664659)
plt.plot(x2, y2, xs, f2(xs))
</code></pre>
<p><a href="https://i.stack.imgur.com/2N6z5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2N6z5.png" alt="enter image description here"></a></p>
<p>The second looks pretty darn good; the first still needs a bit of refinement (ie find a more representative function and curve-fit it). But you should have a pretty good picture of the process ;-)</p>
|
python|numpy|matplotlib|extrapolation
| 5
|
1,431
| 48,613,671
|
slice pandas dataframe to get noncontiguous columns
|
<p>I have a <code>pandas.DataFrame</code>: <code>wordvecs_df</code>, with columns labeled <code>'word'</code>, <code>'count'</code>, <code>'v1'</code> through <code>'v50'</code> and <code>'norm1'</code> through <code>'norm50'</code> in that order. I want to create a new pandas df with just the columns for <code>'word'</code>, <code>'count'</code> and <code>norm1-norm50</code>.</p>
<pre><code>wordvecs_df.loc[:,"norm1":"norm50"]
</code></pre>
<p>gets me <code>norm1</code>-<code>norm50</code>, but if I try to put in word and count I get an IndexingError: Too many indexers.</p>
<p>I can't figure out how to get just the columns I want out of the dataframe. Any ideas?</p>
|
<p>You can build up a list of column names like:</p>
<pre><code>columns = ['word', 'count'] + ['norm%d' % i for i in range(1, 51)]
wordvecs_df.loc[:,columns]
</code></pre>
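<p>An alternative (a sketch, assuming the column layout described in the question) is to concatenate the two pieces:</p>
<pre><code>import pandas as pd

result = pd.concat(
    [wordvecs_df[['word', 'count']], wordvecs_df.loc[:, 'norm1':'norm50']],
    axis=1)
</code></pre>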
|
python|pandas|dataframe|slice
| 4
|
1,432
| 48,445,748
|
group the same consecutive values in pandas and store: values, indices, and column slices
|
<p>I have a dataframe</p>
<pre><code>import pandas as pd
import numpy as np
v1=list(np.random.rand(30))
v2=list(np.random.rand(30))
mydf=pd.DataFrame(data=zip(v1,v2),columns=['var1','var2'])
</code></pre>
<p>then I apply some boolean conditions on some variables</p>
<pre><code>mydf['cond1']=mydf['var1']>0.2
mydf['cond2']=mydf['var1']>0.8
mydf['cond1']=
0 False
1 True
2 True
3 False
4 False
5 True
6 False
....
</code></pre>
<p>I would like to group in blocks where 'cond1' (or 'cond2') is True, and for each group store:</p>
<ul>
<li><p>the value of the group: True/False</p></li>
<li><p>the index of the start, and of the end, of the block: e.g.
1,2
5,5</p></li>
<li><p>the 2 values of <code>var2</code> at index of the start, and of the end,</p></li>
<li><p>all the values of <code>var1</code> between the index of the start, and of the end, as an iterable (list of np.array)</p></li>
</ul>
<p>this is one example of returned values:</p>
<pre><code>summary=
'Start' 'End' 'Start_var2' 'End_var2' 'Value' 'var1'
1 2 0.3217381 0.454543 True [0.25,0.26]
</code></pre>
|
<p>I think you can use <a href="https://stackoverflow.com/questions/40802800/pandas-dataframe-how-to-groupby-consecutive-values">this SO answer</a>. <code>i</code> gives you the group number, and the <code>index</code> of <code>g</code> can be used to get the <code>var</code> values.</p>
<pre><code>v1=list(np.random.rand(30))
v2=list(np.random.rand(30))
df=pd.DataFrame(data=zip(v1,v2),columns=['var1','var2'])
df['cond1']=df['var1']>0.2
df['cond2']=df['var1']>0.8
for i, g in df.groupby([(df['cond1'] != df['cond1'].shift()).cumsum()]):
print (i)
print (g)
print (g['cond1'].tolist())
print(g['cond1'].index[0])#can get var values from this
</code></pre>
|
python|pandas
| 1
|
1,433
| 48,839,618
|
High training accuracy but low prediction performance for Tensorflow's official MNIST model
|
<p>I'm new to machine learning and I was following along with the Tensorflow official MNIST model (<a href="https://github.com/tensorflow/models/tree/master/official/mnist" rel="nofollow noreferrer">https://github.com/tensorflow/models/tree/master/official/mnist</a>). After training the model for 3 epochs and getting accuracy results of over 98%, I decided to test the dataset with some of my own handwritten images that are very close to those found in the MNIST dataset:</p>
<pre><code> {'loss': 0.03686057, 'global_step': 2400, 'accuracy': 0.98729998}
</code></pre>
<p>handwritten 1, predicted as 2: <a href="https://storage.googleapis.com/imageexamples/example1.png" rel="nofollow noreferrer">https://storage.googleapis.com/imageexamples/example1.png</a></p>
<p>handwritten 4, predicted as 5:
<a href="https://storage.googleapis.com/imageexamples/example4.png" rel="nofollow noreferrer">https://storage.googleapis.com/imageexamples/example4.png</a></p>
<p>handwritten 7, predicted correctly as 7:
<a href="https://storage.googleapis.com/imageexamples/example7.png" rel="nofollow noreferrer">https://storage.googleapis.com/imageexamples/example7.png</a></p>
<p>However, as you can see below, the predictions were mostly incorrect. Can anyone share some insight as to why this might be? If you want any other info, please let me know. Thanks!</p>
<pre><code>[2 5 7]
Result for output key probabilities:
[[ 1.47042423e-01 1.40417784e-01 2.80471593e-01 1.18162427e-02
1.71029475e-02 1.15245730e-01 9.41787264e-04 1.71402004e-02
2.61987478e-01 7.83374347e-03]
[ 3.70134876e-05 3.59491096e-03 1.70885725e-03 3.44008535e-01
1.75098982e-02 6.24581575e-01 1.02930271e-05 3.97418407e-05
7.59732258e-03 9.11886105e-04]
[ 7.62941269e-03 7.74145573e-02 1.42017215e-01 4.73754480e-03
3.75231934e-06 7.16139004e-03 4.40478354e-04 7.60131121e-01
4.09408152e-04 5.51677040e-05]]
</code></pre>
<p>Here is the script I used to convert pngs into npy arrays for testing. The resulting array for the provided '3' and '5' image is identical to the one given in the TF repository, so I don't think it's the issue:</p>
<pre><code>def main(unused_argv):
output = []
images = []
filename_generate = True
index = 0
if FLAGS.images is not None:
images = str.split(FLAGS.images)
if FLAGS.output is not "": # check for output names and make sure outputs map to images
output = str.split(FLAGS.output)
filename_generate = False
if len(output) != len(images):
raise ValueError('The number of image files and output files must be the same.')
if FLAGS.batch == "True":
combined_arr = np.array([]) # we'll be adding up arrays
for image_name in images:
input_image = Image.open(image_name).convert('L') # convert to grayscale
input_image = input_image.resize((28, 28)) # resize the image, if needed
width, height = input_image.size
data_image = array('B')
pixel = input_image.load()
for x in range(0,width):
for y in range(0,height):
data_image.append(pixel[y,x]) # use the MNIST format
np_image = np.array(data_image)
img_arr = np.reshape(np_image, (1, 28, 28))
img_arr = img_arr/float(255) # use scale of [0, 1]
if FLAGS.batch != "True":
if filename_generate:
np.save("image"+str(index), img_arr) # save each image with random filenames
else:
np.save(output[index], img_arr) # save each image with chosen filenames
index = index+1
else:
if combined_arr.size == 0:
combined_arr = img_arr
else:
combined_arr = np.concatenate((combined_arr, img_arr), axis=0) # add all image arrays to one array
if FLAGS.batch == "True":
if filename_generate:
np.save("images"+str(index), combined_arr) # save batched images with random filename
else:
np.save(output[0], combined_arr) # save batched images with chosen filename
</code></pre>
<p>I haven't changed anything in the official MNIST model except the number of epochs (previously 40, changed because it was taking so long to train and already seeing high accuracy after 1 epoch).</p>
<p>Thanks so much!</p>
|
<p>The MNIST images are white-on-black; the images you've linked are black-on-white.</p>
<p>Unless there's a conversion step I missed, you'll want to invert the colors before attempting detection.</p>
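<p>For example, since the conversion script in the question already scales pixels to [0, 1], a one-line inversion right after that step should be enough (a sketch, not tested against your exact pipeline):</p>
<pre><code>img_arr = img_arr / float(255)  # existing scaling step from the question
img_arr = 1.0 - img_arr         # invert: black-on-white becomes white-on-black, like MNIST
</code></pre>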
|
python|tensorflow|machine-learning|conv-neural-network|mnist
| 5
|
1,434
| 48,639,183
|
pandas dataframe get NaN when mapping
|
<p>Can anyone tell me why I always get NaN?<a href="https://i.stack.imgur.com/MIcl5.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MIcl5.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/N5jHh.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N5jHh.jpg" alt="enter image description here"></a></p>
|
<p>It seems there are some whitespaces, so it is necessary to remove them first.</p>
<p>You can check it by:</p>
<pre><code>print(Xtrain['Sex'].head().tolist())
</code></pre>
<p>So use:</p>
<pre><code>Xtrain = pd.read_csv('train.csv', skipinitialspace=True)
Xtrain['Sex'] = Xtrain['Sex'].map(sex_mapping)
</code></pre>
<p>Or:</p>
<pre><code>Xtrain = pd.read_csv('train.csv')
Xtrain['Sex'] = Xtrain['Sex'].str.strip().map(sex_mapping)
</code></pre>
|
pandas|csv|dictionary
| 0
|
1,435
| 48,553,070
|
python: handle out of range numpy indices
|
<p>I am using the following code to rotate an image along it's y-axis.</p>
<pre><code>y, x = np.indices(im1.shape[:2])
return im1[y, ((x-im1.shape[1]/2)/math.cos(t*math.pi/2)+im1.shape[1]/2).astype(np.int)]
</code></pre>
<p>Some of the values are intentionally out of range. I would like out-of-range pixels to be black (0, 0, 0). How can I let the in-range indices work as usual, while replacing the out-of-range ones with black?</p>
|
<p>Just mask them:</p>
<pre><code># transform
>>> x2 = ((x-im1.shape[1]/2)/np.cos(t*np.pi/2)+im1.shape[1]/2).astype(np.int)
# check bounds
>>> allowed = np.where((x2>=0) & (x2<im1.shape[1]))
# preallocate with zeros
>>> res = np.zeros_like(im1)
# fill in within-bounds pixels
>>> res[allowed] = im1[y[allowed],x2[allowed]]
>>>
# one possible economy I left out for clarity
>>> np.all(y[allowed] == allowed[0])
True
</code></pre>
|
python|numpy|indexing
| 1
|
1,436
| 70,909,233
|
Numpy: replace RGB color at specific indexes
|
<p>I have a Numpy Array of an image and I need to only replace the RGB color of specific elements.</p>
<p>E.G.: If I have 10 elements in the Array with the color rgb(16, 16, 16), I want to replace the color of the 2nd and 7th elements only.</p>
<p>How to do this?</p>
<h1>What I have so far replaces them all:</h1>
<pre><code>r1, g1, b1 = 124, 252, 0 # original
r2, g2, b2 = 255, 255, 255 # new
red, green, blue = img_array[:,:,0], img_array[:,:,1], img_array[:,:,2]
mask = (red == r1) & (green == g1) & (blue == b1)
img_array[:,:,:3][mask] = [r2, g2, b2]
</code></pre>
|
<p>It's a bit hard to check without a working example <code>img_array</code>. But if I'm not mistaken, you should be able to get the indices of the masked pixels like so:</p>
<pre><code>masked_indices = np.where(mask)
</code></pre>
<p>And then you can choose which of those indices you want to change in <code>img_array</code>.</p>
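<p>For instance, to recolor only the 2nd and 7th matching pixels (a minimal sketch; the 0-based positions <code>[1, 6]</code> are just an example, as are the variable names):</p>
<pre><code>import numpy as np

rows, cols = np.where(mask)   # coordinates of every pixel that matches the original color
targets = [1, 6]              # 2nd and 7th matches, 0-based
img_array[rows[targets], cols[targets], :3] = [r2, g2, b2]
</code></pre>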
|
python|numpy|image-processing
| 0
|
1,437
| 70,823,496
|
reset cumulative sum base on condition pandas and return other cumulative sum
|
<p>I have this dataframe -</p>
<pre><code> counter duration amount
0 1 0.08 1,235
1 2 0.36 1,170
2 3 1.04 1,222
3 4 0.81 1,207
4 5 3.99 1,109
5 6 1.20 1,261
6 7 4.24 1,068
7 8 3.07 1,098
8 9 2.08 1,215
9 10 4.09 1,043
10 11 2.95 1,176
11 12 3.96 1,038
12 13 3.95 1,119
13 14 3.92 1,074
14 15 3.91 1,076
15 16 1.50 1,224
16 17 3.65 962
17 18 3.85 1,039
18 19 3.82 1,062
19 20 3.34 917
</code></pre>
<p>I would like to create another column based on the following logic:</p>
<p>For each row, I want to calculate a running sum of 'duration' but it should be a running sum for the rows that are below the current row (lead and not lag).
I would like to stop the calculation when the running sum reaches 5 -> when it reaches 5, I want to return the running sum for 'amount' (with the same logic).</p>
<p>For instance, for 'counter' 1 it should take the first 4 rows (0.08+0.36+1.04+0.81<5) and then return 1,235+1,170+1,222+1,207=4834</p>
<p>For 'counter' 2 it should take only 0.36 + 1.04 + 0.81<5 and return 1,170+1,222+1,207=3599</p>
<p>Will appreciate any help!</p>
|
<p>I would first go through the 2 columns once for their cumulative sums.</p>
<pre><code>cum_amount = df['amount'].cumsum()
cum_duration = df['duration'].cumsum()
</code></pre>
<p>Get a list ready for the results</p>
<pre><code>results = []
</code></pre>
<p>Then loop through each index (equivalent to counter)</p>
<pre><code>for idx in cum_duration.index:
# keep only rows within `5` and the max. index is where the required numbers are located
wanted_idx = (cum_duration[cum_duration<5]).index.max()
# read those numbers with the wanted index
results.append({'idx': idx, 'cum_duration': cum_duration[wanted_idx], 'cum_amount': cum_amount[wanted_idx]})
# subtract the lag (we need only the leads not the lags)
cum_amount -= cum_amount[idx]
cum_duration -= cum_duration[idx]
</code></pre>
<p>Finally the result in a DataFrame.</p>
<pre><code>pd.DataFrame(results)
idx cum_duration cum_amount
0 0 2.29 4834.0
1 1 2.21 3599.0
2 2 1.85 2429.0
3 3 4.80 2316.0
4 4 3.99 1109.0
5 5 1.20 1261.0
6 6 4.24 1068.0
7 7 3.07 1098.0
8 8 2.08 1215.0
9 9 4.09 1043.0
10 10 2.95 1176.0
11 11 3.96 1038.0
12 12 3.95 1119.0
13 13 3.92 1074.0
14 14 3.91 1076.0
15 15 1.50 1224.0
16 16 3.65 962.0
17 17 3.85 1039.0
18 18 3.82 1062.0
19 19 3.34 917.0
</code></pre>
|
python|pandas|dataframe|cumsum
| 0
|
1,438
| 51,954,912
|
Adding header/column to my .csv file
|
<p>My current code:</p>
<pre><code>import numpy as np
import pandas as pd
import scipy.io as spio

file='filelocation.sav'
finalfile = 'filelocation.csv'
s=spio.readsav(file, python_dict=True, verbose=True)
amf=np.asarray(s["amf"])
d=[amf]
df=pd.DataFrame(data=d)
df=df.T
df.to_csv(finalfile, sep=' ', encoding='utf-8', header=True)
</code></pre>
<p><a href="https://i.stack.imgur.com/qCxa3.png" rel="nofollow noreferrer">This is my current output</a></p>
<p>I would like my output to look like this:</p>
<pre><code> amf
0 6.685
1 6.84
2 7.0
3 7.16
4 7.33
5 7.51
6 7.70
</code></pre>
<p>etc. Basically I want the header to line up with the data it represents so that I can properly call upon said data and plot. </p>
<p>Another issue I noticed was that when I asked jupyter with </p>
<pre><code>file.columns
</code></pre>
<p>to tell me the index, I was returned this output:</p>
<pre><code>Index([u' 0 1 2 3 4 5 6'], dtype='object')
</code></pre>
<p>which leads me to believe that there is only one index being accounted for, when I'd like to have 7 specific indices (including "0").</p>
<hr>
<p>Edited past here:</p>
<p>So: It looks like another issue I'm having is that although the number of rows I intend to have is about 87000 in length, it's showing through as having 87000 columns. </p>
<p>Looks like I need to make some changes to that my array "amf" is created as a row and not a column. </p>
|
<p>The problem is that you are wrapping <code>amf</code> in brackets, <code>d=[amf]</code>, which makes it a list of arrays; change it to <code>d=amf</code>. For instance:</p>
<pre><code>data = [np.array([6.685, 6.84, 7.0, 7.16, 7.33, 7.51, 7.70])]
result = pd.DataFrame(data=data)
print(result.shape)
</code></pre>
<p>returns: <code>(1, 7)</code> that is 1 column and 7 rows, however:</p>
<pre><code>data = np.array([6.685, 6.84, 7.0, 7.16, 7.33, 7.51, 7.70])
result = pd.DataFrame(data=data)
print(result.shape)
</code></pre>
<p>returns <code>(7, 1)</code>, also you need to set the columns of the <code>DataFrame</code>:</p>
<pre><code>import numpy as np
import pandas as pd
data = np.array([6.685, 6.84, 7.0, 7.16, 7.33, 7.51, 7.70])
result = pd.DataFrame(data=data, columns=['amf'])
print(result)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>     amf
0  6.685
1  6.840
2  7.000
3  7.160
4  7.330
5  7.510
6  7.700
</code></pre>
|
python|pandas
| 0
|
1,439
| 51,987,076
|
Avoiding overfitting while training a neural network with Tensorflow
|
<p>I am training a neural network using Tensorflow's object detection API to detect cars. I used the following youtube video to learn and execute the process.</p>
<p><a href="https://www.youtube.com/watch?v=srPndLNMMpk&t=65s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=srPndLNMMpk&t=65s</a></p>
<p>Part 1 to 6 of his series. </p>
<p>Now in his video, he mentioned to stop the training when the loss value reaches ~1 or below on average, and that it would take about 10000-ish steps.</p>
<p>In my case, it is at 7500 steps right now and the loss values keep fluctuating from 0.6 to 1.3.</p>
<p>A lot of people complained in the comment section about false positives on this series, but I think this happened because of an unnecessarily prolonged training process (because they perhaps didn't know when to stop?), which caused overfitting!</p>
<p>I would like to avoid this problem. I would like to have not necessarily the most optimal weights, but fairly good weights, while avoiding false detections or overfitting. I am also observing the 'Total Loss' section of Tensorboard. It fluctuates between 0.8 and 1.2. When do I stop the training process?</p>
<p>I would also like to know, in general, which factors 'stopping the training' depends on. Is it always about an average loss of 1 or less?</p>
<p>Additional information:
My training data has ~300 images
Test data ~ 20 images</p>
<p>Since I am using the concept of transfer learning, I chose ssd_mobilenet_v1.model.</p>
<p>Tensorflow version 1.9 (on CPU)
Python version 3.6</p>
<p>Thank you!</p>
|
<p>You should use a validation set, different from the training set and the test set.</p>
<p>At each epoch, you compute the loss on both the training and validation sets.
If the validation loss begins to increase, stop your training. You can then test your model on your test set.</p>
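<p>A schematic sketch of that early-stopping logic (the helper functions and <code>max_epochs</code> here are placeholders you would wire up to your own training pipeline; they are not part of the object detection API):</p>
<pre><code>best_val_loss = float('inf')
patience, bad_epochs = 3, 0

for epoch in range(max_epochs):
    train_loss = train_one_epoch()        # your training step
    val_loss = evaluate_on_validation()   # loss on the held-out validation set
    if val_loss < best_val_loss:
        best_val_loss = val_loss
        bad_epochs = 0
        save_checkpoint()                 # keep the best weights seen so far
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break                         # validation loss stopped improving
</code></pre>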
<p>The validation set size is usually the same as the test set's. For example, the training set is 70% and both the validation and test sets are 15% each.</p>
<p>Also, please note that 300 images in your dataset does not seem to be enough. You should increase it.</p>
<p>For your other question:
the loss is the sum of your errors and thus depends on the problem and your data. A loss of 1 does not mean much in this regard. Never rely on it to stop your training.</p>
|
tensorflow|machine-learning|neural-network|tensorboard|training-data
| 2
|
1,440
| 42,003,177
|
Merge text into datetime column
|
<p>I have a dataframe with 2 columns.
<code>col1</code> is <code>date</code> and <code>col2</code> is <code>bigint</code>.
There are dummy values <code>1970-01-01 00:00:00</code> and <code>19700101000000</code></p>
<pre><code>col1 col2
2012-01-12 18:09:42 19700101000000
1970-01-01 00:00:00 20140701000001
</code></pre>
<p>I am looking for a way to merge these 2 columns into a single datetime column like this...</p>
<pre><code>col3
2012-01-12 18:09:42
2014-07-01 00:00:01
</code></pre>
<p>Or is there any way to merge text from column col2 into col1.</p>
|
<p>First use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a>, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_timedelta.html" rel="nofollow noreferrer"><code>to_timedelta</code></a>, and finally add the result to <code>col1</code>:</p>
<pre><code>print (pd.to_datetime(df.col2, format='%Y%m%d%H%M%S'))
0 1970-01-01 00:00:00
1 2014-07-01 00:00:01
Name: col2, dtype: datetime64[ns]
print (pd.to_timedelta(pd.to_datetime(df.col2, format='%Y%m%d%H%M%S')))
0 0 days 00:00:00
1 16252 days 00:00:01
Name: col2, dtype: timedelta64[ns]
df.col1 = pd.to_datetime(df.col1)
df['col3'] = pd.to_timedelta(pd.to_datetime(df.col2, format='%Y%m%d%H%M%S')) + df.col1
print (df)
col1 col2 col3
0 2012-01-12 18:09:42 19700101000000 2012-01-12 18:09:42
1 1970-01-01 00:00:00 20140701000001 2014-07-01 00:00:01
</code></pre>
<p>Parameter <code>unit</code> can be used too:</p>
<pre><code>df['col3'] = pd.to_timedelta(pd.to_datetime(df.col2, format='%Y%m%d%H%M%S'), unit='ns') + df.col1
print (df)
col1 col2 col3
0 2012-01-12 18:09:42 19700101000000 2012-01-12 18:09:42
1 1970-01-01 00:00:00 20140701000001 2014-07-01 00:00:01
</code></pre>
|
python|pandas|datetime
| 1
|
1,441
| 64,321,843
|
Divide rows in two columns with Pandas
|
<p>I am using Pandas.</p>
<ol>
<li>For each row, regardless of the County, I would like to divide "AcresBurned" by "CrewsInvolved".</li>
<li>For each County, I would like to sum the total AcresBurned for that County and divide by the sum of the total CrewsInvolved for that County.</li>
</ol>
<p>I just started coding and am not able to solve this. Please help. Thank you so much.</p>
<pre><code>Counties AcresBurned CrewsInvolved
1 400 2
2 500 3
3 600 5
1 800 9
2 850 8
</code></pre>
|
<p>This is very simple with Pandas. You could create a new col with these operations.</p>
<pre><code>df['Acer_per_Crew'] = df['AcersBurned'] / df['CrewsaInvolved']
</code></pre>
<p>You could use a groupby clause for viewing the sum of AcersBurned for a county.</p>
<pre><code>df_gb = df.groupby(['counties']) ['AcersBurned', 'CrewsInvolved'].sum().reset_index()
df_gb.columns = ['counties', 'AcersBurnedPerCounty', 'CrewsInvolvedPerCounty']
df = df.merge(df_gb, on = 'counties')
</code></pre>
<p>Once you've done this, you could create a new column with a similar arithmetic operation to divide AcersBurnedPerCounty by CrewsInvolvedPerCounty.</p>
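<p>A sketch of that last step, using the columns created above (the new column name is just an example):</p>
<pre><code>df['AcresPerCrewPerCounty'] = df['AcresBurnedPerCounty'] / df['CrewsInvolvedPerCounty']
</code></pre>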
|
pandas|division
| 0
|
1,442
| 47,856,852
|
Estimator predict infinite loop
|
<p>I don't understand how to make a single prediction using TensorFlow Estimator API - my code results in an endless loop that keeps predicting for the same input.</p>
<p>According to the <a href="https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator#predict" rel="noreferrer">documentation</a>, the prediction is supposed to stop when input_fn raises a StopIteration exception:</p>
<blockquote>
<p>input_fn: Input function returning features which is a dictionary of
string feature name to Tensor or SparseTensor. If it returns a tuple,
first item is extracted as features. Prediction continues until
input_fn raises an end-of-input exception (OutOfRangeError or
StopIteration).</p>
</blockquote>
<p>Here's the relevant part in my code:</p>
<pre><code>classifier = tf.estimator.Estimator(model_fn=image_classifier, model_dir=output_dir,
config=training_config, params=hparams)
def make_predict_input_fn(filename):
queue = [ filename ]
def _input_fn():
if len(queue) == 0:
raise StopIteration
image = model.read_and_preprocess(queue.pop())
return {'image': image}
return _input_fn
predictions = classifier.predict(make_predict_input_fn('garden-rose-red-pink-56866.jpeg'))
for i, p in enumerate(predictions):
print("Prediction %s: %s" % (i + 1, p["class"]))
</code></pre>
<p>What am I missing?</p>
|
<p>That's because input_fn() needs to be a generator.
Change your function to use <code>yield</code> instead of <code>return</code>:</p>
<pre><code>def make_predict_input_fn(filename):
queue = [ filename ]
def _input_fn():
if len(queue) == 0:
raise StopIteration
image = model.read_and_preprocess(queue.pop())
yield {'image': image}
return _input_fn
</code></pre>
|
python|tensorflow|google-cloud-ml
| 0
|
1,443
| 49,171,911
|
How to select column for a specific time range from pandas dataframe in python3?
|
<p>This is my pandas dataframe</p>
<pre><code> time energy
0 2018-01-01 00:15:00 0.0000
1 2018-01-01 00:30:00 0.0000
2 2018-01-01 00:45:00 0.0000
3 2018-01-01 01:00:00 0.0000
4 2018-01-01 01:15:00 0.0000
5 2018-01-01 01:30:00 0.0000
6 2018-01-01 01:45:00 0.0000
7 2018-01-01 02:00:00 0.0000
8 2018-01-01 02:15:00 0.0000
9 2018-01-01 02:30:00 0.0000
10 2018-01-01 02:45:00 0.0000
11 2018-01-01 03:00:00 0.0000
12 2018-01-01 03:15:00 0.0000
13 2018-01-01 03:30:00 0.0000
14 2018-01-01 03:45:00 0.0000
15 2018-01-01 04:00:00 0.0000
16 2018-01-01 04:15:00 0.0000
17 2018-01-01 04:30:00 0.0000
18 2018-01-01 04:45:00 0.0000
19 2018-01-01 05:00:00 0.0000
20 2018-01-01 05:15:00 0.0000
21 2018-01-01 05:30:00 0.9392
22 2018-01-01 05:45:00 2.8788
23 2018-01-01 06:00:00 5.5768
24 2018-01-01 06:15:00 8.6660
25 2018-01-01 06:30:00 15.8648
26 2018-01-01 06:45:00 24.1760
27 2018-01-01 07:00:00 38.5324
28 2018-01-01 07:15:00 49.9292
29 2018-01-01 07:30:00 64.3788
</code></pre>
<p>I would like to select the values from the <strong>energy column</strong> using a specific time range, <strong>01:15:00 - 05:30:00</strong>, and sum those values. To select data from the column I need both hour and minute values. I know how to select data from a column using hour and minute separately..</p>
<pre><code>import panadas as pd
from datetime import datetime as dt
energy_data = pd.read_csv("/home/mayukh/Downloads/Northam_january2018/output1.csv", index_col=None)
#Using Hour
sum = energy_data[((energy_data.time.dt.hour < 1) & (energy_data.time.dt.hour >= 5))]['energy'].sum()
#using Minute
sum = energy_data[((energy_data.time.dt.minute < 15) & (energy_data.time.dt.minute >= 30))]['energy'].sum()
</code></pre>
<p>but I don't know how to use both hour and minute together to select data. Please tell me how I should proceed.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/version/0.18/generated/pandas.DataFrame.between_time.html" rel="nofollow noreferrer"><code>between_time</code></a> working with <code>Datetimeindex</code> created by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a>:</p>
<pre><code>#if necessary convert to datetime
df['time'] = pd.to_datetime(df['time'])
a = df.set_index('time').between_time('01:15:00','05:30:00')['energy'].sum()
print (a)
0.9392
</code></pre>
<p><strong>Detail</strong>:</p>
<pre><code>print (df.set_index('time').between_time('01:15:00','05:30:00'))
energy
time
2018-01-01 01:15:00 0.0000
2018-01-01 01:30:00 0.0000
2018-01-01 01:45:00 0.0000
2018-01-01 02:00:00 0.0000
2018-01-01 02:15:00 0.0000
2018-01-01 02:30:00 0.0000
2018-01-01 02:45:00 0.0000
2018-01-01 03:00:00 0.0000
2018-01-01 03:15:00 0.0000
2018-01-01 03:30:00 0.0000
2018-01-01 03:45:00 0.0000
2018-01-01 04:00:00 0.0000
2018-01-01 04:15:00 0.0000
2018-01-01 04:30:00 0.0000
2018-01-01 04:45:00 0.0000
2018-01-01 05:00:00 0.0000
2018-01-01 05:15:00 0.0000
2018-01-01 05:30:00 0.9392
</code></pre>
|
python|python-3.x|pandas|python-datetime
| 2
|
1,444
| 48,932,611
|
Insert seam into a numpy matrix
|
<p>I have a numpy matrix with values in it. They won't be all the same, but the example is easier if I show it like this:</p>
<pre><code>input = np.array([
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]
])
</code></pre>
<p>Now I have another matrix of the same size. There is a "seam" of numbers, one number per row (not more, not less). The position they are in the <code>seam</code> matrix is where I want to insert them into the <code>input</code> matrix. All values in the input matrix are moved to the right.</p>
<p>For example if you apply this "seam"</p>
<pre><code>seam = np.array([
[0, 2, 0, 0],
[0, 0, 3, 0],
[0, 0, 0, 4]
])
</code></pre>
<p>to the <code>input</code> matrix, I want this output:</p>
<pre><code>output = np.array([
[1, 2, 1, 1, 1],
[1, 1, 3, 1, 1],
[1, 1, 1, 4, 1]
])
</code></pre>
<p>The input and seam matrix will always have the exact same dimensions. The output will have the same height and width + 1 of the input.</p>
<p>Is there an efficient way to perform this insertion? </p>
|
<p>Here is the straight-forward way using a mask to rearrange the input values:</p>
<pre><code>>>> m, n = seam.shape
>>> output = np.empty((m, n+1), input.dtype)
>>> mask = np.ones((m, n+1), dtype=bool)
>>> nz = np.where(seam)
>>> mask[nz] = False
>>> output[mask]=input.ravel()
>>> output[nz]=seam[nz]
>>> output
array([[1, 2, 1, 1, 1],
[1, 1, 3, 1, 1],
[1, 1, 1, 4, 1]])
</code></pre>
|
python|numpy
| 2
|
1,445
| 49,075,726
|
Compare values/strings in two columns in python pandas
|
<p>Python Pandas:
I want to compare values/strings in two columns of an Excel file and return a string/value in a new column based on a given condition.
I tried the code below, but the output is longer than the actual array.</p>
<p>Could someone help me sort it out?</p>
<pre><code>Resource = []
for x in df['Category']:
for y in df['Service_Line']:
if x=='low space'and y=='Intel':
Resource.append('Rhythm')
elif x=='log space' and y=='Intel':
Resource.append('Blue')
elif x=='CPU usage' and y=='Intel':
Resource.append('Jazz')
else:
Resource.append('Other')
print('Resource')
df['Resource'] = Resource
print(df)
</code></pre>
<p>Sample Data</p>
<pre><code>d = {'Category': {0: 'low space',1: 'CPU usage',2: 'log space',3: 'low volume',4: 'CPU usage',5: 'low volume',6: 'CPU usage',7: 'log space',8: 'log spac',9: 'other',10: 'other',11: 'Low space'},
'Service_Line': {0: 'Intel',1: 'SQL',2: 'Intel',3: 'BUR',4: 'AIX',5: 'BUR',
6: 'Intel',7: 'SQL',8: 'AIX',9:'SAN',10: 'SAN',11: 'SQL'},
'summary_data': {0: 'low space in server123',1: 'Server213f3 CPU usage', 2: 'getting more data in log space',3: 'low volume space in server',4: 'high CPU usage by application',5: 'low volume space in server',6: 'high CPU usage by application',7: 'getting more data in log space',8: 'getting more data in log space',9: 'space in server123',10: 'space in server123',11: np.nan}}
df = pd.DataFrame(d)
</code></pre>
<hr>
<pre><code> Category Service_Line summary_data
0 low space Intel low space in server123
1 CPU usage SQL Server213f3 CPU usage
2 log space Intel getting more data in log space
3 low volume BUR low volume space in server
4 CPU usage AIX high CPU usage by application
5 low volume BUR low volume space in server
6 CPU usage Intel high CPU usage by application
7 log space SQL getting more data in log space
8 log spac AIX getting more data in log space
9 other SAN space in server123
10 other SAN space in server123
11 Low space SQL NaN
</code></pre>
|
<pre><code>Resource = []
for i, x in enumerate(df['Category']):
    y = df['Service_Line'][ i ]
    if x=='low space'and y=='Intel':
        Resource.append('Rhythm')
    elif x=='log space' and y=='Intel':
        Resource.append('Blue')
    elif x=='CPU usage' and y=='Intel':
        Resource.append('Jazz')
    else:
        Resource.append('Other')
    print('Resource')
df['Resource'] = Resource
print(df)
</code></pre>
<p>This should work IIUC your problem. </p>
<p>The problem with your code is that it generates N*N values in Resource, since for each x it loops over all N values of y and appends a value to Resource every time.</p>
<p>You can also use df.index instead of enumerate, as shown below:</p>
<pre><code>for i in df.index:
x = df['Category'][ i ]
y = df['Service_Line'][ i ]
if x=='low space'and y=='Intel':
Resource.append('Rhythm')
elif x=='log space' and y=='Intel':
Resource.append('Blue')
elif x=='CPU usage' and y=='Intel':
Resource.append('Jazz')
else:
Resource.append('Other')
print('Resource')
df['Resource'] = Resource
print(df)
</code></pre>
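<p>If you prefer a vectorized alternative to the explicit loop (a different technique from the code above, sketched with NumPy's <code>np.select</code>):</p>
<pre><code>import numpy as np

conditions = [
    (df['Category'] == 'low space') & (df['Service_Line'] == 'Intel'),
    (df['Category'] == 'log space') & (df['Service_Line'] == 'Intel'),
    (df['Category'] == 'CPU usage') & (df['Service_Line'] == 'Intel'),
]
choices = ['Rhythm', 'Blue', 'Jazz']
df['Resource'] = np.select(conditions, choices, default='Other')
print(df)
</code></pre>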
|
python|pandas
| 0
|
1,446
| 49,053,080
|
Python Pandas Don't Repeat Item Labels
|
<p>I have a table: <a href="https://i.stack.imgur.com/ZP6Gx.png" rel="nofollow noreferrer">Table</a></p>
<p>How would I roll up Group, so that the group numbers don't repeat? I don't want to use pd.DataFrame.groupby, as I don't want to summarize the other columns. I just want to avoid repeating item labels, sort of like an Excel pivot table.</p>
<p>Thanks! </p>
|
<p>In your dataframe it appears that 'Group' is in the index; the purpose of the index is to label each row. Therefore, it is unusual and uncommon to have blank row indexes.</p>
<p>You could do this:</p>
<pre><code>df2.reset_index().set_index('Group', append=True).swaplevel(0,1,axis=0)
</code></pre>
<p>Or if you really must show blank row indexes you could do this, but you must change the dtype of the index to str.</p>
<pre><code>df1 = df.set_index('Group').astype(str)
df1.index = df1.index.where(~df1.index.duplicated(),[' '])
</code></pre>
|
python|pandas
| 0
|
1,447
| 58,607,480
|
In tensorflow V.2 Astroid error during TensorFlow installation and AttributeError: module tensorflow has no attribute Session
|
<p>I downloaded the Anaconda 3.7 version and then the TensorFlow GPU version (before that, I also downloaded CUDA v.10 and cuDNN).</p>
<p>However, one error occurred during the TensorFlow installation.</p>
<pre><code>ERROR: astroid 2.3.1 requires typed-ast<1.5,>=1.4.0; implementation_name == "cpython" and python_version < "3.8", which is not installed.
</code></pre>
<p>Is the problem above important? If it is, how can I fix it?</p>
<p>Also, I entered the following in Jupyter Notebook and IPython:</p>
<pre><code>import tensorflow as tf
hello=tf.constant('Hello,Tensorflow')
sess=tf.Session()
</code></pre>
<p>The following error appeared:</p>
<pre><code>AttributeError Traceback (most recent call last)
<ipython-input-3-0f0f12c24d95> in <module>
----> 1 sess=tf.Session()
AttributeError: module 'tensorflow' has no attribute 'Session'
</code></pre>
<p>How should I fix this?</p>
|
<p>In TF2, you can get a session through the compatibility interface.</p>
<pre><code>sess = tf.compat.v1.Session()
</code></pre>
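<p>Note that TF2 runs eagerly by default, so to run the graph-style example from the question end to end you will likely also need to disable eager execution first (a sketch, not verified against your exact setup):</p>
<pre><code>import tensorflow as tf

tf.compat.v1.disable_eager_execution()
hello = tf.constant('Hello, TensorFlow')
sess = tf.compat.v1.Session()
print(sess.run(hello))
</code></pre>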
|
tensorflow
| 0
|
1,448
| 58,714,618
|
Numba Invalid use of BoundFunction on np.astype
|
<p>I'm trying to compile a function that does some computation on an image patch using numba. Here is part of the code:</p>
<pre><code>@jit(nopython=True, parallel=True)
def value_at_patch(img, coords, imgsize, patch_radius):
x_center = coords[0]; y_center = coords[1];
r = patch_radius
s = 2*r+1
xvec = np.arange(x_center-r, x_center+r+1)
xvec[xvec <= 0] = 0 #prevent negative index
xvec = xvec.astype(int)
yvec = np.arange(y_center-r, y_center+r+1)
yvec[yvec <= 0] = 0
yvec = yvec.astype(int)
A = np.zeros((s,s))
#do some parallel computation on A
p = np.any(A)
return p
</code></pre>
<p>I'm able to compile the function, but when I run it, I get the following error message:</p>
<pre><code>Failed in nopython mode pipeline (step: nopython frontend)
Invalid use of BoundFunction(array.astype for array(float64, 1d, C)) with parameters (Function(<class 'int'>))
* parameterized
[1] During: resolving callee type: BoundFunction(array.astype for array(float64, 1d, C))
[2] During: typing of call at <ipython-input-17-90e27ac302a8> (42)
File "<ipython-input-17-90e27ac302a8>", line 42:
def value_at_patch(img, coords, imgsize, patch_radius):
<source elided>
xvec[xvec <= 0] = 0 #prevent negative index
xvec = xvec.astype(int)
^
</code></pre>
<p>I checked the numba documentation and np.astype should be supported with just one argument. Do you know what could be causing the problem?</p>
|
<p>Use <code>np.int64</code> in place of <code>int</code> in the following places:</p>
<pre><code>xvec = xvec.astype(np.int64)
yvec = yvec.astype(np.int64)
</code></pre>
|
python-3.x|numpy|jit|numba
| 6
|
1,449
| 59,006,469
|
What does ":" do in Python
|
<p>I was trying to understand the basic TensorFlow functions in <code>cifar10.py</code> from the Keras library, specifically the function <code>load_batch</code>:</p>
<pre><code>for i in range(1, 6):
fpath = os.path.join(path, 'data_batch_' + str(i))
(x_train[(i - 1) * 10000:i * 10000, :, :, :],
y_train[(i - 1) * 10000:i * 10000]) = load_batch(fpath)
</code></pre>
<p>What does <code>":"</code> mean in the line below?</p>
<pre><code>x_train[(i - 1) * 10000:i * 10000, :, :, :]
</code></pre>
|
<p>The <code>x_train</code> etc. are NumPy arrays (or similar structures that implement the same interface).
Here you can read about slicing in numpy <a href="https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html</a></p>
<p>Or more generic about slicing in python <a href="https://docs.python.org/3.7/library/stdtypes.html#common-sequence-operations" rel="nofollow noreferrer">https://docs.python.org/3.7/library/stdtypes.html#common-sequence-operations</a></p>
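<p>In short, each <code>:</code> means "take everything along that axis", so <code>x_train[(i - 1) * 10000:i * 10000, :, :, :]</code> selects a block of 10000 samples along the first axis and keeps all of the remaining three dimensions. A tiny illustration of my own (not from the Keras source):</p>
<pre><code>import numpy as np

a = np.arange(24).reshape(2, 3, 4)   # shape (2, 3, 4)
print(a[0:1, :, :].shape)            # (1, 3, 4): slice axis 0, keep everything on axes 1 and 2
print(a[:, 1, :].shape)              # (2, 4): ':' keeps everything along axes 0 and 2
</code></pre>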
|
python|tensorflow
| 1
|
1,450
| 58,716,539
|
How can I create a pandas data frame in a certain way?
|
<p>I need to create a pandas dataframe that contains all of the required information where each row of the dataframe should be one track. I also need to sort the dataframe by popularity score, so that the most popular track is at the top and the least popular is at the bottom. I tried many ways but they did not work. Your help is much appreciated. </p>
<p>I am sharing my nested dictionary.</p>
<pre><code>{'Artist name': ['Paramore', 'Weezer', 'Lizzo'],
'Track name': (['Still into You',
"Ain't It Fun",
'Hard Times',
'Misery Business',
'The Only Exception',
'Ignorance',
'Rose-Colored Boy',
'Fake Happy',
"That's What You Get",
'Brick by Boring Brick'],
['Island In The Sun',
"Say It Ain't So",
'Buddy Holly',
'Beverly Hills',
'Africa',
'The End of the Game',
'Hash Pipe',
'Undone - The Sweater Song',
'My Name Is Jonas',
'Take On Me'],
['Truth Hurts',
'Good As Hell',
'Good As Hell (feat. Ariana Grande) - Remix',
'Juice',
'Boys',
'Tempo (feat. Missy Elliott)',
'Blame It on Your Love (feat. Lizzo)',
'Soulmate',
'Water Me',
'Like A Girl']),
'Release date': (['2013-04-05',
'2013-04-05',
'2017-05-12',
'2007-06-11',
'2009-09-28',
'2009-09-28',
'2017-05-12',
'2017-05-12',
'2007-06-11',
'2009-09-28'],
['2001-05-15',
'1994-05-10',
'1994-05-10',
'2005-05-10',
'2019-01-24',
'2019-09-10',
'2001-05-15',
'1994-05-10',
'1994-05-10',
'2019-01-24'],
['2019-05-03',
'2016-03-09',
'2019-10-25',
'2019-04-19',
'2019-04-18',
'2019-04-19',
'2019-09-13',
'2019-04-19',
'2019-04-18',
'2019-04-19']),
'Popularity score': ([76, 74, 73, 73, 72, 69, 66, 66, 65, 65],
[77, 75, 73, 71, 67, 67, 66, 65, 63, 62],
[94, 90, 86, 84, 72, 78, 68, 72, 58, 71])}
</code></pre>
|
<p>There are definitely more efficient ways, but here's a solution</p>
<pre><code>import pandas as pd
def gen_artist_frame(d):
categories = [c for c in d.keys()]
for idx, artist in enumerate(d['Artist name']):
artist_mat = [d[j][idx] for j in categories[1:]]
artist_frame = pd.DataFrame(artist_mat, index=categories[1:]).T
artist_frame[categories[0]] = artist
yield artist_frame
def collapse_nested_artist(d):
return pd.concat([
a for a in gen_artist_frame(d)
])
d = {'Artist name': ['Paramore', 'Weezer', 'Lizzo'],
'Track name': (['Still into You',
"Ain't It Fun",
'Hard Times',
'Misery Business',
'The Only Exception',
'Ignorance',
'Rose-Colored Boy',
'Fake Happy',
"That's What You Get",
'Brick by Boring Brick'],
['Island In The Sun',
"Say It Ain't So",
'Buddy Holly',
'Beverly Hills',
'Africa',
'The End of the Game',
'Hash Pipe',
'Undone - The Sweater Song',
'My Name Is Jonas',
'Take On Me'],
['Truth Hurts',
'Good As Hell',
'Good As Hell (feat. Ariana Grande) - Remix',
'Juice',
'Boys',
'Tempo (feat. Missy Elliott)',
'Blame It on Your Love (feat. Lizzo)',
'Soulmate',
'Water Me',
'Like A Girl']),
'Release date': (['2013-04-05',
'2013-04-05',
'2017-05-12',
'2007-06-11',
'2009-09-28',
'2009-09-28',
'2017-05-12',
'2017-05-12',
'2007-06-11',
'2009-09-28'],
['2001-05-15',
'1994-05-10',
'1994-05-10',
'2005-05-10',
'2019-01-24',
'2019-09-10',
'2001-05-15',
'1994-05-10',
'1994-05-10',
'2019-01-24'],
['2019-05-03',
'2016-03-09',
'2019-10-25',
'2019-04-19',
'2019-04-18',
'2019-04-19',
'2019-09-13',
'2019-04-19',
'2019-04-18',
'2019-04-19']),
'Popularity score': ([76, 74, 73, 73, 72, 69, 66, 66, 65, 65],
[77, 75, 73, 71, 67, 67, 66, 65, 63, 62],
[94, 90, 86, 84, 72, 78, 68, 72, 58, 71])}
frame = collapse_nested_artist(d)
</code></pre>
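<p>The question also asks to sort by popularity; assuming the column names produced above, a final sort could be added like so:</p>
<pre><code>frame = frame.sort_values('Popularity score', ascending=False).reset_index(drop=True)
print(frame.head())
</code></pre>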
|
python|pandas|nested-loops
| 0
|
1,451
| 70,144,331
|
TypeError: Expected float32, but got auto of type 'str'. Tensorflow error , tell me how to fix it?
|
<p>I got <em><strong>TypeError: Expected float32, but got auto of type 'str'.</strong></em> error while fitting the sequential model.
I checked my inputs both are numpy.ndarray.</p>
<pre><code>type(xtrain),type(ytrain)
(numpy.ndarray, numpy.ndarray)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape = (28,28)))
model.add(tf.keras.layers.Dense(32,activation='relu'))
model.add(tf.keras.layers.Dense(32,activation='relu'))
model.add(tf.keras.layers.Dense(10,activation=tf.keras.activations.softmax))
model.compile(loss = tf.keras.losses.SparseCategoricalCrossentropy,optimizer =
tf.keras.optimizers.Adam(learning_rate=.0001),metrics = ['accuracy'])
model.fit(x =xtrain,y = ytrain,epochs=100)
Epoch 1/100
</code></pre>
<hr />
<p>TypeError Traceback (most recent call last)
in ()
----> 1 model.fit(x =xtrain,y = ytrain,epochs=100)</p>
<p>1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1127 except Exception as e: # pylint:disable=broad-except
1128 if hasattr(e, "ag_error_metadata"):
-> 1129 raise e.ag_error_metadata.to_exception(e)
1130 else:
1131 raise</p>
<p>TypeError: in user code:</p>
<pre><code>File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 878, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 867, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 860, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 810, in train_step
y, y_pred, sample_weight, regularization_losses=self.losses)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/compile_utils.py", line 201, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 141, in __call__
losses = call_fn(y_true, y_pred)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 245, in call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 752, in __init__ **
from_logits=from_logits)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 227, in __init__
super().__init__(reduction=reduction, name=name)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 88, in __init__
losses_utils.ReductionV2.validate(reduction)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/losses_utils.py", line 82, in validate
if key not in cls.all():
TypeError: Expected float32, but got auto of type 'str'.
</code></pre>
|
<p>The error may be in this part of the code:</p>
<pre><code>model.compile(loss = tf.keras.losses.SparseCategoricalCrossentropy,optimizer = tf.keras.optimizers.Adam(learning_rate=.0001),metrics = ['accuracy'])
</code></pre>
<p>Try changing the loss parameter from <code>tf.keras.losses.SparseCategoricalCrossentropy</code> to <code>tf.keras.losses.SparseCategoricalCrossentropy()</code>.</p>
<p>For some clarity, the difference between the two is that with <code>tf.keras.losses.SparseCategoricalCrossentropy</code> you are <em>not</em> passing an instance of the class, while with <code>tf.keras.losses.SparseCategoricalCrossentropy()</code> you are.</p>
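<p>For reference, a corrected <code>compile</code> call could look like this (only the loss argument changes):</p>
<pre><code>model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              optimizer=tf.keras.optimizers.Adam(learning_rate=.0001),
              metrics=['accuracy'])
</code></pre>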
|
python|tensorflow|deep-learning|typeerror|tensorflow2.0
| 2
|
1,452
| 70,133,911
|
How to extract a word and its next 2 digit?
|
<p>I have this pandas dataframe</p>
<p><a href="https://i.stack.imgur.com/7hRKD.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>I want to extract the words <code>id</code> and <code>nid</code> together with their next 2 digits from the log column using Python. The output should be like this:</p>
<p><a href="https://i.stack.imgur.com/VbLr0.png" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>Using <code>str.extract</code> we can try:</p>
<pre class="lang-py prettyprint-override"><code>df["log"] = df["log"].str.extract(r'\b(n?id \d+)')
</code></pre>
<p>Here is a <a href="https://regex101.com/r/9XI3Zx/1" rel="nofollow noreferrer">regex demo</a>.</p>
|
python|python-3.x|pandas|dataframe
| 1
|
1,453
| 70,222,006
|
Need to use apply or broadcasting and masking to iterate over a DataFrame
|
<p>I have a data frame that I need to iterate over. I want to use either apply or broadcasting and masking. This is the pseudocode I am trying to improve upon.</p>
<p>2 The algorithm
Algorithm 1: The algorithm
<em>initialize</em> the population (of size n) uniformly randomly, obeying the bounds;
<strong>while</strong> <em>a pre-determined number of iterations is not complete</em> <strong>do</strong>
<em>set the random parameters (two independent parameters for each of the d
variables); find the best and the worst vectors in the population;</em>
<strong>for</strong> each vector in the population <strong>do</strong> create a new vector using the
current vector, the best vector, the worst vector, and the random
parameters;
if the new vector is at least as good as the current vector then
current vector = new vector;</p>
<p>This is the code I have so far.</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.uniform(-5.0, 10.0, size = (20, 5)), columns = list('ABCDE'))
pd.set_option('display.max_columns', 500)
df
#while portion of pseudocode
f_func = np.square(df).sum(axis=1)
final_func = np.square(f_func)
xti_best = final_func.idxmin()
xti_worst = final_func.idxmax()
print(final_func)
print(df.head())
print(df.tail())
*#for loop of pseudocode
#for row in df.iterrows():
#implement equation from assignment
#define in array math
#xi_new = row.to_numpy() + np.random.uniform(0, 1, size = (1, 5)) * (df.iloc[xti_best].values - np.absolute(row.to_numpy())) - np.random.uniform(0, 1, size = (1, 5)) * (df.iloc[xti_worst].values - np.absolute(row.to_numpy()))
#print(xi_new)*
df2 = df.apply(lambda row: 0 if row == 0 else row + np.random.uniform(0, 1, size = (1, 5)) * (df.iloc[xti_best].values - np.absolute(axis = 1)))
print(df2)
</code></pre>
<p>The formula I am trying to use for xi_new is:</p>
<p>#xi_new = xi_current + random value between 0,1(xti_best -abs(xi_current)) - random value(xti_worst - abs(xi_current))</p>
|
<p>I'm not sure I'm implementing your formula correctly, but hopefully this helps</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.uniform(-5.0, 10.0, size = (20, 5)), columns = list('ABCDE'))
#while portion of pseudocode
f_func = np.square(df).sum(axis=1)
final_func = np.square(f_func)
xti_best_idx = final_func.idxmin()
xti_worst_idx = final_func.idxmax()
xti_best = df.loc[xti_best_idx]
xti_worst = df.loc[xti_worst_idx]
#Calculate random values for the whole df for the two different areas where you need randomness
nrows,ncols = df.shape
r1 = np.random.uniform(0, 1, size = (nrows, ncols))
r2 = np.random.uniform(0, 1, size = (nrows, ncols))
#xi_new = xi_current + random value between 0,1(xti_best -abs(xi_current)) - random value(xti_worst - abs(xi_current))
df= df+r1*xti_best.sub(df.abs())-r2*xti_worst.sub(df.abs())
df
</code></pre>
|
python|pandas|numpy|apply|array-broadcasting
| 0
|
1,454
| 70,273,929
|
Fix the color order in a plotly gantt chart
|
<p>Trying to plot a gantt chart using plotly and a pandas dataframe. The plot comes fine, except the colors are in a different order.</p>
<p>Here is the dataframe</p>
<pre><code> Goss got
timestamp
16-01-21 10:00:09 M Item1
04-02-21 20:28:30 T Item2
06-02-21 00:45:57 V Item3
24-07-21 11:07:39 M Item4
26-08-21 16:15:01 T Item5
28-08-21 14:05:13 I Item6
02-09-21 16:57:26 D Item7
15-09-21 15:35:07 M Item8
13-10-21 18:19:42 D Item9
18-10-21 02:39:40 G Item10
31-10-21 22:07:27 D Item11
01-11-21 09:58:21 I Item12
01-11-21 11:37:44 V Item13
28-11-21 10:44:39 K Item14
28-11-21 14:52:39 M Item15
29-11-21 18:40:32 G Item16
01-12-21 23:34:00 G Item17
03-12-21 00:11:11 T Item18
05-12-21 15:55:50 G Item19
</code></pre>
<p>We process the above data as follows:</p>
<pre><code>import pandas as pd
import plotly
from plotly import figure_factory as FF
gdf=pd.read_clipboard(sep='\s\s+') #with this the df data above can be copied and read to a df
gdf.reset_index(inplace=True)
gdf.timestamp = pd.to_datetime(gdf.timestamp,format="%d-%m-%y %H:%M:%S")
gdf=pd.concat([gdf.timestamp.shift(),gdf], axis=1)
gdf.columns=['Start', 'Finish', 'Goss', 'Task']
gdf.at[0,'Start'] = pd.Timestamp("2021-01-01-00-00-00")
gfig = FF.create_gantt(gdf, colors=['#FFFF00',
'#A0522D',
'#808000',
'#008000',
'#FF0000',
'#00CED1',
'#00FFFF'],
index_col='Goss', show_colorbar=True,
bar_width=0.2, showgrid_x=True, showgrid_y=True)
plotly.offline.plot(gfig, filename='gorge.html')
</code></pre>
<p>Read the data from clipboard using
<code>gdf=pd.read_clipboard(sep='\s\s+')</code></p>
<p>My data gets improperly colored. I tried few reorder of the colors, didn't come out correctly. The correct order should be as given in the <code>dict</code> below, item followed by the color it should have.</p>
<pre><code>{'T':'#FFFF00','G':'#A0522D','M':'#808000','D':'#008000','V':'#FF0000', 'K':'#00CED1','I':'#00FFFF'}
</code></pre>
|
<p>You can update the colors after the figure is created.</p>
<pre><code>cmap = {'T':'#FFFF00','G':'#A0522D','M':'#808000','D':'#008000','V':'#FF0000', 'K':'#00CED1',"I":'#00FFFF'}
gfig.for_each_trace(lambda t: t.update(fillcolor=cmap[t.name]) if t.name in cmap.keys() else t)
</code></pre>
|
pandas|plotly-python|gantt-chart
| 0
|
1,455
| 70,257,040
|
pandas dataframe columns to a single cell
|
<p>I have the dataframe:</p>
<pre><code>df = A B l1 l2 l3
1 1 2 3 4
1 1 3 5 7
1 1 1 2 9
1 2 2 7 8
</code></pre>
<p>I want to groupby A,B , per columns, and put the values as a series in a cell.
So the output will be:</p>
<pre><code>df = A B l1 l2 l3
1 1 2,3,1 3,5,2 4,7,9
1 2 2 7 8
</code></pre>
<p>How can I do it? (efficiently)</p>
<p>Also, What is the solution of no ID columns?
so</p>
<pre><code>df = l1 l2 l3
2 3 4
3 5 7
1 2 9
2 7 8
</code></pre>
<p>and the output:</p>
<pre><code>df = l1 l2 l3
2,3,1,2 3,5,2,7 4,7,9,8
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.agg.html" rel="nofollow noreferrer"><code>GroupBy.agg</code></a> with lambda function with cast to strings and <code>join</code>:</p>
<pre><code>df1 = df.groupby(['A','B']).agg(lambda x: ','.join(x.astype(str))).reset_index()
print (df1)
A B l1 l2 l3
0 1 1 2,3,1 3,5,2 4,7,9
1 1 2 2 7 8
</code></pre>
<p>For second:</p>
<pre><code>df2 = df.astype(str).agg(','.join).to_frame().T
print (df2)
l1 l2 l3
0 2,3,1,2 3,5,2,7 4,7,9,8
</code></pre>
<p>If there are strings:</p>
<pre><code>df1 = df.groupby(['A','B']).agg(','.join).reset_index()
df2 = df.agg(','.join).to_frame().T
</code></pre>
|
python|pandas|dataframe|pandas-groupby
| 3
|
1,456
| 56,433,820
|
How to groupby one column with 3 conditions in multiple columns
|
<p>I have a dataframe where I need to apply the conditions below:</p>
<ol>
<li>Check if colA > 0</li>
<li>If it is, search for string "recycled" in colB and compare if its present in colC</li>
<li>If it satisfies, its true else false</li>
</ol>
<p>dataframe:</p>
<pre><code> Temp colA colB colC
ob1 50 HDP HDP
ob1 50 HDP recycled HDP
ob1 50 HDP HDP
ob2 0 PE PE
ob2 0 PE PE
ob3 30 PE recycled PE recycled
ob3 30 PE PE recycled
</code></pre>
<p>output:</p>
<pre><code> Temp colA colB colC output
ob1 50 HDP recycled HDP Anomaly
ob2 0 PE PE Pass
ob3 30 PE recycled PE recycled Pass
</code></pre>
<p>code i tried:</p>
<pre><code> f=pp.groupby('Temp')['colB'].apply(lambda x:
x.str.contains('Recycled').any()).map({True:'Pass',False:'anomaly'})
</code></pre>
|
<p>Try using Rank function</p>
<pre><code>data['Rank'] = data.groupby('Temp')['output'].rank(method='dense',ascending=True)
data['Final'] = data.groupby('Temp')['Rank'].rank(method='first',ascending=True)
</code></pre>
|
python-3.x|string|dataframe|pandas-groupby
| 1
|
1,457
| 56,380,114
|
Python Loop, Need to / Can't Retain Value from Original Dataframe
|
<p>I'm attempting to loop through groups of phrases to match and score them among all the members in each group. Even if some of the phrases are the same, they may have different Codes which is what I'm trimming from the loop inputs - but need to retain in the final <code>df2</code>. I have to make the comparison in the loop without the code but the issue is tying it back to the original df that contains the code so I can identify which rows need to be flagged.</p>
<p>The code below works but I need to add the original <code>DESCR</code> to df2. Appending a and b only contains the trim.</p>
<p>I've tried <code>df.at[]</code> but have mixed, incorrect results. Thank you.</p>
<pre><code>import pandas as pd
from fuzzywuzzy import fuzz as fz
import itertools
data = [[1,'Oneab'],[1,'Onebc'],[1,'Twode'],[2,'Threegh'],[2,'Threehi'],[2,'Fourjk'],[3,'Fivekl'],[3,'Fivelm'],[3,'Fiveyz']]
df = pd.DataFrame(data,columns=['Ids','DESCR'])
n_list = []
a_list = []
b_list = []
pr_list = []
tsr_list = []
groups = df.groupby('Ids')
for n,g in groups:
for a, b in itertools.product(g['DESCR'].str[:-2],g['DESCR'].str[:-2]):
if str(a) < str(b):
try:
n_list.append(n)
a_list.append(a)
b_list.append(b)
pr_list.append(fz.partial_ratio(a,b))
tsr_list.append(fz.token_set_ratio(a,b))
except:
pass
df2 = pd.DataFrame({'Group': n_list, 'First Comparator': a_list, 'Second Comparator': b_list, 'Partial Ratio': pr_list, 'Token Set Ratio': tsr_list})
</code></pre>
<p>Instead of:</p>
<pre><code>ab bc 50 50
ab de 0 0
bc de 0 0
gh hi 50 50
gh jk 0 0
hi jk 50 50
...
</code></pre>
<p>I'd like to see:</p>
<pre><code>Oneab Onebc 50 50
Oneab Twode 0 0
Onebc Twode 0 0
Threegh Threehi 50 50
Threegh Fourjk 0 0
Threehi Fourjk 50 50
...
</code></pre>
|
<p>In case anyone else runs into a similar issue - figured it out - instead of filtering the inputs at the beginning of the second level loop, I'm bringing the full value into the second loop and stripping it there:</p>
<pre><code>a2 = a[:-2]
b2 = b[:-2]
</code></pre>
<p>So:</p>
<pre><code>import pandas as pd
from fuzzywuzzy import fuzz as fz
import itertools
data = [[1,'Oneab'],[1,'Onebc'],[1,'Twode'],[2,'Threegh'],[2,'Threehi'],[2,'Fourjk'],[3,'Fivekl'],[3,'Fivelm'],[3,'Fiveyz']]
df = pd.DataFrame(data,columns=['Ids','DESCR'])
n_list = []
a_list = []
b_list = []
pr_list = []
tsr_list = []
groups = df.groupby('Ids')
for n,g in groups:
for a, b in itertools.product(g['DESCR'],g['DESCR']):
if str(a) < str(b):
try:
a2 = a[:-2]
b2 = b[:-2]
n_list.append(n)
a_list.append(a)
b_list.append(b)
pr_list.append(fz.partial_ratio(a2,b2))
tsr_list.append(fz.token_set_ratio(a2,b2))
except:
pass
df2 = pd.DataFrame({'Group': n_list, 'First Comparator': a_list, 'Second Comparator': b_list, 'Partial Ratio': pr_list, 'Token Set Ratio': tsr_list})
</code></pre>
|
python|pandas|loops|dataframe
| 0
|
1,458
| 55,835,671
|
How to Perform a Complicated Join in Pandas with Interaction Terms from Statsmodel output
|
<p>This is an extension of this question:
<a href="https://stackoverflow.com/questions/55767468/to-join-complicated-pandas-tables/55770736?noredirect=1#comment98224838_55770736">To join complicated pandas tables</a></p>
<p>I have three different interactions in a <code>statsmodels</code> GLM. I need a final table that pairs coefficients with other univariate analysis results. </p>
<p>Below is an example of what the tables look like with a marital status and age interaction in the model. The <code>final_table</code> is the table that has the univariate results in. I want to join coefficient values (among other statistics, p_values, standard_error etc) from the model results to that final table (this is model_results in the code below). </p>
<pre><code>df = {'variable': ['CLded_model','CLded_model','CLded_model','CLded_model','CLded_model','CLded_model','CLded_model'
,'married_age','married_age','married_age', 'class_cc', 'class_cc', 'class_cc', 'class_cc', 'class_v_age'
,'class_v_age','class_v_age', 'class_v_age'],
'level': [0,100,200,250,500,750,1000, 'M_60', 'M_61', 'S_62', 'Harley_100', 'Harley_1200', 'Sport_1500', 'other_100'
,'Street_10', 'other_20', 'Harley_15', 'Sport_10'],
'value': [460955.7793,955735.0532,586308.4028,12216916.67,48401773.87,1477842.472,14587994.92,10493740.36,36388470.44
,31805316.37, 123.4, 4546.50, 439854.23, 2134.4, 2304.5, 2032.30, 159.80, 22]}
final_table1 = pd.DataFrame(df)
final_table1
</code></pre>
<p>Join the above with it's different ways statsmodels communicates the results to:</p>
<pre><code>df2 = {'variable': ['intercept','driver_age_model:C(marital_status_model)[M]', 'driver_age_model:C(marital_status_model)[S]'
, 'CLded_model','C(class_model)[Harley]:v_age_model', 'C(class_model)[Sport]:v_age_model'
,'C(class_model)[Street]:v_age_model', 'C(class_model)[other]:v_age_model'
, 'C(class_model)[Harley]:cc_model', 'C(class_model)[Sport]:cc_model' , 'C(class_model)[Street]:cc_model'
, 'C(class_model)[other]:cc_model']
,'coefficient': [-2.36E-14,-1.004648e-02,-1.071730e-02, 0.00174356,-0.07222433,-0.146594998,-0.168168491,-0.084420399
,-0.000181233,0.000872798,0.001229771,0.001402564]}
model_results = pd.DataFrame(df2)
model_results
</code></pre>
<p>With the desired final result: </p>
<pre><code>df3 = {'variable': ['intercept', 'CLded_model','CLded_model','CLded_model','CLded_model','CLded_model','CLded_model','CLded_model'
,'married_age','married_age','married_age', 'class_cc', 'class_cc', 'class_cc', 'class_cc', 'class_v_age'
,'class_v_age','class_v_age', 'class_v_age'],
'level': [None,0,100,200,250,500,750,1000, 'M_60', 'M_61', 'S_62', 'Harley_100', 'Harley_1200', 'Sport_1500', 'other_100'
,'Street_10', 'other_20', 'Harley_15', 'Sport_10'],
'value': [None, 460955.7793,955735.0532,586308.4028,12216916.67,48401773.87,1477842.472,14587994.92,10493740.36,36388470.44
,31805316.37, 123.4, 4546.50, 439854.23, 2134.4, 2304.5, 2032.30, 159.80, 22],
'coefficient': [-2.36E-14, 0.00174356, 0.00174356, 0.00174356, 0.00174356, 0.00174356 ,0.00174356 , 0.00174356
,-1.004648e-02, -1.004648e-02,-1.071730e-02,-1.812330e-04,-1.812330e-04,8.727980e-04,1.402564e-03
,-1.681685e-01, -8.442040e-02, -1.812330e-04, -1.465950e-01]}
results = pd.DataFrame(df3)
results
</code></pre>
<p>When I implemented the first answer, it affected this one. </p>
|
<pre><code>df = {'variable': ['CLded_model','CLded_model','CLded_model','CLded_model','CLded_model','CLded_model','CLded_model','married_age','married_age','married_age'],
'level': [0,100,200,250,500,750,1000, 'M_60', 'M_61', 'S_62'],
'value': [460955.7793,955735.0532,586308.4028,12216916.67,48401773.87,1477842.472,14587994.92,10493740.36,36388470.44,31805316.37]}
df2 = {'variable': ['intercept','driver_age_model:C(marital_status_model)[M]', 'driver_age_model:C(marital_status_model)[S]', 'CLded_model'],
'coefficient': [-2.36E-14,-1.004648e-02,-1.071730e-02, 0.00174356]}
df3 = {'variable': ['intercept', 'CLded_model','CLded_model','CLded_model','CLded_model','CLded_model','CLded_model','CLded_model','married_age','married_age','married_age'],
'level': [None, 0,100,200,250,500,750,1000, 'M_60', 'M_61', 'S_62'],
'value': [None, 60955.7793,955735.0532,586308.4028,12216916.67,48401773.87,1477842.472,14587994.92,10493740.36, 36388470.44,31805316.37],
'coefficient': [-2.36E-14, 0.00174356, 0.00174356, 0.00174356, 0.00174356, 0.00174356 ,0.00174356 , 0.00174356,-1.004648e-02, -1.004648e-02,-1.071730e-02]}
final_table = pd.DataFrame(df)
model_results = pd.DataFrame(df2)
results = pd.DataFrame(df3)
# Change slightly df to match what we're going to merge
final_table.loc[final_table['variable'] == 'married_age', 'variable'] = 'married_age-'+final_table.loc[final_table['variable'] == 'married_age', 'level'].str[0]
# Clean df2 and get it ready for merge
model_results['variable'] = model_results['variable'].str.replace('driver_age_model:C\(marital_status_model\)\[', 'married_age-')\
.str.strip('\]')
# Merge
df4 = final_table.merge(model_results, how = 'outer', left_on = 'variable', right_on = 'variable')
#Clean
df4['variable'] = df4['variable'].str.replace('-.*', '', regex = True)
</code></pre>
<p>Pretty much the same thing as last time, the only difference was how you clean df2.</p>
|
python-3.x|pandas|join|data-manipulation
| 1
|
1,459
| 56,004,287
|
Loaded keras model in flask always predict same class
|
<p>Weird thing is happening to me. I trained a sentiment analysis model using keras as follows: </p>
<pre><code>max_fatures = 2000
tokenizer = Tokenizer(num_words=max_fatures, split=' ')
tokenizer.fit_on_texts(data)
X = tokenizer.texts_to_sequences(data)
X = pad_sequences(X)
with open('tokenizer.pkl', 'wb') as fid:
_pickle.dump(tokenizer, fid)
le = LabelEncoder()
le.fit(["pos", "neg"])
y = le.transform(data_labels)
y = keras.utils.to_categorical(y)
embed_dim = 128
lstm_out = 196
model = Sequential()
model.add(Embedding(max_fatures, embed_dim, input_length=X.shape[1]))
model.add(SpatialDropout1D(0.4))
model.add(LSTM(lstm_out, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
batch_size = 32
model.fit(X, y, epochs=10, batch_size=batch_size, verbose=2)
model.save('deep.h5')
</code></pre>
<p>When I load it in another python file, everything is fine. But when I load it inside a flask web application, all predicted classes are positive. What is going wrong? Here is the code that I use in my flask web application:</p>
<pre><code>with open('./resources/model/tokenizer.pkl', 'rb') as handle:
keras_tokenizer = _pickle.load(handle)
K.clear_session()
model = load_model('./resources/model/deep.h5')
model._make_predict_function()
session = K.get_session()
global graph
graph = tf.get_default_graph()
graph.finalize()
stop_words = []
with open('./resources/stopwords.txt', encoding="utf8") as f:
stop_words = f.read().splitlines()
normalizer = Normalizer()
stemmer = Stemmer()
tokenizer = RegexpTokenizer(r'\w+')
def predict_class(text):
tokens = tokenizer.tokenize(text)
temp = ''
for token in tokens:
if token in stop_words:
continue
token = normalizer.normalize(token)
token = stemmer.stem(token)
temp += token + ' '
if not temp.strip():
return None
text = keras_tokenizer.texts_to_sequences(temp.strip())
text = pad_sequences(text, maxlen=41)
le = LabelEncoder()
le.fit(["pos", "neg"])
with session.as_default():
with graph.as_default():
sentiment = model.predict_classes(text)
return le.inverse_transform(sentiment)[0]
</code></pre>
|
<p>You are saving the model architecture but not its weights!</p>
<p>Given that you are using Keras and its tokenizer, I have found that the best way to load and reuse your models is to use the json representation for the architecture and the tokenizer and save the weights with h5:</p>
<pre><code>def save(model):
# Save the trained weights
model.save_weights('model_weights.h5')
# Save the model architecture
with open('model_architecture.json', 'w') as f:
f.write(model.to_json())
# Save the tokenizer
with open('tokenizer.json', 'w') as f:
f.write(tokenizer.to_json())
</code></pre>
<p>Then at your flask app load them like this:</p>
<pre><code>from keras.models import model_from_json
from keras.preprocessing.text import tokenizer_from_json

def models():
with open('models/tokenizer.json') as f:
tokenizer = tokenizer_from_json(f.read())
# Model reconstruction from JSON file
with open('models/model_architecture.json', 'r') as f:
model = model_from_json(f.read())
# Load weights into the new model
model.load_weights('models/model_weights.h5')
return model, tokenizer
</code></pre>
|
python|tensorflow|machine-learning|flask|keras
| 0
|
1,460
| 55,870,502
|
TensorFlow function to check whether a value is in given range?
|
<p>I know there is <code>tf.greater(x,y)</code> which will return true if x > y (element-wise). Is there a function that returns true if lower_bound < x < upper_bound (element-wise) for a tensor x?</p>
|
<p>There's not a specific function for that, but you can use a combination of <code>tf.greater</code>, <code>tf.less</code>, and <code>tf.logical_and</code> to get the same result.</p>
<pre><code>lower_tensor = tf.greater(x, lower)
upper_tensor = tf.less(x, upper)
in_range = tf.logical_and(lower_tensor, upper_tensor)
</code></pre>
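<p>For example, counting how many elements fall in a range (TF 1.x style; the bounds and values here are only illustrative):</p>
<pre><code>import tensorflow as tf

x = tf.constant([1.0, 3.0, 4.0, 6.0])
lower, upper = 2.0, 5.0

in_range = tf.logical_and(tf.greater(x, lower), tf.less(x, upper))
count = tf.reduce_sum(tf.cast(in_range, tf.int32))

with tf.Session() as sess:
    print(sess.run(in_range))  # [False  True  True False]
    print(sess.run(count))     # 2
</code></pre>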
|
python|tensorflow
| 3
|
1,461
| 55,673,302
|
Convert pandas column of lists into matrix representation (One Hot Encoding)
|
<p>I have a pandas column with lists of values of varying length like so:</p>
<pre><code> idx lists
0 [1,3,4,5]
1 [2]
2 [3,5]
3 [2,3,5]
</code></pre>
<p>I'd like to convert them into a matrix format where each possible value represents a column and each row populates a 1 if the value exists and 0 otherwise, like so:</p>
<pre><code>idx 1 2 3 4 5
0 1 0 1 1 1
1 0 1 0 0 0
2 0 0 1 0 1
3 0 1 1 0 1
</code></pre>
<p>I thought the term for this was one hot encoding, but I tried to use the pd.get_dummies method which states it can do one-hot encoding, but when I try to feed input as shown above:</p>
<pre><code>test_hot = pd.Series([[1,2,3],[3,4,5],[1,6]])
pd.get_dummies(test_hot)
</code></pre>
<p>I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda3/lib/python3.7/site-packages/pandas/core/reshape/reshape.py", line 899, in get_dummies
dtype=dtype)
File "/opt/anaconda3/lib/python3.7/site-packages/pandas/core/reshape/reshape.py", line 906, in _get_dummies_1d
codes, levels = _factorize_from_iterable(Series(data))
File "/opt/anaconda3/lib/python3.7/site-packages/pandas/core/arrays/categorical.py", line 2515, in _factorize_from_iterable
cat = Categorical(values, ordered=True)
File "/opt/anaconda3/lib/python3.7/site-packages/pandas/core/arrays/categorical.py", line 347, in __init__
codes, categories = factorize(values, sort=False)
File "/opt/anaconda3/lib/python3.7/site-packages/pandas/util/_decorators.py", line 178, in wrapper
return func(*args, **kwargs)
File "/opt/anaconda3/lib/python3.7/site-packages/pandas/core/algorithms.py", line 630, in factorize
na_value=na_value)
File "/opt/anaconda3/lib/python3.7/site-packages/pandas/core/algorithms.py", line 476, in _factorize_array
na_value=na_value)
File "pandas/_libs/hashtable_class_helper.pxi", line 1601, in pandas._libs.hashtable.PyObjectHashTable.get_labels
TypeError: unhashable type: 'list'
</code></pre>
<p>The method works fine if I'm feeding a single list of values such as:</p>
<pre><code>[1,2,3,4,5]
</code></pre>
<p>It will show a 5x5 matrix but only populates a single row with a 1. I'm trying to expand this so that more than 1 value can be populated per row by feeding a column of lists. </p>
|
<p>If performance is important use <code>MultiLabelBinarizer</code>:</p>
<pre><code>test_hot = pd.Series([[1,2,3],[3,4,5],[1,6]])
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
df = pd.DataFrame(mlb.fit_transform(test_hot),columns=mlb.classes_)
print (df)
1 2 3 4 5 6
0 1 1 1 0 0 0
1 0 0 1 1 1 0
2 1 0 0 0 0 1
</code></pre>
<p>Your solution should be changed with create <code>DataFrame</code>, reshape and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>DataFrame.stack</code></a>, last using <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html" rel="nofollow noreferrer"><code>get_dummies</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.max.html" rel="nofollow noreferrer"><code>DataFrame.max</code></a> for aggregate:</p>
<pre><code>df = (pd.get_dummies(pd.DataFrame(test_hot.values.tolist()).stack().astype(int))
        .max(level=0, axis=0))
print (df)
1 2 3 4 5 6
0 1 1 1 0 0 0
1 0 0 1 1 1 0
2 1 0 0 0 0 1
</code></pre>
<p><strong>Details</strong>:</p>
<p>Created <code>MultiIndex Series</code>:</p>
<pre><code>print(pd.DataFrame(test_hot.values.tolist()).stack().astype(int))
0 0 1
1 2
2 3
1 0 3
1 4
2 5
2 0 1
1 6
dtype: int32
</code></pre>
<p>Call <code>pd.get_dummies</code>:</p>
<pre><code>print (pd.get_dummies(pd.DataFrame(test_hot.values.tolist()).stack().astype(int)))
1 2 3 4 5 6
0 0 1 0 0 0 0 0
1 0 1 0 0 0 0
2 0 0 1 0 0 0
1 0 0 0 1 0 0 0
1 0 0 0 1 0 0
2 0 0 0 0 1 0
2 0 1 0 0 0 0 0
1 0 0 0 0 0 1
</code></pre>
<p>And last aggregate <code>max</code> per first level. </p>
|
python|pandas|list
| 3
|
1,462
| 64,955,021
|
How can I add info from one row to another row multiple times before a specific word?
|
<p>Suppose I have a table as following:</p>
<pre><code>ID Description
1 code: xyz; code:axy
2 code: 238a; code: 34u; code: 482m
3 code: 24sd
4 code: 3edn; code: 3n23
</code></pre>
<p>And I want the following table:</p>
<pre><code>ID Description
1 code: xyz
1 code: axy
2 code: 238a
2 code: 34u
2 code: 482m
3 code: 24sd
4 code: 3edn
4 code: 3n23
</code></pre>
<p>My actual code:</p>
<pre><code>for line in fhand["Description"]:
x = line.replace(";" , ",")
y = txt.replace("code", [fhand['Id'] + "code"])
</code></pre>
<p>But, as you can imagine, it is not working. Can anyone help me with this?</p>
|
<p>You could try (I'm assuming your DataFrame is named <code>fhand</code>, as in your code example):</p>
<pre><code>fhand['Description'] = fhand['Description'].str.split(';')
fhand = fhand.explode('Description')
</code></pre>
<p><strong>EDIT</strong>: You might want to add an <code>lstrip</code> afterwards:</p>
<pre><code>fhand['Description'] = fhand['Description'].str.lstrip()
</code></pre>
<p>Result:</p>
<pre><code> Description
ID
1 code: xyz
1 code:axy
2 code: 238a
2 code: 34u
2 code: 482m
3 code: 24sd
4 code: 3edn
4 code: 3n23
</code></pre>
<p>(In your example <code>code:axy</code> has no blank behind the ":": Is that accidental?)</p>
|
python|pandas|database|csv|python-re
| 1
|
1,463
| 64,634,902
|
Best Loss Function for Multi-Class Multi-Target Classification Problem
|
<p>I have a classification problem and I don't know how to categorize this classification problem. As per my understanding,</p>
<blockquote>
<p>A Multiclass classification problem is where you have multiple mutually exclusive classes and each data point in the dataset can only be labelled by one class. For example, in an Image Classification task for fruits, a fruit data point labelled as an apple cannot be an orange and an orange cannot be a banana and so on. Each data point, in this case can only be any one of the fruits of the fruits class and so is labelled accordingly.</p>
</blockquote>
<p>Where as ...</p>
<blockquote>
<p>A Multilabel classification is a problem where you have multiple sets of mutually exclusive classes of which the data point can be labelled simultaneously. For example, in an Image Classification task for Cars, a car data point labelled as a sedan cannot be a hatchback and a hatchback cannot be a SUV and so on for the type of car. At the same time, the same car data point can be labelled one from VW, Ford, Mercedes, etc. as the car manufacturer. So in this case, the car data point is labeled from two different sets of mutually exclusive classes.</p>
</blockquote>
<p>Please correct my understanding if I am thinking in a wrong way here.</p>
<p>Now to my problem, my classification problem with multiple classes, lets say A, B, C, D and E. Here each data point can have one or more than one classes from the set as shown below on the left:</p>
<pre><code>|-------------|----------| |-------------|-----------------|
| X | y | | X | One-Hot-Y |
|-------------|----------| |-------------|-----------------|
| DP1 | A, B | | DP1 | [1, 1, 0, 0, 0] |
|-------------|----------| |-------------|-----------------|
| DP2 | C | | DP2 | [0, 0, 1, 0, 0] |
|-------------|----------| |-------------|-----------------|
| DP3 | B, E | | DP3 | [0, 1, 0, 0, 1] |
|-------------|----------| |-------------|-----------------|
| DP4 | A, C | | DP4 | [1, 0, 1, 0, 0] |
|-------------|----------| |-------------|-----------------|
| DP5 | D | | DP5 | [0, 0, 0, 1, 0] |
|-------------|----------| |-------------|-----------------|
</code></pre>
<p>I One-Hot Encoded the labels for training as shown above on the right. My question is:</p>
<ol>
<li>What Loss function (preferably in PyTorch) can I use for training the model to optimize for the One-Hot encoded output</li>
<li>What do we call such a classification problem? Multi-label or Multi-class?</li>
</ol>
<p>Thank you for your answers!</p>
|
<blockquote>
<p>What Loss function (preferably in PyTorch) can I use for training the
model to optimize for the One-Hot encoded output</p>
</blockquote>
<p>You can use <a href="https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html#torch.nn.BCEWithLogitsLoss" rel="nofollow noreferrer">torch.nn.BCEWithLogitsLoss</a> (or <a href="https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelSoftMarginLoss.html#torch.nn.MultiLabelSoftMarginLoss" rel="nofollow noreferrer">MultiLabelSoftMarginLoss</a> as they are equivalent) and see how this one works out. This is standard approach, other possibility could be <a href="https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelMarginLoss.html#torch.nn.MultiLabelMarginLoss" rel="nofollow noreferrer">MultilabelMarginLoss</a>.</p>
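<p>A minimal sketch of how <code>BCEWithLogitsLoss</code> is used with multi-hot targets (the linear layer below is just a stand-in for your real model):</p>
<pre><code>import torch
import torch.nn as nn

num_classes = 5
model = nn.Linear(16, num_classes)          # placeholder network
criterion = nn.BCEWithLogitsLoss()

x = torch.randn(4, 16)                      # a batch of 4 data points
targets = torch.tensor([[1, 1, 0, 0, 0],
                        [0, 0, 1, 0, 0],
                        [0, 1, 0, 0, 1],
                        [1, 0, 1, 0, 0]], dtype=torch.float32)

logits = model(x)                           # raw scores, no sigmoid needed
loss = criterion(logits, targets)
loss.backward()
</code></pre>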
<blockquote>
<p>What do we call such a classification problem? Multi-label or Multi-class?</p>
</blockquote>
<p>It is multilabel (as multiple labels can be present simultaneously). In one-hot encoding:</p>
<pre><code>[1, 1, 0, 0, 0], [0, 1, 0, 0, 1] - multilabel
[0, 0, 1, 0, 0] - multiclass
[1], [0] - binary (special case of multiclass)
</code></pre>
<p>multiclass cannot have more than one <code>1</code> as all other labels are mutually exclusive.</p>
|
pytorch|classification|multilabel-classification|multiclass-classification
| 4
|
1,464
| 64,991,547
|
Compute and broadcast a count in pandas (with groupby transform)
|
<p>How can I compute and broadcast a count in pandas?</p>
<p>To compute a count:</p>
<pre class="lang-py prettyprint-override"><code>df.groupby('field').size()
</code></pre>
<p>To broadcast an aggregation to the original dataframe:</p>
<pre class="lang-py prettyprint-override"><code>df.groupby('field')['field_to_aggregate'].transform(aggregation)
</code></pre>
<p>The latter works if I specify the field to aggregate onto and aggregations like <code>sum</code>, <code>mean</code>, etc. But I am not finding a way to make it work when I want a simple count of the grouped-by field.</p>
<p>(Note: I could just use the first and re-join on the original table against the grouped-by table, but I want to avoid joins and I'm looking for an efficient solution that uses pandas' <code>transform</code>)</p>
|
<p>You could try:</p>
<pre><code>result = df.groupby('field')['field_to_aggregate'].transform('size')
</code></pre>
<p>Note that <code>'field_to_aggregate'</code> can be the same as <code>'field'</code>.</p>
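<p>A quick demonstration with toy data (the column names are made up for illustration):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'field': ['a', 'a', 'b'], 'field_to_aggregate': [10, 20, 30]})
df['count'] = df.groupby('field')['field_to_aggregate'].transform('size')
print(df)
#   field  field_to_aggregate  count
# 0     a                  10      2
# 1     a                  20      2
# 2     b                  30      1
</code></pre>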
|
python|pandas|aggregation|split-apply-combine
| 1
|
1,465
| 64,752,451
|
How to count number of characters in string for the column values and group rows by count of those as a result using pandas?
|
<p>I have .csv file with column name:</p>
<pre><code>id name
1 sample1
2 sample3
3 sample four
4 sample.five
5 sample.six.com
</code></pre>
<p>I need to print result as below (ordered by number of rows descending):</p>
<pre><code>chars(str_len_count) rows(id_count)
7 2
11 2
14 1
</code></pre>
<p>I've tried the below, but this is not really what I'm looking for:</p>
<pre><code>In [106]:
df['NAME_Count'] = df['name'].str.len()
df
Out[106]:
name NAME_Count
0 sample1 7
</code></pre>
|
<p>First new column is not necessary, you can pass <code>str.len</code> to <code>groupby</code> and use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.size.html" rel="nofollow noreferrer"><code>GroupBy.size</code></a> for count:</p>
<pre><code>df1 = df.groupby(df['name'].str.len().rename('chars')).size().reset_index(name='id_count')
print (df1)
chars id_count
0 7 2
1 11 2
2 14 1
</code></pre>
<p>If want first create new column solution is a bit changed:</p>
<pre><code>df['NAME_Count'] = df['name'].str.len()
df1 = df.groupby('NAME_Count').size().reset_index(name='count')
</code></pre>
|
python|pandas
| 1
|
1,466
| 64,648,964
|
Get number of elements of array satisfying a list of conditionsin Python/Numpy
|
<p>I have two arrays of the same size N, <code>array1</code> and <code>array2</code>. You can essentially think of it as a single array with shape (N,2). The entries are all numbers. I have a list of conditions and I want to see how many entries satisfy all these conditions, ideally using vectorization. For example, the conditions might be something like:</p>
<ol>
<li>element in array1 > 2</li>
<li>element in array1 < 5</li>
<li>element in array2 > 4</li>
<li>element in array2 < 7</li>
<li>element in array2 divisible by 2</li>
</ol>
<p>I want to count the number of indices "i" such that array1[i] and array2[i] satisfy all the above conditions. For eg, if</p>
<pre><code>array1 = np.array([1,2,3,4,5])
array2 = np.array([3,4,5,6,7])
</code></pre>
<p>Then the only indice satisfying the above condition would be 3,and thus the number of indices satisfying the constraints would be just 1. I was considering using several <code>numpy.logical_and</code>'s, but this looks rather ugly. I didn't know if there was a cleaner way to string together several <code>and</code> statements.</p>
|
<p>You could do the following:</p>
<pre><code>x = np.array(array1 > 2)
y = np.array(array1 < 5)
z = np.logical_and(x, y)   # element-wise AND of the two conditions
z.sum()
</code></pre>
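<p>To keep several conditions readable, the boolean arrays can be chained with <code>&</code> and counted with <code>.sum()</code>; a sketch covering all five example conditions:</p>
<pre><code>import numpy as np

array1 = np.array([1, 2, 3, 4, 5])
array2 = np.array([3, 4, 5, 6, 7])

mask = (array1 > 2) & (array1 < 5) & (array2 > 4) & (array2 < 7) & (array2 % 2 == 0)
print(mask.sum())  # 1 (only index 3 satisfies every condition)
</code></pre>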
|
python|arrays|numpy|vectorization
| 0
|
1,467
| 40,275,756
|
How can I add an extra row of each column's mean?
|
<p>When I was using <code>df['mean'] = df.mean(axis = 1)</code>, it always added an extra COLUMN with the average of each row. But now I need an extra ROW with each COLUMN's mean. So I switched to <code>df['mean'] = df.mean(axis = 0)</code>, but it still adds an extra column, only filled with NaN. How can I get a row of each column's average?</p>
<p>Here's the example ↓</p>
<p><img src="https://i.stack.imgur.com/7FTIB.png" alt="example"></p>
|
<p>IIUC</p>
<pre><code>df = pd.DataFrame(np.arange(16).reshape(-1, 4), list('abcd'), list('ABCD'))
df.loc['mean', :] = df.mean(0)
df.loc[:, 'Mean'] = df.mean(1)
df.loc['mean', 'Mean'] = np.nan
df
</code></pre>
<p><a href="https://i.stack.imgur.com/SRRwZ.png" rel="nofollow"><img src="https://i.stack.imgur.com/SRRwZ.png" alt="enter image description here"></a></p>
|
python|python-2.7|python-3.x|pandas|dataframe
| 1
|
1,468
| 44,355,229
|
Can I use `tf.nn.dropout` to implement DropConnect?
|
<p>I (think) that I grasp the basics of DropOut and the <a href="https://stackoverflow.com/a/34597667/656912">use of the TensorFlow API in implementing it</a>. But the normalization that's linked to the dropout probability in <code>tf.nn.dropout</code> seems not to be a part of <a href="http://cs.nyu.edu/%7Ewanli/dropc/" rel="noreferrer">DropConnect</a>. Is that correct? If so, does normalizing do any "harm" or can I simply apply <code>tf.nn.dropout</code> to my weights to implement DropConnect?</p>
|
<h1>Answer</h1>
<p>Yes, you can use <strong>tf.nn.dropout</strong> to do <strong>DropConnect</strong>, just use <strong>tf.nn.dropout</strong> to wrap your weight matrix instead of your post matrix multiplication. You can then <em>undo</em> the weight change by multiplying by the dropout like so</p>
<pre><code>dropConnect = tf.nn.dropout( m1, keep_prob ) * keep_prob
</code></pre>
<h1>Code Example</h1>
<p>Here is a code example that calculates the <strong>XOR</strong> function using drop connect. I've also commented out the code that does dropout that you can sub in and compare the output.</p>
<pre><code>### imports
import tensorflow as tf
### constant data
x = [[0.,0.],[1.,1.],[1.,0.],[0.,1.]]
y_ = [[1.,0.],[1.,0.],[0.,1.],[0.,1.]]
### induction
# Layer 0 = the x2 inputs
x0 = tf.constant( x , dtype=tf.float32 )
y0 = tf.constant( y_ , dtype=tf.float32 )
keep_prob = tf.placeholder( dtype=tf.float32 )
# Layer 1 = the 2x12 hidden sigmoid
m1 = tf.Variable( tf.random_uniform( [2,12] , minval=0.1 , maxval=0.9 , dtype=tf.float32 ))
b1 = tf.Variable( tf.random_uniform( [12] , minval=0.1 , maxval=0.9 , dtype=tf.float32 ))
########## DROP CONNECT
# - use this to preform "DropConnect" flavor of dropout
dropConnect = tf.nn.dropout( m1, keep_prob ) * keep_prob
h1 = tf.sigmoid( tf.matmul( x0, dropConnect ) + b1 )
########## DROP OUT
# - uncomment this to use "regular" dropout
#h1 = tf.nn.dropout( tf.sigmoid( tf.matmul( x0,m1 ) + b1 ) , keep_prob )
# Layer 2 = the 12x2 softmax output
m2 = tf.Variable( tf.random_uniform( [12,2] , minval=0.1 , maxval=0.9 , dtype=tf.float32 ))
b2 = tf.Variable( tf.random_uniform( [2] , minval=0.1 , maxval=0.9 , dtype=tf.float32 ))
y_out = tf.nn.softmax( tf.matmul( h1,m2 ) + b2 )
# loss : sum of the squares of y0 - y_out
loss = tf.reduce_sum( tf.square( y0 - y_out ) )
# training step : discovered learning rate of 1e-2 through experimentation
train = tf.train.AdamOptimizer(1e-2).minimize(loss)
### training
# run 5000 times using all the X and Y
# print out the loss and any other interesting info
with tf.Session() as sess:
sess.run( tf.initialize_all_variables() )
print "\nloss"
for step in range(5000) :
sess.run(train,feed_dict={keep_prob:0.5})
if (step + 1) % 100 == 0 :
print sess.run(loss,feed_dict={keep_prob:1.})
results = sess.run([m1,b1,m2,b2,y_out,loss],feed_dict={keep_prob:1.})
labels = "m1,b1,m2,b2,y_out,loss".split(",")
for label,result in zip(*(labels,results)) :
print ""
print label
print result
print ""
</code></pre>
<h1>Output</h1>
<p>Both flavors are able to correctly separate the input into the correct output</p>
<pre><code>y_out
[[ 7.05891490e-01 2.94108540e-01]
[ 9.99605477e-01 3.94574134e-04]
[ 4.99370173e-02 9.50062990e-01]
[ 4.39682379e-02 9.56031740e-01]]
</code></pre>
<p>Here you can see the output from dropConnect was able to correctly classify <strong>Y</strong> as true,true,false,false.</p>
|
machine-learning|tensorflow|neural-network
| 10
|
1,469
| 44,213,626
|
How to get substring in panda data frame when certain characters exist in the row?
|
<p>I have a data frame where certain rows contain a special character '#'. </p>
<p>Here's my data and I can find the index positions of '#' : </p>
<pre><code>import pandas as pd
df = pd.DataFrame(data=['fig#abc', 'strawberry', 'applepie#efg'], columns=['fruitname'])
ind= df.fruitname.str.find("#")
print df
print ind
fruitname
0 fig#abc
1 strawberry
2 applepie#efg
0 3
1 -1
2 8
</code></pre>
<p>I want a new column whose value is the characters before '#' when the index of '#' is greater than 4, and the original value otherwise:</p>
<pre><code> fruitname_new
0 fig#abc
1 strawberry
2 applepie
</code></pre>
<p>What is the best way to get this result? </p>
|
<pre><code>#use apply to split fruitname and then check the length before setting the new fruitname column.
df['fruitname_new'] = df.apply(lambda x: x.fruitname if len(x.fruitname.split('#')[0])<=4 else x.fruitname.split('#')[0], axis=1)
df
Out[484]:
fruitname fruitname_new
0 fig#abc fig#abc
1 strawberry strawberry
2 applepie#efg applepie
</code></pre>
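<p>A vectorized alternative that avoids the row-wise <code>apply</code>, keyed on the position of '#' (this follows the "index greater than 4" wording from the question):</p>
<pre><code>import numpy as np

mask = df['fruitname'].str.find('#') > 4
df['fruitname_new'] = np.where(mask,
                               df['fruitname'].str.split('#').str[0],
                               df['fruitname'])
</code></pre>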
|
python-2.7|pandas|dataframe|substring
| 0
|
1,470
| 69,631,477
|
How do you groupby with pandas that sums values in the same column but is offset by a set number of rows
|
<p>I have a table that looks like the table below. I want to group by id, start_time, and approach so I can add the right, thru, left, and u-turn for each similar timestamp.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">intersection_id</th>
<th style="text-align: center;">start_time</th>
<th style="text-align: right;">approach</th>
<th style="text-align: right;">movement</th>
<th style="text-align: right;">volume</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:00:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">Right</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:15:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">Right</td>
<td style="text-align: right;">4</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:30:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">Right</td>
<td style="text-align: right;">6</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:00:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">Thru</td>
<td style="text-align: right;">4</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:15:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">Thru</td>
<td style="text-align: right;">6</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:30:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">Thru</td>
<td style="text-align: right;">8</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:00:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">Left</td>
<td style="text-align: right;">6</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:15:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">Left</td>
<td style="text-align: right;">8</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:30:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">Left</td>
<td style="text-align: right;">10</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:00:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">U-turn</td>
<td style="text-align: right;">10</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:15:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">U-turn</td>
<td style="text-align: right;">12</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:30:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">U-turn</td>
<td style="text-align: right;">14</td>
</tr>
</tbody>
</table>
</div>
<p>Example results:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">intersection_id</th>
<th style="text-align: center;">start_time</th>
<th style="text-align: right;">approach</th>
<th style="text-align: right;">movement</th>
<th style="text-align: right;">volume</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:00:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">Right</td>
<td style="text-align: right;">24</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:15:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">Right</td>
<td style="text-align: right;">30</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:30:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">Right</td>
<td style="text-align: right;">38</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:00:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">Thru</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:15:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">Thru</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:30:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">Thru</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:00:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">Left</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:15:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">Left</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:30:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">Left</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:00:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">U-turn</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:15:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">U-turn</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: left;">799028</td>
<td style="text-align: center;">12:30:00 AM</td>
<td style="text-align: right;">Southbound</td>
<td style="text-align: right;">U-turn</td>
<td style="text-align: right;">nan</td>
</tr>
</tbody>
</table>
</div>
<p>This will repeat itself until it goes through all the IDs and approaches.</p>
<p>I have tried a few different ways:</p>
<pre><code>df['app_sum'] = df.groupby(['intersection_id','start_time','approach'], as_index=False)['volume'].transform('sum')
</code></pre>
<p>However, this code will not group correctly and will not provide nan values; it will repeat the values once it gets through the initial set of timestamps.</p>
<p>The second code I tried was</p>
<pre><code>indexer = pd.api.indexers.FixedForwardWindowIndexer(window_size=4, offset=-3)
df['app_sum'] = (
df.groupby(['intersection_id','start_time','approach'])['volume'].rolling(window=indexer).sum().droplevel(0))
</code></pre>
<p>I know this is wrong because there is no window.</p>
<p>Any suggestions?</p>
|
<p>You could use:</p>
<pre><code>df['volume'] = (df.groupby(['intersection_id', 'start_time', 'approach'])
['volume'].transform('sum')
.where(df['movement'].eq('Right'))
)
</code></pre>
<p>output:</p>
<pre><code> intersection_id start_time approach movement volume
0 799028 12:00:00 AM Southbound Right 22.0
1 799028 12:15:00 AM Southbound Right 30.0
2 799028 12:30:00 AM Southbound Right 38.0
3 799028 12:00:00 AM Southbound Thru NaN
4 799028 12:15:00 AM Southbound Thru NaN
5 799028 12:30:00 AM Southbound Thru NaN
6 799028 12:00:00 AM Southbound Left NaN
7 799028 12:15:00 AM Southbound Left NaN
8 799028 12:30:00 AM Southbound Left NaN
9 799028 12:00:00 AM Southbound U-turn NaN
10 799028 12:15:00 AM Southbound U-turn NaN
11 799028 12:30:00 AM Southbound U-turn NaN
</code></pre>
|
python|pandas|group-by
| 3
|
1,471
| 69,506,993
|
Json to excel using python
|
<p>JSON data: below is the JSON format that I am pulling from the site using an API.</p>
<pre class="lang-py prettyprint-override"><code>response={
"result": [
{
"id": "1000",
"title": "Fishing Team View",
"sharedWithOrganization": True,
"ownerId": "324425",
"sharedWithUsers": ["1223","w2qee3"],
"filters": [
{
"field": "tag5",
"comparator": "==",
"value": "fishing"
}
]
},
{
"id": "2000",
"title": "Farming Team View",
"sharedWithOrganization": False,
"ownerId": "00000",
"sharedWithUsers": [
"00000",
"11111"
],
"filters": [
{
"field": "tag5",
"comparator": "!@",
"value": "farming"
}
]
}
]
}
</code></pre>
<p>Python Code: I am using below code to parse the json data but I am unable to filters into different column specially filters part into separate column like Inside filter i a want to make field , comparator separate column</p>
<pre class="lang-py prettyprint-override"><code> records=[]
for data in response['result']:
id = data['id']
title = data['title']
sharedWithOrganization = data['sharedWithOrganization']
ownerId = data['ownerId']
sharedWithUsers = '|'.join(data['sharedWithUsers'])
filters = data['filters']
print(filters)
records.append([id,title,sharedWithOrganization,ownerId,sharedWithUsers])
#print(records)
ExcelApp = win32.Dispatch('Excel.Application')
ExcelApp.Visible= True
#creating excel and renaming sheet
wb = ExcelApp.Workbooks.Add()
ws= wb.Worksheets(1)
ws.Name="Get_Views"
#assigning header value
header_labels=('Id','Title','SharedWithOrganization','OwnerId','sharedWithUsers')
for index,val in enumerate(header_labels):
ws.Cells(1, index+1).Value=val
row_tracker = 2
column_size = len(header_labels)
for row in records:
ws.Range(ws.cells(row_tracker,1),ws.cells(row_tracker,column_size)).value = row
row_tracker +=1
</code></pre>
<p><a href="https://i.stack.imgur.com/nykar.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nykar.png" alt="enter image description here" /></a></p>
<p>I am pulling this format from the API and passing the .json data to Python to write it to Excel, but I am unable to split the filters list into separate columns. Can you please help me with this?</p>
|
<p>Try:</p>
<pre><code>df = pd.Series(response).explode().apply(pd.Series).reset_index(drop=True)
df = df.join(df['filters'].explode().apply(pd.Series)).drop(columns=['filters'])
df['sharedWithUsers'] = df['sharedWithUsers'].str.join('|')
</code></pre>
<p>Output:</p>
<pre><code> id title sharedWithOrganization ownerId sharedWithUsers field comparator value
0 1000 Fishing Team View True 324425 1223|w2qee3 tag5 == fishing
1 2000 Farming Team View False 00000 00000|11111 tag5 !@ farming
</code></pre>
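<p>If the end goal is simply an Excel file, pandas can write the flattened frame directly, which avoids the win32 COM automation (this assumes an Excel writer engine such as openpyxl is installed):</p>
<pre><code>df.to_excel('Get_Views.xlsx', sheet_name='Get_Views', index=False)
</code></pre>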
|
python|python-3.x|excel|pandas|dataframe
| 2
|
1,472
| 69,297,998
|
How can I delete rows from dictionaries using pandas
|
<p>If I have a data set like this one.</p>
<pre><code>date PCP1 PCP2 PCP3 PCP4
1/1/1985 0 -99 -99 -99
1/2/1985 0 -99 -99 -99
1/3/1985 0 0 -99 -99
1/4/1985 0 0 -99 -99
1/5/1985 1 -99 1 1
1/6/1985 0 -99 -99 -99
1/7/1985 0 1 -99 0
1/8/1985 0 2 -99 3
1/9/1985 0 -99 -99 -99
</code></pre>
<p>And I want to create new data frames by only having the date column and one PCP column like this.. for df1..</p>
<pre><code>df1 =
date PCP1
1/1/1985 0
1/2/1985 0
1/3/1985 0
1/4/1985 0
1/5/1985 1
1/6/1985 0
1/7/1985 0
1/8/1985 0
1/9/1985 0
</code></pre>
<p>and df2...</p>
<pre><code>df2 =
date PCP2
1/1/1985 -99
1/2/1985 -99
1/3/1985 0
1/4/1985 0
1/5/1985 -99
1/6/1985 -99
1/7/1985 1
1/8/1985 2
1/9/1985 -99
</code></pre>
<p>and so on for df3.. and df4...</p>
<p>and I want to delete rows with -99 for each data frame that will result to...</p>
<pre><code>df1 =
date PCP1
1/1/1985 0
1/2/1985 0
1/3/1985 0
1/4/1985 0
1/5/1985 1
1/6/1985 0
1/7/1985 0
1/8/1985 0
1/9/1985 0
</code></pre>
<p>and df2...</p>
<pre><code>df2 =
date PCP2
1/3/1985 0
1/4/1985 0
1/7/1985 1
1/8/1985 2
</code></pre>
<p>I'm not sure if I made it right, but I have written the following code, but I'm not sure how to remove the rows with -99 while doing the for loop..</p>
<pre><code># first I created a list of pcp list
n_cols = 4
pcp_list = []
df_names = []
for i in range(1,n_cols):
item = "PCP" + str(i)
pcp_list.append(item)
item_df = "df" + str(i)
df_names.append(item_df)
# and then I have created a new df for each name on the list by creating a dict
dfs ={}
for dfn, name in zip(df_names, pcp_list):
dfs[dfn] = pd.DataFrame(df, columns=['date', name])
# and then I was hoping I could remove the rows with -99
for df, name in zip(dfs, pcp_list):
df[name] = dfs[df[name] = -99]
</code></pre>
<p>Any help will be appreciated!</p>
<p>Thank you!</p>
|
<p>You can create the DataFrames in a dictionary comprehension:</p>
<pre><code>d = {k: v[v != -99].reset_index() for k,v in df.set_index('date').to_dict('series').items()}
</code></pre>
<p>Create variables by name is not <a href="https://stackoverflow.com/a/30638956">recommended</a>, but possible:</p>
<pre><code>for i, (k, v) in enumerate(df.set_index('date').to_dict('series').items()):
globals()[f'df{i}'] = v[v != -99].reset_index()
</code></pre>
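<p>A quick usage sketch with the sample data above: each entry of the dictionary is already the trimmed per-column frame (expected output shown as comments):</p>
<pre><code>print(d['PCP2'])
#        date  PCP2
# 0  1/3/1985     0
# 1  1/4/1985     0
# 2  1/7/1985     1
# 3  1/8/1985     2
</code></pre>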
|
python|pandas|dataframe|for-loop
| 1
|
1,473
| 41,066,853
|
Tensorflow - How to manipulate Saver
|
<p>I am working with the Boston housing data tutorial for tensorflow, but am inserting my own data set:</p>
<pre><code>from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import pandas as pd
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.INFO)
COLUMNS = ["crim", "zn", "indus", "nox", "rm", "age",
"dis", "tax", "ptratio", "medv"]
FEATURES = ["crim", "zn", "indus", "nox", "rm",
"age", "dis", "tax", "ptratio"]
LABEL = "medv"
def input_fn(data_set):
feature_cols = {k: tf.constant(data_set[k].values) for k in FEATURES}
labels = tf.constant(data_set[LABEL].values)
return feature_cols, labels
def main(unused_argv):
# Load datasets
training_set = pd.read_csv("boston_train.csv", skipinitialspace=True,
skiprows=1, names=COLUMNS)
test_set = pd.read_csv("boston_test.csv", skipinitialspace=True,
skiprows=1, names=COLUMNS)
# Set of 6 examples for which to predict median house values
prediction_set = pd.read_csv("boston_predict.csv", skipinitialspace=True,
skiprows=1, names=COLUMNS)
# Feature cols
feature_cols = [tf.contrib.layers.real_valued_column(k)
for k in FEATURES]
# Build 2 layer fully connected DNN with 10, 10 units respectively.
regressor = tf.contrib.learn.DNNRegressor(
feature_columns=feature_cols, hidden_units=[10, 10])
# Fit
regressor.fit(input_fn=lambda: input_fn(training_set), steps=5000)
# Score accuracy
ev = regressor.evaluate(input_fn=lambda: input_fn(test_set), steps=1)
loss_score = ev["loss"]
print("Loss: {0:f}".format(loss_score))
# Print out predictions
y = regressor.predict(input_fn=lambda: input_fn(prediction_set))
print("Predictions: {}".format(str(y)))
if __name__ == "__main__":
tf.app.run()
</code></pre>
<p>The issue I am having is that the dataset is so big that the saving of checkpoint files via tf.train.Saver() is filling up all my disk space.</p>
<p>Is there a way to either disable the saving of checkpoint files, or reduce the amount of checkpoints saved in the script above?</p>
<p>Thanks </p>
|
<p>The <a href="https://www.tensorflow.org/versions/r0.12/api_docs/python/contrib.learn.html#DNNRegressor" rel="nofollow noreferrer"><code>tf.contrib.learn.DNNRegressor</code></a> initializer takes a <a href="https://www.tensorflow.org/versions/r0.12/api_docs/python/contrib.learn.html#RunConfig" rel="nofollow noreferrer"><code>tf.contrib.learn.RunConfig</code></a> object, which can be used to control the behavior of the internally-created saver. For example, you can do the following to keep only one checkpoint:</p>
<pre><code>config = tf.contrib.learn.RunConfig(keep_checkpoint_max=1)
regressor = tf.contrib.learn.DNNRegressor(
feature_columns=feature_cols, hidden_units=[10, 10], config=config)
</code></pre>
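<p>If disk usage is still a concern, <code>RunConfig</code> also lets you reduce how often checkpoints are written; a small sketch (the 600-second interval is just an example value):</p>
<pre><code>config = tf.contrib.learn.RunConfig(keep_checkpoint_max=1,
                                    save_checkpoints_secs=600)  # write a checkpoint at most every 10 minutes
regressor = tf.contrib.learn.DNNRegressor(
    feature_columns=feature_cols, hidden_units=[10, 10], config=config)
</code></pre>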
|
tensorflow
| 0
|
1,474
| 40,909,024
|
how to output the states of lstm gates in tensorflow?
|
<p>I want to see the activation states of lstm gates, but it seems that it is not easy to get the gates states and output them to a file. <br></p>
<p>I can use "tf.Print" function like following in BasicLSTM:<br>
<code>gate = tf.Print(gate, [sigmoid(gate)])</code>
<br>But "tf.Print" displays this gate in terminal like:<br>
<code>gate name : [0.5222222, 0.444444, 0.3333333, ...]</code><br>
I cannot get all the values of this gate, just "...". And I must use redirection to output them to files.<br></p>
<hr>
<p><b>Thanks to @ben, I can use <code>tf.Print(gate, [sigmoid(gate)], summarize=10000000)</code> to solve "...". But redirection is still needed to output them to files.</b></p>
<hr>
<p>I also try to assign a name to the gate in BasicLSTM:<br>
<code>gate = tf.identity(gate_tmp, "gate")</code><br>
then, I can get this tensor by name using <br>
<code>gate = tf.get_default_graph().get_tensor_by_name("model/RNN/while/BasicLSTMCell/gate:0")</code><br>
But when I <code>sess.run(gate)</code><br>
An error occurred: "gate is not fetchable"</p>
<p>So I changed "gate" to a variable.
<br><code>gate = tf.Variable(gate, trainable=False)</code><br>
But a new error occurred: "All inputs to node model_1/Variable_1/Assign must be from the same frame."</p>
<p>So, how should I do to get the states of LSTM gates? And output them to a file?</p>
|
<p><a href="https://www.tensorflow.org/versions/r0.11/api_docs/python/control_flow_ops.html#Print" rel="nofollow noreferrer">tf.Print</a> supports a special parameter "summarize" to control the number of printed elements: E.g. you can use</p>
<pre><code>tf.Print(gate, [sigmoid(gate)], summarize=10000000)
</code></pre>
|
tensorflow|recurrent-neural-network|lstm
| 0
|
1,475
| 54,103,661
|
Identify pandas column that contains None value
|
<p>I have a geopandas dataframe <code>gdf</code> that looks like the following:</p>
<pre><code> Id text float geometry
0 0 1.65 0.00 POINT (1173731.7407 5354616.9386)
1 0 None 2.20 POINT (1114084.319 5337803.2708)
2 0 2.25 6.55 POINT (1118876.2311 5307167.5724)
3 0 0 0.00 POINT (1179707.5312 5313710.8389)
</code></pre>
<p>How can I identify the column/s that contain a <code>None</code> value?</p>
<p>I have tried using the following list comprehension without success:</p>
<pre><code>import pandas as pd
import geopandas as gp
gdf = gp.read_file('/temp/myshapefile.shp')
s = [pd.isnull(col) for col in gdf.columns if True]
</code></pre>
<p>Which results in:</p>
<pre><code>In [1]: s
Out[1]: [False, False, False, False]
</code></pre>
<p>My desired output in this case is:</p>
<pre><code>['text']
</code></pre>
|
<pre><code>print(gdf.isna().any())
</code></pre>
<p>will give output which column contains null in terms of <code>true</code> or <code>false</code></p>
<pre><code>Id False
text True
float False
geometry False
</code></pre>
<hr>
<p>So use this</p>
<pre><code>print(gdf.columns[gdf.isna().any()].tolist())
</code></pre>
<p>output:</p>
<pre><code>['text']
</code></pre>
|
python|pandas|nonetype|geopandas
| 1
|
1,476
| 53,836,882
|
write an array to the next column in a CSV file
|
<pre><code>landData = []
landData = pd.read_csv('Agriculture land area.csv')
landData = landData.drop(landData.columns[[0]], axis=1)
</code></pre>
<p>I currently have a CSV file that only have 1 column:</p>
<p><img src="https://i.stack.imgur.com/IiElU.png" alt="only the first column is filled with years">\</p>
<p>I want to write my array landData to the second column after year but can't seem to find anything that works online.</p>
<p>Anyone have any idea on how to do this?</p>
|
<p>You might convert the landData DataFrame (read by read_csv) to a Series, then make that Series a column of the other DataFrame.</p>
<pre><code>landData = pd.read_csv('Agriculture land area.csv')
landData = landData.drop(landData.columns[[0]], axis=1)
landData_Series = landData.loc[:,landData.columns.values[0]]
OtherDataFrame['NewColumnName'] = landData_Series
</code></pre>
<p>Make sure that both of the indexes (from landData_Series and from OtherDataFrame) are the same, otherwise you would get NaN values.</p>
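<p>If the indexes do not line up, one option (a sketch, assuming both frames have the same number of rows) is to assign the raw values instead, which bypasses index alignment:</p>
<pre><code>OtherDataFrame['NewColumnName'] = landData_Series.values
</code></pre>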
<p>Let me know if that helped :)</p>
|
python|pandas|csv|data-science
| 0
|
1,477
| 66,130,516
|
Pandas: select column with the highest percentage from a frequency table
|
<p>Hi I have a dataframe that I'd like to select the column with the highest percentage from a frequency table.</p>
<pre><code>d = {'c1':['a', 'a', 'b', 'b', 'c', 'c'], 'c2':['Low', 'High', 'Low', 'High', 'High', 'High']}
dd = pd.DataFrame(data=d)
dd.groupby('c1')['c2'].value_counts(normalize=True).mul(100)
</code></pre>
<p>It will return a frequency table</p>
<pre><code>c1 c2
a High 50.0
Low 50.0
b High 50.0
Low 50.0
c High 100.0
Name: c2, dtype: float64
</code></pre>
<p>I'd like to print out <code>c</code> which has the highest percentage <code>100.0</code></p>
<p>I'm able to use <code>max()</code> to print out <code>100.0</code> but don't know how to print out <code>c</code></p>
|
<p>Let's try <code>reset_index</code> to drop level 1, and then find the index of the maximum using <code>idxmax</code>:</p>
<pre><code>dd.groupby('c1')['c2'].value_counts(normalize=True).mul(100).reset_index(level=1, drop=True).idxmax()
</code></pre>
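<p>With the sample frame above, this should return the group label directly:</p>
<pre><code>result = dd.groupby('c1')['c2'].value_counts(normalize=True).mul(100).reset_index(level=1, drop=True).idxmax()
print(result)  # expected: 'c', the group with the highest percentage
</code></pre>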
|
pandas
| 4
|
1,478
| 52,797,062
|
'Tensor' object has no attribute '_keras_history' Keras with no Tensorflow tensor
|
<p>This code:</p>
<pre><code>a = Input(ish)
for i in range(a.shape[1]):
x=Conv2D(filters=50, kernel_size=3, padding='same', activation=rl)(a[:,i])
x=MaxPooling2D(pool_size=2)(x)
x=Dropout(0.5)(x)
x=Conv2D(filters=100, kernel_size=5, padding='same', activation=rl)(x)
x=MaxPooling2D(pool_size=2)(x)
x=Dropout(0.5)(x)
x=Conv2D(filters=200, kernel_size=7, padding='same', activation=rl)(x)
x=MaxPooling2D(pool_size=2)(x)
t=Flatten()(x)
t=Dropout(0.7)(t)
b=Dense(num_classes, activation='softmax')(t)
model = Model(inputs=a, outputs=b)
</code></pre>
<p>in the last line, gives this error:</p>
<pre><code>AttributeError: 'Tensor' object has no attribute '_keras_history'
</code></pre>
<p>Any idea what cause the problem and how to solve it?</p>
|
<p>You have to do the indexing inside a Lambda layer, in order to keep the Keras metadata:</p>
<pre><code>a = Input(ish)
for i in range(a.shape[1]):
x=Lambda(lambda x: x[:, i])(a)
x=Conv2D(filters=50, kernel_size=3, padding='same', activation=rl)(x)
x=MaxPooling2D(pool_size=2)(x)
x=Dropout(0.5)(x)
x=Conv2D(filters=100, kernel_size=5, padding='same', activation=rl)(x)
x=MaxPooling2D(pool_size=2)(x)
x=Dropout(0.5)(x)
x=Conv2D(filters=200, kernel_size=7, padding='same', activation=rl)(x)
x=MaxPooling2D(pool_size=2)(x)
t=Flatten()(x)
t=Dropout(0.7)(t)
b=Dense(num_classes, activation='softmax')(t)
model = Model(inputs=a, outputs=b)
</code></pre>
|
python|tensorflow|keras|deep-learning
| 1
|
1,479
| 52,731,563
|
Vectorising a loop based on the order of values in a series
|
<p>This question is based on a <a href="https://stackoverflow.com/questions/52730234/descending-filtering-for-dataframe">previous question</a> I answered.</p>
<p>The input looks like:</p>
<pre><code>Index Results Price
0 Buy 10
1 Sell 11
2 Buy 12
3 Neutral 13
4 Buy 14
5 Sell 15
</code></pre>
<p>I need to find every Buy-Sell sequence (ignoring extra Buy / Sell values out of sequence) and calculate the difference in Price.</p>
<p>The desired output:</p>
<pre><code>Index Results Price Difference
0 Buy 10
1 Sell 11 1
2 Buy 12
3 Neutral 13
4 Buy 14
5 Sell 15 3
</code></pre>
<p>My solution is verbose but seems to work:</p>
<pre><code>from numba import njit
@njit
def get_diffs(results, prices):
res = np.full(prices.shape, np.nan)
prev_one, prev_zero = True, False
for i in range(len(results)):
if prev_one and (results[i] == 0):
price_start = prices[i]
prev_zero, prev_one = True, False
elif prev_zero and (results[i] == 1):
res[i] = prices[i] - price_start
prev_zero, prev_one = False, True
return res
results = df['Results'].map({'Buy': 0, 'Sell': 1})
df['Difference'] = get_diffs(results.values, df['Price'].values)
</code></pre>
<p>Is there a vectorised method? I'm concerned about code maintainability and performance over a large number of rows.</p>
<hr>
<p><strong>Edit:</strong> Benchmarking code:</p>
<pre><code>df = pd.DataFrame.from_dict({'Index': {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5},
'Results': {0: 'Buy', 1: 'Sell', 2: 'Buy', 3: 'Neutral', 4: 'Buy', 5: 'Sell'},
'Price': {0: 10, 1: 11, 2: 12, 3: 13, 4: 14, 5: 15}})
df = pd.concat([df]*10**4, ignore_index=True)
def jpp(df):
results = df['Results'].map({'Buy': 0, 'Sell': 1})
return get_diffs(results.values, df['Price'].values)
%timeit jpp(df) # 7.99 ms ± 142 µs per loop
</code></pre>
|
<p>By using <code>cumcount</code> to find the pair:</p>
<pre><code>s=df.groupby('Results').cumcount()
df['Diff']=df.Price.groupby(s).diff().loc[df.Results.isin(['Buy','Sell'])]
df
Out[596]:
Index Results Price Diff
0 0 Buy 10 NaN
1 1 Sell 11 1.0
2 2 Buy 12 NaN
3 3 Neutral 13 NaN
4 4 Buy 14 NaN
5 5 Sell 15 3.0
</code></pre>
|
python|pandas|performance|numpy|dataframe
| 4
|
1,480
| 52,737,258
|
how can we merge csv as column side by side using python pandas?
|
<p>If I have three CSV files:</p>
<p>file1.csv<br>
file2.csv<br>
file3.csv</p>
<p>Each CSV file has a first column (A) containing values as below: </p>
<p>file1.csv </p>
<pre><code>A
asd
zxc
qwe
</code></pre>
<p>file2.csv </p>
<pre><code>A
iop
jkl
bnm
</code></pre>
<p>file3.csv </p>
<pre><code>A
rty
fgh
vbn
</code></pre>
<p>How can we horizontally merge these files into a single file with the columns as below: </p>
<p>merge.csv </p>
<pre><code>A B C
asd iop rty
zxc jkl fgh
qwe bnm vbn
</code></pre>
|
<pre><code>import pandas as pd

# Read files
data_1 = pd.read_csv('file1.csv')
data_2 = pd.read_csv('file2.csv')
data_3 = pd.read_csv('file3.csv')

# Assuming the name A for the first column of each csv is not a typo,
# rename the columns so they do not collide after the merge
data_2 = data_2.rename(columns={'A': 'B'})
data_3 = data_3.rename(columns={'A': 'C'})

# Order columns: interleave the columns of the three frames
new_columns = []
for i in range(len(data_1.columns)):
    new_columns.extend([data_1.columns[i], data_2.columns[i], data_3.columns[i]])

# Concatenate dataframes side by side
data_out = pd.concat([data_1, data_2, data_3], axis=1)

# Reorder columns
data_out = data_out[new_columns]
</code></pre>
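<p>If you then want to save the combined frame as <code>merge.csv</code>, as in the question:</p>
<pre><code>data_out.to_csv('merge.csv', index=False)
</code></pre>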
|
python|pandas
| 1
|
1,481
| 52,584,035
|
Keyerror when adding a column to a Dataframe (Pandas)
|
<p>My Pandas DataFrame is not really accepting a second added column, and I cannot really troubleshoot the issue. I am trying to display moving averages. The code works fine just for the first one (MA_9), and gives me an error as soon as I try to add an additional MA (MA_20).</p>
<p>Is it not possible in this case to add more than one column?</p>
<p>The code:</p>
<pre><code>import numpy as np
import pandas as pd
import pandas_datareader as pdr
import matplotlib.pyplot as plt
symbol = 'GOOG.US'
start = '20140314'
end = '20180414'
google = pdr.DataReader(symbol, 'stooq', start, end)
print(google.head())
google_close = pd.DataFrame(google.Close)
print(google_close.last_valid_index)
google_close['MA_9'] = google_close.rolling(9).mean()
google_close['MA_20'] = google_close.rolling(20).mean()
# google_close['MA_60'] = google_close.rolling(60).mean()
# print(google_close)
plt.figure(figsize=(15, 10))
plt.grid(True)
# display MA's
plt.plot(google_close['Close'], label='Google_Cls')
plt.plot(google_close['MA_9'], label='MA 9 day')
plt.plot(google_close['MA_20'], label='MA 20 day')
# plt.plot(google_close['MA_60'], label='MA 60 day')
plt.legend(loc=2)
plt.show()
</code></pre>
|
<p>Please update your code as below and then it should work:</p>
<pre><code>google_close['MA_9'] = google_close.Close.rolling(9).mean()
google_close['MA_20'] = google_close.Close.rolling(20).mean()
</code></pre>
<p>Initially there was only one column of data (Close), so your old code <code>google_close['MA_9'] = google_close.rolling(9).mean()</code> worked. But after that line the frame has two columns, so pandas does not know which column's data you are trying to take the mean of. Once you specify the column you want to average, it works: <code>google_close['MA_20'] = google_close.Close.rolling(20).mean()</code></p>
|
python-3.x|pandas|dataframe|matplotlib
| 3
|
1,482
| 46,434,765
|
kernel dies when performing optimization with scikit
|
<p>I'm performing some optimization with scikit on a machine learning problem, working with a 75 MB file that has 42k rows and 784 columns containing numbers.
I'm working in a Jupyter notebook.</p>
<p>But the kernel dies when I run the code. The same code works from the terminal.</p>
<p>Is there any way to handle this problem?</p>
<pre><code> def train(self, X, Y):
self.X = X
self.Y = Y
self.J = []
params0 = self.N.getParams()
options = {'maxiter':1, 'disp': True}
_res = optimize.minimize(self.costFunctionWrapper, params0, jac=True,
method='BFGS', args = (X, Y),
options=options, callback = self.callbackF)
self.N.setParams(_res.x)
self.optimizationResults = _res
</code></pre>
|
<p>I ran into the same issue; my research tells me that it's a memory outage.</p>
<p>A lot of people on <a href="https://stackoverflow.com/questions/32573948/ipython-notebook-kernel-getting-dead-while-running-kmeans">stackoverflow and github</a> recommend using a <code>.py</code> script instead of a jupyter notebook, but sometimes that does not help at all. Try to be mindful of the memory you are using relative to your system's capabilities.</p>
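<p>If the data itself is the bottleneck, one thing worth trying before (or in addition to) switching to a script is shrinking the array in memory; for a 42k x 784 numeric matrix, downcasting from float64 to float32 roughly halves the footprint (a sketch, assuming X and Y are NumPy float64 arrays):</p>
<pre><code>import numpy as np

X = X.astype(np.float32)  # half the memory of float64
Y = Y.astype(np.float32)
</code></pre>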
|
numpy|machine-learning|jupyter-notebook
| 0
|
1,483
| 58,480,223
|
How to read a list from an excel cell
|
<p>I put a list into an Excel cell, and when I read it with pandas, it did not return a list, it returned a string. Is there any way I can get a list back instead?</p>
<p>eg.
in the cell: ['a', 'b', 'c']
output from pandas: '['a', 'b', 'c']'</p>
<p>here is my code:</p>
<pre><code>df = pd.read_excel('example.xlsx', index_col=None, header=None)
print(df.iloc[5, 3])
print(type(df.iloc[5, 3]))
## and the code would return the type of df.iloc[5, 3] is equal to a list
</code></pre>
|
<p>In Excel, lists are converted to the string repr of the list.</p>
<pre><code>df = pd.DataFrame({
'A':list('abcdef'),
'B':[4,5,4,5,5,4],
'C':[7,8,9,4,2,3],
'D':[1,3,5,7,1,['a', 'b', 'c']],
'E':[5,3,6,9,2,4],
'F':list('aaabbb')
})
print(df.iloc[5, 3])
['a', 'b', 'c']
df.to_excel('example.xlsx', header=None, index=False)
df = pd.read_excel('example.xlsx', index_col=None, header=None)
print(df.iloc[5, 3])
['a', 'b', 'c']
print(type(df.iloc[5, 3]))
<class 'str'>
</code></pre>
<p>So it is necessary to convert it back to a list with <code>ast.literal_eval</code>:</p>
<pre><code>import ast
print(ast.literal_eval(df.iloc[5, 3]))
['a', 'b', 'c']
print(type(ast.literal_eval(df.iloc[5, 3])))
<class 'list'>
</code></pre>
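<p>If every cell of a column holds a string representation of a list (unlike the mixed example above, where the column also holds plain integers), you could convert the whole column in one go; a sketch:</p>
<pre><code>df[3] = df[3].apply(ast.literal_eval)
</code></pre>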
<p>Or <code>eval</code>, but <a href="https://stackoverflow.com/questions/1832940/why-is-using-eval-a-bad-practice">it is bad practise</a>, so not recommended:</p>
<pre><code>print(eval(df.iloc[5, 3]))
['a', 'b', 'c']
print(type(eval(df.iloc[5, 3])))
<class 'list'>
</code></pre>
|
python|excel|string|pandas|list
| 3
|
1,484
| 58,473,700
|
is there a way to combine multiple columns with comma separated
|
<p>I have a dataframe with 1 million+ records and I am looking to combine multiple rows of two columns into one row each, with a separator. Can anyone help me with how to do it?</p>
<pre><code>def chunk_results(df):
n =0
for i in range(len(df)):
data_frame = df.iloc[n:n+5]
# code for combie
n=n+5
My Dataframe:
ID Value
1 a
2 b
3 c
4 d
5 e
6 f
7 g
8 h
9 i
10 j
........
........
........
999,995 xxv
999,996 xxw
999,997 xxx
999,998 xxy
999,999 xxz
</code></pre>
<p>,and I need something like this</p>
<pre><code> ID Value
1,2,3,4,5 a,b,c,d,e
6,7,8,9,10 f,g,h,i,j
........
........
........
999,995, 999,996, 999,997, 999,998, 999,999 xxv, xxw, xxx, xxy, xxz
</code></pre>
<p>I am already passing chunk-sized data into this chunk_results function.
The df values are parameters for an API request, so I want to send them as <a href="http://api.com?value=a,b,c,d,e" rel="nofollow noreferrer">http://api.com?value=a,b,c,d,e</a>, so that I can post multiple values at once. I don't want to post one request per value, since network latency makes that slow.</p>
|
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> whatever the rows you want and just aggregate them</p>
<pre><code>chunksize = 5
df.astype(str).groupby(df.index // chunksize).agg(','.join)
</code></pre>
<p><strong>Output</strong></p>
<pre><code> ID Value
0 1,2,3,4,5 a,b,c,d,e
1 6,7,8,9,10 f,g,h,i,j
</code></pre>
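<p>Once the values are grouped, you could build one request per chunk; a sketch using the endpoint format from the question:</p>
<pre><code>grouped = df.astype(str).groupby(df.index // chunksize).agg(','.join)

for _, row in grouped.iterrows():
    url = 'http://api.com?value=' + row['Value']  # e.g. http://api.com?value=a,b,c,d,e
    # send the request here, e.g. requests.get(url)
</code></pre>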
|
python-3.x|pandas
| 2
|
1,485
| 68,967,904
|
Suppression tool help. Need to know how to not delete a row if it is empty
|
<p>So in my code I am deleting duplicates. The problem is that some of my data has no entries, and because the code deletes duplicates, the rows with no entries get deleted too. I am running millions of entries, so I couldn't just go in and add a fake entry to the data. I need a line of code that will ignore the blank entries and not delete them. I am only checking for duplicates within a column, not across a row. Thanks in advance. I am also using pandas for this because the data is in CSV files.</p>
<p>Array example:</p>
<pre><code>1,1
2,2
3,3
4,""
5,5
6,""
1,1
2,2
</code></pre>
<p>What I want to happen to the array:</p>
<pre><code>1,1
2,2
3,3
4,""
5,5
6,""
</code></pre>
<p>What actually happens:</p>
<pre><code>1,1
2,2
3,3
5,5
</code></pre>
<pre><code>df = df.drop_duplicates(subset = [1])
df = df.drop_duplicates(subset = [2])
df = df.drop_duplicates(subset = [2])
</code></pre>
|
<p>You could filter out the empty rows, drop duplicates on the remaining rows, and then concat both back together.</p>
<pre><code>df = pd.DataFrame({'col1': ['1','1 2','2 3','3 4','','5','5 6','','1','1 2','2']})
dfempty = df.loc[df.col1 == ""]
df2 = df.loc[df.col1 != ""].drop_duplicates()
pd.concat([dfempty, df2]).sort_index()
col1
0 1
1 1 2
2 2 3
3 3 4
4
5 5
6 5 6
7
10     2
</code></pre>
|
python|arrays|pandas|sql-delete|suppression
| 0
|
1,486
| 68,970,873
|
How do i concat csv files to end up with the one csv file with stacked data using pandas
|
<p>I want to concat csv files on top of each other vertically/stacked and save that to a new csv. The testing.csv file should just have the 2 csv files data stacked. The images below show the original csv files for both msft and adbe, but once concat is used, it comes out with one horizontal line of data. How to do this?</p>
<pre><code>import pandas as pd
data_msft = pd.read_csv("MSFT.csv",header = 0, index_col= None)
data_adbe = pd.read_csv('ADBE.csv',header = 0, index_col= None)
data = pd.concat([data_msft,data_adbe],ignore_index=True,axis=0,sort=False)
data.to_csv("testing.csv")
</code></pre>
<p><a href="https://i.stack.imgur.com/KUcIf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KUcIf.png" alt="enter image description here" /></a></p>
<p>above is MSFT.csv</p>
<p><a href="https://i.stack.imgur.com/bw4IX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bw4IX.png" alt="enter image description here" /></a></p>
<p>above is ADBE.csv</p>
<p><a href="https://i.stack.imgur.com/5bLQX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5bLQX.png" alt="enter image description here" /></a></p>
<p>above is how the csv concats</p>
<p><a href="https://i.stack.imgur.com/47Pgp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/47Pgp.png" alt="enter image description here" /></a></p>
<p>above is what i am trying to achieve after csv files are concat.</p>
<p>this is the plain txt of data.</p>
<p>MSFT 211001P00295000,299.09,295.0,5.37,0.0,20211001,0.0,29/08/2021 16:10:00,29/08/2021 16:11:40</p>
<p>ADBE 211001P00650000,652.39,650.0,21.22,0.0,20211001,0.0,29/08/2021 16:10:00,29/08/2021 16:11:15</p>
<p>Below is how the data returns after concat of the above csv files</p>
<p>,MSFT 211001P00295000,299.09,295.0,5.37,0.0,20211001,0.0.1,29/08/2021 16:10:00,29/08/2021 16:11:40,ADBE 211001P00650000,652.39,650.0,21.22,29/08/2021 16:11:15</p>
|
<pre class="lang-py prettyprint-override"><code>import pandas as pd
csv_files = ['path/to/file1.csv', 'path/to/file2.csv']
concat_csvs = []
for filename in csv_files:
df = pd.read_csv(filename, index_col=None, header=0)
concat_csvs.append(df)
frame = pd.concat(concat_csvs, axis=0, ignore_index=True)
frame.to_csv('testing.csv', index=False)  # write the stacked result to a new csv
</code></pre>
|
python|pandas|csv|merge|concatenation
| 0
|
1,487
| 69,063,716
|
In python pandas, How to apply loop to create rows for multiple columns?
|
<pre><code>import pandas as pd
import numpy as np
column_names = [str(x) for x in range(1,4)]
df= pd.DataFrame ( columns = column_names )
new_row = []
for i in range(3):
new_row.append(i)
df = df.append(new_row , ignore_index = True)
print(df)
</code></pre>
<hr />
<p>output:</p>
<pre><code> 1 2 3 0
0 NaN NaN NaN 0.0
1 NaN NaN NaN 1.0
2 NaN NaN NaN 2.0
</code></pre>
<hr />
<p>Is there a way to apply the loop to column 1, column 2, and column 3?</p>
<p>I think it's possible with a simple code, isn't it?</p>
<p>I've been thinking a lot, but I don't know how.</p>
<p>I also tried the .loc() method, but I couldn't apply the loop to the row of columns.</p>
<hr />
<p>This is a supplementary explanation.</p>
<p>'column_names = [str(x) for x in range(1,4)]' creates columns 1 to 3.</p>
<p>A loop is applied to each column.</p>
<p>The "for" loop inserts 0 through 2 into column 1.</p>
<p>Therefore, 0, 1, 2 are input to the row of column 1.</p>
<p>The result I want is below.</p>
<p><a href="https://i.stack.imgur.com/TRscw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TRscw.jpg" alt="" /></a></p>
<hr />
|
<p>I know it's weird but you can use <code>.loc</code> to do that:</p>
<pre><code>df.loc[len(df.index)+1] = new_row
</code></pre>
<pre><code>>>> df
1 2 3
1 0 1 2
</code></pre>
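<p>And if you want to fill all three rows in a loop (a sketch that, if I read the desired output correctly, puts 0, 1, 2 down every column):</p>
<pre><code>column_names = [str(x) for x in range(1, 4)]
df = pd.DataFrame(columns=column_names)

for i in range(3):
    df.loc[i] = [i] * len(column_names)  # one full row per iteration

print(df)
#    1  2  3
# 0  0  0  0
# 1  1  1  1
# 2  2  2  2
</code></pre>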
|
python|pandas|dataframe
| 0
|
1,488
| 61,104,122
|
Converting long dataframe and extracting string
|
<p>Hi I have Dataframe like this:</p>
<pre><code> Date A_2002 B_2003 C_2004 D_2005 Type
03-2002 20 30 12 42 X
04-2002 12 321 12 23 X
03-2002 10 31 2 3 Y
</code></pre>
<p>I want to convert it to long version and extract the string type from it so the end result would be this:</p>
<pre><code> Date NewCol Extracted Type Value
03-2002 A 2002 X 20
03-2002 B 2003 X 30
03-2002 C 2004 X 12
03-2002 D 2005 X 42
04-2002 A 2002 X 12
04-2002 B 2003 X 321
04-2002 C 2004 X 12
04-2002 D 2005 X 23
03-2002 A 2002 Y 10
03-2002 B 2003 Y 31
03-2002 C 2004 Y 2
03-2002 D 2005 Y 3
</code></pre>
<p>So the end result splits each column name into two new values and melts the data as seen above. Is this possible with pandas?</p>
|
<p>you can do <code>stack</code> after <code>set_index</code> and <code>str.split</code>:</p>
<pre><code>m = df.set_index(['Date','Type'])
m.columns = m.columns.str.split('_',expand=True)
out = (m.stack([0,1]).rename('Value').reset_index()
.rename(columns={'level_2':'NewCol','level_3':'Extracted'}))
</code></pre>
<hr>
<pre><code> Date Type NewCol Extracted Value
0 03-2002 X A 2002 20.0
1 03-2002 X B 2003 30.0
2 03-2002 X C 2004 12.0
3 03-2002 X D 2005 42.0
4 04-2002 X A 2002 12.0
5 04-2002 X B 2003 321.0
6 04-2002 X C 2004 12.0
7 04-2002 X D 2005 23.0
8 03-2002 Y A 2002 10.0
9 03-2002 Y B 2003 31.0
10 03-2002 Y C 2004 2.0
11 03-2002 Y D 2005 3.0
</code></pre>
|
python|pandas|dataframe
| 5
|
1,489
| 60,891,442
|
Pandas dataframe problem. Create column where a row cell gets the value of another row cell
|
<p>I have this pandas dataframe. It is sorted by the "h" column. What I want is to add two new columns where:
the items of each zone will have a max boundary and a min boundary (the same for every item in the zone). The max boundary will be the minimum "h" value of the previous zone, and the min boundary will be the maximum "h" value of the next zone.</p>
<pre><code>name h w set row zone
ZZON5 40 36 A 0 0
DWOPN 38 44 A 1 0
5SWYZ 37 22 B 2 0
TFQEP 32 55 B 3 0
OQ33H 26 41 A 4 1
FTJVQ 24 25 B 5 1
F1RK2 20 15 B 6 1
266LT 18 19 A 7 1
HSJ3X 16 24 A 8 2
L754O 12 86 B 9 2
LWHDX 11 68 A 10 2
ZKB2F 9 47 A 11 2
5KJ5L 7 72 B 12 3
CZ7ET 6 23 B 13 3
SDZ1B 2 10 A 14 3
5KWRU 1 59 B 15 3
</code></pre>
<p>what i hope for:</p>
<pre><code>name h w set row zone maxB minB
ZZON5 40 36 A 0 0 26
DWOPN 38 44 A 1 0 26
5SWYZ 37 22 B 2 0 26
TFQEP 32 55 B 3 0 26
OQ33H 26 41 A 4 1 32 16
FTJVQ 24 25 B 5 1 32 16
F1RK2 20 15 B 6 1 32 16
266LT 18 19 A 7 1 32 16
HSJ3X 16 24 A 8 2 18 7
L754O 12 86 B 9 2 18 7
LWHDX 11 68 A 10 2 18 7
ZKB2F 9 47 A 11 2 18 7
5KJ5L 7 72 B 12 3 9
CZ7ET 6 23 B 13 3 9
SDZ1B 2 10 A 14 3 9
5KWRU 1 59 B 15 3 9
</code></pre>
<p>Any ideas? </p>
|
<p>First group-by zone and find the minimum and maximum of them</p>
<pre><code>min_max_zone = df.groupby('zone').agg(min=('h', 'min'), max=('h', 'max'))
</code></pre>
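<p>With the sample frame above, <code>min_max_zone</code> should look roughly like this (one row per zone, holding that zone's own min and max of <code>h</code>), so zone 1, for example, ends up with maxB = 32 (min of zone 0) and minB = 16 (max of zone 2), matching the desired output:</p>
<pre><code>      min  max
zone
0      32   40
1      18   26
2       9   16
3       1    7
</code></pre>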
<p>Now you can use apply:</p>
<pre><code>df['maxB'] = df['zone'].apply(lambda x: min_max_zone.loc[x-1, 'min']
if x-1 in min_max_zone.index else np.nan)
df['minB'] = df['zone'].apply(lambda x: min_max_zone.loc[x+1, 'max']
if x+1 in min_max_zone.index else np.nan)
</code></pre>
|
python|pandas|loops|dataframe|operation
| 2
|
1,490
| 61,038,783
|
pd.read_csv has issues with differing number of columns between csv files
|
<p>I have a number of csv files that have differing numbers of columns.<br/>
The majority of the csv files are 4 columns wide and get read and concatenated.<br/>
However, when the script encounters a file that exceeds 4 columns, it errors out.<br/></p>
<p>I get the following error message:<br/><code>Error tokenizing data. C error: Expected 4 fields in line 125, saw 8.</code> <br/></p>
<p>If I refactor the code (below) to include <code>error_bad_lines=False</code> for the <code>pd.read_csv</code>,<br/>the code completes and outputs a combined csv that includes only the lines that contain 4 columns. </p>
<p>How can I solve this error, and concatenate everything?<br/>There're no indexes, so i'd just have to stack the csv info on top of one another. </p>
<p>Thank you so much</p>
<pre class="lang-py prettyprint-override"><code>import os
import glob
import pandas as pd
all_filenames = [
# think this is working correctly with bunch of replies.csv extensions
i for i in glob.glob('C:\\Users\\tkim1\\Python Scripts\\output\\*\\replies.csv')
]
print(all_filenames)
# combine all files in the list
combined_csv = pd.concat([
pd.read_csv(f, error_bad_lines=False) for f in all_filenames
], sort=False)
# export to csv
combined_csv.to_csv("combined_replies.csv", index=False, encoding='utf-8-sig')
</code></pre>
|
<p>The issue here is with pandas.concat, not pandas.read_csv. The concat function does not allow you to concatenate DataFrame objects with differing number of columns.</p>
<p>The only way I can think of solving this is to find out the DataFrames that have lesser number of columns (than the DataFrame with max number of columns), set the required extra columns in each DataFrame to NaN, then apply pd.concat.</p>
<pre><code># for example, if df1 has 3 columns and df2 has 2 columns, set the third column in df2
# to NaN, then apply concat.
import pandas as pd
import numpy as np
df1 = pd.DataFrame({'0': np.arange(1, 100),
'1': np.arange(100, 1, -1)})
df2 = pd.DataFrame({'0': np.arange(100, 200),
'1': np.arange(200, 100, -1),
'2': np.arange(400, 500)})
df2['2'] = np.nan
df3 = pd.concat([df1, df2])
</code></pre>
|
python|pandas|csv|concatenation
| 0
|
1,491
| 61,041,820
|
Plotting latitude / longitude from Excel spreadsheet using Cartopy
|
<p>I'm trying to plot a survey transect on a map using Cartopy, a library seriously lacking in online information compared to others.
My lat/long data is in two separate columns of an Excel spreadsheet.</p>
<p>I have managed to build a basemap, and my script for that is as follows:</p>
<pre><code>import cartopy.feature as cf
map = plt.figure(figsize=(15,15))
ax = plt.axes(projection=ccrs.EuroPP())
ax.coastlines(resolution='10m')
ax.add_feature(cf.LAND)
ax.add_feature(cf.OCEAN)
ax.add_feature(cf.COASTLINE)
ax.add_feature(cf.BORDERS, linestyle=':')
ax.add_feature(cf.LAKES, alpha=0.5)
ax.add_feature(cf.RIVERS)
ax.stock_img()
</code></pre>
<p>Producing the following figure <a href="https://i.stack.imgur.com/IGOXT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IGOXT.png" alt="figure"></a></p>
<p>I have also read in the data from Excel columns labeled 'latitude' (column 'L') and 'longitude' (column 'M') using pandas, from the following spreadsheet:
<a href="https://liveplymouthac-my.sharepoint.com/:x:/g/personal/rhodri_irranca-davies_postgrad_plymouth_ac_uk/EYGoEUAO8tBLkdT0r4dwDdsBAijsVChsLCec-DxaYo2Tew?e=rs9cBc" rel="nofollow noreferrer">https://liveplymouthac-my.sharepoint.com/:x:/g/personal/rhodri_irranca-davies_postgrad_plymouth_ac_uk/EYGoEUAO8tBLkdT0r4dwDdsBAijsVChsLCec-DxaYo2Tew?e=rs9cBc</a></p>
<p>This was done using the following code:</p>
<pre><code>file = r'C:\Users\Laptop\Documents\Python\Thesis\t2_GeoData.xlsx'
df_lat = pd.read_excel(file, index_col=None, na_values=['NA'], usecols = 'L')
df_lon = pd.read_excel(file, index_col=None, na_values=['NA'], usecols = 'M')
</code></pre>
<p>Was pandas the right option? If so, how do I now incorporate the two dataframes created ('df_lat' and 'df_lon') and plot them as transects using cartopy? </p>
<p>Thanks in advance :)</p>
|
<p>I'm not sure what you mean exactly as plot them as transects, but you can plot lon/lat lines in CartoPy with:</p>
<pre><code>ax.plot(df_lon, df_lat, linestyle='-', color='orange', transform=ccrs.PlateCarree())
</code></pre>
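<p>If you prefer to keep everything in one frame, a minimal sketch (assuming the spreadsheet columns are literally named 'latitude' and 'longitude'):</p>
<pre><code>df = pd.read_excel(file, na_values=['NA'], usecols=['latitude', 'longitude'])

ax.plot(df['longitude'], df['latitude'],
        linestyle='-', marker='o', color='orange',
        transform=ccrs.PlateCarree())
</code></pre>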
|
python|pandas|mapping|gis|cartopy
| 1
|
1,492
| 42,367,281
|
Tensorflow Android : retrained Inception v3 take too much time
|
<p>I have retrained the Inception-V3 final layer with my own 20 categories. When I use the retrained model in the Android demo app it takes 6 to 8 seconds to predict.</p>
<p>Running on</p>
<ul>
<li>LG G4 Stylus -> 6-8 sec</li>
<li>S6, -> 3-4.5 sec</li>
</ul>
<p>I have run <code>optimize_for_inference</code> (it takes 6-9 sec) and <code>quantize_graph</code> (it takes 7-11 sec). Is there any way to improve this?</p>
<p>Output on LG G4 Stylus:</p>
<p><a href="https://i.stack.imgur.com/lKryY.png" rel="nofollow noreferrer">Output</a></p>
<p><strong>EDIT</strong></p>
<p>I have followed <a href="https://www.tensorflow.org/tutorials/image_retraining" rel="nofollow noreferrer">this</a></p>
|
<p>Comparing your debug output to the normal TF Classify app on my phone I see that you have a much larger node count which would suggest that, for some reason, your graph is a lot larger than it should be. I'm not too familiar with the quantize method but it looks like you have more conv2D layers than normal as well.</p>
<p>Without any further information it's hard to say but I think that you should rebuild the graph and check that you've added the final layer properly.</p>
|
android|tensorflow
| 0
|
1,493
| 42,495,155
|
Two pandas MultiIndex frames multiply every row with every row
|
<p>I need to multiply two MultiIndexed frames (say <code>df1, df2</code>) that share the same highest-level index, such that for each value of the highest-level index, each row of <code>df1</code> is multiplied elementwise with each row of <code>df2</code>. I have implemented the following example that does what I want, however it looks pretty ugly:</p>
<pre><code>a = ['alpha', 'beta']
b = ['A', 'B', 'C']
c = ['foo', 'bar']
df1 = pd.DataFrame(np.random.randn(6, 4),
index=pd.MultiIndex.from_product(
[a, b],
names=['greek', 'latin']),
columns=['C1', 'C2', 'C3', 'C4'])
df2 = pd.DataFrame(
np.array([[1, 0, 1, 0], [1, 1, 1, 1], [0, 0, 0, 0], [0, 2, 0, 4]]),
index=pd.MultiIndex.from_product([a, c], names=['greek', 'foobar']),
columns=['C1', 'C2', 'C3', 'C4'])
df3 = pd.DataFrame(
columns=['greek', 'latin', 'foobar', 'C1', 'C2', 'C3', 'C4'])
for i in df1.index.get_level_values('greek').unique():
for j in df1.loc[i].index.get_level_values('latin').unique():
for k in df2.loc[i].index.get_level_values('foobar').unique():
df3 = df3.append(pd.Series([i, j, k],
index=['greek', 'latin', 'foobar']
).append(
df1.loc[i, j] * df2.loc[i, k]), ignore_index=True)
df3.set_index(['greek', 'latin', 'foobar'], inplace=True)
</code></pre>
<p>As you can see the code is very manual that defines the columns etc manually multiple times, and sets the index in the end. Here is the input and the optput. They are correct and exactly what I want:</p>
<p><strong>df1:</strong></p>
<pre><code> C1 C2 C3 C4
greek latin
alpha A 0.208380 0.856373 -1.041598 1.219707
B 1.547903 -0.001023 0.918973 1.153554
C 0.195868 2.772840 0.060960 0.311247
beta A 0.690405 -1.258012 0.118000 -0.346677
B 0.488327 -1.206428 0.967658 1.198287
C 0.420098 -0.165721 0.626893 -0.377909,
</code></pre>
<p><strong>df2:</strong></p>
<pre><code> C1 C2 C3 C4
greek foobar
alpha foo 1 0 1 0
bar 1 1 1 1
beta foo 0 0 0 0
bar 0 2 0 4
</code></pre>
<p><strong>result:</strong></p>
<pre><code> C1 C2 C3 C4
greek latin foobar
alpha A foo 0.208380 0.000000 -1.041598 0.000000
bar 0.208380 0.856373 -1.041598 1.219707
B foo 1.547903 -0.000000 0.918973 0.000000
bar 1.547903 -0.001023 0.918973 1.153554
C foo 0.195868 0.000000 0.060960 0.000000
bar 0.195868 2.772840 0.060960 0.311247
beta A foo 0.000000 -0.000000 0.000000 -0.000000
bar 0.000000 -2.516025 0.000000 -1.386708
B foo 0.000000 -0.000000 0.000000 0.000000
bar 0.000000 -2.412855 0.000000 4.793149
C foo 0.000000 -0.000000 0.000000 -0.000000
bar 0.000000 -0.331443 0.000000 -1.511638
</code></pre>
<p>Thanks in advance!</p>
|
<p>I created the following solution that seems to work and provide the right outcome. While Stephen's answer remains the fastest solution, this one is close enough and provides a big advantage: it works for arbitrary MultiIndexed frames, as opposed to ones whose index is a product of lists. This was the case I needed to solve for, though the example I provided did not reflect that. Thanks to Stephen for the excellent and fast solution for that case - I certainly learned a few things from that code!</p>
<p><strong>Code:</strong></p>
<pre><code>dft = df2.swaplevel()
dft.sortlevel(level=0,inplace=True)
df5=pd.concat([df1*dft.loc[i,:] for i in dft.index.get_level_values('foobar').unique() ], keys=dft.index.get_level_values('foobar').unique().tolist(), names=['foobar'])
df5=df5.reorder_levels(['greek', 'latin', 'foobar'],axis=0)
df5.sortlevel(0,inplace=True)
</code></pre>
<p><strong>Test Data:</strong></p>
<pre><code>import pandas as pd
import numpy as np
a = ['alpha', 'beta']
b = ['A', 'B', 'C']
c = ['foo', 'bar']
data_columns = ['C1', 'C2', 'C3', 'C4']
columns = ['greek', 'latin', 'foobar'] + data_columns
df1 = pd.DataFrame(np.random.randn(len(a) * len(b), len(data_columns)),
index=pd.MultiIndex.from_product(
[a,b], names=columns[0:2]),
columns=data_columns
)
df2 = pd.DataFrame(np.array([[1, 0, 1, 0],
[1, 1, 1, 1],
[0, 0, 0, 0],
[0, 2, 0, 4],
]),
index=pd.MultiIndex.from_product(
[a, c],
names=[columns[0], columns[2]]),
columns=data_columns
)
</code></pre>
<p><strong>Timing Code:</strong></p>
<pre><code>def method1():
df3 = pd.DataFrame(columns=columns)
for i in df1.index.get_level_values('greek').unique():
for j in df1.loc[i].index.get_level_values('latin').unique():
for k in df2.loc[i].index.get_level_values('foobar').unique():
df3 = df3.append(pd.Series(
[i, j, k],
index=columns[:3]).append(
df1.loc[i, j] * df2.loc[i, k]), ignore_index=True)
df3.set_index(columns[:3], inplace=True)
return df3
def method2():
# build an index from the three index columns
idx = [df1.index.get_level_values(col).unique() for col in columns[:2]
] + [df2.index.get_level_values(columns[2]).unique()]
size = [len(x) for x in idx]
index = pd.MultiIndex.from_product(idx, names=columns[:3])
# get the indices needed for df1 and df2
idx_a = np.indices((size[0] * size[1], size[2])).reshape(2, -1)
idx_b = np.indices((size[0], size[1] * size[2])).reshape(2, -1)
idx_1 = idx_a[0]
idx_2 = idx_a[1] + idx_b[0] * size[2]
# map the two frames into a multiply-able form
y1 = df1.values[idx_1, :]
y2 = df2.values[idx_2, :]
# multiply the to frames
df4 = pd.DataFrame(y1 * y2, index=index, columns=columns[3:])
return df4
def method3():
dft = df2.swaplevel()
dft.sortlevel(level=0,inplace=True)
df5=pd.concat([df1*dft.loc[i,:] for i in dft.index.get_level_values('foobar').unique() ], keys=dft.index.get_level_values('foobar').unique().tolist(), names=['foobar'])
df5=df5.reorder_levels(['greek', 'latin', 'foobar'],axis=0)
df5.sortlevel(0,inplace=True)
return df5
from timeit import timeit
print(timeit(method1, number=50))
print(timeit(method2, number=50))
print(timeit(method3, number=50))
</code></pre>
<p><strong>Results:</strong></p>
<pre><code>4.089807642158121
0.12291539693251252
0.33667341712862253
</code></pre>
|
python|python-3.x|pandas|numpy|multi-index
| 2
|
1,494
| 69,799,699
|
Substract Two Dataframes by Index and Keep String Columns
|
<p>I would like to subtract two data frames by indexes:</p>
<pre><code>
# importing pandas as pd
import pandas as pd
# Creating the second dataframe
df1 = pd.DataFrame({"Type":['T1', 'T2', 'T3', 'T4', 'T5'],
"A":[10, 11, 7, 8, 5],
"B":[21, 5, 32, 4, 6],
"C":[11, 21, 23, 7, 9],
"D":[1, 5, 3, 8, 6]},
index =["2001", "2002", "2003", "2004", "2005"])
df1
# Creating the first dataframe
df2 = pd.DataFrame({"A":[1, 2, 2, 2],
"B":[3, 2, 4, 3],
"C":[2, 2, 7, 3],
"D":[1, 3, 2, 1]},
index =["2000", "2002", "2003", "2004"])
df2
# Desired
df = pd.DataFrame({"Type":['T1', 'T2', 'T3', 'T4', 'T5'],
"A":[10, 9, 5, 6, 5],
"B":[21, 3, 28, 1, 6],
"C":[11, 19, 16, 4, 9],
"D":[1, 2, 1, 7, 5]},
index =["2001", "2002", "2003", "2004", "2005"])
df
df1.subtract(df2)
</code></pre>
<p>However, in some cases it returns NAs; I would like to keep the values from df1 where the subtraction cannot be performed.</p>
|
<p>You could handle NaN using:</p>
<pre><code>df1.subtract(df2).combine_first(df1).dropna(how='all')
</code></pre>
<p>output:</p>
<pre><code> A B C D Type
2001 10.0 21.0 11.0 1.0 T1
2002 9.0 3.0 19.0 2.0 T2
2003 5.0 28.0 16.0 1.0 T3
2004 6.0 1.0 4.0 7.0 T4
2005 5.0 6.0 9.0 6.0 T5
</code></pre>
|
python|pandas
| 2
|
1,495
| 69,897,567
|
Numpy numbers arent integers?
|
<p>Can someone explain why some Numpy numbers aren't whole integers?
When I run this:</p>
<pre><code>print(np.sqrt(2.)**2)
</code></pre>
<p>I get:</p>
<pre><code> 2.0000000000000004
</code></pre>
<p>And why is it that I get</p>
<pre><code>[ 0. 1.11111111 2.22222222 3.33333333 4.44444444 5.55555556
6.66666667 7.77777778 8.88888889 10. ]
</code></pre>
<p>when I print</p>
<pre><code>print(np.linspace(0, 10, 10))
</code></pre>
|
<p>From numpy's documentation:</p>
<pre><code> dtype : dtype, optional
The type of the output array. If `dtype` is not given, the data type
is inferred from `start` and `stop`. The inferred dtype will never be
an integer; `float` is chosen even if the arguments would produce an
array of integers.
</code></pre>
<p>This can be overridden by explicitly specifying <code>dtype</code>:</p>
<pre><code>np.linspace(0, 10, 10, dtype=np.int64)
> array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 10])
</code></pre>
|
python|numpy
| 2
|
1,496
| 43,195,133
|
Are there packages to register/identify schemas for a Pandas data analysis workflow?
|
<p>I'm using Pandas to automate analysis of a variety of different <em>3rd party reports</em>. Most are in <code>csv</code> format. </p>
<p>Assuming only correct files are loaded into the program, I need to:</p>
<ul>
<li>identify the origin of the report (3rd party), based on
<ul>
<li>schema</li>
<li>predictable column values</li>
</ul></li>
<li>store historical reports of same origin,</li>
<li>return origin, maybe some other thing-ys</li>
</ul>
<p>I only need to manage 10 reports in the beginning. I imagine it could grow into identifying upwards of several hundred--more than a flat file and some dictionaries could handle. But why reinvent the wheel, ...</p>
<p>Are there packages to register/identify schemas for a Pandas data analysis workflow?</p>
|
<p>I've taken a first pass at a solution, which I'll offer as an answer. I've implemented a class-based solution with <code>defaultdict</code>. Here's the basic outline (a rough code sketch follows below):</p>
<ul>
<li><a href="https://martinfowler.com/eaaCatalog/registry.html" rel="nofollow noreferrer">Register</a> class oop structure to handle and access <strong>schemas</strong> in my scripts:
<ul>
<li><code>Report(object)</code></li>
<li><code>ChildReport(Report)</code></li>
</ul></li>
<li><a href="https://stackoverflow.com/questions/635483/what-is-the-best-way-to-implement-nested-dictionaries">'vividict'</a> or multi-dimensional dictionary structure to handle the collection of reports using Python's <code>defaultdict</code>:
<ul>
<li><code>client_reports['date']['type'] = ChildReport(self)</code></li>
</ul></li>
<li><code>ReportsManager(object)</code> class. Initializes the <code>vividict</code>, and collects multiple methods for accessing and managing the collections--one for each client.</li>
<li>Python's Pickle module to store the <code>ReportManager</code> object--one for each client.</li>
</ul>
<p>I have a few doubts about how I structured the <code>defaultdict</code> with the <code>ReportsManager</code> class. It's a start.</p>
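<p>A bare-bones sketch of that structure (the class names follow the outline above; the schema columns, method names and date keys are purely illustrative):</p>
<pre><code>import pickle
from collections import defaultdict


class Report(object):
    """Base class: holds the schema (column names) that identifies a report's origin."""
    schema = []

    def matches(self, df):
        # identify origin by comparing the file's columns against the registered schema
        return list(df.columns) == self.schema


class ChildReport(Report):
    # illustrative schema for one third-party report
    schema = ['date', 'client', 'amount', 'notes']


class ReportsManager(object):
    def __init__(self):
        # client_reports['2017-03-28']['vendor_x'] -> ChildReport instance
        self.client_reports = defaultdict(dict)

    def register(self, date, report_type, report):
        self.client_reports[date][report_type] = report

    def save(self, path):
        # persist the whole manager (one per client) with pickle
        with open(path, 'wb') as f:
            pickle.dump(self, f)
</code></pre>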
|
pandas|schema|workflow
| 0
|
1,497
| 43,067,338
|
Tensor multiplication in Tensorflow
|
<p>I am trying to carry out tensor multiplication in NumPy/Tensorflow.</p>
<p>I have 3 tensors- <code>A (M X h), B (h X N X s), C (s X T)</code>. </p>
<p>I believe that <code>A X B X C</code> should produce a tensor <code>D (M X N X T)</code>.</p>
<p>Here's the code (using both numpy and tensorflow).</p>
<pre><code>M = 5
N = 2
T = 3
h = 2
s = 3
A_np = np.random.randn(M, h)
C_np = np.random.randn(s, T)
B_np = np.random.randn(h, N, s)
A_tf = tf.Variable(A_np)
C_tf = tf.Variable(C_np)
B_tf = tf.Variable(B_np)
# Tensorflow
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print sess.run(A_tf)
p = tf.matmul(A_tf, B_tf)
sess.run(p)
</code></pre>
<p>This returns the following error:</p>
<pre><code>ValueError: Shape must be rank 2 but is rank 3 for 'MatMul_2' (op: 'MatMul') with input shapes: [5,2], [2,2,3].
</code></pre>
<p>If we try the multiplication only with numpy matrices, we get the following errors:</p>
<pre><code>np.multiply(A_np, B_np)
ValueError: operands could not be broadcast together with shapes (5,2) (2,2,3)
</code></pre>
<p>However, we can use <code>np.tensordot</code> as follows:</p>
<pre><code>np.tensordot(np.tensordot(A_np, B_np, axes=1), C_np, axes=1)
</code></pre>
<p>Is there an equivalent operation in TensorFlow? </p>
<h2>Answer</h2>
<p>In numpy, we would do as follows:</p>
<pre><code>ABC_np = np.tensordot(np.tensordot(A_np, B_np, axes=1), C_np, axes=1)
</code></pre>
<p>In tensorflow, we would do as follows:</p>
<pre><code>AB_tf = tf.tensordot(A_tf, B_tf,axes = [[1], [0]])
AB_tf_C_tf = tf.tensordot(AB_tf, C_tf, axes=[[2], [0]])
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
ABC_tf = sess.run(AB_tf_C_tf)
</code></pre>
<p><code>np.allclose(ABC_np, ABC_tf)</code> return <code>True</code>.</p>
|
<p>Try</p>
<pre><code>tf.tensordot(A_tf, B_tf,axes = [[1], [0]])
</code></pre>
<p>For example:</p>
<pre><code>x=tf.tensordot(A_tf, B_tf,axes = [[1], [0]])
x.get_shape()
TensorShape([Dimension(5), Dimension(2), Dimension(3)])
</code></pre>
<p>Here is <a href="https://www.tensorflow.org/api_docs/python/tf/tensordot" rel="noreferrer">tensordot documentation</a>, and here is the relevant <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/math_ops.py" rel="noreferrer">github repository</a>. </p>
|
python|numpy|matrix|tensorflow|matrix-multiplication
| 8
|
1,498
| 72,452,858
|
Displaying data from summarization dataset in TensorFlow (using TensorFlow datasets)
|
<p>I'm new to Machine Learning and a newbie when it comes to utilizing the TensorFlow Module in Python.</p>
<p>I'm currently working on summarization, and the datasets library in TensorFlow has many convenient datasets available for training summarizers. However, I wanted to take a look at their contents before choosing one in particular; does anyone know how to display a dataset as a table in the Python console?</p>
<p>So far, I have the example code (for the Opinosis dataset) from the TensorFlow website, which is the following:</p>
<pre><code># Copyright 2022 The TensorFlow Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Opinosis Opinion Dataset."""
import os
import tensorflow as tf
import tensorflow_datasets.public_api as tfds
_CITATION = """
@inproceedings{ganesan2010opinosis,
title={Opinosis: a graph-based approach to abstractive summarization of highly redundant opinions},
author={Ganesan, Kavita and Zhai, ChengXiang and Han, Jiawei},
booktitle={Proceedings of the 23rd International Conference on Computational Linguistics},
pages={340--348},
year={2010},
organization={Association for Computational Linguistics}
}
"""
_DESCRIPTION = """
The Opinosis Opinion Dataset consists of sentences extracted from reviews for 51 topics.
Topics and opinions are obtained from Tripadvisor, Edmunds.com and Amazon.com.
"""
_URL = "https://github.com/kavgan/opinosis-summarization/raw/master/OpinosisDataset1.0_0.zip"
_REVIEW_SENTS = "review_sents"
_SUMMARIES = "summaries"
class Opinosis(tfds.core.GeneratorBasedBuilder):
"""Opinosis Opinion Dataset."""
VERSION = tfds.core.Version("1.0.0")
def _info(self):
return tfds.core.DatasetInfo(
builder=self,
description=_DESCRIPTION,
features=tfds.features.FeaturesDict({
_REVIEW_SENTS: tfds.features.Text(),
_SUMMARIES: tfds.features.Sequence(tfds.features.Text())
}),
supervised_keys=(_REVIEW_SENTS, _SUMMARIES),
homepage="http://kavita-ganesan.com/opinosis/",
citation=_CITATION,
)
def _split_generators(self, dl_manager):
"""Returns SplitGenerators."""
extract_path = dl_manager.download_and_extract(_URL)
return [
tfds.core.SplitGenerator(
name=tfds.Split.TRAIN,
gen_kwargs={"path": extract_path},
),
]
def _generate_examples(self, path=None):
"""Yields examples."""
topics_path = os.path.join(path, "topics")
filenames = tf.io.gfile.listdir(topics_path)
for filename in filenames:
file_path = os.path.join(topics_path, filename)
topic_name = filename.split(".txt")[0]
with tf.io.gfile.GFile(file_path, "rb") as src_f:
input_data = src_f.read()
summaries_path = os.path.join(path, "summaries-gold", topic_name)
summary_lst = []
for summ_filename in sorted(tf.io.gfile.listdir(summaries_path)):
file_path = os.path.join(summaries_path, summ_filename)
with tf.io.gfile.GFile(file_path, "rb") as tgt_f:
data = tgt_f.read().strip()
summary_lst.append(data)
summary_data = summary_lst
      yield filename, {_REVIEW_SENTS: input_data, _SUMMARIES: summary_data}
</code></pre>
|
<p>That is the source code for the Opinosis dataset. You don't need to copy it over to your code. <a href="https://www.tensorflow.org/datasets/overview" rel="nofollow noreferrer">This</a> should give you a good idea of how to use tensorflow datasets. Opinosis doesn't make much sense displayed as a table, so to get an idea of the contents I would just print a few examples. E.g:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow_datasets as tfds
ds, info = tfds.load('opinosis', split='train', with_info=True)
ds_iter = iter(ds)
for i in range(3):
print(next(ds_iter))
</code></pre>
<p>If you really want to see a table, you can use:</p>
<pre class="lang-py prettyprint-override"><code>print(tfds.as_dataframe(ds.take(3), info))
</code></pre>
|
python|tensorflow|tensorflow-datasets|summarization
| 1
|
1,499
| 72,443,457
|
Pandas - dataframe to BigQuery
|
<p>I have a df like the attached one:</p>
<p><a href="https://i.stack.imgur.com/Kwm8X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Kwm8X.png" alt="enter image description here" /></a></p>
<p>I'd like to iterate over this df and write the data into BQ. The table's schema is something like: |coicop|unit|geo\time|period|value|
where "period" and "value", in this case, would be "2021M05" and "108.36" or "2021M06" and "107.36".
I'm struggling to rotate these fields in order to merge them into the table.</p>
|
<p>Use <code>df.melt</code>:</p>
<pre><code>In [965]: df = df.melt(id_vars=['coicop', 'unit', 'geo\time'], var_name='period')
In [966]: df
Out[966]:
coicop unit geo\time period value
0 CP03 I15 AT 2021M05 108.36
1 CP03 I15 BE 2021M05 106.66
2 CP03 I15 BG 2021M05 97.33
3 CP03 I15 CH 2021M05 112.49
4 CP03 I15 CY 2021M05 101.30
5 CP03 I15 AT 2021M06 107.36
6 CP03 I15 BE 2021M06 106.65
7 CP03 I15 BG 2021M06 97.06
8 CP03 I15 CH 2021M06 110.42
9 CP03 I15 CY 2021M06 103.56
</code></pre>
<p>Then write this <code>df</code> into BQ.</p>
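<p>For the final write, one common option is <code>DataFrame.to_gbq</code> (requires the pandas-gbq package; the dataset, table and project names below are placeholders):</p>
<pre><code>df.to_gbq('my_dataset.my_table', project_id='my-project', if_exists='append')
</code></pre>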
|
python|pandas|dataframe|google-bigquery
| 0
|