| category | title | question_link | question_body | answer_html | __index_level_0__ |
|---|---|---|---|---|---|
pandas
|
How to avoid pandas creating an index in a saved csv
|
https://stackoverflow.com/questions/20845213/how-to-avoid-pandas-creating-an-index-in-a-saved-csv
|
<p>I am trying to save a csv to a folder after making some edits to the file. </p>
<p>Every time I use <code>pd.to_csv('C:/Path of file.csv')</code> the csv file has a separate column of indexes. I want to avoid printing the index to csv.</p>
<p>I tried: </p>
<pre><code>pd.read_csv('C:/Path to file to edit.csv', index_col = False)
</code></pre>
<p>And to save the file...</p>
<pre><code>pd.to_csv('C:/Path to save edited file.csv', index_col = False)
</code></pre>
<p>However, I still got the unwanted index column. How can I avoid this when I save my files?</p>
|
<p>Use <code>index=False</code> when writing. (<code>index_col</code> is a parameter of <code>read_csv</code>; <code>to_csv</code> does not accept it.)</p>
<pre><code>df.to_csv('your.csv', index=False)
</code></pre>
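<p>A quick sanity check (toy frame, written to an in-memory buffer instead of a real path) shows the index column is gone from the output:</p>

```python
import io

import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})

buf = io.StringIO()
df.to_csv(buf, index=False)  # no leading index column in the output

print(buf.getvalue().splitlines()[0])  # header row: a,b
```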
| 634
|
pandas
|
Difference between map, applymap and apply methods in Pandas
|
https://stackoverflow.com/questions/19798153/difference-between-map-applymap-and-apply-methods-in-pandas
|
<p>Can you tell me when to use these vectorization methods with basic examples? </p>
<p>I see that <code>map</code> is a <code>Series</code> method whereas the rest are <code>DataFrame</code> methods. I got confused about <code>apply</code> and <code>applymap</code> methods though. Why do we have two methods for applying a function to a DataFrame? Again, simple examples which illustrate the usage would be great!</p>
|
<h2>Comparing <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="noreferrer"><code>map</code></a>, <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.applymap.html" rel="noreferrer"><code>applymap</code></a> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="noreferrer"><code>apply</code></a>: Context Matters</h2>
<p>The major differences are:</p>
<h3>Definition</h3>
<ul>
<li><code>map</code> is defined on Series <strong>only</strong></li>
<li><code>applymap</code> is defined on DataFrames <strong>only</strong></li>
<li><code>apply</code> is defined on <strong>both</strong></li>
</ul>
<h3>Input argument</h3>
<ul>
<li><code>map</code> accepts <code>dict</code>, <code>Series</code>, or callable</li>
<li><code>applymap</code> and <code>apply</code> accept callable only</li>
</ul>
<h3>Behavior</h3>
<ul>
<li><code>map</code> is elementwise for Series</li>
<li><code>applymap</code> is elementwise for DataFrames</li>
<li><code>apply</code> also works elementwise but is suited to more complex operations and aggregation. The behaviour and return value depends on the function.</li>
</ul>
<h3>Use case (the most important difference)</h3>
<ul>
<li><p><code>map</code> is meant for mapping values from one domain to another, so is optimised for performance, e.g.,</p>
<pre><code>df['A'].map({1:'a', 2:'b', 3:'c'})
</code></pre>
</li>
<li><p><code>applymap</code> is good for elementwise transformations across multiple rows/columns, e.g.,</p>
<pre><code>df[['A', 'B', 'C']].applymap(str.strip)
</code></pre>
</li>
<li><p><code>apply</code> is for applying any function that cannot be vectorised, e.g.,</p>
<pre><code>df['sentences'].apply(nltk.sent_tokenize)
</code></pre>
</li>
</ul>
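<p>A minimal side-by-side sketch of the three calls on toy data (note that <code>applymap</code> was renamed to <code>DataFrame.map</code> in pandas &gt;= 2.1, so the sketch falls back accordingly):</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [10, 20, 30]})

# map: Series only; accepts a dict (unmatched keys become NaN)
mapped = df['A'].map({1: 'a', 2: 'b', 3: 'c'})

# applymap: DataFrame only; a callable applied to every element
# (on pandas >= 2.1 the same elementwise method is DataFrame.map)
doubled = (df.map if hasattr(df, 'map') else df.applymap)(lambda x: x * 2)

# apply: column-wise by default, so it can aggregate
sums = df.apply(sum)

print(mapped.tolist())        # ['a', 'b', 'c']
print(doubled['B'].tolist())  # [20, 40, 60]
print(sums.tolist())          # [6, 60]
```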
<p>Also see <a href="https://stackoverflow.com/questions/54432583/when-should-i-not-want-to-use-pandas-apply-in-my-code">When should I (not) want to use pandas apply() in my code?</a> for a writeup I made a while back on the most appropriate scenarios for using <code>apply</code>. (Note that there aren't many, but there are a few; <code>apply</code> is generally <em>slow</em>.)</p>
<hr />
<h2>Summarising</h2>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th><code>map</code></th>
<th><code>applymap</code></th>
<th><code>apply</code></th>
</tr>
</thead>
<tbody>
<tr>
<td>Defined on Series?</td>
<td>Yes</td>
<td>No</td>
<td>Yes</td>
</tr>
<tr>
<td>Defined on DataFrame?</td>
<td>No</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>Argument</td>
<td><code>dict</code>, <code>Series</code>, or callable<sup>1</sup></td>
<td>callable<sup>2</sup></td>
<td>callable</td>
</tr>
<tr>
<td>Elementwise?</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>Aggregation?</td>
<td>No</td>
<td>No</td>
<td>Yes</td>
</tr>
<tr>
<td>Use Case</td>
<td>Transformation/mapping<sup>3</sup></td>
<td>Transformation</td>
<td>More complex functions</td>
</tr>
<tr>
<td>Returns</td>
<td><code>Series</code></td>
<td><code>DataFrame</code></td>
<td>scalar, <code>Series</code>, or <code>DataFrame</code><sup>4</sup></td>
</tr>
</tbody>
</table>
</div>
<p><strong>Footnotes</strong></p>
<ol>
<li><p><code>map</code> when passed a dictionary/Series will map elements based on the keys in that dictionary/Series. Missing values will be recorded as NaN in the output.</p>
</li>
<li><p><code>applymap</code> in more recent versions has been optimised for some operations. You will find <code>applymap</code> slightly faster than <code>apply</code> in some cases. My suggestion is to test them both and use whatever works better.</p>
</li>
<li><p><code>map</code> is optimised for elementwise mappings and transformation. Operations that involve dictionaries or Series will enable pandas to use faster code paths for better performance.</p>
</li>
<li><p><code>Series.apply</code> returns a scalar for aggregating operations, <code>Series</code> otherwise. Similarly for <code>DataFrame.apply</code>. Note that <code>apply</code> also has fastpaths when called with certain NumPy functions such as <code>mean</code>, <code>sum</code>, etc.</p>
</li>
</ol>
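<p>To illustrate footnote 4 with toy data, the return type of <code>apply</code> follows what the function produces:</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})

reduced = df.apply(lambda col: col.sum())   # one value per column -> Series
expanded = df.apply(lambda col: col * 10)   # same-shape result -> DataFrame
shifted = df['A'].apply(lambda x: x + 1)    # elementwise on a Series -> Series

print(reduced.tolist())         # [3, 7]
print(type(expanded).__name__)  # DataFrame
print(shifted.tolist())         # [2, 3]
```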
| 635
|
pandas
|
Get statistics for each group (such as count, mean, etc) using pandas GroupBy?
|
https://stackoverflow.com/questions/19384532/get-statistics-for-each-group-such-as-count-mean-etc-using-pandas-groupby
|
<p>I have a dataframe <code>df</code> and I use several columns from it to <code>groupby</code>:</p>
<pre class="lang-py prettyprint-override"><code>df['col1','col2','col3','col4'].groupby(['col1','col2']).mean()
</code></pre>
<p>In the above way, I almost get the table (dataframe) that I need. What is missing is an additional column that contains the number of rows in each group. In other words, I have the mean, but I would also like to know how many values were used to compute each mean. For example, the first group has 8 values, the second one 10, and so on.</p>
<p>In short: How do I get <strong>group-wise</strong> statistics for a dataframe?</p>
|
<p>On a <code>groupby</code> object, the <code>agg</code> function can take a list to <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#applying-multiple-functions-at-once" rel="noreferrer">apply several aggregation methods</a> at once. This should give you the result you need:</p>
<pre><code>df[['col1', 'col2', 'col3', 'col4']].groupby(['col1', 'col2']).agg(['mean', 'count'])
</code></pre>
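<p>With made-up data, the result carries a column MultiIndex of <code>(column, statistic)</code> pairs:</p>

```python
import pandas as pd

df = pd.DataFrame({'col1': ['a', 'a', 'b'],
                   'col2': ['x', 'x', 'y'],
                   'col3': [1.0, 3.0, 5.0],
                   'col4': [2.0, 4.0, 6.0]})

out = df[['col1', 'col2', 'col3', 'col4']].groupby(['col1', 'col2']).agg(['mean', 'count'])

# Each original column now has a 'mean' and a 'count' sub-column
print(out[('col3', 'mean')].tolist())   # [2.0, 5.0]
print(out[('col3', 'count')].tolist())  # [2, 1]
```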
| 636
|
pandas
|
How to check if any value is NaN in a Pandas DataFrame
|
https://stackoverflow.com/questions/29530232/how-to-check-if-any-value-is-nan-in-a-pandas-dataframe
|
<p>How do I check whether a pandas DataFrame has NaN values?</p>
<p>I know about <code>pd.isnull</code>, but it returns a DataFrame of booleans. I also found <a href="https://stackoverflow.com/questions/27754891/python-nan-value-in-pandas">this post</a> but it doesn't exactly answer my question either.</p>
|
<p><a href="https://stackoverflow.com/users/1567452/jwilner">jwilner</a>'s response is spot on. I was exploring to see if there's a faster option, since in my experience, summing flat arrays is (strangely) faster than counting. This code seems faster:</p>
<pre class="lang-py prettyprint-override"><code>df.isnull().values.any()
</code></pre>
<p><a href="https://i.sstatic.net/7l80g.png" rel="noreferrer"><img src="https://i.sstatic.net/7l80g.png" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import perfplot
def setup(n):
df = pd.DataFrame(np.random.randn(n))
df[df > 0.9] = np.nan
return df
def isnull_any(df):
return df.isnull().any()
def isnull_values_sum(df):
return df.isnull().values.sum() > 0
def isnull_sum(df):
return df.isnull().sum() > 0
def isnull_values_any(df):
return df.isnull().values.any()
perfplot.save(
"out.png",
setup=setup,
kernels=[isnull_any, isnull_values_sum, isnull_sum, isnull_values_any],
n_range=[2 ** k for k in range(25)],
)
</code></pre>
<p><code>df.isnull().sum().sum()</code> is a bit slower, but of course, has additional information -- the number of <code>NaNs</code>.</p>
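<p>Both variants on a toy frame, one NaN planted:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.nan], 'b': [3.0, 4.0]})

has_nan = df.isnull().values.any()  # fast yes/no answer
n_nan = df.isnull().sum().sum()     # slower, but also counts the NaNs

print(bool(has_nan), int(n_nan))  # True 1
```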
| 637
|
pandas
|
Set value for particular cell in pandas DataFrame using index
|
https://stackoverflow.com/questions/13842088/set-value-for-particular-cell-in-pandas-dataframe-using-index
|
<p>I have created a Pandas DataFrame</p>
<pre class="lang-py prettyprint-override"><code>df = DataFrame(index=['A','B','C'], columns=['x','y'])
</code></pre>
<p>Now, I would like to assign a value to particular cell, for example to row <code>C</code> and column <code>x</code>. In other words, I would like to perform the following transformation:</p>
<pre class="lang-none prettyprint-override"><code> x y x y
A NaN NaN A NaN NaN
B NaN NaN ⟶ B NaN NaN
C NaN NaN C 10 NaN
</code></pre>
<p>with this code:</p>
<pre class="lang-py prettyprint-override"><code>df.xs('C')['x'] = 10
</code></pre>
<p>However, the contents of <code>df</code> have not changed. The dataframe still contains only <code>NaN</code>s. How do I do what I want?</p>
|
<p><a href="https://stackoverflow.com/a/24517695/190597">RukTech's answer</a>, <code>df.set_value('C', 'x', 10)</code>, is far and away faster than the options I've suggested below. However, it has been <a href="https://github.com/pandas-dev/pandas/issues/15269" rel="noreferrer"><strong>slated for deprecation</strong></a>.</p>
<p>Going forward, the <a href="https://github.com/pandas-dev/pandas/issues/15269#issuecomment-276382712" rel="noreferrer">recommended method is <code>.iat/.at</code></a>.</p>
<hr>
<p><strong>Why <code>df.xs('C')['x']=10</code> does not work:</strong></p>
<p><code>df.xs('C')</code>, by default, returns a new dataframe <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.xs.html#pandas.DataFrame.xs" rel="noreferrer">with a copy</a> of the data, so </p>
<pre><code>df.xs('C')['x']=10
</code></pre>
<p>modifies this new dataframe only.</p>
<p><code>df['x']</code> returns a view of the <code>df</code> dataframe, so </p>
<pre><code>df['x']['C'] = 10
</code></pre>
<p>modifies <code>df</code> itself.</p>
<p><strong>Warning</strong>: It is sometimes difficult to predict if an operation returns a copy or a view. For this reason the <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#returning-a-view-versus-a-copy" rel="noreferrer">docs recommend avoiding assignments with "chained indexing"</a>. </p>
<hr>
<p>So the recommended alternative is</p>
<pre><code>df.at['C', 'x'] = 10
</code></pre>
<p>which <em>does</em> modify <code>df</code>.</p>
<hr>
<pre><code>In [18]: %timeit df.set_value('C', 'x', 10)
100000 loops, best of 3: 2.9 µs per loop
In [20]: %timeit df['x']['C'] = 10
100000 loops, best of 3: 6.31 µs per loop
In [81]: %timeit df.at['C', 'x'] = 10
100000 loops, best of 3: 9.2 µs per loop
</code></pre>
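<p>Putting the recommended setters together on the question's frame (<code>.iat</code> is the positional twin of <code>.at</code>):</p>

```python
import pandas as pd

df = pd.DataFrame(index=['A', 'B', 'C'], columns=['x', 'y'])

df.at['C', 'x'] = 10  # label-based scalar setter
df.iat[2, 0] = 10     # positional equivalent (row 2, column 0)

print(df.at['C', 'x'])  # 10
```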
| 638
|
pandas
|
Import multiple CSV files into pandas and concatenate into one DataFrame
|
https://stackoverflow.com/questions/20906474/import-multiple-csv-files-into-pandas-and-concatenate-into-one-dataframe
|
<p>I would like to read several CSV files from a directory into pandas and concatenate them into one big DataFrame. I have not been able to figure it out though. Here is what I have so far:</p>
<pre><code>import glob
import pandas as pd

# Get data file names
path = r'C:\DRO\DCL_rawdata_files'
filenames = glob.glob(path + "/*.csv")

dfs = []
for filename in filenames:
    dfs.append(pd.read_csv(filename))

# Concatenate all data into one DataFrame
big_frame = pd.concat(dfs, ignore_index=True)
</code></pre>
<p>I guess I need some help within the <em>for</em> loop?</p>
|
<p>See <a href="https://pandas.pydata.org/docs/user_guide/io.html" rel="noreferrer">pandas: IO tools</a> for all of the available <code>.read_</code> methods.</p>
<p>Try the following code if all of the CSV files have the same columns.</p>
<p>I have added <code>header=0</code>, so that after reading the CSV file's first row, it can be assigned as the column names.</p>
<pre><code>import glob
import os

import pandas as pd

path = r'C:\DRO\DCL_rawdata_files'  # use your path
all_files = glob.glob(os.path.join(path, "*.csv"))

li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)

frame = pd.concat(li, axis=0, ignore_index=True)
</code></pre>
<p>Or, with attribution to a comment from <a href="https://stackoverflow.com/users/3888455/sid">Sid</a>.</p>
<pre class="lang-py prettyprint-override"><code>all_files = glob.glob(os.path.join(path, "*.csv"))
df = pd.concat((pd.read_csv(f) for f in all_files), ignore_index=True)
</code></pre>
<hr />
<ul>
<li>It's often necessary to identify each sample of data, which can be accomplished by adding a new column to the dataframe.</li>
<li><a href="https://docs.python.org/3/library/pathlib.html" rel="noreferrer"><code>pathlib</code></a> from the standard library will be used for this example. It treats paths as objects with methods, instead of strings to be sliced.</li>
</ul>
<h3>Imports and Setup</h3>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
import pandas as pd
import numpy as np
path = r'C:\DRO\DCL_rawdata_files' # or unix / linux / mac path
# Get the files from the path provided in the OP
files = Path(path).glob('*.csv') # .rglob to get subdirectories
</code></pre>
<h3>Option 1:</h3>
<ul>
<li>Add a new column with the file name</li>
</ul>
<pre class="lang-py prettyprint-override"><code>dfs = list()
for f in files:
data = pd.read_csv(f)
# .stem is method for pathlib objects to get the filename w/o the extension
data['file'] = f.stem
dfs.append(data)
df = pd.concat(dfs, ignore_index=True)
</code></pre>
<h3>Option 2:</h3>
<ul>
<li>Add a new column with a generic name using <code>enumerate</code></li>
</ul>
<pre class="lang-py prettyprint-override"><code>dfs = list()
for i, f in enumerate(files):
data = pd.read_csv(f)
data['file'] = f'File {i}'
dfs.append(data)
df = pd.concat(dfs, ignore_index=True)
</code></pre>
<h3>Option 3:</h3>
<ul>
<li>Create the dataframes with a list comprehension, and then use <a href="https://numpy.org/doc/stable/reference/generated/numpy.repeat.html" rel="noreferrer"><code>np.repeat</code></a> to add a new column.
<ul>
<li><code>[f'S{i}' for i in range(len(dfs))]</code> creates a list of strings to name each dataframe.</li>
<li><code>[len(df) for df in dfs]</code> creates a list of lengths</li>
</ul>
</li>
<li>Attribution for this option goes to this plotting <a href="https://stackoverflow.com/a/65951915/7758804">answer</a>.</li>
</ul>
<pre class="lang-py prettyprint-override"><code># Read the files into dataframes
dfs = [pd.read_csv(f) for f in files]
# Combine the list of dataframes
df = pd.concat(dfs, ignore_index=True)
# Add a new column
df['Source'] = np.repeat([f'S{i}' for i in range(len(dfs))], [len(df) for df in dfs])
</code></pre>
<h3>Option 4:</h3>
<ul>
<li>One liners using <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.assign.html" rel="noreferrer"><code>.assign</code></a> to create the new column, with attribution to a comment from <a href="https://stackoverflow.com/users/2573061/c8h10n4o2">C8H10N4O2</a></li>
</ul>
<pre class="lang-py prettyprint-override"><code>df = pd.concat((pd.read_csv(f).assign(filename=f.stem) for f in files), ignore_index=True)
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>df = pd.concat((pd.read_csv(f).assign(Source=f'S{i}') for i, f in enumerate(files)), ignore_index=True)
</code></pre>
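<p>A self-contained sketch of the Option 4 pattern, writing two tiny CSVs to a scratch directory first so it can run anywhere (file names <code>s0.csv</code>/<code>s1.csv</code> are made up for the demo):</p>

```python
import tempfile
from pathlib import Path

import pandas as pd

# Write two small CSVs to a temporary directory
tmp = Path(tempfile.mkdtemp())
pd.DataFrame({'a': [1, 2]}).to_csv(tmp / 's0.csv', index=False)
pd.DataFrame({'a': [3]}).to_csv(tmp / 's1.csv', index=False)

# Combine them, tagging each row with the source file's stem
files = sorted(tmp.glob('*.csv'))
df = pd.concat((pd.read_csv(f).assign(filename=f.stem) for f in files),
               ignore_index=True)

print(df['a'].tolist())         # [1, 2, 3]
print(df['filename'].tolist())  # ['s0', 's0', 's1']
```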
| 639
|
pandas
|
How to apply a function to two columns of Pandas dataframe
|
https://stackoverflow.com/questions/13331698/how-to-apply-a-function-to-two-columns-of-pandas-dataframe
|
<p>Suppose I have a function and a dataframe defined as below:</p>
<pre class="lang-py prettyprint-override"><code>def get_sublist(sta, end):
return mylist[sta:end+1]
df = pd.DataFrame({'ID':['1','2','3'], 'col_1': [0,2,3], 'col_2':[1,4,5]})
mylist = ['a','b','c','d','e','f']
</code></pre>
<p>Now I want to apply <code>get_sublist</code> to <code>df</code>'s two columns <code>'col_1', 'col_2'</code> to element-wise calculate a new column <code>'col_3'</code> to get an output that looks like:</p>
<pre class="lang-none prettyprint-override"><code> ID col_1 col_2 col_3
0 1 0 1 ['a', 'b']
1 2 2 4 ['c', 'd', 'e']
2 3 3 5 ['d', 'e', 'f']
</code></pre>
<p>I tried:</p>
<pre class="lang-py prettyprint-override"><code>df['col_3'] = df[['col_1','col_2']].apply(get_sublist, axis=1)
</code></pre>
<p>but this results in the following error:</p>
<blockquote>
<p>TypeError: get_sublist() missing 1 required positional argument:</p>
</blockquote>
<p>How do I do this?</p>
|
<p>There is a clean, one-line way of doing this in Pandas:</p>
<pre><code>df['col_3'] = df.apply(lambda x: f(x.col_1, x.col_2), axis=1)
</code></pre>
<p>This allows <code>f</code> to be a user-defined function with multiple input values, and uses (safe) column names rather than (unsafe) numeric indices to access the columns.</p>
<p>Example with data (based on original question):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'ID': ['1', '2', '3'], 'col_1': [0, 2, 3], 'col_2': [1, 4, 5]})
mylist = ['a', 'b', 'c', 'd', 'e', 'f']

def get_sublist(sta, end):
    return mylist[sta:end+1]

df['col_3'] = df.apply(lambda x: get_sublist(x.col_1, x.col_2), axis=1)
</code></pre>
<p>Output of <code>print(df)</code>:</p>
<pre><code>  ID  col_1  col_2      col_3
0  1      0      1     [a, b]
1  2      2      4  [c, d, e]
2  3      3      5  [d, e, f]
</code></pre>
<p>If your column names contain spaces or share a name with an existing dataframe attribute, you can index with square brackets:</p>
<pre><code>df['col_3'] = df.apply(lambda x: f(x['col 1'], x['col 2']), axis=1)
</code></pre>
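<p>When <code>apply</code> is too slow, a plain list comprehension over zipped columns often does the same job faster. A sketch reusing the question's setup:</p>

```python
import pandas as pd

mylist = ['a', 'b', 'c', 'd', 'e', 'f']

def get_sublist(sta, end):
    return mylist[sta:end + 1]

df = pd.DataFrame({'ID': ['1', '2', '3'], 'col_1': [0, 2, 3], 'col_2': [1, 4, 5]})

# zip iterates the two columns in lockstep, avoiding per-row Series construction
df['col_3'] = [get_sublist(s, e) for s, e in zip(df['col_1'], df['col_2'])]

print(df['col_3'].tolist())  # [['a', 'b'], ['c', 'd', 'e'], ['d', 'e', 'f']]
```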
| 640
|
pandas
|
How can I get a value from a cell of a dataframe?
|
https://stackoverflow.com/questions/16729574/how-can-i-get-a-value-from-a-cell-of-a-dataframe
|
<p>I have constructed a condition that extracts exactly one row from my dataframe:</p>
<pre class="lang-py prettyprint-override"><code>d2 = df[(df['l_ext']==l_ext) & (df['item']==item) & (df['wn']==wn) & (df['wd']==1)]
</code></pre>
<p>Now I would like to take a value from a particular column:</p>
<pre class="lang-py prettyprint-override"><code>val = d2['col_name']
</code></pre>
<p>But as a result, I get a dataframe that contains one row and one column (i.e., one cell). It is not what I need. I need one value (one float number). How can I do it in pandas?</p>
|
<p>If you have a DataFrame with only one row, then access the first (only) row as a Series using <em><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html" rel="noreferrer">iloc</a></em>, and then the value using the column name:</p>
<pre><code>In [3]: sub_df
Out[3]:
          A         B
2 -0.133653 -0.030854

In [4]: sub_df.iloc[0]
Out[4]:
A   -0.133653
B   -0.030854
Name: 2, dtype: float64

In [5]: sub_df.iloc[0]['A']
Out[5]: -0.13365288513107493
</code></pre>
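<p>A small self-contained version (made-up column name and values); <code>Series.item()</code> is an alternative that raises unless there is exactly one value, which makes the "single cell" assumption explicit:</p>

```python
import pandas as pd

df = pd.DataFrame({'col_name': [3.14, 2.72]}, index=['a', 'b'])
d2 = df[df['col_name'] > 3]    # condition that matches exactly one row

val = d2['col_name'].iloc[0]   # plain float, not a one-cell frame
same = d2['col_name'].item()   # raises ValueError unless exactly one value

print(val, same)  # 3.14 3.14
```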
| 641
|
pandas
|
What does `ValueError: cannot reindex from a duplicate axis` mean?
|
https://stackoverflow.com/questions/27236275/what-does-valueerror-cannot-reindex-from-a-duplicate-axis-mean
|
<p>I am getting a <code>ValueError: cannot reindex from a duplicate axis</code> when I am trying to set an index to a certain value. I tried to reproduce this with a simple example, but I could not do it.</p>
<p>Here is my session inside an <code>ipdb</code> trace. I have a DataFrame with a string index, integer columns, and float values. However, when I try to create a <code>sum</code> index for the sum of all columns, I get the <code>ValueError: cannot reindex from a duplicate axis</code> error. I created a small DataFrame with the same characteristics, but was not able to reproduce the problem. What could I be missing?</p>
<p>I don't really understand what <code>ValueError: cannot reindex from a duplicate axis</code> means. Maybe this will help me diagnose the problem, and this is the most answerable part of my question.</p>
<pre><code>ipdb> type(affinity_matrix)
<class 'pandas.core.frame.DataFrame'>
ipdb> affinity_matrix.shape
(333, 10)
ipdb> affinity_matrix.columns
Int64Index([9315684, 9315597, 9316591, 9320520, 9321163, 9320615, 9321187, 9319487, 9319467, 9320484], dtype='int64')
ipdb> affinity_matrix.index
Index([u'001', u'002', u'003', u'004', u'005', u'008', u'009', u'010', u'011', u'014', u'015', u'016', u'018', u'020', u'021', u'022', u'024', u'025', u'026', u'027', u'028', u'029', u'030', u'032', u'033', u'034', u'035', u'036', u'039', u'040', u'041', u'042', u'043', u'044', u'045', u'047', u'047', u'048', u'050', u'053', u'054', u'055', u'056', u'057', u'058', u'059', u'060', u'061', u'062', u'063', u'065', u'067', u'068', u'069', u'070', u'071', u'072', u'073', u'074', u'075', u'076', u'077', u'078', u'080', u'082', u'083', u'084', u'085', u'086', u'089', u'090', u'091', u'092', u'093', u'094', u'095', u'096', u'097', u'098', u'100', u'101', u'103', u'104', u'105', u'106', u'107', u'108', u'109', u'110', u'111', u'112', u'113', u'114', u'115', u'116', u'117', u'118', u'119', u'121', u'122', ...], dtype='object')
ipdb> affinity_matrix.values.dtype
dtype('float64')
ipdb> 'sums' in affinity_matrix.index
False
</code></pre>
<p>Here is the error:</p>
<pre><code>ipdb> affinity_matrix.loc['sums'] = affinity_matrix.sum(axis=0)
*** ValueError: cannot reindex from a duplicate axis
</code></pre>
<p>I tried to reproduce this with a simple example, but I failed:</p>
<pre><code>In [32]: import pandas as pd

In [33]: import numpy as np

In [34]: a = np.arange(35).reshape(5,7)

In [35]: df = pd.DataFrame(a, ['x', 'y', 'u', 'z', 'w'], range(10, 17))

In [36]: df.values.dtype
Out[36]: dtype('int64')

In [37]: df.loc['sums'] = df.sum(axis=0)

In [38]: df
Out[38]:
      10  11  12  13  14  15   16
x      0   1   2   3   4   5    6
y      7   8   9  10  11  12   13
u     14  15  16  17  18  19   20
z     21  22  23  24  25  26   27
w     28  29  30  31  32  33   34
sums  70  75  80  85  90  95  100
</code></pre>
|
<p>This error usually arises when you join or assign to a column while the index has duplicate values. Since you are assigning to a row, I suspect that there is a duplicate value in <code>affinity_matrix.columns</code>, perhaps not shown in your question.</p>
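<p>The error can be seen in isolation with made-up data: label-based alignment (which <code>.loc</code> assignment performs internally) is a reindex, and reindexing is ambiguous when the existing labels contain duplicates, like <code>'a'</code> below:</p>

```python
import pandas as pd

# Duplicate label 'a' on the axis being reindexed
s = pd.Series([1, 2, 3], index=['a', 'a', 'b'])

try:
    s.reindex(['a', 'b', 'c'])  # which 'a' should line up?
    error = None
except ValueError as e:
    error = str(e)

print(error)
```

The exact wording varies between pandas versions ("cannot reindex from a duplicate axis" in older releases, "cannot reindex on an axis with duplicate labels" in newer ones), but both point at the same duplicate-label problem.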
| 642
|
pandas
|
Filter dataframe rows if value in column is in a set list of values
|
https://stackoverflow.com/questions/12065885/filter-dataframe-rows-if-value-in-column-is-in-a-set-list-of-values
|
<p>I have a Python pandas DataFrame <code>rpt</code>:</p>
<pre><code>rpt
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 47518 entries, ('000002', '20120331') to ('603366', '20091231')
Data columns:
STK_ID 47518 non-null values
STK_Name 47518 non-null values
RPT_Date 47518 non-null values
sales 47518 non-null values
</code></pre>
<p>I can filter the rows whose stock id is <code>'600809'</code> like this: <code>rpt[rpt['STK_ID'] == '600809']</code></p>
<pre><code><class 'pandas.core.frame.DataFrame'>
MultiIndex: 25 entries, ('600809', '20120331') to ('600809', '20060331')
Data columns:
STK_ID 25 non-null values
STK_Name 25 non-null values
RPT_Date 25 non-null values
sales 25 non-null values
</code></pre>
<p>and I want to get all the rows for several stocks together, such as <code>['600809','600141','600329']</code>. That means I want syntax like this:</p>
<pre><code>stk_list = ['600809','600141','600329']
rst = rpt[rpt['STK_ID'] in stk_list] # this does not works in pandas
</code></pre>
<p>Since pandas does not accept the above command, how can I achieve this?</p>
|
<p>Use the <code>isin</code> method: </p>
<p><code>rpt[rpt['STK_ID'].isin(stk_list)]</code></p>
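<p>A runnable sketch with made-up rows; negating the mask with <code>~</code> keeps everything <em>not</em> in the list:</p>

```python
import pandas as pd

rpt = pd.DataFrame({'STK_ID': ['600809', '600141', '000002'],
                    'sales': [10, 20, 30]})
stk_list = ['600809', '600141', '600329']

rst = rpt[rpt['STK_ID'].isin(stk_list)]
print(rst['STK_ID'].tolist())  # ['600809', '600141']

# Invert the mask to exclude the listed stocks instead
rest = rpt[~rpt['STK_ID'].isin(stk_list)]
print(rest['STK_ID'].tolist())  # ['000002']
```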
| 643
|
pandas
|
UnicodeDecodeError when reading CSV file in Pandas
|
https://stackoverflow.com/questions/18171739/unicodedecodeerror-when-reading-csv-file-in-pandas
|
<p>I'm running a program which is processing 30,000 similar files. A random number of them are stopping and producing this error...</p>
<pre class="lang-none prettyprint-override"><code> File "C:\Importer\src\dfman\importer.py", line 26, in import_chr
data = pd.read_csv(filepath, names=fields)
File "C:\Python33\lib\site-packages\pandas\io\parsers.py", line 400, in parser_f
return _read(filepath_or_buffer, kwds)
File "C:\Python33\lib\site-packages\pandas\io\parsers.py", line 205, in _read
return parser.read()
File "C:\Python33\lib\site-packages\pandas\io\parsers.py", line 608, in read
ret = self._engine.read(nrows)
File "C:\Python33\lib\site-packages\pandas\io\parsers.py", line 1028, in read
data = self._reader.read(nrows)
File "parser.pyx", line 706, in pandas.parser.TextReader.read (pandas\parser.c:6745)
File "parser.pyx", line 728, in pandas.parser.TextReader._read_low_memory (pandas\parser.c:6964)
File "parser.pyx", line 804, in pandas.parser.TextReader._read_rows (pandas\parser.c:7780)
File "parser.pyx", line 890, in pandas.parser.TextReader._convert_column_data (pandas\parser.c:8793)
File "parser.pyx", line 950, in pandas.parser.TextReader._convert_tokens (pandas\parser.c:9484)
File "parser.pyx", line 1026, in pandas.parser.TextReader._convert_with_dtype (pandas\parser.c:10642)
File "parser.pyx", line 1046, in pandas.parser.TextReader._string_convert (pandas\parser.c:10853)
File "parser.pyx", line 1278, in pandas.parser._string_box_utf8 (pandas\parser.c:15657)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xda in position 6: invalid continuation byte
</code></pre>
<p>The source/creation of these files all come from the same place. What's the best way to correct this to proceed with the import?</p>
|
<p><code>read_csv</code> takes an <code>encoding</code> option to deal with files in different formats. I mostly use <code>read_csv('file', encoding = "ISO-8859-1")</code>, or alternatively <code>encoding = "utf-8"</code> for reading, and generally <code>utf-8</code> for <code>to_csv</code>.</p>
<p>You can also use one of several <code>alias</code> options like <code>'latin'</code> or <code>'cp1252'</code> (Windows) instead of <code>'ISO-8859-1'</code> (see <a href="https://docs.python.org/3/library/codecs.html#standard-encodings" rel="noreferrer">python docs</a>, also for numerous other encodings you may encounter).</p>
<p>See <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="noreferrer">relevant Pandas documentation</a>,
<a href="http://docs.python.org/3/library/csv.html#examples" rel="noreferrer">python docs examples on csv files</a>, and plenty of related questions here on SO. A good background resource is <a href="https://www.joelonsoftware.com/2003/10/08/the-absolute-minimum-every-software-developer-absolutely-positively-must-know-about-unicode-and-character-sets-no-excuses/" rel="noreferrer">What every developer should know about unicode and character sets</a>.</p>
<p>To detect the encoding (assuming the file contains non-ascii characters), you can use <code>enca</code> (see <a href="https://linux.die.net/man/1/enconv" rel="noreferrer">man page</a>) or <code>file -i</code> (linux) or <code>file -I</code> (osx) (see <a href="https://linux.die.net/man/1/file" rel="noreferrer">man page</a>).</p>
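<p>When those tools are unavailable, a crude fallback is to try a few candidate encodings until one decodes without error (the sample byte string below stands in for <code>open(filepath, 'rb').read()</code>):</p>

```python
# Hypothetical file contents: 'Münze' encoded as Windows cp1252
raw = 'Münze'.encode('cp1252')

for enc in ('utf-8', 'cp1252', 'latin-1'):
    try:
        text = raw.decode(enc)
        break  # first encoding that decodes cleanly wins
    except UnicodeDecodeError:
        continue

print(enc, text)  # cp1252 Münze
```

Note this only proves a candidate decodes without error, not that it is the true encoding (e.g. <code>latin-1</code> decodes any byte string), so order the candidates from strictest to most permissive.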
| 644
|
pandas
|
Convert Pandas dataframe to NumPy array
|
https://stackoverflow.com/questions/13187778/convert-pandas-dataframe-to-numpy-array
|
<p>How do I convert a Pandas dataframe into a NumPy array?</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
df = pd.DataFrame(
{
'A': [np.nan, np.nan, np.nan, 0.1, 0.1, 0.1, 0.1],
'B': [0.2, np.nan, 0.2, 0.2, 0.2, np.nan, np.nan],
'C': [np.nan, 0.5, 0.5, np.nan, 0.5, 0.5, np.nan],
},
index=[1, 2, 3, 4, 5, 6, 7],
).rename_axis('ID')
</code></pre>
<p>That gives this DataFrame:</p>
<pre class="lang-none prettyprint-override"><code> A B C
ID
1 NaN 0.2 NaN
2 NaN NaN 0.5
3 NaN 0.2 0.5
4 0.1 0.2 NaN
5 0.1 0.2 0.5
6 0.1 NaN 0.5
7 0.1 NaN NaN
</code></pre>
<p>I would like to convert this to a NumPy array, like so:</p>
<pre class="lang-none prettyprint-override"><code>array([[ nan, 0.2, nan],
[ nan, nan, 0.5],
[ nan, 0.2, 0.5],
[ 0.1, 0.2, nan],
[ 0.1, 0.2, 0.5],
[ 0.1, nan, 0.5],
[ 0.1, nan, nan]])
</code></pre>
<hr />
<p>Also, is it possible to preserve the dtypes, like this?</p>
<pre class="lang-none prettyprint-override"><code>array([[ 1, nan, 0.2, nan],
[ 2, nan, nan, 0.5],
[ 3, nan, 0.2, 0.5],
[ 4, 0.1, 0.2, nan],
[ 5, 0.1, 0.2, 0.5],
[ 6, 0.1, nan, 0.5],
[ 7, 0.1, nan, nan]],
dtype=[('ID', '<i4'), ('A', '<f8'), ('B', '<f8'), ('B', '<f8')])
</code></pre>
|
<h1>Use <code>df.to_numpy()</code></h1>
<p>It's better than <code>df.values</code>, here's why.<sup>*</sup></p>
<p>It's time to deprecate your usage of <code>values</code> and <code>as_matrix()</code>.</p>
<p>pandas v0.24.0 introduced two new methods for obtaining NumPy arrays from pandas objects:</p>
<ol>
<li><strong><code>to_numpy()</code></strong>, which is defined on <code>Index</code>, <code>Series</code>, and <code>DataFrame</code> objects, and</li>
<li><strong><code>array</code></strong>, which is defined on <code>Index</code> and <code>Series</code> objects only.</li>
</ol>
<p>If you visit the v0.24 docs for <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.values.html#pandas.DataFrame.values" rel="noreferrer"><code>.values</code></a>, you will see a big red warning that says:</p>
<blockquote>
<h3>Warning: We recommend using <code>DataFrame.to_numpy()</code> instead.</h3>
</blockquote>
<p>See <a href="https://pandas.pydata.org/pandas-docs/stable/whatsnew/v0.24.0.html#accessing-the-values-in-a-series-or-index" rel="noreferrer">this section of the v0.24.0 release notes</a>, and <a href="https://stackoverflow.com/a/54324513/4909087">this answer</a> for more information.</p>
<p><sub>* - <code>to_numpy()</code> is my recommended method for any production code that needs to run reliably for many versions into the future. However, if you're just making a scratchpad in jupyter or the terminal, using <code>.values</code> to save a few milliseconds of typing is a permissible exception. You can always add the fit and finish later.</sub></p>
<hr />
<hr />
<h2><strong>Towards Better Consistency: <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_numpy.html" rel="noreferrer"><code>to_numpy()</code></a></strong></h2>
<p>In the spirit of better consistency throughout the API, a new method <code>to_numpy</code> has been introduced to extract the underlying NumPy array from DataFrames.</p>
<pre><code># Setup
df = pd.DataFrame(data={'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]},
index=['a', 'b', 'c'])
# Convert the entire DataFrame
df.to_numpy()
# array([[1, 4, 7],
# [2, 5, 8],
# [3, 6, 9]])
# Convert specific columns
df[['A', 'C']].to_numpy()
# array([[1, 7],
# [2, 8],
# [3, 9]])
</code></pre>
<p>As mentioned above, this method is also defined on <code>Index</code> and <code>Series</code> objects (see <a href="https://stackoverflow.com/a/54324513/4909087">here</a>).</p>
<pre><code>df.index.to_numpy()
# array(['a', 'b', 'c'], dtype=object)
df['A'].to_numpy()
# array([1, 2, 3])
</code></pre>
<p>By default, a view is returned when no copy is required (as here, where the frame holds a single dtype), so any modifications made will affect the original.</p>
<pre><code>v = df.to_numpy()
v[0, 0] = -1

df
   A  B  C
a -1  4  7
b  2  5  8
c  3  6  9
</code></pre>
<p>If you need a copy instead, use <code>to_numpy(copy=True)</code>.</p>
<hr />
<h3>pandas >= 1.0 update for ExtensionTypes</h3>
<p>If you're using pandas 1.x, chances are you'll be dealing with extension types a lot more. You'll have to be a little more careful that these extension types are correctly converted.</p>
<pre><code>a = pd.array([1, 2, None], dtype="Int64")
a
<IntegerArray>
[1, 2, <NA>]
Length: 3, dtype: Int64
# Wrong
a.to_numpy()
# array([1, 2, <NA>], dtype=object) # yuck, objects
# Correct
a.to_numpy(dtype='float', na_value=np.nan)
# array([ 1., 2., nan])
# Also correct
a.to_numpy(dtype='int', na_value=-1)
# array([ 1, 2, -1])
</code></pre>
<p>This is <a href="https://pandas.pydata.org/pandas-docs/stable/whatsnew/v1.0.0.html#arrays-integerarray-now-uses-pandas-na" rel="noreferrer">called out in the docs</a>.</p>
<hr />
<h3>If you need the <code>dtypes</code> in the result...</h3>
<p>As shown in another answer, <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_records.html#pandas-dataframe-to-records" rel="noreferrer"><code>DataFrame.to_records</code></a> is a good way to do this.</p>
<pre><code>df.to_records()
# rec.array([('a', 1, 4, 7), ('b', 2, 5, 8), ('c', 3, 6, 9)],
# dtype=[('index', 'O'), ('A', '<i8'), ('B', '<i8'), ('C', '<i8')])
</code></pre>
<p>This cannot be done with <code>to_numpy</code>, unfortunately. However, as an alternative, you can use <code>np.rec.fromrecords</code>:</p>
<pre><code>v = df.reset_index()
np.rec.fromrecords(v, names=v.columns.tolist())
# rec.array([('a', 1, 4, 7), ('b', 2, 5, 8), ('c', 3, 6, 9)],
# dtype=[('index', '<U1'), ('A', '<i8'), ('B', '<i8'), ('C', '<i8')])
</code></pre>
<p>Performance-wise, the two are nearly the same (in fact, using <code>rec.fromrecords</code> is a bit faster).</p>
<pre><code>df2 = pd.concat([df] * 10000)
%timeit df2.to_records()
%%timeit
v = df2.reset_index()
np.rec.fromrecords(v, names=v.columns.tolist())
12.9 ms ± 511 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
9.56 ms ± 291 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<hr />
<hr />
<h2><strong>Rationale for Adding a New Method</strong></h2>
<p><code>to_numpy()</code> (in addition to <code>array</code>) was added as a result of discussions under two GitHub issues <a href="https://github.com/pandas-dev/pandas/issues/19954" rel="noreferrer">GH19954</a> and <a href="https://github.com/pandas-dev/pandas/issues/23623" rel="noreferrer">GH23623</a>.</p>
<p>Specifically, <a href="https://pandas.pydata.org/pandas-docs/stable/whatsnew/v0.24.0.html#accessing-the-values-in-a-series-or-index" rel="noreferrer">the docs</a> mention the rationale:</p>
<blockquote>
<p>[...] with <code>.values</code> it was unclear whether the returned value would be the
actual array, some transformation of it, or one of pandas custom
arrays (like <code>Categorical</code>). For example, with <code>PeriodIndex</code>, <code>.values</code>
generates a new <code>ndarray</code> of period objects each time. [...]</p>
</blockquote>
<p><code>to_numpy</code> aims to improve the consistency of the API, which is a major step in the right direction. <code>.values</code> will not be deprecated in the current version, but I expect this may happen at some point in the future, so I would urge users to migrate towards the newer API, as soon as you can.</p>
<hr />
<hr />
<h2><strong>Critique of Other Solutions</strong></h2>
<p><code>DataFrame.values</code> has inconsistent behaviour, as already noted.</p>
<p><code>DataFrame.get_values()</code> was <a href="https://github.com/pandas-dev/pandas/pull/29989" rel="noreferrer">quietly removed in v1.0</a> and was previously deprecated in v0.25. Before that, it was simply a wrapper around <code>DataFrame.values</code>, so everything said above applies.</p>
<p><code>DataFrame.as_matrix()</code> was removed in v1.0 and was previously deprecated in v0.23. Do <strong>NOT</strong> use!</p>
| 645
|
pandas
|
How do I check if a pandas DataFrame is empty?
|
https://stackoverflow.com/questions/19828822/how-do-i-check-if-a-pandas-dataframe-is-empty
|
<p>How do I check if a pandas <code>DataFrame</code> is empty? I'd like to print some message in the terminal if the <code>DataFrame</code> is empty.</p>
|
<p>You can use the attribute <code>df.empty</code> to check whether it's empty or not:</p>
<pre><code>if df.empty:
print('DataFrame is empty!')
</code></pre>
<p>Source: <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.empty.html" rel="noreferrer">Pandas Documentation</a></p>
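One caveat worth knowing: <code>empty</code> only checks the shape, so a DataFrame that contains nothing but NaN still counts as non-empty. A small sketch (the column name is just for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [np.nan, np.nan]})

print(df.empty)           # False -- NaN rows are still rows
print(df.dropna().empty)  # True  -- drop them first if that's what you mean
```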
| 646
|
pandas
|
How to sort pandas dataframe by one column
|
https://stackoverflow.com/questions/37787698/how-to-sort-pandas-dataframe-by-one-column
|
<p>I have a dataframe like this:</p>
<pre class="lang-none prettyprint-override"><code> 0 1 2
0 354.7 April 4.0
1 55.4 August 8.0
2 176.5 December 12.0
3 95.5 February 2.0
4 85.6 January 1.0
5 152 July 7.0
6 238.7 June 6.0
7 104.8 March 3.0
8 283.5 May 5.0
9 278.8 November 11.0
10 249.6 October 10.0
11 212.7 September 9.0
</code></pre>
<p>As you can see, months are not in calendar order. So I created a second column to get the month number corresponding to each month (1-12). From there, how can I sort this dataframe according to calendar months' order?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="noreferrer"><code>sort_values</code></a> to sort the df by a specific column's values:</p>
<pre><code>In [18]:
df.sort_values('2')
Out[18]:
0 1 2
4 85.6 January 1.0
3 95.5 February 2.0
7 104.8 March 3.0
0 354.7 April 4.0
8 283.5 May 5.0
6 238.7 June 6.0
5 152.0 July 7.0
1 55.4 August 8.0
11 212.7 September 9.0
10 249.6 October 10.0
9 278.8 November 11.0
2 176.5 December 12.0
</code></pre>
<p>If you want to sort by two columns, pass a list of column labels to <code>sort_values</code> with the column labels ordered according to sort priority. If you use <code>df.sort_values(['2', '0'])</code>, the result would be sorted by column <code>2</code> then column <code>0</code>. Granted, this does not really make sense for this example because each value in <code>df['2']</code> is unique.</p>
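To make the multi-column case concrete, here is a minimal sketch (column names are made up) showing a descending sort and per-column sort directions:

```python
import pandas as pd

df = pd.DataFrame({'month_no': [4, 8, 12, 2],
                   'rainfall': [354.7, 55.4, 176.5, 95.5]})

# Sort by one column, descending
by_rain = df.sort_values('rainfall', ascending=False)

# Sort by several columns, each with its own direction
both = df.sort_values(['month_no', 'rainfall'], ascending=[True, False])

print(by_rain['month_no'].tolist())  # wettest month first
```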
| 647
|
pandas
|
Converting a Pandas GroupBy multiindex output from Series back to DataFrame
|
https://stackoverflow.com/questions/10373660/converting-a-pandas-groupby-multiindex-output-from-series-back-to-dataframe
|
<p>I have a dataframe:</p>
<pre class="lang-none prettyprint-override"><code> City Name
0 Seattle Alice
1 Seattle Bob
2 Portland Mallory
3 Seattle Mallory
4 Seattle Bob
5 Portland Mallory
</code></pre>
<p>I perform the following grouping:</p>
<pre class="lang-py prettyprint-override"><code>g1 = df1.groupby(["Name", "City"]).count()
</code></pre>
<p>which when printed looks like:</p>
<pre class="lang-none prettyprint-override"><code> City Name
Name City
Alice Seattle 1 1
Bob Seattle 2 2
Mallory Portland 2 2
Seattle 1 1
</code></pre>
<p>But what I want eventually is another DataFrame object that contains all the rows in the GroupBy object. In other words I want to get the following result:</p>
<pre class="lang-none prettyprint-override"><code> City Name
Name City
Alice Seattle 1 1
Bob Seattle 2 2
Mallory Portland 2 2
Mallory Seattle 1 1
</code></pre>
<p>How do I do it?</p>
|
<p><code>g1</code> here <em>is</em> a DataFrame. It has a hierarchical index, though:</p>
<pre><code>In [19]: type(g1)
Out[19]: pandas.core.frame.DataFrame
In [20]: g1.index
Out[20]:
MultiIndex([('Alice', 'Seattle'), ('Bob', 'Seattle'), ('Mallory', 'Portland'),
('Mallory', 'Seattle')], dtype=object)
</code></pre>
<p>Perhaps you want something like this?</p>
<pre><code>In [21]: g1.add_suffix('_Count').reset_index()
Out[21]:
Name City City_Count Name_Count
0 Alice Seattle 1 1
1 Bob Seattle 2 2
2 Mallory Portland 2 2
3 Mallory Seattle 1 1
</code></pre>
<p>Or something like:</p>
<pre><code>In [36]: DataFrame({'count' : df1.groupby( [ "Name", "City"] ).size()}).reset_index()
Out[36]:
Name City count
0 Alice Seattle 1
1 Bob Seattle 2
2 Mallory Portland 2
3 Mallory Seattle 1
</code></pre>
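The second approach can be written more compactly with <code>reset_index(name=...)</code>, which names the count column in one step. A sketch using the same data:

```python
import pandas as pd

df1 = pd.DataFrame({'City': ['Seattle', 'Seattle', 'Portland',
                             'Seattle', 'Seattle', 'Portland'],
                    'Name': ['Alice', 'Bob', 'Mallory',
                             'Mallory', 'Bob', 'Mallory']})

# size() returns a Series; reset_index(name=...) turns it into a
# DataFrame and names the count column at the same time
counts = df1.groupby(['Name', 'City']).size().reset_index(name='count')
print(counts)
```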
| 648
|
pandas
|
Pandas: Get first row value of a given column
|
https://stackoverflow.com/questions/25254016/pandas-get-first-row-value-of-a-given-column
|
<p>This seems like a ridiculously easy question... but I'm not seeing the easy answer I was expecting.</p>
<p>So, how do I get the value at an nth row of a given column in Pandas? (I am particularly interested in the first row, but would be interested in a more general practice as well).</p>
<p>For example, let's say I want to pull the 1.2 value in <code>Btime</code> as a variable.</p>
<p>Whats the right way to do this?</p>
<pre class="lang-py prettyprint-override"><code>>>> df_test
ATime X Y Z Btime C D E
0 1.2 2 15 2 1.2 12 25 12
1 1.4 3 12 1 1.3 13 22 11
2 1.5 1 10 6 1.4 11 20 16
3 1.6 2 9 10 1.7 12 29 12
4 1.9 1 1 9 1.9 11 21 19
5 2.0 0 0 0 2.0 8 10 11
6 2.4 0 0 0 2.4 10 12 15
</code></pre>
|
<p>To select the <code>ith</code> row, <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#different-choices-for-indexing-loc-iloc-and-ix" rel="noreferrer">use <code>iloc</code></a>:</p>
<pre><code>In [31]: df_test.iloc[0]
Out[31]:
ATime 1.2
X 2.0
Y 15.0
Z 2.0
Btime 1.2
C 12.0
D 25.0
E 12.0
Name: 0, dtype: float64
</code></pre>
<p>To select the ith value in the <code>Btime</code> column you could use:</p>
<pre><code>In [30]: df_test['Btime'].iloc[0]
Out[30]: 1.2
</code></pre>
<hr>
<h2>There is a difference between <code>df_test['Btime'].iloc[0]</code> (recommended) and <code>df_test.iloc[0]['Btime']</code>:</h2>
<p>DataFrames store data in column-based blocks (where each block has a single
dtype). If you select by column first, a <em>view</em> can be returned (which is
quicker than returning a copy) and the original dtype is preserved. In contrast,
if you select by row first, and if the DataFrame has columns of different
dtypes, then Pandas <em>copies</em> the data into a new Series of object dtype. So
selecting columns is a bit faster than selecting rows. Thus, although
<code>df_test.iloc[0]['Btime']</code> works, <code>df_test['Btime'].iloc[0]</code> is a little bit
more efficient.</p>
<p>There is a big difference between the two when it comes to assignment.
<code>df_test['Btime'].iloc[0] = x</code> affects <code>df_test</code>, but <code>df_test.iloc[0]['Btime']</code>
may not. See below for an explanation of why. Because a subtle difference in
the order of indexing makes a big difference in behavior, it is better to use single indexing assignment:</p>
<pre><code>df.iloc[0, df.columns.get_loc('Btime')] = x
</code></pre>
<hr>
<h2><code>df.iloc[0, df.columns.get_loc('Btime')] = x</code> (recommended):</h2>
<p>The <strong><a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#indexing-view-versus-copy" rel="noreferrer">recommended way</a></strong> to assign new values to a
DataFrame is to <a href="https://www.dataquest.io/blog/settingwithcopywarning/" rel="noreferrer">avoid chained indexing</a>, and instead use the method <a href="https://stackoverflow.com/a/32103253/190597">shown by
andrew</a>,</p>
<pre><code>df.loc[df.index[n], 'Btime'] = x
</code></pre>
<p>or </p>
<pre><code>df.iloc[n, df.columns.get_loc('Btime')] = x
</code></pre>
<p>The latter method is a bit faster, because <code>df.loc</code> has to convert the row and column labels to
positional indices, so there is a little less conversion necessary if you use
<code>df.iloc</code> instead.</p>
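A minimal sketch of both single-indexing assignment forms (the data is made up), neither of which triggers a <code>SettingWithCopyWarning</code>:

```python
import pandas as pd

df = pd.DataFrame({'Btime': [1.2, 1.3, 1.4]}, index=[10, 20, 30])

# One indexing call per assignment -- no chained indexing
df.iloc[0, df.columns.get_loc('Btime')] = 9.9   # positional
df.loc[df.index[1], 'Btime'] = 8.8              # label-based

print(df['Btime'].tolist())
```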
<hr>
<h2><code>df['Btime'].iloc[0] = x</code> works, but is not recommended:</h2>
<p>Although this works, it is taking advantage of the way DataFrames are <em>currently</em> implemented. There is no guarantee that Pandas has to work this way in the future. In particular, it is taking advantage of the fact that (currently) <code>df['Btime']</code> always returns a
view (not a copy) so <code>df['Btime'].iloc[n] = x</code> can be used to <em>assign</em> a new value
at the nth location of the <code>Btime</code> column of <code>df</code>.</p>
<p>Since Pandas makes no explicit guarantees about when indexers return a view versus a copy, assignments that use chained indexing generally raise a <code>SettingWithCopyWarning</code>, even though in this case the assignment succeeds in modifying <code>df</code>:</p>
<pre><code>In [22]: df = pd.DataFrame({'foo':list('ABC')}, index=[0,2,1])
In [24]: df['bar'] = 100
In [25]: df['bar'].iloc[0] = 99
/home/unutbu/data/binky/bin/ipython:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self._setitem_with_indexer(indexer, value)
In [26]: df
Out[26]:
foo bar
0 A 99 <-- assignment succeeded
2 B 100
1 C 100
</code></pre>
<hr>
<h2><code>df.iloc[0]['Btime'] = x</code> does not work:</h2>
<p>In contrast, assignment with <code>df.iloc[0]['bar'] = 123</code> does not work because <code>df.iloc[0]</code> is returning a copy:</p>
<pre><code>In [66]: df.iloc[0]['bar'] = 123
/home/unutbu/data/binky/bin/ipython:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
In [67]: df
Out[67]:
foo bar
0 A 99 <-- assignment failed
2 B 100
1 C 100
</code></pre>
<hr>
<p><strong>Warning</strong>: I had previously suggested <code>df_test.ix[i, 'Btime']</code>. But this is not guaranteed to give you the <code>ith</code> value since <code>ix</code> tries to index by <em>label</em> before trying to index by <em>position</em>. So if the DataFrame has an integer index which is not in sorted order starting at 0, then using <code>ix[i]</code> will return the row <em>labeled</em> <code>i</code> rather than the <code>ith</code> row. For example,</p>
<pre><code>In [1]: df = pd.DataFrame({'foo':list('ABC')}, index=[0,2,1])
In [2]: df
Out[2]:
foo
0 A
2 B
1 C
In [4]: df.ix[1, 'foo']
Out[4]: 'C'
</code></pre>
| 649
|
pandas
|
How to replace NaN values in a dataframe column
|
https://stackoverflow.com/questions/13295735/how-to-replace-nan-values-in-a-dataframe-column
|
<p>I have a Pandas Dataframe as below:</p>
<pre class="lang-none prettyprint-override"><code> itm Date Amount
67 420 2012-09-30 00:00:00 65211
68 421 2012-09-09 00:00:00 29424
69 421 2012-09-16 00:00:00 29877
70 421 2012-09-23 00:00:00 30990
71 421 2012-09-30 00:00:00 61303
72 485 2012-09-09 00:00:00 71781
73 485 2012-09-16 00:00:00 NaN
74 485 2012-09-23 00:00:00 11072
75 485 2012-09-30 00:00:00 113702
76 489 2012-09-09 00:00:00 64731
77 489 2012-09-16 00:00:00 NaN
</code></pre>
<p>When I try to apply a function to the Amount column, I get the following error:</p>
<pre class="lang-none prettyprint-override"><code>ValueError: cannot convert float NaN to integer
</code></pre>
<p>I have tried applying a function using <code>math.isnan</code>, pandas' <code>.replace</code> method, <code>.sparse</code> data attribute from pandas 0.9, if <code>NaN == NaN</code> statement in a function; I have also looked at <a href="https://stackoverflow.com/questions/8161836/how-do-i-replace-na-values-with-zeros-in-r">this Q/A</a>; none of them works.</p>
<p>How do I do it?</p>
|
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html" rel="noreferrer"><code>DataFrame.fillna()</code></a> or <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.fillna.html" rel="noreferrer"><code>Series.fillna()</code></a> will do this for you.</p>
<p>Example:</p>
<pre><code>In [7]: df
Out[7]:
0 1
0 NaN NaN
1 -0.494375 0.570994
2 NaN NaN
3 1.876360 -0.229738
4 NaN NaN
In [8]: df.fillna(0)
Out[8]:
0 1
0 0.000000 0.000000
1 -0.494375 0.570994
2 0.000000 0.000000
3 1.876360 -0.229738
4 0.000000 0.000000
</code></pre>
<p>To fill the NaNs in only one column, select just that column.</p>
<pre><code>In [12]: df[1] = df[1].fillna(0)
In [13]: df
Out[13]:
0 1
0 NaN 0.000000
1 -0.494375 0.570994
2 NaN 0.000000
3 1.876360 -0.229738
4 NaN 0.000000
</code></pre>
<p>Or you can use the built in column-specific functionality:</p>
<pre><code>df = df.fillna({1: 0})
</code></pre>
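To tie this back to the original error (<code>cannot convert float NaN to integer</code>): fill the missing values first, then convert. A minimal sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Amount': [65211.0, np.nan, 11072.0]})

# NaN cannot live in an int column, so fill before converting
df['Amount'] = df['Amount'].fillna(0).astype(int)
print(df['Amount'].tolist())
```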
| 650
|
pandas
|
Convert Python dict into a dataframe
|
https://stackoverflow.com/questions/18837262/convert-python-dict-into-a-dataframe
|
<p>I have a Python dictionary:</p>
<pre class="lang-py prettyprint-override"><code>{u'2012-07-01': 391,
u'2012-07-02': 392,
u'2012-07-03': 392,
u'2012-07-04': 392,
u'2012-07-05': 392,
u'2012-07-06': 392}
</code></pre>
<p>I would like to convert this into a pandas dataframe by having the dates and their corresponding values as two separate columns; the expected result looks like:</p>
<pre class="lang-none prettyprint-override"><code> Date DateValue
0 2012-07-01 391
1 2012-07-02 392
2 2012-07-03 392
. 2012-07-04 392
. ... ...
</code></pre>
<p>Is there a direct way to do this?</p>
|
<p>The error here is caused by calling the DataFrame constructor with scalar values (it expects the values to be lists/dicts/... i.e. to have multiple columns):</p>
<pre><code>pd.DataFrame(d)
ValueError: If using all scalar values, you must pass an index
</code></pre>
<p>You could take the items from the dictionary (i.e. the key-value pairs):</p>
<pre><code>In [11]: pd.DataFrame(d.items()) # or list(d.items()) in python 3
Out[11]:
0 1
0 2012-07-01 391
1 2012-07-02 392
2 2012-07-03 392
3 2012-07-04 392
4 2012-07-05 392
5 2012-07-06 392
In [12]: pd.DataFrame(d.items(), columns=['Date', 'DateValue'])
Out[12]:
Date DateValue
0 2012-07-01 391
1 2012-07-02 392
2 2012-07-03 392
3 2012-07-04 392
4 2012-07-05 392
5 2012-07-06 392
</code></pre>
<p>But I think it makes more sense to pass the Series constructor:</p>
<pre><code>In [20]: s = pd.Series(d, name='DateValue')
In [21]: s
Out[21]:
2012-07-01 391
2012-07-02 392
2012-07-03 392
2012-07-04 392
2012-07-05 392
2012-07-06 392
Name: DateValue, dtype: int64
In [22]: s.index.name = 'Date'
In [23]: s.reset_index()
Out[23]:
Date DateValue
0 2012-07-01 391
1 2012-07-02 392
2 2012-07-03 392
3 2012-07-04 392
4 2012-07-05 392
5 2012-07-06 392
</code></pre>
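A more recent one-liner variant (assuming pandas >= 0.23, and a smaller dict here for brevity) uses <code>from_dict</code> with <code>orient='index'</code>, so the dict keys become the index directly:

```python
import pandas as pd

d = {'2012-07-01': 391, '2012-07-02': 392}

# Keys become the index; rename_axis names it before resetting
df = (pd.DataFrame.from_dict(d, orient='index', columns=['DateValue'])
        .rename_axis('Date')
        .reset_index())
print(df)
```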
| 651
|
pandas
|
How to check if a column exists in Pandas
|
https://stackoverflow.com/questions/24870306/how-to-check-if-a-column-exists-in-pandas
|
<p>How do I check if a column exists in a Pandas DataFrame <code>df</code>?</p>
<pre class="lang-none prettyprint-override"><code> A B C
0 3 40 100
1 6 30 200
</code></pre>
<p>How would I check if the column <code>"A"</code> exists in the above DataFrame so that I can compute:</p>
<pre class="lang-py prettyprint-override"><code>df['sum'] = df['A'] + df['C']
</code></pre>
<p>And if <code>"A"</code> doesn't exist:</p>
<pre class="lang-py prettyprint-override"><code>df['sum'] = df['B'] + df['C']
</code></pre>
|
<p>This will work:</p>
<pre><code>if 'A' in df:
</code></pre>
<p>But for clarity, I'd probably write it as:</p>
<pre><code>if 'A' in df.columns:
</code></pre>
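A minimal sketch putting this to work on the question's example, plus a set-based check when you need several columns at once:

```python
import pandas as pd

df = pd.DataFrame({'A': [3, 6], 'B': [40, 30], 'C': [100, 200]})

if 'A' in df.columns:
    df['sum'] = df['A'] + df['C']
else:
    df['sum'] = df['B'] + df['C']

# Checking several columns at once
has_both = {'A', 'C'}.issubset(df.columns)
print(df['sum'].tolist(), has_both)
```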
| 652
|
pandas
|
Selecting/excluding sets of columns in pandas
|
https://stackoverflow.com/questions/14940743/selecting-excluding-sets-of-columns-in-pandas
|
<p>I would like to create views or dataframes from an existing dataframe based on column selections.</p>
<p>For example, I would like to create a dataframe <code>df2</code> from a dataframe <code>df1</code> that holds all columns from it except two of them. I tried doing the following, but it didn't work:</p>
<pre><code>import numpy as np
import pandas as pd
# Create a dataframe with columns A,B,C and D
df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))
# Try to create a second dataframe df2 from df with all columns except 'B' and D
my_cols = set(df.columns)
my_cols.remove('B').remove('D')
# This returns an error ("unhashable type: set")
df2 = df[my_cols]
</code></pre>
<p>What am I doing wrong? Perhaps more generally, what mechanisms does pandas have to support the picking and <strong>exclusions</strong> of arbitrary sets of columns from a dataframe?</p>
|
<p>You can either Drop the columns you do not need OR Select the ones you need</p>
<pre><code># Using DataFrame.drop
df.drop(df.columns[[1, 2]], axis=1, inplace=True)
# drop by Name
df1 = df1.drop(['B', 'C'], axis=1)
# Select the ones you want
df1 = df[['a','d']]
</code></pre>
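In more recent pandas (>= 0.21), <code>drop</code> also accepts a <code>columns=</code> keyword, which avoids the <code>axis=1</code> spelling entirely. A short sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(12).reshape(3, 4), columns=list('ABCD'))

# Name the columns to exclude directly; df itself is left untouched
df2 = df.drop(columns=['B', 'D'])
print(df2.columns.tolist())
```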
| 653
|
pandas
|
Sorting columns in pandas dataframe based on column name
|
https://stackoverflow.com/questions/11067027/sorting-columns-in-pandas-dataframe-based-on-column-name
|
<p>I have a <code>dataframe</code> with over 200 columns. The issue is as they were generated the order is</p>
<pre><code>['Q1.3','Q6.1','Q1.2','Q1.1',......]
</code></pre>
<p>I need to <em>sort</em> the columns as follows:</p>
<pre><code>['Q1.1','Q1.2','Q1.3',.....'Q6.1',......]
</code></pre>
<p>Is there some way for me to do this within Python?</p>
|
<pre><code>df = df.reindex(sorted(df.columns), axis=1)
</code></pre>
<p>This assumes that sorting the column names will give the order you want. If your column names won't sort lexicographically (e.g., if you want column Q10.3 to appear after Q9.1), you'll need to sort differently, but that has nothing to do with pandas.</p>
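For the Q10-after-Q9 case just mentioned, you can pass a key function to <code>sorted</code> that parses the numbers out of the names. A sketch (the <code>q_key</code> helper is hypothetical and assumes every column looks like <code>Q<major>.<minor></code>):

```python
import pandas as pd

df = pd.DataFrame(columns=['Q1.3', 'Q10.1', 'Q1.2', 'Q9.1'])

def q_key(name):
    # 'Q10.3' -> (10, 3), so Q10 sorts after Q9 instead of after Q1
    major, minor = name.lstrip('Q').split('.')
    return (int(major), int(minor))

df = df.reindex(sorted(df.columns, key=q_key), axis=1)
print(df.columns.tolist())
```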
| 654
|
pandas
|
How to reset index in a pandas dataframe?
|
https://stackoverflow.com/questions/20490274/how-to-reset-index-in-a-pandas-dataframe
|
<p>I have a dataframe from which I remove some rows. As a result, I get a dataframe in which index is something like <code>[1,5,6,10,11]</code> and I would like to reset it to <code>[0,1,2,3,4]</code>. How can I do it?</p>
<hr />
<p>The following seems to work:</p>
<pre class="lang-py prettyprint-override"><code>df = df.reset_index()
del df['index']
</code></pre>
<p>The following does not work:</p>
<pre class="lang-py prettyprint-override"><code>df = df.reindex()
</code></pre>
|
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="noreferrer"><code>DataFrame.reset_index</code></a> is what you're looking for. If you don't want it saved as a column, then do:</p>
<pre><code>df = df.reset_index(drop=True)
</code></pre>
<p>If you don't want to reassign:</p>
<pre><code>df.reset_index(drop=True, inplace=True)
</code></pre>
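A minimal sketch reproducing the question's situation, showing the index going from the leftover labels back to <code>0..4</code>:

```python
import pandas as pd

# Index left behind after dropping some rows
df = pd.DataFrame({'x': list('abcde')}, index=[1, 5, 6, 10, 11])

# drop=True discards the old index instead of keeping it as a column
df = df.reset_index(drop=True)
print(df.index.tolist())
```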
| 655
|
pandas
|
Create new column based on values from other columns / apply a function of multiple columns, row-wise in Pandas
|
https://stackoverflow.com/questions/26886653/create-new-column-based-on-values-from-other-columns-apply-a-function-of-multi
|
<p>I want to apply my custom function (it uses an if-else ladder) to these six columns (<code>ERI_Hispanic</code>, <code>ERI_AmerInd_AKNatv</code>, <code>ERI_Asian</code>, <code>ERI_Black_Afr.Amer</code>, <code>ERI_HI_PacIsl</code>, <code>ERI_White</code>) in each row of my dataframe.</p>
<p>I've tried different methods from other questions but still can't seem to find the right answer for my problem. The critical piece of this is that if the person is counted as Hispanic they can't be counted as anything else. Even if they have a "1" in another ethnicity column they still are counted as Hispanic not two or more races. Similarly, if the sum of all the ERI columns is greater than 1 they are counted as two or more races and can't be counted as a unique ethnicity(except for Hispanic).</p>
<p>It's almost like doing a for loop through each row and if each record meets a criterion they are added to one list and eliminated from the original.</p>
<p>From the dataframe below I need to calculate a new column based on the following spec in SQL:</p>
<p><strong>CRITERIA</strong></p>
<pre><code>IF [ERI_Hispanic] = 1 THEN RETURN “Hispanic”
ELSE IF SUM([ERI_AmerInd_AKNatv] + [ERI_Asian] + [ERI_Black_Afr.Amer] + [ERI_HI_PacIsl] + [ERI_White]) > 1 THEN RETURN “Two or More”
ELSE IF [ERI_AmerInd_AKNatv] = 1 THEN RETURN “A/I AK Native”
ELSE IF [ERI_Asian] = 1 THEN RETURN “Asian”
ELSE IF [ERI_Black_Afr.Amer] = 1 THEN RETURN “Black/AA”
ELSE IF [ERI_HI_PacIsl] = 1 THEN RETURN “Haw/Pac Isl.”
ELSE IF [ERI_White] = 1 THEN RETURN “White”
</code></pre>
<p>Comment: If the ERI Flag for Hispanic is True (1), the employee is classified as “Hispanic”</p>
<p>Comment: If more than 1 non-Hispanic ERI Flag is true, return “Two or More”</p>
<p><strong>DATAFRAME</strong></p>
<pre><code> lname fname rno_cd eri_afr_amer eri_asian eri_hawaiian eri_hispanic eri_nat_amer eri_white rno_defined
0 MOST JEFF E 0 0 0 0 0 1 White
1 CRUISE TOM E 0 0 0 1 0 0 White
2 DEPP JOHNNY 0 0 0 0 0 1 Unknown
3 DICAP LEO 0 0 0 0 0 1 Unknown
4 BRANDO MARLON E 0 0 0 0 0 0 White
5 HANKS TOM 0 0 0 0 0 1 Unknown
6 DENIRO ROBERT E 0 1 0 0 0 1 White
7 PACINO AL E 0 0 0 0 0 1 White
8 WILLIAMS ROBIN E 0 0 1 0 0 0 White
9 EASTWOOD CLINT E 0 0 0 0 0 1 White
</code></pre>
|
<p>OK, two steps to this - first is to write a function that does the translation you want - I've put an example together based on your pseudo-code:</p>
<pre><code>def label_race(row):
if row['eri_hispanic'] == 1:
return 'Hispanic'
if row['eri_afr_amer'] + row['eri_asian'] + row['eri_hawaiian'] + row['eri_nat_amer'] + row['eri_white'] > 1:
return 'Two Or More'
if row['eri_nat_amer'] == 1:
return 'A/I AK Native'
if row['eri_asian'] == 1:
return 'Asian'
if row['eri_afr_amer'] == 1:
return 'Black/AA'
if row['eri_hawaiian'] == 1:
return 'Haw/Pac Isl.'
if row['eri_white'] == 1:
return 'White'
return 'Other'
</code></pre>
<p>You may want to go over this, but it does the trick - notice that the parameter passed into the function is a Series object representing one row (labelled <code>row</code> here).</p>
<p>Next, use the apply function in pandas to apply the function - e.g.</p>
<pre><code>df.apply(label_race, axis=1)
</code></pre>
<p>Note the <code>axis=1</code> specifier, that means that the application is done at a row, rather than a column level. The results are here:</p>
<pre><code>0 White
1 Hispanic
2 White
3 White
4 Other
5 White
6 Two Or More
7 White
8 Haw/Pac Isl.
9 White
</code></pre>
<p>If you're happy with those results, then run it again, saving the results into a new column in your original dataframe.</p>
<pre><code>df['race_label'] = df.apply(label_race, axis=1)
</code></pre>
<p>The resultant dataframe looks like this (scroll to the right to see the new column):</p>
<pre><code> lname fname rno_cd eri_afr_amer eri_asian eri_hawaiian eri_hispanic eri_nat_amer eri_white rno_defined race_label
0 MOST JEFF E 0 0 0 0 0 1 White White
1 CRUISE TOM E 0 0 0 1 0 0 White Hispanic
2 DEPP JOHNNY NaN 0 0 0 0 0 1 Unknown White
3 DICAP LEO NaN 0 0 0 0 0 1 Unknown White
4 BRANDO MARLON E 0 0 0 0 0 0 White Other
5 HANKS TOM NaN 0 0 0 0 0 1 Unknown White
6 DENIRO ROBERT E 0 1 0 0 0 1 White Two Or More
7 PACINO AL E 0 0 0 0 0 1 White White
8 WILLIAMS ROBIN E 0 0 1 0 0 0 White Haw/Pac Isl.
9 EASTWOOD CLINT E 0 0 0 0 0 1 White White
</code></pre>
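For larger frames, a vectorized alternative to the row-wise <code>apply</code> is <code>numpy.select</code>, which evaluates the same if-else ladder as whole-column conditions. A sketch with a reduced set of flag columns for brevity (the full version would list all six conditions in the same priority order):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'eri_hispanic': [1, 0, 0],
                   'eri_asian':    [0, 1, 0],
                   'eri_white':    [0, 1, 0]})

race_cols = ['eri_asian', 'eri_white']  # the non-Hispanic flags present here
conditions = [
    df['eri_hispanic'].eq(1),            # Hispanic wins outright
    df[race_cols].sum(axis=1).gt(1),     # more than one flag set
    df['eri_asian'].eq(1),
    df['eri_white'].eq(1),
]
choices = ['Hispanic', 'Two Or More', 'Asian', 'White']

# First matching condition wins, mirroring the if/elif ladder
df['race_label'] = np.select(conditions, choices, default='Other')
print(df['race_label'].tolist())
```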
| 656
|
pandas
|
How can I display full (non-truncated) dataframe information in HTML when converting from Pandas dataframe to HTML?
|
https://stackoverflow.com/questions/25351968/how-can-i-display-full-non-truncated-dataframe-information-in-html-when-conver
|
<p>I converted a Pandas dataframe to an HTML output using the <code>DataFrame.to_html</code> function. When I save this to a separate HTML file, the file shows truncated output.</p>
<p>For example, in my TEXT column,</p>
<p><code>df.head(1)</code> will show</p>
<p><em>The film was an excellent effort...</em></p>
<p>instead of</p>
<p><em>The film was an excellent effort in deconstructing the complex social sentiments that prevailed during this period.</em></p>
<p>This rendition is fine in the case of a screen-friendly format of a massive Pandas dataframe, but I need an HTML file that will show complete tabular data contained in the dataframe, that is, something that will show the latter text element rather than the former text snippet.</p>
<p>How would I be able to show the complete, non-truncated text data for each element in my TEXT column in the HTML version of the information? I would imagine that the HTML table would have to display long cells to show the complete data, but as far as I understand, only column-width parameters can be passed into the <code>DataFrame.to_html</code> function.</p>
|
<p>Set the <code>display.max_colwidth</code> option to <code>None</code> (or <code>-1</code> before version 1.0):</p>
<pre><code>pd.set_option('display.max_colwidth', None)
</code></pre>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.set_option.html#pandas-set-option" rel="noreferrer"><code>set_option</code> documentation</a></p>
<p>For example, in <a href="https://en.wikipedia.org/wiki/IPython" rel="noreferrer">IPython</a>, we see that the information is truncated to 50 characters. Anything in excess is ellipsized:</p>
<p><a href="https://i.sstatic.net/hANGS.png" rel="noreferrer"><img src="https://i.sstatic.net/hANGS.png" alt="Truncated result" /></a></p>
<p>If you set the <code>display.max_colwidth</code> option, the information will be displayed fully:</p>
<p><a href="https://i.sstatic.net/Nxg2q.png" rel="noreferrer"><img src="https://i.sstatic.net/Nxg2q.png" alt="Non-truncated result" /></a></p>
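If you'd rather not change the setting globally, <code>pandas.option_context</code> scopes it to a single block. A minimal sketch (the sample text is made up):

```python
import pandas as pd

df = pd.DataFrame({'TEXT': ['The film was an excellent effort in deconstructing '
                            'the complex social sentiments of the period.']})

# The option is only in effect inside the with-block
with pd.option_context('display.max_colwidth', None):
    html = df.to_html()

print('deconstructing' in html)  # the full cell text made it into the HTML
```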
| 657
|
pandas
|
How to flatten a hierarchical index in columns
|
https://stackoverflow.com/questions/14507794/how-to-flatten-a-hierarchical-index-in-columns
|
<p>I have a data frame with a hierarchical index in axis 1 (columns) (from a <code>groupby.agg</code> operation):</p>
<pre><code> USAF WBAN year month day s_PC s_CL s_CD s_CNT tempf
sum sum sum sum amax amin
0 702730 26451 1993 1 1 1 0 12 13 30.92 24.98
1 702730 26451 1993 1 2 0 0 13 13 32.00 24.98
2 702730 26451 1993 1 3 1 10 2 13 23.00 6.98
3 702730 26451 1993 1 4 1 0 12 13 10.04 3.92
4 702730 26451 1993 1 5 3 0 10 13 19.94 10.94
</code></pre>
<p>I want to flatten it, so that it looks like this (names aren't critical - I could rename):</p>
<pre><code> USAF WBAN year month day s_PC s_CL s_CD s_CNT tempf_amax tmpf_amin
0 702730 26451 1993 1 1 1 0 12 13 30.92 24.98
1 702730 26451 1993 1 2 0 0 13 13 32.00 24.98
2 702730 26451 1993 1 3 1 10 2 13 23.00 6.98
3 702730 26451 1993 1 4 1 0 12 13 10.04 3.92
4 702730 26451 1993 1 5 3 0 10 13 19.94 10.94
</code></pre>
<p>How do I do this? (I've tried a lot, to no avail.) </p>
<p>Per a suggestion, here is the head in dict form</p>
<pre><code>{('USAF', ''): {0: '702730',
1: '702730',
2: '702730',
3: '702730',
4: '702730'},
('WBAN', ''): {0: '26451', 1: '26451', 2: '26451', 3: '26451', 4: '26451'},
('day', ''): {0: 1, 1: 2, 2: 3, 3: 4, 4: 5},
('month', ''): {0: 1, 1: 1, 2: 1, 3: 1, 4: 1},
('s_CD', 'sum'): {0: 12.0, 1: 13.0, 2: 2.0, 3: 12.0, 4: 10.0},
('s_CL', 'sum'): {0: 0.0, 1: 0.0, 2: 10.0, 3: 0.0, 4: 0.0},
('s_CNT', 'sum'): {0: 13.0, 1: 13.0, 2: 13.0, 3: 13.0, 4: 13.0},
('s_PC', 'sum'): {0: 1.0, 1: 0.0, 2: 1.0, 3: 1.0, 4: 3.0},
('tempf', 'amax'): {0: 30.920000000000002,
1: 32.0,
2: 23.0,
3: 10.039999999999999,
4: 19.939999999999998},
('tempf', 'amin'): {0: 24.98,
1: 24.98,
2: 6.9799999999999969,
3: 3.9199999999999982,
4: 10.940000000000001},
('year', ''): {0: 1993, 1: 1993, 2: 1993, 3: 1993, 4: 1993}}
</code></pre>
|
<p>I think the easiest way to do this would be to set the columns to the top level:</p>
<pre><code>df.columns = df.columns.get_level_values(0)
</code></pre>
<p><em>Note: if the top level has a name, you can also pass that name instead of 0.</em></p>
<p>.</p>
<p>If you want to combine/<a href="http://docs.python.org/2/library/stdtypes.html#str.join"><code>join</code></a> your MultiIndex into one Index <em>(assuming you have just string entries in your columns)</em> you could:</p>
<pre><code>df.columns = [' '.join(col).strip() for col in df.columns.values]
</code></pre>
<p><em>Note: we must <a href="http://docs.python.org/2/library/stdtypes.html#str.strip"><code>strip</code></a> the trailing whitespace for the case where there is no second level.</em></p>
<pre><code>In [11]: [' '.join(col).strip() for col in df.columns.values]
Out[11]:
['USAF',
'WBAN',
'day',
'month',
's_CD sum',
's_CL sum',
's_CNT sum',
's_PC sum',
'tempf amax',
'tempf amin',
'year']
</code></pre>
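The same idea works with an underscore separator, which tends to produce friendlier identifiers; a sketch on a tiny frame mimicking the question's columns:

```python
import pandas as pd

cols = pd.MultiIndex.from_tuples([('tempf', 'amax'), ('tempf', 'amin'),
                                  ('USAF', '')])
df = pd.DataFrame([[30.92, 24.98, '702730']], columns=cols)

# Join each (level0, level1) tuple, stripping the trailing '_' that an
# empty second level leaves behind
df.columns = ['_'.join(col).strip('_') for col in df.columns]
print(df.columns.tolist())
```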
| 658
|
pandas
|
How can I pivot a dataframe?
|
https://stackoverflow.com/questions/47152691/how-can-i-pivot-a-dataframe
|
<p><strong>How do I pivot the pandas dataframe <code>df</code> defined at bottom such that the <code>col</code> values become columns, <code>row</code> values become the index, and mean of <code>val0</code> becomes the values?</strong> (in some cases this is called transforming from long-format to wide-format)</p>
<p>(See note at bottom: Why is this question not a duplicate? and why this is thematically one question and not too broad.)</p>
<h3>Subquestions</h3>
<ol>
<li><p>How to avoid getting <code>ValueError: Index contains duplicate entries, cannot reshape</code>?</p>
</li>
<li><p><strong>How do I pivot <code>df</code> defined at bottom, such that the <code>col</code> values become columns, <code>row</code> values become the index, and mean of <code>val0</code> are the values?</strong></p>
<pre><code>col col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65
row2 0.13 NaN 0.395 0.500 0.25
row3 NaN 0.310 NaN 0.545 NaN
row4 NaN 0.100 0.395 0.760 0.24
</code></pre>
</li>
</ol>
<p>How do I pivot...</p>
<ol start="3">
<li><p>... so that missing values are <code>0</code>?</p>
<pre><code>col col0 col1 col2 col3 col4
row
row0 0.77 0.605 0.000 0.860 0.65
row2 0.13 0.000 0.395 0.500 0.25
row3 0.00 0.310 0.000 0.545 0.00
row4 0.00 0.100 0.395 0.760 0.24
</code></pre>
</li>
<li><p>... to do an aggregate function other than <code>mean</code>, like <code>sum</code>?</p>
<pre><code>col col0 col1 col2 col3 col4
row
row0 0.77 1.21 0.00 0.86 0.65
row2 0.13 0.00 0.79 0.50 0.50
row3 0.00 0.31 0.00 1.09 0.00
row4 0.00 0.10 0.79 1.52 0.24
</code></pre>
</li>
<li><p>... to do more than one aggregation at a time?</p>
<pre><code> sum mean
col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4
row
row0 0.77 1.21 0.00 0.86 0.65 0.77 0.605 0.000 0.860 0.65
row2 0.13 0.00 0.79 0.50 0.50 0.13 0.000 0.395 0.500 0.25
row3 0.00 0.31 0.00 1.09 0.00 0.00 0.310 0.000 0.545 0.00
row4 0.00 0.10 0.79 1.52 0.24 0.00 0.100 0.395 0.760 0.24
</code></pre>
</li>
<li><p>... to aggregate over multiple 'value' columns?</p>
<pre><code> val0 val1
col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4
row
row0 0.77 0.605 0.000 0.860 0.65 0.01 0.745 0.00 0.010 0.02
row2 0.13 0.000 0.395 0.500 0.25 0.45 0.000 0.34 0.440 0.79
row3 0.00 0.310 0.000 0.545 0.00 0.00 0.230 0.00 0.075 0.00
row4 0.00 0.100 0.395 0.760 0.24 0.00 0.070 0.42 0.300 0.46
</code></pre>
</li>
<li><p>... <strong>to subdivide by multiple columns?</strong> (item0,item1,item2..., col0,col1,col2...)</p>
<pre><code>item item0 item1 item2
col col2 col3 col4 col0 col1 col2 col3 col4 col0 col1 col3 col4
row
row0 0.00 0.00 0.00 0.77 0.00 0.00 0.00 0.00 0.00 0.605 0.86 0.65
row2 0.35 0.00 0.37 0.00 0.00 0.44 0.00 0.00 0.13 0.000 0.50 0.13
row3 0.00 0.00 0.00 0.00 0.31 0.00 0.81 0.00 0.00 0.000 0.28 0.00
row4 0.15 0.64 0.00 0.00 0.10 0.64 0.88 0.24 0.00 0.000 0.00 0.00
</code></pre>
</li>
<li><p>... <strong>to subdivide by multiple rows:</strong> (key0,key1... row0,row1,row2...)</p>
<pre><code>item item0 item1 item2
col col2 col3 col4 col0 col1 col2 col3 col4 col0 col1 col3 col4
key row
key0 row0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.86 0.00
row2 0.00 0.00 0.37 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.50 0.00
row3 0.00 0.00 0.00 0.00 0.31 0.00 0.81 0.00 0.00 0.00 0.00 0.00
row4 0.15 0.64 0.00 0.00 0.00 0.00 0.00 0.24 0.00 0.00 0.00 0.00
key1 row0 0.00 0.00 0.00 0.77 0.00 0.00 0.00 0.00 0.00 0.81 0.00 0.65
row2 0.35 0.00 0.00 0.00 0.00 0.44 0.00 0.00 0.00 0.00 0.00 0.13
row3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.28 0.00
row4 0.00 0.00 0.00 0.00 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00
key2 row0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.40 0.00 0.00
row2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.13 0.00 0.00 0.00
row4 0.00 0.00 0.00 0.00 0.00 0.64 0.88 0.00 0.00 0.00 0.00 0.00
</code></pre>
</li>
<li><p>... <strong>to aggregate the frequency</strong> in which the column and rows occur together, aka <strong>"cross tabulation"</strong>?</p>
<pre><code>col col0 col1 col2 col3 col4
row
row0 1 2 0 1 1
row2 1 0 2 1 2
row3 0 1 0 2 0
row4 0 1 2 2 1
</code></pre>
</li>
<li><p>... to convert a DataFrame <strong>from long-to-wide</strong> by pivoting on ONLY two columns? Given:</p>
<pre><code>np.random.seed([3, 1415])
df2 = pd.DataFrame({'A': list('aaaabbbc'), 'B': np.random.choice(15, 8)})
df2
A B
0 a 0
1 a 11
2 a 2
3 a 11
4 b 10
5 b 10
6 b 14
7 c 7
</code></pre>
<p>The expected should look something like</p>
<pre><code> a b c
0 0.0 10.0 7.0
1 11.0 10.0 NaN
2 2.0 14.0 NaN
3 11.0 NaN NaN
</code></pre>
</li>
<li><p>... <strong>to flatten the multiple index to a single index</strong> after pivot?</p>
<p>From:</p>
<pre><code> 1 2
1 1 2
a 2 1 1
b 2 1 0
c 1 0 0
</code></pre>
<p>To:</p>
<pre><code> 1|1 2|1 2|2
a 2 1 1
b 2 1 0
c 1 0 0
</code></pre>
</li>
</ol>
<h2>Setup</h2>
<p>Consider a dataframe df with columns 'key', 'row', 'item', 'col', and random float values 'val0', 'val1'. I conspicuously named the columns and relevant column values to correspond with how I want to pivot them.</p>
<pre><code>import numpy as np
import pandas as pd
from numpy.core.defchararray import add
np.random.seed([3,1415])
n = 20
cols = np.array(['key', 'row', 'item', 'col'])
arr1 = (np.random.randint(5, size=(n, 4)) // [2, 1, 2, 1]).astype(str)
df = pd.DataFrame(
add(cols, arr1), columns=cols
).join(
pd.DataFrame(np.random.rand(n, 2).round(2)).add_prefix('val')
)
print(df)
</code></pre>
<pre><code> key row item col val0 val1
0 key0 row3 item1 col3 0.81 0.04
1 key1 row2 item1 col2 0.44 0.07
2 key1 row0 item1 col0 0.77 0.01
3 key0 row4 item0 col2 0.15 0.59
4 key1 row0 item2 col1 0.81 0.64
5 key1 row2 item2 col4 0.13 0.88
6 key2 row4 item1 col3 0.88 0.39
7 key1 row4 item1 col1 0.10 0.07
8 key1 row0 item2 col4 0.65 0.02
9 key1 row2 item0 col2 0.35 0.61
10 key2 row0 item2 col1 0.40 0.85
11 key2 row4 item1 col2 0.64 0.25
12 key0 row2 item2 col3 0.50 0.44
13 key0 row4 item1 col4 0.24 0.46
14 key1 row3 item2 col3 0.28 0.11
15 key0 row3 item1 col1 0.31 0.23
16 key0 row0 item2 col3 0.86 0.01
17 key0 row4 item0 col3 0.64 0.21
18 key2 row2 item2 col0 0.13 0.45
19 key0 row2 item0 col4 0.37 0.70
</code></pre>
<hr />
<p>Why is this question not a duplicate, and why is it more useful than the following autosuggestions?</p>
<ol>
<li><p><a href="https://stackoverflow.com/q/28337117/2336654">How to pivot a dataframe in Pandas?</a> only covers the specific case of 'Country' to row-index, values of 'Indicator' for 'Year' to multiple columns and no aggregation of values.</p>
</li>
<li><p><a href="https://stackoverflow.com/q/42708193/2336654">pandas pivot table to data frame</a>
asks how to pivot in pandas like in R, i.e. autogenerate an individual column for each value of <code>strength...</code></p>
</li>
<li><p><a href="https://stackoverflow.com/q/11400181/2336654">pandas pivoting a dataframe, duplicate rows</a> asks about the syntax for <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pivot.html#pandas.DataFrame.pivot" rel="nofollow noreferrer">pivoting</a> multiple columns, without needing to list them all.</p>
</li>
</ol>
<p>None of the existing questions and answers are comprehensive, so this is an attempt at a <a href="https://meta.stackoverflow.com/questions/291992/what-is-a-canonical-question-answer-and-what-is-their-purpose">canonical question and answer</a> that encompasses all aspects of pivoting.</p>
|
<p>Here is a list of idioms we can use to pivot</p>
<ol>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="noreferrer"><code>pd.DataFrame.pivot_table</code></a></p>
<ul>
<li>A glorified version of <code>groupby</code> with a more intuitive API. For many people, this is the preferred approach. And it is the approach intended by the developers.</li>
<li>Specify row level, column levels, values to be aggregated, and function(s) to perform aggregations.</li>
</ul>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="noreferrer"><code>pd.DataFrame.groupby</code></a> + <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="noreferrer"><code>pd.DataFrame.unstack</code></a></p>
<ul>
<li>Good general approach for doing just about any type of pivot</li>
<li>You specify all columns that will constitute the pivoted row levels and column levels in one group by. You follow that by selecting the remaining columns you want to aggregate and the function(s) you want to perform the aggregation. Finally, you <code>unstack</code> the levels that you want to be in the column index.</li>
</ul>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="noreferrer"><code>pd.DataFrame.set_index</code></a> + <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="noreferrer"><code>pd.DataFrame.unstack</code></a></p>
<ul>
<li>Convenient and intuitive for some (myself included). Cannot handle duplicate grouped keys.</li>
<li>Similar to the <code>groupby</code> paradigm, we specify all columns that will eventually be either row or column levels and set those to be the index. We then <code>unstack</code> the levels we want in the columns. If either the remaining index levels or column levels are not unique, this method will fail.</li>
</ul>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html" rel="noreferrer"><code>pd.DataFrame.pivot</code></a></p>
<ul>
<li>Very similar to <code>set_index</code> in that it shares the duplicate key limitation. The API is very limited as well. It only takes scalar values for <code>index</code>, <code>columns</code>, <code>values</code>.</li>
<li>Similar to the <code>pivot_table</code> method in that we select rows, columns, and values on which to pivot. However, we cannot aggregate and if either rows or columns are not unique, this method will fail.</li>
</ul>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html" rel="noreferrer"><code>pd.crosstab</code></a></p>
<ul>
<li>This is a specialized version of <code>pivot_table</code> and in its purest form is the most intuitive way to perform several tasks.</li>
</ul>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.factorize.html" rel="noreferrer"><code>pd.factorize</code></a> + <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.bincount.html" rel="noreferrer"><code>np.bincount</code></a></p>
<ul>
<li>This is a highly advanced technique that is very obscure but is very fast. It cannot be used in all circumstances, but when it can be used and you are comfortable using it, you will reap the performance rewards.</li>
</ul>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html" rel="noreferrer"><code>pd.get_dummies</code></a> + <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dot.html" rel="noreferrer"><code>pd.DataFrame.dot</code></a></p>
<ul>
<li>I use this for cleverly performing cross tabulation.</li>
</ul>
</li>
</ol>
<p>See also:</p>
<ul>
<li><a href="https://pandas.pydata.org/docs/user_guide/reshaping.html" rel="noreferrer">Reshaping and pivot tables</a> — pandas User Guide</li>
</ul>
<hr />
<h3>Question 1</h3>
<blockquote>
<p>Why do I get <code>ValueError: Index contains duplicate entries, cannot reshape</code></p>
</blockquote>
<p>This occurs because pandas is attempting to reindex either a <code>columns</code> or <code>index</code> object with duplicate entries. There are several methods that can perform a pivot, and some of them are not suited to cases where the keys being pivoted on contain duplicates. For example, consider <code>pd.DataFrame.pivot</code>. I know there are duplicate entries that share the <code>row</code> and <code>col</code> values:</p>
<pre><code>df.duplicated(['row', 'col']).any()
True
</code></pre>
<p>So when I <code>pivot</code> using</p>
<pre><code>df.pivot(index='row', columns='col', values='val0')
</code></pre>
<p>I get the error mentioned above. In fact, I get the same error when I try to perform the same task with:</p>
<pre><code>df.set_index(['row', 'col'])['val0'].unstack()
</code></pre>
<hr />
<h2>Examples</h2>
<p>What I'm going to do for each subsequent question is to answer it using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="noreferrer"><code>pd.DataFrame.pivot_table</code></a>. Then I'll provide alternatives to perform the same task.</p>
<h3>Questions 2 and 3</h3>
<blockquote>
<p>How do I pivot <code>df</code> such that the <code>col</code> values are columns, <code>row</code> values are the index, and mean of <code>val0</code> are the values?</p>
</blockquote>
<ul>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="noreferrer"><code>pd.DataFrame.pivot_table</code></a></p>
<pre><code>df.pivot_table(
values='val0', index='row', columns='col',
aggfunc='mean')
col col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65
row2 0.13 NaN 0.395 0.500 0.25
row3 NaN 0.310 NaN 0.545 NaN
row4 NaN 0.100 0.395 0.760 0.24
</code></pre>
<ul>
<li><code>aggfunc='mean'</code> is the default and I didn't have to set it. I included it to be explicit.</li>
</ul>
</li>
</ul>
<blockquote>
<p>How do I make it so that missing values are 0?</p>
</blockquote>
<ul>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="noreferrer"><code>pd.DataFrame.pivot_table</code></a></p>
<ul>
<li><code>fill_value</code> is not set by default. I tend to set it appropriately. In this case I set it to <code>0</code>.</li>
</ul>
<pre><code>df.pivot_table(
values='val0', index='row', columns='col',
fill_value=0, aggfunc='mean')
col col0 col1 col2 col3 col4
row
row0 0.77 0.605 0.000 0.860 0.65
row2 0.13 0.000 0.395 0.500 0.25
row3 0.00 0.310 0.000 0.545 0.00
row4 0.00 0.100 0.395 0.760 0.24
</code></pre>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="noreferrer"><code>pd.DataFrame.groupby</code></a></p>
<pre><code>df.groupby(['row', 'col'])['val0'].mean().unstack(fill_value=0)
</code></pre>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html" rel="noreferrer"><code>pd.crosstab</code></a></p>
<pre><code>pd.crosstab(
index=df['row'], columns=df['col'],
values=df['val0'], aggfunc='mean').fillna(0)
</code></pre>
</li>
</ul>
<hr />
<h3>Question 4</h3>
<blockquote>
<p>Can I get something other than <code>mean</code>, like maybe <code>sum</code>?</p>
</blockquote>
<ul>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="noreferrer"><code>pd.DataFrame.pivot_table</code></a></p>
<pre><code>df.pivot_table(
values='val0', index='row', columns='col',
fill_value=0, aggfunc='sum')
col col0 col1 col2 col3 col4
row
row0 0.77 1.21 0.00 0.86 0.65
row2 0.13 0.00 0.79 0.50 0.50
row3 0.00 0.31 0.00 1.09 0.00
row4 0.00 0.10 0.79 1.52 0.24
</code></pre>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="noreferrer"><code>pd.DataFrame.groupby</code></a></p>
<pre><code>df.groupby(['row', 'col'])['val0'].sum().unstack(fill_value=0)
</code></pre>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html" rel="noreferrer"><code>pd.crosstab</code></a></p>
<pre><code>pd.crosstab(
index=df['row'], columns=df['col'],
values=df['val0'], aggfunc='sum').fillna(0)
</code></pre>
</li>
</ul>
<hr />
<h3>Question 5</h3>
<blockquote>
<p>Can I do more than one aggregation at a time?</p>
</blockquote>
<p>Notice that for <code>pivot_table</code> and <code>crosstab</code> I needed to pass a list of callables. On the other hand, <code>groupby.agg</code> can take strings for a limited number of special functions. <code>groupby.agg</code> would also have accepted the same callables we passed to the others, but it is often more efficient to use the string function names, since pandas can apply internal optimizations to them.</p>
<ul>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="noreferrer"><code>pd.DataFrame.pivot_table</code></a></p>
<pre><code>df.pivot_table(
values='val0', index='row', columns='col',
fill_value=0, aggfunc=[np.size, np.mean])
size mean
col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4
row
row0 1 2 0 1 1 0.77 0.605 0.000 0.860 0.65
row2 1 0 2 1 2 0.13 0.000 0.395 0.500 0.25
row3 0 1 0 2 0 0.00 0.310 0.000 0.545 0.00
row4 0 1 2 2 1 0.00 0.100 0.395 0.760 0.24
</code></pre>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="noreferrer"><code>pd.DataFrame.groupby</code></a></p>
<pre><code>df.groupby(['row', 'col'])['val0'].agg(['size', 'mean']).unstack(fill_value=0)
</code></pre>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html" rel="noreferrer"><code>pd.crosstab</code></a></p>
<pre><code>pd.crosstab(
index=df['row'], columns=df['col'],
values=df['val0'], aggfunc=[np.size, np.mean]).fillna(0, downcast='infer')
</code></pre>
</li>
</ul>
<hr />
<h3>Question 6</h3>
<blockquote>
<p>Can I aggregate over multiple value columns?</p>
</blockquote>
<ul>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="noreferrer"><code>pd.DataFrame.pivot_table</code></a> we pass <code>values=['val0', 'val1']</code> but we could've left that off completely</p>
<pre><code>df.pivot_table(
values=['val0', 'val1'], index='row', columns='col',
fill_value=0, aggfunc='mean')
val0 val1
col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4
row
row0 0.77 0.605 0.000 0.860 0.65 0.01 0.745 0.00 0.010 0.02
row2 0.13 0.000 0.395 0.500 0.25 0.45 0.000 0.34 0.440 0.79
row3 0.00 0.310 0.000 0.545 0.00 0.00 0.230 0.00 0.075 0.00
row4 0.00 0.100 0.395 0.760 0.24 0.00 0.070 0.42 0.300 0.46
</code></pre>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="noreferrer"><code>pd.DataFrame.groupby</code></a></p>
<pre><code>df.groupby(['row', 'col'])[['val0', 'val1']].mean().unstack(fill_value=0)
</code></pre>
</li>
</ul>
<hr />
<h3>Question 7</h3>
<blockquote>
<p>Can I subdivide by multiple columns?</p>
</blockquote>
<ul>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="noreferrer"><code>pd.DataFrame.pivot_table</code></a></p>
<pre><code>df.pivot_table(
values='val0', index='row', columns=['item', 'col'],
fill_value=0, aggfunc='mean')
item item0 item1 item2
col col2 col3 col4 col0 col1 col2 col3 col4 col0 col1 col3 col4
row
row0 0.00 0.00 0.00 0.77 0.00 0.00 0.00 0.00 0.00 0.605 0.86 0.65
row2 0.35 0.00 0.37 0.00 0.00 0.44 0.00 0.00 0.13 0.000 0.50 0.13
row3 0.00 0.00 0.00 0.00 0.31 0.00 0.81 0.00 0.00 0.000 0.28 0.00
row4 0.15 0.64 0.00 0.00 0.10 0.64 0.88 0.24 0.00 0.000 0.00 0.00
</code></pre>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="noreferrer"><code>pd.DataFrame.groupby</code></a></p>
<pre><code>df.groupby(
['row', 'item', 'col']
)['val0'].mean().unstack(['item', 'col']).fillna(0).sort_index(axis=1)
</code></pre>
</li>
</ul>
<hr />
<h3>Question 8</h3>
<blockquote>
<p>Can I subdivide by multiple rows?</p>
</blockquote>
<ul>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="noreferrer"><code>pd.DataFrame.pivot_table</code></a></p>
<pre><code>df.pivot_table(
values='val0', index=['key', 'row'], columns=['item', 'col'],
fill_value=0, aggfunc='mean')
item item0 item1 item2
col col2 col3 col4 col0 col1 col2 col3 col4 col0 col1 col3 col4
key row
key0 row0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.86 0.00
row2 0.00 0.00 0.37 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.50 0.00
row3 0.00 0.00 0.00 0.00 0.31 0.00 0.81 0.00 0.00 0.00 0.00 0.00
row4 0.15 0.64 0.00 0.00 0.00 0.00 0.00 0.24 0.00 0.00 0.00 0.00
key1 row0 0.00 0.00 0.00 0.77 0.00 0.00 0.00 0.00 0.00 0.81 0.00 0.65
row2 0.35 0.00 0.00 0.00 0.00 0.44 0.00 0.00 0.00 0.00 0.00 0.13
row3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.28 0.00
row4 0.00 0.00 0.00 0.00 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00
key2 row0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.40 0.00 0.00
row2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.13 0.00 0.00 0.00
row4 0.00 0.00 0.00 0.00 0.00 0.64 0.88 0.00 0.00 0.00 0.00 0.00
</code></pre>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="noreferrer"><code>pd.DataFrame.groupby</code></a></p>
<pre><code>df.groupby(
['key', 'row', 'item', 'col']
)['val0'].mean().unstack(['item', 'col']).fillna(0).sort_index(axis=1)
</code></pre>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="noreferrer"><code>pd.DataFrame.set_index</code></a> because the set of keys are unique for both rows and columns</p>
<pre><code>df.set_index(
['key', 'row', 'item', 'col']
)['val0'].unstack(['item', 'col']).fillna(0).sort_index(1)
</code></pre>
</li>
</ul>
<hr />
<h3>Question 9</h3>
<blockquote>
<p>Can I aggregate the frequency in which the column and rows occur together, aka "cross tabulation"?</p>
</blockquote>
<ul>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="noreferrer"><code>pd.DataFrame.pivot_table</code></a></p>
<pre><code>df.pivot_table(index='row', columns='col', fill_value=0, aggfunc='size')
col col0 col1 col2 col3 col4
row
row0 1 2 0 1 1
row2 1 0 2 1 2
row3 0 1 0 2 0
row4 0 1 2 2 1
</code></pre>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="noreferrer"><code>pd.DataFrame.groupby</code></a></p>
<pre><code>df.groupby(['row', 'col'])['val0'].size().unstack(fill_value=0)
</code></pre>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html" rel="noreferrer"><code>pd.crosstab</code></a></p>
<pre><code>pd.crosstab(df['row'], df['col'])
</code></pre>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.factorize.html" rel="noreferrer"><code>pd.factorize</code></a> + <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.bincount.html" rel="noreferrer"><code>np.bincount</code></a></p>
<pre><code># get integer factorization `i` and unique values `r`
# for column `'row'`
i, r = pd.factorize(df['row'].values)
# get integer factorization `j` and unique values `c`
# for column `'col'`
j, c = pd.factorize(df['col'].values)
# `n` will be the number of rows
# `m` will be the number of columns
n, m = r.size, c.size
# `i * m + j` is a clever way of counting the
# factorization bins assuming a flat array of length
# `n * m`. Which is why we subsequently reshape as `(n, m)`
b = np.bincount(i * m + j, minlength=n * m).reshape(n, m)
# BTW, whenever I read this, I think 'Bean, Rice, and Cheese'
pd.DataFrame(b, r, c)
col3 col2 col0 col1 col4
row3 2 0 0 1 0
row2 1 2 1 0 2
row0 1 0 1 2 1
row4 2 2 0 1 1
</code></pre>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html" rel="noreferrer"><code>pd.get_dummies</code></a></p>
<pre><code>pd.get_dummies(df['row']).T.dot(pd.get_dummies(df['col']))
col0 col1 col2 col3 col4
row0 1 2 0 1 1
row2 1 0 2 1 2
row3 0 1 0 2 0
row4 0 1 2 2 1
</code></pre>
</li>
</ul>
<hr />
<h3>Question 10</h3>
<blockquote>
<p>How do I convert a DataFrame from long to wide by pivoting on ONLY two
columns?</p>
</blockquote>
<ul>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html" rel="noreferrer"><code>DataFrame.pivot</code></a></p>
<p>The first step is to assign a number to each row - this number will be the row index of that value in the pivoted result. This is done using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="noreferrer"><code>GroupBy.cumcount</code></a>:</p>
<pre><code>df2.insert(0, 'count', df2.groupby('A').cumcount())
df2
count A B
0 0 a 0
1 1 a 11
2 2 a 2
3 3 a 11
4 0 b 10
5 1 b 10
6 2 b 14
7 0 c 7
</code></pre>
<p>The second step is to use the newly created column as the index to call <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html" rel="noreferrer"><code>DataFrame.pivot</code></a>.</p>
<pre><code>df2.pivot(*df2)
# df2.pivot(index='count', columns='A', values='B')
A a b c
count
0 0.0 10.0 7.0
1 11.0 10.0 NaN
2 2.0 14.0 NaN
3 11.0 NaN NaN
</code></pre>
</li>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="noreferrer"><code>DataFrame.pivot_table</code></a></p>
<p>Whereas <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html" rel="noreferrer"><code>DataFrame.pivot</code></a> only accepts columns, <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="noreferrer"><code>DataFrame.pivot_table</code></a> also accepts arrays, so the <code>GroupBy.cumcount</code> can be passed directly as the <code>index</code> without creating an explicit column.</p>
<pre><code>df2.pivot_table(index=df2.groupby('A').cumcount(), columns='A', values='B')
A a b c
0 0.0 10.0 7.0
1 11.0 10.0 NaN
2 2.0 14.0 NaN
3 11.0 NaN NaN
</code></pre>
</li>
</ul>
<hr />
<h3>Question 11</h3>
<blockquote>
<p>How do I flatten the MultiIndex to a single index after <code>pivot</code>?</p>
</blockquote>
<p>If the <code>columns</code> are of type <code>object</code> (i.e. strings), join them directly:</p>
<pre><code>df.columns = df.columns.map('|'.join)
</code></pre>
<p>Otherwise, use <code>format</code>:</p>
<pre><code>df.columns = df.columns.map('{0[0]}|{0[1]}'.format)
</code></pre>
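A quick sketch of the string-join variant, on a small hypothetical pivoted frame like the one shown above:

```python
import pandas as pd

# Hypothetical pivoted frame with two string column levels
df = pd.DataFrame([[2, 1, 1], [2, 1, 0]], index=['a', 'b'],
                  columns=pd.MultiIndex.from_tuples([('1', '1'), ('2', '1'), ('2', '2')]))

# Collapse each (level0, level1) tuple into a single 'level0|level1' name
df.columns = df.columns.map('|'.join)
print(list(df.columns))  # ['1|1', '2|1', '2|2']
```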
| 659
|
pandas
|
How do I create test and train samples from one dataframe with pandas?
|
https://stackoverflow.com/questions/24147278/how-do-i-create-test-and-train-samples-from-one-dataframe-with-pandas
|
<p>I have a fairly large dataset in the form of a dataframe and I was wondering how I would be able to split the dataframe into two random samples (80% and 20%) for training and testing.</p>
<p>Thanks!</p>
|
<p>I would just use numpy's <code>randn</code>:</p>
<pre><code>In [11]: df = pd.DataFrame(np.random.randn(100, 2))
In [12]: msk = np.random.rand(len(df)) < 0.8
In [13]: train = df[msk]
In [14]: test = df[~msk]
</code></pre>
<p>And just to see this has worked:</p>
<pre><code>In [15]: len(test)
Out[15]: 21
In [16]: len(train)
Out[16]: 79
</code></pre>
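If the split needs to be reproducible across runs, you can seed NumPy first; a minimal sketch (the 80/20 proportion is only approximate, since each row is assigned independently):

```python
import numpy as np
import pandas as pd

np.random.seed(0)  # fix the RNG so the split is reproducible
df = pd.DataFrame(np.random.randn(100, 2))

msk = np.random.rand(len(df)) < 0.8
train, test = df[msk], df[~msk]

# Every row lands in exactly one of the two samples
assert len(train) + len(test) == len(df)
assert not set(train.index) & set(test.index)
```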
| 660
|
pandas
|
Python Pandas: Get index of rows where column matches certain value
|
https://stackoverflow.com/questions/21800169/python-pandas-get-index-of-rows-where-column-matches-certain-value
|
<p>Given a DataFrame with a column "BoolCol", we want to find the indexes of the DataFrame in which the values for "BoolCol" == True</p>
<p>I currently have the iterating way to do it, which works perfectly:</p>
<pre class="lang-py prettyprint-override"><code>for i in range(100,3000):
if df.iloc[i]['BoolCol']== True:
print i,df.iloc[i]['BoolCol']
</code></pre>
<p>But this is not the correct pandas way to do it. After some research, I am currently using this code:</p>
<pre class="lang-py prettyprint-override"><code>df[df['BoolCol'] == True].index.tolist()
</code></pre>
<p>This one gives me a list of indexes, but they don't match, when I check them by doing:</p>
<pre class="lang-py prettyprint-override"><code>df.iloc[i]['BoolCol']
</code></pre>
<p>The result is actually False!!</p>
<p>Which would be the correct pandas way to do this?</p>
|
<p><code>df.iloc[i]</code> returns the <code>ith</code> row of <code>df</code>. <code>i</code> does not refer to the index label, <code>i</code> is a 0-based index.</p>
<p>In contrast, <strong>the attribute <code>index</code> returns actual index labels</strong>, not numeric row-indices:</p>
<pre><code>df.index[df['BoolCol'] == True].tolist()
</code></pre>
<p>or equivalently,</p>
<pre><code>df.index[df['BoolCol']].tolist()
</code></pre>
<p>You can see the difference quite clearly by playing with a DataFrame whose non-default index does not equal the rows' numerical positions:</p>
<pre><code>df = pd.DataFrame({'BoolCol': [True, False, False, True, True]},
index=[10,20,30,40,50])
In [53]: df
Out[53]:
BoolCol
10 True
20 False
30 False
40 True
50 True
[5 rows x 1 columns]
In [54]: df.index[df['BoolCol']].tolist()
Out[54]: [10, 40, 50]
</code></pre>
<hr>
<p><strong>If you want to use the index</strong>, </p>
<pre><code>In [56]: idx = df.index[df['BoolCol']]
In [57]: idx
Out[57]: Int64Index([10, 40, 50], dtype='int64')
</code></pre>
<p><strong>then you can select the rows using <code>loc</code> instead of <code>iloc</code></strong>:</p>
<pre><code>In [58]: df.loc[idx]
Out[58]:
BoolCol
10 True
40 True
50 True
[3 rows x 1 columns]
</code></pre>
<hr>
<p>Note that <strong><code>loc</code> can also accept boolean arrays</strong>:</p>
<pre><code>In [55]: df.loc[df['BoolCol']]
Out[55]:
BoolCol
10 True
40 True
50 True
[3 rows x 1 columns]
</code></pre>
<hr>
<p><strong>If you have a boolean array, <code>mask</code>, and need ordinal index values, you can compute them using <code>np.flatnonzero</code></strong>:</p>
<pre><code>In [110]: np.flatnonzero(df['BoolCol'])
Out[112]: array([0, 3, 4])
</code></pre>
<p>Use <code>df.iloc</code> to select rows by ordinal index:</p>
<pre><code>In [113]: df.iloc[np.flatnonzero(df['BoolCol'])]
Out[113]:
BoolCol
10 True
40 True
50 True
</code></pre>
| 661
|
pandas
|
Remap values in pandas column with a dict, preserve NaNs
|
https://stackoverflow.com/questions/20250771/remap-values-in-pandas-column-with-a-dict-preserve-nans
|
<p>I have a dictionary which looks like this: <code>di = {1: "A", 2: "B"}</code></p>
<p>I would like to apply it to the <code>col1</code> column of a dataframe similar to:</p>
<pre class="lang-none prettyprint-override"><code> col1 col2
0 w a
1 1 2
2 2 NaN
</code></pre>
<p>to get:</p>
<pre class="lang-none prettyprint-override"><code> col1 col2
0 w a
1 A 2
2 B NaN
</code></pre>
<p>How can I best do this?</p>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html" rel="noreferrer"><code>.replace</code></a>. For example:</p>
<pre><code>>>> df = pd.DataFrame({'col2': {0: 'a', 1: 2, 2: np.nan}, 'col1': {0: 'w', 1: 1, 2: 2}})
>>> di = {1: "A", 2: "B"}
>>> df
col1 col2
0 w a
1 1 2
2 2 NaN
>>> df.replace({"col1": di})
col1 col2
0 w a
1 A 2
2 B NaN
</code></pre>
<p>or directly on the <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.replace.html" rel="noreferrer"><code>Series</code></a>, i.e. <code>df["col1"].replace(di, inplace=True)</code>.</p>
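The full round trip, showing that values absent from the dict — and NaNs elsewhere in the frame — are left untouched (this reproduces the example above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': {0: 'w', 1: 1, 2: 2},
                   'col2': {0: 'a', 1: 2, 2: np.nan}})
di = {1: "A", 2: "B"}

out = df.replace({"col1": di})
print(out['col1'].tolist())   # ['w', 'A', 'B']
print(out['col2'].isna().tolist())  # [False, False, True] — NaN preserved
```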
| 662
|
pandas
|
How to reversibly store and load a Pandas dataframe to/from disk
|
https://stackoverflow.com/questions/17098654/how-to-reversibly-store-and-load-a-pandas-dataframe-to-from-disk
|
<p>Right now I'm importing a fairly large <code>CSV</code> as a dataframe every time I run the script. Is there a good solution for keeping that dataframe constantly available in between runs so I don't have to spend all that time waiting for the script to run?</p>
|
<p>The easiest way is to <a href="https://docs.python.org/3/library/pickle.html" rel="noreferrer">pickle</a> it using <a href="http://pandas.pydata.org/pandas-docs/stable/io.html#pickling" rel="noreferrer"><code>to_pickle</code></a>:</p>
<pre><code>df.to_pickle(file_name) # where to save it, usually as a .pkl
</code></pre>
<p>Then you can load it back using:</p>
<pre><code>df = pd.read_pickle(file_name)
</code></pre>
<p><em>Note: before 0.11.1 <code>save</code> and <code>load</code> were the only way to do this (they are now deprecated in favor of <code>to_pickle</code> and <code>read_pickle</code> respectively).</em></p>
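A self-contained round trip through a throwaway file (the path is illustrative):

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'z']})

# Write to a temporary pickle file and read it back
path = os.path.join(tempfile.mkdtemp(), 'df.pkl')
df.to_pickle(path)
loaded = pd.read_pickle(path)

assert loaded.equals(df)  # values and dtypes survive intact
```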
<hr />
<p>Another popular choice is to use <a href="http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables" rel="noreferrer">HDF5</a> (<a href="http://www.pytables.org" rel="noreferrer">pytables</a>) which offers <a href="https://stackoverflow.com/questions/16628329/hdf5-and-sqlite-concurrency-compression-i-o-performance">very fast</a> access times for large datasets:</p>
<pre><code>import pandas as pd
store = pd.HDFStore('store.h5')
store['df'] = df # save it
store['df'] # load it
</code></pre>
<p><em>More advanced strategies are discussed in the <a href="http://pandas-docs.github.io/pandas-docs-travis/#pandas-powerful-python-data-analysis-toolkit" rel="noreferrer">cookbook</a>.</em></p>
<hr />
<p>Since 0.13 there's also <a href="http://pandas.pydata.org/pandas-docs/stable/io.html#msgpack-experimental" rel="noreferrer">msgpack</a> which may be better for interoperability, as a faster alternative to JSON, or if you have python object/text-heavy data (see <a href="https://stackoverflow.com/q/30651724/1240268">this question</a>).</p>
| 663
|
pandas
|
Pandas read_csv: low_memory and dtype options
|
https://stackoverflow.com/questions/24251219/pandas-read-csv-low-memory-and-dtype-options
|
<pre><code>df = pd.read_csv('somefile.csv')
</code></pre>
<p>...gives an error:</p>
<blockquote>
<p>.../site-packages/pandas/io/parsers.py:1130:
DtypeWarning: Columns (4,5,7,16) have mixed types. Specify dtype
option on import or set low_memory=False.</p>
</blockquote>
<p>Why is the <code>dtype</code> option related to <code>low_memory</code>, and why might <code>low_memory=False</code> help?</p>
|
<h1>The deprecated low_memory option</h1>
<p>The <code>low_memory</code> option is not properly deprecated, but it should be, since it does not actually do anything differently[<a href="https://github.com/pydata/pandas/issues/5888" rel="noreferrer">source</a>]</p>
<p>The reason you get this <code>low_memory</code> warning is because guessing dtypes for each column is very memory demanding. Pandas tries to determine what dtype to set by analyzing the data in each column.</p>
<h1>Dtype Guessing (very bad)</h1>
<p>Pandas can only determine what dtype a column should have once the whole file is read. This means nothing can really be parsed before the whole file is read unless you risk having to change the dtype of that column when you read the last value.</p>
<p>Consider the example of one file which has a column called user_id.
It contains 10 million rows where the user_id is always numbers.
Since pandas cannot know it is only numbers, it will probably keep it as the original strings until it has read the whole file.</p>
<h1>Specifying dtypes (should always be done)</h1>
<p>adding</p>
<pre><code>dtype={'user_id': int}
</code></pre>
<p>to the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="noreferrer"><code>pd.read_csv()</code></a> call will make pandas know when it starts reading the file, that this is only integers.</p>
<p>Also worth noting is that if the last line in the file would have <code>"foobar"</code> written in the <code>user_id</code> column, the loading would crash if the above dtype was specified.</p>
<h3>Example of broken data that breaks when dtypes are defined</h3>
<pre><code>import pandas as pd
try:
from StringIO import StringIO
except ImportError:
from io import StringIO
csvdata = """user_id,username
1,Alice
3,Bob
foobar,Caesar"""
sio = StringIO(csvdata)
pd.read_csv(sio, dtype={"user_id": int, "username": "string"})
ValueError: invalid literal for long() with base 10: 'foobar'
</code></pre>
<p>dtypes are typically a numpy thing, read more about them here:
<a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.html" rel="noreferrer">http://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.html</a></p>
<h1>What dtypes exists?</h1>
<p>We have access to numpy dtypes: float, int, bool, timedelta64[ns] and datetime64[ns]. Note that the numpy date/time dtypes are <em>not</em> time zone aware.</p>
<p>Pandas extends this set of dtypes with its own:</p>
<p><code>'datetime64[ns, <tz>]'</code> Which is a time zone aware timestamp.</p>
<p>'category', which is essentially an enum (strings represented by integer keys, to save space)</p>
<p>'period[freq]' Not to be confused with a timedelta, these objects are actually anchored to specific time periods</p>
<p>'Sparse', 'Sparse[int]', 'Sparse[float]' is for sparse data, or 'data that has a lot of holes in it'. Instead of saving the NaN or None in the dataframe it omits the objects, saving space.</p>
<p>'Interval' is a topic of its own but its main use is for indexing. <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html#advanced-intervalindex" rel="noreferrer">See more here</a></p>
<p>'Int8', 'Int16', 'Int32', 'Int64', 'UInt8', 'UInt16', 'UInt32', 'UInt64' are all pandas specific integers that are nullable, unlike the numpy variant.</p>
<p>'string' is a specific dtype for working with string data and gives access to the <code>.str</code> attribute on the series.</p>
<p>'boolean' is like the numpy 'bool' but it also supports missing data.</p>
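<p>A small sketch contrasting the numpy and nullable pandas integer variants (behavior as of recent pandas versions):</p>

```python
import numpy as np
import pandas as pd

# A numpy int column cannot hold NaN: pandas silently upcasts it to float64
s_np = pd.Series([1, 2, np.nan])

# The pandas nullable 'Int64' keeps integers and represents the hole as <NA>
s_pd = pd.Series([1, 2, None], dtype="Int64")
```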
<p>Read the complete reference here:</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dtypes.html" rel="noreferrer">Pandas dtype reference</a></p>
<h1>Gotchas, caveats, notes</h1>
<p>Setting <code>dtype=object</code> will silence the above warning, but will not make it more memory efficient, only process efficient if anything.</p>
<p>Setting <code>dtype=unicode</code> will not do anything, since to numpy, a <code>unicode</code> is represented as <code>object</code>.</p>
<h3>Usage of converters</h3>
<p>@sparrow correctly points out the usage of converters to avoid pandas blowing up when encountering <code>'foobar'</code> in a column specified as <code>int</code>. I would like to add that converters are really heavy and inefficient to use in pandas and should be used as a last resort. This is because the read_csv process is a single process.</p>
<p>CSV files can be processed line by line and thus can be processed by multiple converters in parallel more efficiently by simply cutting the file into segments and running multiple processes, something that pandas does not support. But this is a different story.</p>
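<p>For completeness, a hedged sketch of that converter approach. The function name and the <code>-1</code> fallback are illustrative choices for this sketch, not part of any API:</p>

```python
from io import StringIO

import pandas as pd

csvdata = """user_id,username
1,Alice
3,Bob
foobar,Caesar"""

def to_int_or_minus_one(value):
    # The -1 sentinel for unparseable values is an arbitrary choice here
    try:
        return int(value)
    except ValueError:
        return -1

df = pd.read_csv(StringIO(csvdata), converters={"user_id": to_int_or_minus_one})
```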
| 664
|
pandas
|
Pandas: drop a level from a multi-level column index?
|
https://stackoverflow.com/questions/22233488/pandas-drop-a-level-from-a-multi-level-column-index
|
<p>If I've got a multi-level column index:</p>
<pre><code>>>> cols = pd.MultiIndex.from_tuples([("a", "b"), ("a", "c")])
>>> pd.DataFrame([[1,2], [3,4]], columns=cols)
</code></pre>
<pre>
a
---+--
b | c
--+---+--
0 | 1 | 2
1 | 3 | 4
</pre>
<p>How can I drop the "a" level of that index, so I end up with:</p>
<pre>
b | c
--+---+--
0 | 1 | 2
1 | 3 | 4
</pre>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/version/0.18.0/generated/pandas.MultiIndex.droplevel.html" rel="noreferrer"><code>MultiIndex.droplevel</code></a>:</p>
<pre><code>>>> cols = pd.MultiIndex.from_tuples([("a", "b"), ("a", "c")])
>>> df = pd.DataFrame([[1,2], [3,4]], columns=cols)
>>> df
a
b c
0 1 2
1 3 4
[2 rows x 2 columns]
>>> df.columns = df.columns.droplevel()
>>> df
b c
0 1 2
1 3 4
[2 rows x 2 columns]
</code></pre>
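<p>Since pandas 0.24 there is also a DataFrame-level method, so the same result can be had without reassigning <code>df.columns</code>; a sketch of the same example:</p>

```python
import pandas as pd

cols = pd.MultiIndex.from_tuples([("a", "b"), ("a", "c")])
df = pd.DataFrame([[1, 2], [3, 4]], columns=cols)

# Drop the outermost (level 0) column label along the column axis
flat = df.droplevel(0, axis=1)
```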
| 665
|
pandas
|
Improve subplot size/spacing with many subplots
|
https://stackoverflow.com/questions/6541123/improve-subplot-size-spacing-with-many-subplots
|
<p>I need to generate a whole bunch of vertically-stacked plots in matplotlib. The result will be saved using <code>savefig</code> and viewed on a webpage, so I don't care how tall the final image is, as long as the subplots are spaced so they don't overlap.</p>
<p>No matter how big I allow the figure to be, the subplots always seem to overlap.</p>
<p>My code currently looks like</p>
<pre><code>import matplotlib.pyplot as plt
import my_other_module
titles, x_lists, y_lists = my_other_module.get_data()
fig = plt.figure(figsize=(10,60))
for i, y_list in enumerate(y_lists):
plt.subplot(len(titles), 1, i)
plt.xlabel("Some X label")
plt.ylabel("Some Y label")
plt.title(titles[i])
plt.plot(x_lists[i],y_list)
fig.savefig('out.png', dpi=100)
</code></pre>
|
<p>Please review <a href="https://matplotlib.org/stable/users/explain/axes/tight_layout_guide.html" rel="noreferrer">matplotlib: Tight Layout guide</a> and try using <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.tight_layout.html" rel="noreferrer"><code>matplotlib.pyplot.tight_layout</code></a>, or <a href="https://matplotlib.org/stable/api/figure_api.html#matplotlib.figure.Figure.tight_layout" rel="noreferrer"><code>matplotlib.figure.Figure.tight_layout</code></a></p>
<p>As a quick example:</p>
<pre><code>import matplotlib.pyplot as plt
fig, axes = plt.subplots(nrows=4, ncols=4, figsize=(8, 8))
fig.tight_layout() # Or equivalently, "plt.tight_layout()"
plt.show()
</code></pre>
<hr />
<p>Without Tight Layout</p>
<p><a href="https://i.sstatic.net/U7agc.png" rel="noreferrer"><img src="https://i.sstatic.net/U7agc.png" alt="enter image description here" /></a></p>
<hr />
<p>With Tight Layout</p>
<p><a href="https://i.sstatic.net/G4NNT.png" rel="noreferrer"><img src="https://i.sstatic.net/G4NNT.png" alt="enter image description here" /></a></p>
| 666
|
pandas
|
How to group dataframe rows into list in pandas groupby
|
https://stackoverflow.com/questions/22219004/how-to-group-dataframe-rows-into-list-in-pandas-groupby
|
<p>Given a dataframe, I want to groupby the first column and get second column as lists in rows, so that a dataframe like:</p>
<pre><code>a b
A 1
A 2
B 5
B 5
B 4
C 6
</code></pre>
<p>becomes</p>
<pre><code>A [1,2]
B [5,5,4]
C [6]
</code></pre>
<p>How do I do this?</p>
|
<p>You can do this using <code>groupby</code> to group on the column of interest and then <code>apply</code> <code>list</code> to every group:</p>
<pre><code>In [1]: df = pd.DataFrame( {'a':['A','A','B','B','B','C'], 'b':[1,2,5,5,4,6]})
df
Out[1]:
a b
0 A 1
1 A 2
2 B 5
3 B 5
4 B 4
5 C 6
In [2]: df.groupby('a')['b'].apply(list)
Out[2]:
a
A [1, 2]
B [5, 5, 4]
C [6]
Name: b, dtype: object
In [3]: df1 = df.groupby('a')['b'].apply(list).reset_index(name='new')
df1
Out[3]:
a new
0 A [1, 2]
1 B [5, 5, 4]
2 C [6]
</code></pre>
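<p><code>agg</code> accepts the same callable and reads a little more explicitly than <code>apply</code> here; a sketch on the same data:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': ['A', 'A', 'B', 'B', 'B', 'C'],
                   'b': [1, 2, 5, 5, 4, 6]})

# agg(list) collects each group's values into a Python list
out = df.groupby('a')['b'].agg(list)
```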
| 667
|
pandas
|
Get list from pandas dataframe column or row?
|
https://stackoverflow.com/questions/22341271/get-list-from-pandas-dataframe-column-or-row
|
<p>I have a dataframe <code>df</code> imported from an Excel document like this:</p>
<pre class="lang-none prettyprint-override"><code>cluster load_date budget actual fixed_price
A 1/1/2014 1000 4000 Y
A 2/1/2014 12000 10000 Y
A 3/1/2014 36000 2000 Y
B 4/1/2014 15000 10000 N
B 4/1/2014 12000 11500 N
B 4/1/2014 90000 11000 N
C 7/1/2014 22000 18000 N
C 8/1/2014 30000 28960 N
C 9/1/2014 53000 51200 N
</code></pre>
<p>I want to be able to return the contents of column 1 <code>df['cluster']</code> as a list, so I can run a for-loop over it, and create an Excel worksheet for every cluster.</p>
<p>Is it also possible to return the contents of a whole column or row to a list? e.g.</p>
<pre class="lang-py prettyprint-override"><code>list = [], list[column1] or list[df.ix(row1)]
</code></pre>
|
<p>Pandas DataFrame columns are Pandas Series when you pull them out, which you can then call <code>x.tolist()</code> on to turn them into a Python list. Alternatively you cast it with <code>list(x)</code>.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data_dict = {'one': pd.Series([1, 2, 3], index=['a', 'b', 'c']),
'two': pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(data_dict)
print(f"DataFrame:\n{df}\n")
print(f"column types:\n{df.dtypes}")
col_one_list = df['one'].tolist()
col_one_arr = df['one'].to_numpy()
print(f"\ncol_one_list:\n{col_one_list}\ntype:{type(col_one_list)}")
print(f"\ncol_one_arr:\n{col_one_arr}\ntype:{type(col_one_arr)}")
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>DataFrame:
one two
a 1.0 1
b 2.0 2
c 3.0 3
d NaN 4
column types:
one float64
two int64
dtype: object
col_one_list:
[1.0, 2.0, 3.0, nan]
type:<class 'list'>
col_one_arr:
[ 1. 2. 3. nan]
type:<class 'numpy.ndarray'>
</code></pre>
| 668
|
pandas
|
Selecting a row of pandas series/dataframe by integer index
|
https://stackoverflow.com/questions/16096627/selecting-a-row-of-pandas-series-dataframe-by-integer-index
|
<p>I am curious as to why <code>df[2]</code> is not supported, while <code>df.ix[2]</code> and <code>df[2:3]</code> both work. </p>
<pre><code>In [26]: df.ix[2]
Out[26]:
A 1.027680
B 1.514210
C -1.466963
D -0.162339
Name: 2000-01-03 00:00:00
In [27]: df[2:3]
Out[27]:
A B C D
2000-01-03 1.02768 1.51421 -1.466963 -0.162339
</code></pre>
<p>I would expect <code>df[2]</code> to work the same way as <code>df[2:3]</code> to be consistent with Python indexing convention. Is there a design reason for not supporting indexing row by single integer?</p>
|
<p>echoing @HYRY, see the new docs in 0.11</p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html" rel="noreferrer">http://pandas.pydata.org/pandas-docs/stable/indexing.html</a></p>
<p>Here we have new operators, <code>.iloc</code> to explicity support only integer indexing, and <code>.loc</code> to explicity support only label indexing</p>
<p>e.g. imagine this scenario</p>
<pre><code>In [1]: df = pd.DataFrame(np.random.rand(5,2),index=range(0,10,2),columns=list('AB'))
In [2]: df
Out[2]:
A B
0 1.068932 -0.794307
2 -0.470056 1.192211
4 -0.284561 0.756029
6 1.037563 -0.267820
8 -0.538478 -0.800654
In [5]: df.iloc[[2]]
Out[5]:
A B
4 -0.284561 0.756029
In [6]: df.loc[[2]]
Out[6]:
A B
2 -0.470056 1.192211
</code></pre>
<p><code>[]</code> slices the rows (by label location) only</p>
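<p>So for the case in the question, selecting a single row by integer position is spelled with <code>.iloc</code>; a minimal sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [10, 20, 30]}, index=['x', 'y', 'z'])

# df[2] raises KeyError here, but .iloc[2] returns the third row as a Series
row = df.iloc[2]
```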
| 669
|
pandas
|
How can I use the apply() function for a single column?
|
https://stackoverflow.com/questions/34962104/how-can-i-use-the-apply-function-for-a-single-column
|
<p>I have a pandas dataframe with multiple columns. I want to change the values of the only the first column without affecting the other columns. How can I do that using <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.apply.html" rel="noreferrer"><code>apply()</code></a> in pandas?</p>
|
<p>Given a sample dataframe <code>df</code> as:</p>
<pre><code> a b
0 1 2
1 2 3
2 3 4
3 4 5
</code></pre>
<p>what you want is:</p>
<pre><code>df['a'] = df['a'].apply(lambda x: x + 1)
</code></pre>
<p>that returns:</p>
<pre><code> a b
0 2 2
1 3 3
2 4 4
3 5 5
</code></pre>
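<p>Note that for simple arithmetic like this, a vectorized expression avoids the per-row Python call entirely; a sketch on the same frame:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [2, 3, 4, 5]})

# Equivalent to df['a'].apply(lambda x: x + 1), but vectorized
df['a'] = df['a'] + 1
```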
| 670
|
pandas
|
How to select all columns except one in pandas?
|
https://stackoverflow.com/questions/29763620/how-to-select-all-columns-except-one-in-pandas
|
<p>I have a dataframe that look like this:</p>
<pre class="lang-none prettyprint-override"><code> a b c d
0 0.418762 0.042369 0.869203 0.972314
1 0.991058 0.510228 0.594784 0.534366
2 0.407472 0.259811 0.396664 0.894202
3 0.726168 0.139531 0.324932 0.906575
</code></pre>
<p>How I can get all columns except <code>b</code>?</p>
|
<p>When the columns are not a MultiIndex, <code>df.columns</code> is just an array of column names so you can do:</p>
<pre><code>df.loc[:, df.columns != 'b']
a c d
0 0.561196 0.013768 0.772827
1 0.882641 0.615396 0.075381
2 0.368824 0.651378 0.397203
3 0.788730 0.568099 0.869127
</code></pre>
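<p>An alternative sketch: <code>drop</code> returns a copy without the named column (it does not modify <code>df</code> unless you reassign or pass <code>inplace=True</code>):</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [5, 6], 'd': [7, 8]})

# columns='b' removes that column from the returned copy only
result = df.drop(columns='b')
```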
| 671
|
pandas
|
How to add an empty column to a dataframe?
|
https://stackoverflow.com/questions/16327055/how-to-add-an-empty-column-to-a-dataframe
|
<p>What's the easiest way to add an empty column to a pandas DataFrame object? The best I've stumbled upon is something like</p>
<pre class="lang-py prettyprint-override"><code>df['foo'] = df.apply(lambda _: '', axis=1)
</code></pre>
<p>Is there a less perverse method?</p>
|
<p>If I understand correctly, assignment should fill:</p>
<pre><code>>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame({"A": [1,2,3], "B": [2,3,4]})
>>> df
A B
0 1 2
1 2 3
2 3 4
>>> df["C"] = ""
>>> df["D"] = np.nan
>>> df
A B C D
0 1 2 NaN
1 2 3 NaN
2 3 4 NaN
</code></pre>
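<p>If you want the empty column at a particular position rather than appended at the end, <code>insert</code> is an option; a sketch:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]})

# Insert an all-NaN column named 'C' at position 1 (between A and B)
df.insert(1, "C", np.nan)
```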
| 672
|
pandas
|
How do I create a new column where the values are selected based on an existing column?
|
https://stackoverflow.com/questions/19913659/how-do-i-create-a-new-column-where-the-values-are-selected-based-on-an-existing
|
<p>How do I add a <code>color</code> column to the following dataframe so that <code>color='green'</code> if <code>Set == 'Z'</code>, and <code>color='red'</code> otherwise?</p>
<pre><code> Type Set
1 A Z
2 B Z
3 B X
4 C Y
</code></pre>
|
<p><strong>If you only have two choices to select from then use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="noreferrer"><code>np.where</code></a>:</strong></p>
<pre><code>df['color'] = np.where(df['Set']=='Z', 'green', 'red')
</code></pre>
<p>For example,</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'Type':list('ABBC'), 'Set':list('ZZXY')})
df['color'] = np.where(df['Set']=='Z', 'green', 'red')
print(df)
</code></pre>
<p>yields</p>
<pre><code> Set Type color
0 Z A green
1 Z B green
2 X B red
3 Y C red
</code></pre>
<hr />
<p><strong>If you have more than two conditions then use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.select.html" rel="noreferrer"><code>np.select</code></a></strong>. For example, if you want <code>color</code> to be</p>
<ul>
<li><code>yellow</code> when <code>(df['Set'] == 'Z') & (df['Type'] == 'A')</code></li>
<li>otherwise <code>blue</code> when <code>(df['Set'] == 'Z') & (df['Type'] == 'B')</code></li>
<li>otherwise <code>purple</code> when <code>(df['Type'] == 'B')</code></li>
<li>otherwise <code>black</code>,</li>
</ul>
<p>then use</p>
<pre><code>df = pd.DataFrame({'Type':list('ABBC'), 'Set':list('ZZXY')})
conditions = [
(df['Set'] == 'Z') & (df['Type'] == 'A'),
(df['Set'] == 'Z') & (df['Type'] == 'B'),
(df['Type'] == 'B')]
choices = ['yellow', 'blue', 'purple']
df['color'] = np.select(conditions, choices, default='black')
print(df)
</code></pre>
<p>which yields</p>
<pre><code> Set Type color
0 Z A yellow
1 Z B blue
2 X B purple
3 Y C black
</code></pre>
| 673
|
pandas
|
How to draw vertical lines on a given plot
|
https://stackoverflow.com/questions/24988448/how-to-draw-vertical-lines-on-a-given-plot
|
<p>Given a plot of a signal in time representation, how can I draw lines marking the corresponding time index?</p>
<p>Specifically, given a signal plot with a time index ranging from 0 to 2.6 (seconds), I want to draw vertical red lines indicating the corresponding time index for the list <code>[0.22058956, 0.33088437, 2.20589566]</code>. How can I do it?</p>
|
<p>The standard way to add vertical lines that will cover your entire plot window without you having to specify their actual height is <code>plt.axvline</code></p>
<pre><code>import matplotlib.pyplot as plt
plt.axvline(x=0.22058956)
plt.axvline(x=0.33088437)
plt.axvline(x=2.20589566)
</code></pre>
<p>OR</p>
<pre><code>xcoords = [0.22058956, 0.33088437, 2.20589566]
for xc in xcoords:
plt.axvline(x=xc)
</code></pre>
<p>You can use many of the keywords available for other plot commands (e.g. <code>color</code>, <code>linestyle</code>, <code>linewidth</code> ...). You can pass in keyword arguments <code>ymin</code> and <code>ymax</code> if you like, in axes coordinates (e.g. <code>ymin=0.25</code>, <code>ymax=0.75</code> will cover the middle half of the plot). There are corresponding functions for horizontal lines (<code>axhline</code>) and rectangles (<code>axvspan</code>). </p>
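<p>Putting it together for the list in the question, a sketch drawing red lines over placeholder signal data (the sine wave is just a stand-in for your signal):</p>

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

t = np.linspace(0, 2.6, 500)
plt.plot(t, np.sin(2 * np.pi * t))

# One vertical red line per time index of interest
for xc in [0.22058956, 0.33088437, 2.20589566]:
    plt.axvline(x=xc, color='red')
```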
| 674
|
pandas
|
pandas get rows which are NOT in other dataframe
|
https://stackoverflow.com/questions/28901683/pandas-get-rows-which-are-not-in-other-dataframe
|
<p>I've two pandas data frames that have some rows in common.</p>
<p>Suppose dataframe2 is a subset of dataframe1.</p>
<p><strong>How can I get the rows of dataframe1 which are not in dataframe2?</strong></p>
<pre><code>df1 = pandas.DataFrame(data = {'col1' : [1, 2, 3, 4, 5], 'col2' : [10, 11, 12, 13, 14]})
df2 = pandas.DataFrame(data = {'col1' : [1, 2, 3], 'col2' : [10, 11, 12]})
</code></pre>
<p>df1</p>
<pre><code> col1 col2
0 1 10
1 2 11
2 3 12
3 4 13
4 5 14
</code></pre>
<p>df2</p>
<pre><code> col1 col2
0 1 10
1 2 11
2 3 12
</code></pre>
<p>Expected result:</p>
<pre><code> col1 col2
3 4 13
4 5 14
</code></pre>
|
<p>One method would be to store the result of an inner merge from both dfs, then we can simply select the rows when one column's values are not in this common:</p>
<pre><code>In [119]:
common = df1.merge(df2,on=['col1','col2'])
print(common)
df1[(~df1.col1.isin(common.col1))&(~df1.col2.isin(common.col2))]
col1 col2
0 1 10
1 2 11
2 3 12
Out[119]:
col1 col2
3 4 13
4 5 14
</code></pre>
<p><strong>EDIT</strong></p>
<p>Another method as you've found is to use <code>isin</code> which will produce <code>NaN</code> rows which you can drop:</p>
<pre><code>In [138]:
df1[~df1.isin(df2)].dropna()
Out[138]:
col1 col2
3 4 13
4 5 14
</code></pre>
<p>However, if df2's rows do not align with df1's on the index (the same values at the same index labels), then this element-wise comparison won't work:</p>
<pre><code>df2 = pd.DataFrame(data = {'col1' : [2, 3,4], 'col2' : [11, 12,13]})
</code></pre>
<p>will produce the entire df:</p>
<pre><code>In [140]:
df1[~df1.isin(df2)].dropna()
Out[140]:
col1 col2
0 1 10
1 2 11
2 3 12
3 4 13
4 5 14
</code></pre>
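<p>A more robust pattern for this anti-join is a sketch using merge's <code>indicator</code> option (available since pandas 0.17), which compares whole rows rather than each column independently:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'col1': [1, 2, 3, 4, 5], 'col2': [10, 11, 12, 13, 14]})
df2 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [10, 11, 12]})

# _merge marks each row as 'left_only', 'right_only' or 'both'
merged = df1.merge(df2, how='left', indicator=True)
result = merged[merged['_merge'] == 'left_only'].drop(columns='_merge')
```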
| 675
|
pandas
|
pandas: filter rows of DataFrame with operator chaining
|
https://stackoverflow.com/questions/11869910/pandas-filter-rows-of-dataframe-with-operator-chaining
|
<p>Most operations in <code>pandas</code> can be accomplished with operator chaining (<code>groupby</code>, <code>aggregate</code>, <code>apply</code>, etc), but the only way I've found to filter rows is via normal bracket indexing</p>
<pre><code>df_filtered = df[df['column'] == value]
</code></pre>
<p>This is unappealing as it requires I assign <code>df</code> to a variable before being able to filter on its values. Is there something more like the following?</p>
<pre><code>df_filtered = df.mask(lambda x: x['column'] == value)
</code></pre>
|
<p>I'm not entirely sure what you want, and your last line of code does not help either, but anyway:</p>
<p>"Chained" filtering is done by "chaining" the criteria in the boolean index.</p>
<pre><code>In [96]: df
Out[96]:
A B C D
a 1 4 9 1
b 4 5 0 2
c 5 5 1 0
d 1 3 9 6
In [99]: df[(df.A == 1) & (df.D == 6)]
Out[99]:
A B C D
d 1 3 9 6
</code></pre>
<p>If you want to chain methods, you can add your own mask method and use that one.</p>
<pre><code>In [90]: def mask(df, key, value):
....: return df[df[key] == value]
....:
In [92]: pandas.DataFrame.mask = mask
In [93]: df = pandas.DataFrame(np.random.randint(0, 10, (4,4)), index=list('abcd'), columns=list('ABCD'))
In [95]: df.ix['d','A'] = df.ix['a', 'A']
In [96]: df
Out[96]:
A B C D
a 1 4 9 1
b 4 5 0 2
c 5 5 1 0
d 1 3 9 6
In [97]: df.mask('A', 1)
Out[97]:
A B C D
a 1 4 9 1
d 1 3 9 6
In [98]: df.mask('A', 1).mask('D', 6)
Out[98]:
A B C D
d 1 3 9 6
</code></pre>
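<p>Since pandas 0.13 there is also a built-in chainable filter, <code>DataFrame.query</code>, which may be closer to what you're after; a minimal sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 4, 5, 1],
                   'B': [4, 5, 5, 3],
                   'C': [9, 0, 1, 9],
                   'D': [1, 2, 0, 6]},
                  index=list('abcd'))

# Each query() returns a new frame, so calls chain naturally
result = df.query('A == 1').query('D == 6')
```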
| 676
|
pandas
|
What does axis in pandas mean?
|
https://stackoverflow.com/questions/22149584/what-does-axis-in-pandas-mean
|
<p>Here is my code to generate a dataframe:</p>
<pre><code>import pandas as pd
import numpy as np
dff = pd.DataFrame(np.random.randn(1, 2), columns=list('AB'))
</code></pre>
<p>then I got the dataframe:</p>
<pre><code> A B
0 0.626386 1.52325
</code></pre>
<p>When I type the command <code>dff.mean(axis=1)</code>, I get:</p>
<pre><code>0 1.074821
dtype: float64
</code></pre>
<p>According to the reference of pandas, <code>axis=1</code> stands for columns and I expect the result of the command to be</p>
<pre><code>A 0.626386
B 1.523255
dtype: float64
</code></pre>
<p>So what does axis in pandas mean?</p>
|
<p>It specifies the axis <strong>along which</strong> the means are computed. By default <code>axis=0</code>. This is consistent with <code>numpy.mean</code> when <code>axis</code> is specified <em>explicitly</em> (in <code>numpy.mean</code>, <code>axis=None</code> by default, which computes the mean value over the flattened array): <code>axis=0</code> runs along the <em>rows</em> (namely, the <em>index</em> in pandas), and <code>axis=1</code> runs along the <em>columns</em>. For added clarity, one may choose to specify <code>axis='index'</code> (instead of <code>axis=0</code>) or <code>axis='columns'</code> (instead of <code>axis=1</code>).</p>
<pre><code> A B
0 0.626386 1.52325 → → axis=1 → →
↓ ↓
↓ axis=0 ↓
↓ ↓
</code></pre>
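<p>A quick sketch making the direction concrete, summing a small frame along each axis (the values are arbitrary):</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 4], 'B': [2, 5], 'C': [3, 6]})

# axis=0: collapse down the rows, giving one value per column
per_column = df.sum(axis=0)

# axis=1: collapse across the columns, giving one value per row
per_row = df.sum(axis=1)
```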
| 677
|
pandas
|
Normalize columns of a dataframe
|
https://stackoverflow.com/questions/26414913/normalize-columns-of-a-dataframe
|
<p>I have a dataframe in pandas where each column has different value range. For example:</p>
<p>df:</p>
<pre><code>A B C
1000 10 0.5
765 5 0.35
800 7 0.09
</code></pre>
<p>Any idea how I can normalize the columns of this dataframe where each value is between 0 and 1?</p>
<p>My desired output is:</p>
<pre><code>A B C
1 1 1
0.765 0.5 0.7
0.8 0.7 0.18(which is 0.09/0.5)
</code></pre>
|
<p>You can use the package sklearn and its associated preprocessing utilities to normalize the data.</p>
<pre><code>import pandas as pd
from sklearn import preprocessing
x = df.values #returns a numpy array
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
df = pd.DataFrame(x_scaled)
</code></pre>
<p>For more information look at the scikit-learn <a href="http://scikit-learn.org/stable/modules/preprocessing.html#scaling-features-to-a-range" rel="noreferrer">documentation</a> on preprocessing data: scaling features to a range.</p>
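<p>Note that the desired output in the question actually divides each column by its own maximum, which is a one-liner in plain pandas (a sketch, no sklearn needed):</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1000, 765, 800],
                   'B': [10, 5, 7],
                   'C': [0.5, 0.35, 0.09]})

# Divide every column by its own maximum (broadcasts across rows)
normalized = df / df.max()
```

<p>For a true min-max scaling to [0, 1], the equivalent expression is <code>(df - df.min()) / (df.max() - df.min())</code>.</p>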
| 678
|
pandas
|
Remove pandas rows with duplicate indices
|
https://stackoverflow.com/questions/13035764/remove-pandas-rows-with-duplicate-indices
|
<p>How to remove rows with duplicate index values?</p>
<p>In the weather DataFrame below, sometimes a scientist goes back and corrects observations -- not by editing the erroneous rows, but by appending a duplicate row to the end of a file.</p>
<p>I'm reading some automated weather data from the web (observations occur every 5 minutes, and compiled into monthly files for each weather station.) After parsing a file, the DataFrame looks like:</p>
<pre><code> Sta Precip1hr Precip5min Temp DewPnt WindSpd WindDir AtmPress
Date
2001-01-01 00:00:00 KPDX 0 0 4 3 0 0 30.31
2001-01-01 00:05:00 KPDX 0 0 4 3 0 0 30.30
2001-01-01 00:10:00 KPDX 0 0 4 3 4 80 30.30
2001-01-01 00:15:00 KPDX 0 0 3 2 5 90 30.30
2001-01-01 00:20:00 KPDX 0 0 3 2 10 110 30.28
</code></pre>
<p>Example of a duplicate case:</p>
<pre><code>import pandas as pd
import datetime
startdate = datetime.datetime(2001, 1, 1, 0, 0)
enddate = datetime.datetime(2001, 1, 1, 5, 0)
index = pd.date_range(start=startdate, end=enddate, freq='H')
data1 = {'A' : range(6), 'B' : range(6)}
data2 = {'A' : [20, -30, 40], 'B' : [-50, 60, -70]}
df1 = pd.DataFrame(data=data1, index=index)
df2 = pd.DataFrame(data=data2, index=index[:3])
df3 = df2.append(df1)
df3
A B
2001-01-01 00:00:00 20 -50
2001-01-01 01:00:00 -30 60
2001-01-01 02:00:00 40 -70
2001-01-01 03:00:00 3 3
2001-01-01 04:00:00 4 4
2001-01-01 05:00:00 5 5
2001-01-01 00:00:00 0 0
2001-01-01 01:00:00 1 1
2001-01-01 02:00:00 2 2
</code></pre>
<p>And so I need <code>df3</code> to eventually become:</p>
<pre><code> A B
2001-01-01 00:00:00 0 0
2001-01-01 01:00:00 1 1
2001-01-01 02:00:00 2 2
2001-01-01 03:00:00 3 3
2001-01-01 04:00:00 4 4
2001-01-01 05:00:00 5 5
</code></pre>
<p>I thought that adding a column of row numbers (<code>df3['rownum'] = range(df3.shape[0])</code>) would help me select the bottom-most row for any value of the <code>DatetimeIndex</code>, but I am stuck on figuring out the <code>group_by</code> or <code>pivot</code> (or ???) statements to make that work.</p>
|
<p>I would suggest using the <a href="https://pandas.pydata.org/docs/reference/api/pandas.Index.duplicated.html" rel="nofollow noreferrer">duplicated</a> method on the Pandas Index itself:</p>
<pre><code>df3 = df3[~df3.index.duplicated(keep='first')]
</code></pre>
<p>While all the other methods work, <code>.drop_duplicates</code> is by far the least performant for the provided example. Furthermore, while the <a href="https://stackoverflow.com/a/13036848/3622349">groupby method</a> is only slightly less performant, I find the duplicated method to be more readable.</p>
<p>Using the sample data provided:</p>
<pre><code>>>> %timeit df3.reset_index().drop_duplicates(subset='index', keep='first').set_index('index')
1000 loops, best of 3: 1.54 ms per loop
>>> %timeit df3.groupby(df3.index).first()
1000 loops, best of 3: 580 µs per loop
>>> %timeit df3[~df3.index.duplicated(keep='first')]
1000 loops, best of 3: 307 µs per loop
</code></pre>
<p>Note that you can keep the last element by changing the keep argument to <code>'last'</code>.</p>
<p>It should also be noted that this method works with <code>MultiIndex</code> as well (using df1 as specified in <a href="https://stackoverflow.com/a/13036848/3622349">Paul's example</a>):</p>
<pre><code>>>> %timeit df1.groupby(level=df1.index.names).last()
1000 loops, best of 3: 771 µs per loop
>>> %timeit df1[~df1.index.duplicated(keep='last')]
1000 loops, best of 3: 365 µs per loop
</code></pre>
| 679
|
pandas
|
Converting between datetime, Timestamp and datetime64
|
https://stackoverflow.com/questions/13703720/converting-between-datetime-timestamp-and-datetime64
|
<p>How do I convert a <code>numpy.datetime64</code> object to a <code>datetime.datetime</code> (or <code>Timestamp</code>)?</p>
<p>In the following code, I create a datetime, timestamp and datetime64 objects.</p>
<pre><code>import datetime
import numpy as np
import pandas as pd
dt = datetime.datetime(2012, 5, 1)
# A strange way to extract a Timestamp object, there's surely a better way?
ts = pd.DatetimeIndex([dt])[0]
dt64 = np.datetime64(dt)
In [7]: dt
Out[7]: datetime.datetime(2012, 5, 1, 0, 0)
In [8]: ts
Out[8]: <Timestamp: 2012-05-01 00:00:00>
In [9]: dt64
Out[9]: numpy.datetime64('2012-05-01T01:00:00.000000+0100')
</code></pre>
<p><em>Note: it's easy to get the datetime from the Timestamp:</em></p>
<pre><code>In [10]: ts.to_datetime()
Out[10]: datetime.datetime(2012, 5, 1, 0, 0)
</code></pre>
<p>But how do we extract the <code>datetime</code> or <code>Timestamp</code> from a <code>numpy.datetime64</code> (<code>dt64</code>)?</p>
<p>Update: a somewhat nasty example in my dataset (perhaps the motivating example) seems to be:</p>
<pre><code>dt64 = numpy.datetime64('2002-06-28T01:00:00.000000000+0100')
</code></pre>
<p>which should be <code>datetime.datetime(2002, 6, 28, 1, 0)</code>, and not a long (!) (<code>1025222400000000000L</code>)...</p>
|
<p>To convert <code>numpy.datetime64</code> to <code>datetime</code> object that represents time in UTC on <code>numpy-1.8</code>:</p>
<pre><code>>>> from datetime import datetime
>>> import numpy as np
>>> dt = datetime.utcnow()
>>> dt
datetime.datetime(2012, 12, 4, 19, 51, 25, 362455)
>>> dt64 = np.datetime64(dt)
>>> ts = (dt64 - np.datetime64('1970-01-01T00:00:00Z')) / np.timedelta64(1, 's')
>>> ts
1354650685.3624549
>>> datetime.utcfromtimestamp(ts)
datetime.datetime(2012, 12, 4, 19, 51, 25, 362455)
>>> np.__version__
'1.8.0.dev-7b75899'
</code></pre>
<p>The above example assumes that a naive <code>datetime</code> object is interpreted by <code>np.datetime64</code> as time in UTC.</p>
<hr />
<p>To convert <code>datetime</code> to <code>np.datetime64</code> and back (<code>numpy-1.6</code>):</p>
<pre><code>>>> np.datetime64(datetime.utcnow()).astype(datetime)
datetime.datetime(2012, 12, 4, 13, 34, 52, 827542)
</code></pre>
<p>It works both on a single <code>np.datetime64</code> object and a numpy array of <code>np.datetime64</code>.</p>
<p>Think of <code>np.datetime64</code> the same way you would about <code>np.int8</code>, <code>np.int16</code>, etc and apply the same methods to convert between Python objects such as <code>int</code>, <code>datetime</code> and corresponding numpy objects.</p>
<p>Your "nasty example" works correctly:</p>
<pre><code>>>> from datetime import datetime
>>> import numpy
>>> numpy.datetime64('2002-06-28T01:00:00.000000000+0100').astype(datetime)
datetime.datetime(2002, 6, 28, 0, 0)
>>> numpy.__version__
'1.6.2' # current version available via pip install numpy
</code></pre>
<p>I can reproduce the <code>long</code> value on <code>numpy-1.8.0</code> installed as:</p>
<pre class="lang-none prettyprint-override"><code>pip install git+https://github.com/numpy/numpy.git#egg=numpy-dev
</code></pre>
<p>The same example:</p>
<pre><code>>>> from datetime import datetime
>>> import numpy
>>> numpy.datetime64('2002-06-28T01:00:00.000000000+0100').astype(datetime)
1025222400000000000L
>>> numpy.__version__
'1.8.0.dev-7b75899'
</code></pre>
<p>It returns <code>long</code> because for <code>numpy.datetime64</code> type <code>.astype(datetime)</code> is equivalent to <code>.astype(object)</code> that returns Python integer (<code>long</code>) on <code>numpy-1.8</code>.</p>
<p>To get <code>datetime</code> object you could:</p>
<pre><code>>>> dt64.dtype
dtype('<M8[ns]')
>>> ns = 1e-9 # number of seconds in a nanosecond
>>> datetime.utcfromtimestamp(dt64.astype(int) * ns)
datetime.datetime(2002, 6, 28, 0, 0)
</code></pre>
<p>To get <code>datetime64</code> that uses seconds directly:</p>
<pre><code>>>> dt64 = numpy.datetime64('2002-06-28T01:00:00.000000000+0100', 's')
>>> dt64.dtype
dtype('<M8[s]')
>>> datetime.utcfromtimestamp(dt64.astype(int))
datetime.datetime(2002, 6, 28, 0, 0)
</code></pre>
<p>The <a href="http://docs.scipy.org/doc/numpy-dev/reference/arrays.datetime.html" rel="noreferrer">numpy docs</a> say that the datetime API is experimental and may change in future numpy versions.</p>
| 680
|
pandas
|
How to drop a list of rows from Pandas dataframe?
|
https://stackoverflow.com/questions/14661701/how-to-drop-a-list-of-rows-from-pandas-dataframe
|
<p>I have a dataframe df :</p>
<pre><code>>>> df
sales discount net_sales cogs
STK_ID RPT_Date
600141 20060331 2.709 NaN 2.709 2.245
20060630 6.590 NaN 6.590 5.291
20060930 10.103 NaN 10.103 7.981
20061231 15.915 NaN 15.915 12.686
20070331 3.196 NaN 3.196 2.710
20070630 7.907 NaN 7.907 6.459
</code></pre>
<p>Then I want to drop rows with certain sequence numbers which indicated in a list, suppose here is <code>[1,2,4],</code> then left:</p>
<pre><code> sales discount net_sales cogs
STK_ID RPT_Date
600141 20060331 2.709 NaN 2.709 2.245
20061231 15.915 NaN 15.915 12.686
20070630 7.907 NaN 7.907 6.459
</code></pre>
<p>How or what function can do that ?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html" rel="noreferrer">DataFrame.drop</a> and pass it a Series of index labels:</p>
<pre><code>In [65]: df
Out[65]:
one two
one 1 4
two 2 3
three 3 2
four 4 1
In [66]: df.drop(df.index[[1,3]])
Out[66]:
one two
one 1 4
three 3 2
</code></pre>
| 681
|
pandas
|
How to sort a pandas dataFrame by two or more columns?
|
https://stackoverflow.com/questions/17141558/how-to-sort-a-pandas-dataframe-by-two-or-more-columns
|
<p>Suppose I have a dataframe with columns <code>a</code>, <code>b</code> and <code>c</code>. I want to sort the dataframe by column <code>b</code> in ascending order, and by column <code>c</code> in descending order. How do I do this?</p>
|
<p>As of the 0.17.0 release, the <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.sort.html" rel="noreferrer"><code>sort</code></a> method was deprecated in favor of <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html" rel="noreferrer"><code>sort_values</code></a>. <code>sort</code> was completely removed in the 0.20.0 release. The arguments (and results) remain the same:</p>
<pre><code>df.sort_values(['a', 'b'], ascending=[True, False])
</code></pre>
<hr>
<p>You can use the ascending argument of <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort.html" rel="noreferrer"><code>sort</code></a>:</p>
<pre><code>df.sort(['a', 'b'], ascending=[True, False])
</code></pre>
<p>For example:</p>
<pre><code>In [11]: df1 = pd.DataFrame(np.random.randint(1, 5, (10,2)), columns=['a','b'])
In [12]: df1.sort(['a', 'b'], ascending=[True, False])
Out[12]:
a b
2 1 4
7 1 3
1 1 2
3 1 2
4 3 2
6 4 4
0 4 3
9 4 3
5 4 1
8 4 1
</code></pre>
<hr>
<p>As commented by @renadeen</p>
<blockquote>
<p>Sort isn't in place by default! So you should assign result of the sort method to a variable or add inplace=True to method call.</p>
</blockquote>
<p>that is, if you want to reuse df1 as a sorted DataFrame:</p>
<pre><code>df1 = df1.sort(['a', 'b'], ascending=[True, False])
</code></pre>
<p>or</p>
<pre><code>df1.sort(['a', 'b'], ascending=[True, False], inplace=True)
</code></pre>
| 682
|
pandas
|
how do I insert a column at a specific column index in pandas?
|
https://stackoverflow.com/questions/18674064/how-do-i-insert-a-column-at-a-specific-column-index-in-pandas
|
<p>Can I insert a column at a specific column index in pandas? </p>
<pre><code>import pandas as pd
df = pd.DataFrame({'l':['a','b','c','d'], 'v':[1,2,1,2]})
df['n'] = 0
</code></pre>
<p>This will put column <code>n</code> as the last column of <code>df</code>, but isn't there a way to tell <code>df</code> to put <code>n</code> at the beginning?</p>
|
<p>see docs: <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.insert.html" rel="noreferrer">http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.insert.html</a></p>
<p>using loc = 0 will insert at the beginning</p>
<pre><code>df.insert(loc, column, value)
</code></pre>
<hr />
<pre><code>df = pd.DataFrame({'B': [1, 2, 3], 'C': [4, 5, 6]})
df
Out:
B C
0 1 4
1 2 5
2 3 6
idx = 0
new_col = [7, 8, 9] # can be a list, a Series, an array or a scalar
df.insert(loc=idx, column='A', value=new_col)
df
Out:
A B C
0 7 1 4
1 8 2 5
2 9 3 6
</code></pre>
| 683
|
pandas
|
How to add pandas data to an existing csv file?
|
https://stackoverflow.com/questions/17530542/how-to-add-pandas-data-to-an-existing-csv-file
|
<p>I want to know if it is possible to use the pandas <code>to_csv()</code> function to add a dataframe to an existing csv file. The csv file has the same structure as the loaded data. </p>
|
<p>You can specify a python write mode in the pandas <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="noreferrer"><code>to_csv</code></a> function. For append it is 'a'.</p>
<p>In your case:</p>
<pre><code>df.to_csv('my_csv.csv', mode='a', header=False)
</code></pre>
<p>The default mode is 'w'.</p>
<p>If the file initially might be missing, you can make sure the header is printed at the first write using this variation:</p>
<pre><code>import os

output_path = 'my_csv.csv'
df.to_csv(output_path, mode='a', header=not os.path.exists(output_path))
</code></pre>
| 684
|
pandas
|
Count the frequency that a value occurs in a dataframe column
|
https://stackoverflow.com/questions/22391433/count-the-frequency-that-a-value-occurs-in-a-dataframe-column
|
<p>I have a dataset</p>
<pre class="lang-none prettyprint-override"><code>category
cat a
cat b
cat a
</code></pre>
<p>I'd like to return something like the following which shows the unique values and their frequencies</p>
<pre class="lang-none prettyprint-override"><code>category freq
cat a 2
cat b 1
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="noreferrer"><code>value_counts()</code></a> as @DSM commented.</p>
<pre><code>In [37]:
df = pd.DataFrame({'a':list('abssbab')})
df['a'].value_counts()
Out[37]:
b 3
a 2
s 2
dtype: int64
</code></pre>
<p>Also <code>groupby</code> and <code>count</code>. Many ways to skin a cat here.</p>
<pre><code>In [38]:
df.groupby('a').count()
Out[38]:
a
a
a 2
b 3
s 2
[3 rows x 1 columns]
</code></pre>
<p>See <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html" rel="noreferrer">the online docs</a>.</p>
<p>If you wanted to add frequency back to the original dataframe use <code>transform</code> to return an aligned index:</p>
<pre><code>In [41]:
df['freq'] = df.groupby('a')['a'].transform('count')
df
Out[41]:
a freq
0 a 2
1 b 3
2 s 2
3 s 2
4 b 3
5 a 2
6 b 3
[7 rows x 2 columns]
</code></pre>
| 685
|
pandas
|
Extracting just Month and Year separately from Pandas Datetime column
|
https://stackoverflow.com/questions/25146121/extracting-just-month-and-year-separately-from-pandas-datetime-column
|
<p>I have a Dataframe, <code>df</code>, with the following column:</p>
<pre class="lang-none prettyprint-override"><code> ArrivalDate
936 2012-12-31
938 2012-12-29
965 2012-12-31
966 2012-12-31
967 2012-12-31
968 2012-12-31
969 2012-12-31
970 2012-12-29
971 2012-12-31
972 2012-12-29
973 2012-12-29
</code></pre>
<p>The elements of the column are <code>pandas.tslib.Timestamp</code> type. I want to extract the year and month.</p>
<p>Here's what I've tried:</p>
<pre class="lang-py prettyprint-override"><code>df['ArrivalDate'].resample('M', how = 'mean')
</code></pre>
<p>which throws the following error:</p>
<pre class="lang-none prettyprint-override"><code>Only valid with DatetimeIndex or PeriodIndex
</code></pre>
<p>Then I tried:</p>
<pre class="lang-py prettyprint-override"><code>df['ArrivalDate'].apply(lambda(x):x[:-2])
</code></pre>
<p>which throws the following error:</p>
<pre class="lang-none prettyprint-override"><code>'Timestamp' object has no attribute '__getitem__'
</code></pre>
<p>My current solution is</p>
<pre class="lang-py prettyprint-override"><code>df.index = df['ArrivalDate']
</code></pre>
<p>Then, I can resample another column using the index.</p>
<p>But I'd still like a method for reconfiguring the entire column. Any ideas?</p>
|
<p>If you want new columns showing year and month separately you can do this:</p>
<pre><code>df['year'] = pd.DatetimeIndex(df['ArrivalDate']).year
df['month'] = pd.DatetimeIndex(df['ArrivalDate']).month
</code></pre>
<p>or...</p>
<pre><code>df['year'] = df['ArrivalDate'].dt.year
df['month'] = df['ArrivalDate'].dt.month
</code></pre>
<p>Then you can combine them or work with them just as they are.</p>
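<p>If you need year and month together as a single grouping key, <code>dt.to_period</code> is a related option (the <code>year_month</code> column name below is just an illustration):</p>

```python
import pandas as pd

df = pd.DataFrame({'ArrivalDate': pd.to_datetime(['2012-12-31', '2012-12-29'])})

# .dt.to_period('M') collapses each timestamp to a year-month period,
# which keeps year and month together as one value for grouping
df['year_month'] = df['ArrivalDate'].dt.to_period('M')
print(df['year_month'].tolist())
```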
| 686
|
pandas
|
Add column to dataframe with constant value
|
https://stackoverflow.com/questions/29517072/add-column-to-dataframe-with-constant-value
|
<p>I have an existing dataframe which I need to add an additional column to which will contain the same value for every row.</p>
<p>Existing df:</p>
<pre><code>Date, Open, High, Low, Close
01-01-2015, 565, 600, 400, 450
</code></pre>
<p>New df:</p>
<pre><code>Name, Date, Open, High, Low, Close
abc, 01-01-2015, 565, 600, 400, 450
</code></pre>
<p>I know how to append an existing series / dataframe column. But this is a different situation, because all I need is to add the 'Name' column and set every row to the same value, in this case 'abc'.</p>
|
<p><code>df['Name']='abc'</code> will add the new column and set all rows to that value:</p>
<pre><code>In [79]:
df
Out[79]:
Date, Open, High, Low, Close
0 01-01-2015, 565, 600, 400, 450
In [80]:
df['Name'] = 'abc'
df
Out[80]:
Date, Open, High, Low, Close Name
0 01-01-2015, 565, 600, 400, 450 abc
</code></pre>
| 687
|
pandas
|
Get the row(s) which have the max value in groups using groupby
|
https://stackoverflow.com/questions/15705630/get-the-rows-which-have-the-max-value-in-groups-using-groupby
|
<p>How do I find all rows in a pandas DataFrame which have the max value for <code>count</code> column, after grouping by <code>['Sp','Mt']</code> columns?</p>
<p><strong>Example 1:</strong> the following DataFrame:</p>
<pre><code> Sp Mt Value count
0 MM1 S1 a **3**
1 MM1 S1 n 2
2 MM1 S3 cb **5**
3 MM2 S3 mk **8**
4 MM2 S4 bg **10**
5 MM2 S4 dgd 1
6 MM4 S2 rd 2
7 MM4 S2 cb 2
8 MM4 S2 uyi **7**
</code></pre>
<p>Expected output is to get the result rows whose count is max in each group, like this:</p>
<pre><code> Sp Mt Value count
0 MM1 S1 a **3**
2 MM1 S3 cb **5**
3 MM2 S3 mk **8**
4 MM2 S4 bg **10**
8 MM4 S2 uyi **7**
</code></pre>
<p><strong>Example 2:</strong></p>
<pre><code> Sp Mt Value count
4 MM2 S4 bg 10
5 MM2 S4 dgd 1
6 MM4 S2 rd 2
7 MM4 S2 cb 8
8 MM4 S2 uyi 8
</code></pre>
<p>Expected output:</p>
<pre><code> Sp Mt Value count
4 MM2 S4 bg 10
7 MM4 S2 cb 8
8 MM4 S2 uyi 8
</code></pre>
|
<p>Firstly, we can get the max count for each group like this:</p>
<pre><code>In [1]: df
Out[1]:
Sp Mt Value count
0 MM1 S1 a 3
1 MM1 S1 n 2
2 MM1 S3 cb 5
3 MM2 S3 mk 8
4 MM2 S4 bg 10
5 MM2 S4 dgd 1
6 MM4 S2 rd 2
7 MM4 S2 cb 2
8 MM4 S2 uyi 7
In [2]: df.groupby(['Sp', 'Mt'])['count'].max()
Out[2]:
Sp Mt
MM1 S1 3
S3 5
MM2 S3 8
S4 10
MM4 S2 7
Name: count, dtype: int64
</code></pre>
<p>To get the indices of the original DF you can do:</p>
<pre><code>In [3]: idx = df.groupby(['Sp', 'Mt'])['count'].transform(max) == df['count']
In [4]: df[idx]
Out[4]:
Sp Mt Value count
0 MM1 S1 a 3
2 MM1 S3 cb 5
3 MM2 S3 mk 8
4 MM2 S4 bg 10
8 MM4 S2 uyi 7
</code></pre>
<p>Note that if you have multiple max values per group, all will be returned.</p>
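<p>As a concrete sketch of that tie behaviour, here is the question's Example 2 run through the same mask (using the string <code>'max'</code>, which newer pandas versions prefer over the builtin):</p>

```python
import pandas as pd

# Example 2 from the question: two rows tie for the max count in group (MM4, S2)
df = pd.DataFrame({'Sp': ['MM2', 'MM2', 'MM4', 'MM4', 'MM4'],
                   'Mt': ['S4', 'S4', 'S2', 'S2', 'S2'],
                   'Value': ['bg', 'dgd', 'rd', 'cb', 'uyi'],
                   'count': [10, 1, 2, 8, 8]})

# mark every row whose count equals its group's max
idx = df.groupby(['Sp', 'Mt'])['count'].transform('max') == df['count']
result = df[idx]
print(result)  # keeps bg (10) plus BOTH tied rows cb (8) and uyi (8)
```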
<hr />
<p><strong>Update</strong></p>
<p>On a Hail Mary chance that this is what the OP is requesting:</p>
<pre><code>In [5]: df['count_max'] = df.groupby(['Sp', 'Mt'])['count'].transform(max)
In [6]: df
Out[6]:
Sp Mt Value count count_max
0 MM1 S1 a 3 3
1 MM1 S1 n 2 3
2 MM1 S3 cb 5 5
3 MM2 S3 mk 8 8
4 MM2 S4 bg 10 10
5 MM2 S4 dgd 1 10
6 MM4 S2 rd 2 7
7 MM4 S2 cb 2 7
8 MM4 S2 uyi 7 7
</code></pre>
| 688
|
pandas
|
Create Pandas DataFrame from a string
|
https://stackoverflow.com/questions/22604564/create-pandas-dataframe-from-a-string
|
<p>In order to test some functionality I would like to create a <code>DataFrame</code> from a string. Let's say my test data looks like:</p>
<pre><code>TESTDATA="""col1;col2;col3
1;4.4;99
2;4.5;200
3;4.7;65
4;3.2;140
"""
</code></pre>
<p>What is the simplest way to read that data into a Pandas <code>DataFrame</code>?</p>
|
<p>A simple way to do this is to use <a href="https://docs.python.org/2/library/io.html#io.StringIO" rel="noreferrer"><code>StringIO.StringIO</code> (python2)</a> or <a href="https://docs.python.org/3/library/io.html#io.StringIO" rel="noreferrer"><code>io.StringIO</code> (python3)</a> and pass that to the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html#pandas.read_csv" rel="noreferrer"><code>pandas.read_csv</code></a> function. E.g:</p>
<pre><code>import sys
if sys.version_info[0] < 3:
from StringIO import StringIO
else:
from io import StringIO
import pandas as pd
TESTDATA = StringIO("""col1;col2;col3
1;4.4;99
2;4.5;200
3;4.7;65
4;3.2;140
""")
df = pd.read_csv(TESTDATA, sep=";")
</code></pre>
| 689
|
pandas
|
How to split a dataframe string column into two columns?
|
https://stackoverflow.com/questions/14745022/how-to-split-a-dataframe-string-column-into-two-columns
|
<p>I have a data frame with one (string) column and I'd like to split it into two (string) columns, with one column header as '<code>fips'</code> and the other <code>'row'</code></p>
<p>My dataframe <code>df</code> looks like this:</p>
<pre><code> row
0 00000 UNITED STATES
1 01000 ALABAMA
2 01001 Autauga County, AL
3 01003 Baldwin County, AL
4 01005 Barbour County, AL
</code></pre>
<p>I do not know how to use <code>df.row.str[:]</code> to achieve my goal of splitting the row cell. I can use <code>df['fips'] = hello</code> to add a new column and populate it with <code>hello</code>. Any ideas?</p>
<pre><code> fips row
0 00000 UNITED STATES
1 01000 ALABAMA
2 01001 Autauga County, AL
3 01003 Baldwin County, AL
4 01005 Barbour County, AL
</code></pre>
|
<p>There might be a better way, but here's one approach:</p>
<pre><code> row
0 00000 UNITED STATES
1 01000 ALABAMA
2 01001 Autauga County, AL
3 01003 Baldwin County, AL
4 01005 Barbour County, AL
</code></pre>
<pre><code>df = pd.DataFrame(df.row.str.split(' ',1).tolist(),
columns = ['fips','row'])
</code></pre>
<pre><code> fips row
0 00000 UNITED STATES
1 01000 ALABAMA
2 01001 Autauga County, AL
3 01003 Baldwin County, AL
4 01005 Barbour County, AL
</code></pre>
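<p>As a side note not in the original answer: later pandas versions can do the split and the column assignment in one step with <code>expand=True</code>, avoiding the <code>tolist()</code> round-trip:</p>

```python
import pandas as pd

df = pd.DataFrame({'row': ['00000 UNITED STATES', '01000 ALABAMA',
                           '01001 Autauga County, AL']})

# expand=True returns a DataFrame directly;
# n=1 splits only on the first space, keeping "Autauga County, AL" intact
df[['fips', 'row']] = df['row'].str.split(' ', n=1, expand=True)
print(df)
```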
| 690
|
pandas
|
Dropping infinite values from dataframes in pandas?
|
https://stackoverflow.com/questions/17477979/dropping-infinite-values-from-dataframes-in-pandas
|
<p>How do I drop <code>nan</code>, <code>inf</code>, and <code>-inf</code> values from a <code>DataFrame</code> without resetting <code>mode.use_inf_as_null</code>?</p>
<p>Can I tell <code>dropna</code> to include <code>inf</code> in its definition of missing values so that the following works?</p>
<pre><code>df.dropna(subset=["col1", "col2"], how="all")
</code></pre>
|
<p>First <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html" rel="noreferrer"><code>replace()</code></a> infs with NaN:</p>
<pre><code>df.replace([np.inf, -np.inf], np.nan, inplace=True)
</code></pre>
<p>and then drop NaNs via <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html" rel="noreferrer"><code>dropna()</code></a>:</p>
<pre><code>df.dropna(subset=["col1", "col2"], how="all", inplace=True)
</code></pre>
<hr />
<p>For example:</p>
<pre><code>>>> df = pd.DataFrame({"col1": [1, np.inf, -np.inf], "col2": [2, 3, np.nan]})
>>> df
col1 col2
0 1.0 2.0
1 inf 3.0
2 -inf NaN
>>> df.replace([np.inf, -np.inf], np.nan, inplace=True)
>>> df
col1 col2
0 1.0 2.0
1 NaN 3.0
2 NaN NaN
>>> df.dropna(subset=["col1", "col2"], how="all", inplace=True)
>>> df
col1 col2
0 1.0 2.0
1 NaN 3.0
</code></pre>
<hr />
<p><em>The same method also works for <code>Series</code>.</em></p>
| 691
|
pandas
|
Convert a Pandas DataFrame to a dictionary
|
https://stackoverflow.com/questions/26716616/convert-a-pandas-dataframe-to-a-dictionary
|
<p>I have a DataFrame with four columns. I want to convert this DataFrame to a python dictionary. I want the elements of first column be <code>keys</code> and the elements of other columns in the same row be <code>values</code>.</p>
<p>DataFrame:</p>
<pre class="lang-py prettyprint-override"><code> ID A B C
0 p 1 3 2
1 q 4 3 2
2 r 4 0 9
</code></pre>
<p>Output should be like this:</p>
<pre class="lang-py prettyprint-override"><code>{'p': [1,3,2], 'q': [4,3,2], 'r': [4,0,9]}
</code></pre>
|
<p>The <a href="http://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.to_dict.html" rel="noreferrer"><code>to_dict()</code></a> method sets the column names as dictionary keys so you'll need to reshape your DataFrame slightly. Setting the 'ID' column as the index and then transposing the DataFrame is one way to achieve this.</p>
<p><code>to_dict()</code> also accepts an 'orient' argument which you'll need in order to output a <em>list</em> of values for each column. Otherwise, a dictionary of the form <code>{index: value}</code> will be returned for each column.</p>
<p>These steps can be done with the following line:</p>
<pre><code>>>> df.set_index('ID').T.to_dict('list')
{'p': [1, 3, 2], 'q': [4, 3, 2], 'r': [4, 0, 9]}
</code></pre>
<hr>
<p>In case a different dictionary format is needed, here are examples of the possible orient arguments. Consider the following simple DataFrame:</p>
<pre><code>>>> df = pd.DataFrame({'a': ['red', 'yellow', 'blue'], 'b': [0.5, 0.25, 0.125]})
>>> df
a b
0 red 0.500
1 yellow 0.250
2 blue 0.125
</code></pre>
<p>Then the options are as follows.</p>
<p><strong>dict</strong> - the default: column names are keys, values are dictionaries of index:data pairs</p>
<pre><code>>>> df.to_dict('dict')
{'a': {0: 'red', 1: 'yellow', 2: 'blue'},
'b': {0: 0.5, 1: 0.25, 2: 0.125}}
</code></pre>
<p><strong>list</strong> - keys are column names, values are lists of column data</p>
<pre><code>>>> df.to_dict('list')
{'a': ['red', 'yellow', 'blue'],
'b': [0.5, 0.25, 0.125]}
</code></pre>
<p><strong>series</strong> - like 'list', but values are Series</p>
<pre><code>>>> df.to_dict('series')
{'a': 0 red
1 yellow
2 blue
Name: a, dtype: object,
'b': 0 0.500
1 0.250
2 0.125
Name: b, dtype: float64}
</code></pre>
<p><strong>split</strong> - splits columns/data/index as keys with values being column names, data values by row and index labels respectively</p>
<pre><code>>>> df.to_dict('split')
{'columns': ['a', 'b'],
'data': [['red', 0.5], ['yellow', 0.25], ['blue', 0.125]],
'index': [0, 1, 2]}
</code></pre>
<p><strong>records</strong> - each row becomes a dictionary where key is column name and value is the data in the cell</p>
<pre><code>>>> df.to_dict('records')
[{'a': 'red', 'b': 0.5},
{'a': 'yellow', 'b': 0.25},
{'a': 'blue', 'b': 0.125}]
</code></pre>
<p><strong>index</strong> - like 'records', but a dictionary of dictionaries with keys as index labels (rather than a list)</p>
<pre><code>>>> df.to_dict('index')
{0: {'a': 'red', 'b': 0.5},
1: {'a': 'yellow', 'b': 0.25},
2: {'a': 'blue', 'b': 0.125}}
</code></pre>
| 692
|
pandas
|
Convert Pandas Column to DateTime
|
https://stackoverflow.com/questions/26763344/convert-pandas-column-to-datetime
|
<p>I have one field in a pandas DataFrame that was imported as string format.</p>
<p>It should be a datetime variable. How do I convert it to a datetime column, and then filter based on date?</p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>raw_data = pd.DataFrame({'Mycol': ['05SEP2014:00:00:00.000']})
</code></pre>
|
<p>Use the <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="noreferrer"><code>to_datetime</code></a> function, specifying a <a href="http://strftime.org/" rel="noreferrer">format</a> to match your data.</p>
<pre><code>df['Mycol'] = pd.to_datetime(df['Mycol'], format='%d%b%Y:%H:%M:%S.%f')
</code></pre>
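<p>The question also asks about filtering based on date; once the column has a datetime dtype, ordinary comparisons work. A minimal sketch:</p>

```python
import pandas as pd

raw_data = pd.DataFrame({'Mycol': ['05SEP2014:00:00:00.000',
                                   '01JAN2013:00:00:00.000']})
raw_data['Mycol'] = pd.to_datetime(raw_data['Mycol'],
                                   format='%d%b%Y:%H:%M:%S.%f')

# with a datetime64 column, comparison against a date string filters rows
recent = raw_data[raw_data['Mycol'] > '2014-01-01']
print(recent)
```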
| 693
|
pandas
|
How to determine whether a Pandas Column contains a particular value
|
https://stackoverflow.com/questions/21319929/how-to-determine-whether-a-pandas-column-contains-a-particular-value
|
<p>I am trying to determine whether there is an entry in a Pandas column that has a particular value. I tried to do this with <code>if x in df['id']</code>. I thought this was working, except when I fed it a value that I knew was not in the column <code>43 in df['id']</code> it still returned <code>True</code>. When I subset to a data frame only containing entries matching the missing id <code>df[df['id'] == 43]</code> there are, obviously, no entries in it. How do I determine if a column in a Pandas data frame contains a particular value, and why doesn't my current method work? (FYI, I have the same problem when I use the implementation in this <a href="https://stackoverflow.com/a/19630449/2327821">answer</a> to a similar question).</p>
|
<p><code>in</code> of a Series checks whether the value is in the index:</p>
<pre><code>In [11]: s = pd.Series(list('abc'))
In [12]: s
Out[12]:
0 a
1 b
2 c
dtype: object
In [13]: 1 in s
Out[13]: True
In [14]: 'a' in s
Out[14]: False
</code></pre>
<p>One option is to see if it's in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unique.html">unique</a> values:</p>
<pre><code>In [21]: s.unique()
Out[21]: array(['a', 'b', 'c'], dtype=object)
In [22]: 'a' in s.unique()
Out[22]: True
</code></pre>
<p>or a python set:</p>
<pre><code>In [23]: set(s)
Out[23]: {'a', 'b', 'c'}
In [24]: 'a' in set(s)
Out[24]: True
</code></pre>
<p>As pointed out by @DSM, it may be more efficient (especially if you're just doing this for one value) to just use in directly on the values:</p>
<pre><code>In [31]: s.values
Out[31]: array(['a', 'b', 'c'], dtype=object)
In [32]: 'a' in s.values
Out[32]: True
</code></pre>
| 694
|
pandas
|
How to invert the x or y axis
|
https://stackoverflow.com/questions/2051744/how-to-invert-the-x-or-y-axis
|
<p>I have a scatter plot graph with a bunch of random x, y coordinates. Currently the Y-Axis starts at 0 and goes up to the max value. I would like the Y-Axis to start at the max value and go up to 0.</p>
<pre><code>points = [(10,5), (5,11), (24,13), (7,8)]
x_arr = []
y_arr = []
for x,y in points:
x_arr.append(x)
y_arr.append(y)
plt.scatter(x_arr,y_arr)
</code></pre>
|
<p>There is a new API that makes this even simpler.</p>
<pre><code>plt.gca().invert_xaxis()
</code></pre>
<p>and/or</p>
<pre><code>plt.gca().invert_yaxis()
</code></pre>
| 695
|
pandas
|
How to show all columns' names on a large pandas dataframe?
|
https://stackoverflow.com/questions/49188960/how-to-show-all-columns-names-on-a-large-pandas-dataframe
|
<p>I have a dataframe that consist of hundreds of columns, and I need to see all column names.</p>
<p>What I did:</p>
<pre><code>In[37]:
data_all2.columns
</code></pre>
<p>The output is:</p>
<pre><code>Out[37]:
Index(['customer_id', 'incoming', 'outgoing', 'awan', 'bank', 'family', 'food',
'government', 'internet', 'isipulsa',
...
'overdue_3months_feature78', 'overdue_3months_feature79',
'overdue_3months_feature80', 'overdue_3months_feature81',
'overdue_3months_feature82', 'overdue_3months_feature83',
'overdue_3months_feature84', 'overdue_3months_feature85',
'overdue_3months_feature86', 'loan_overdue_3months_total_y'],
dtype='object', length=102)
</code></pre>
<p>How do I show <em>all</em> columns, instead of a truncated list?</p>
|
<p>You can globally set printing options. I think this should work:</p>
<p><strong>Method 1:</strong></p>
<pre><code>pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
</code></pre>
<p><strong>Method 2:</strong></p>
<pre><code>pd.options.display.max_columns = None
pd.options.display.max_rows = None
</code></pre>
<p>This will allow you to see all column names &amp; rows when you are doing <code>.head()</code>. None of the column names will be truncated.</p>
<hr />
<p>If you just want to see the column names you can do:</p>
<pre><code>print(df.columns.tolist())
</code></pre>
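<p>A small sketch of round-tripping the options: if you only need the full view temporarily, <code>pd.reset_option</code> restores the defaults afterwards so other output stays compact:</p>

```python
import pandas as pd

# show every column instead of the truncated "..." view
pd.set_option('display.max_columns', None)

# ... inspect the frame here ...

# restore the library default afterwards
pd.reset_option('display.max_columns')
print(pd.get_option('display.max_columns'))
```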
| 696
|
pandas
|
Appending to an empty DataFrame in Pandas?
|
https://stackoverflow.com/questions/16597265/appending-to-an-empty-dataframe-in-pandas
|
<p>Is it possible to append to an empty data frame that doesn't contain any indices or columns?</p>
<p>I have tried to do this, but keep getting an empty dataframe at the end.</p>
<p>e.g.</p>
<pre><code>import pandas as pd
df = pd.DataFrame()
data = ['some kind of data here' --> I have checked the type already, and it is a dataframe]
df.append(data)
</code></pre>
<p>The result looks like this:</p>
<pre><code>Empty DataFrame
Columns: []
Index: []
</code></pre>
|
<p>The answers are very useful, but since <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.append.html#" rel="noreferrer"><code>pandas.DataFrame.append</code></a> was deprecated (as already mentioned by various users), and the answers using <a href="https://pandas.pydata.org/docs/reference/api/pandas.concat.html#" rel="noreferrer"><code>pandas.concat</code></a> are not "Runnable Code Snippets" I would like to add the following snippet:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(columns =['name','age'])
row_to_append = pd.DataFrame([{'name':"Alice", 'age':"25"},{'name':"Bob", 'age':"32"}])
df = pd.concat([df,row_to_append])
</code></pre>
<p>So <code>df</code> is now:</p>
<pre><code> name age
0 Alice 25
1 Bob 32
</code></pre>
| 697
|
pandas
|
Convert DataFrame column type from string to datetime
|
https://stackoverflow.com/questions/17134716/convert-dataframe-column-type-from-string-to-datetime
|
<p>How can I convert a DataFrame column of strings (in <em><strong>dd/mm/yyyy</strong></em> format) to datetime dtype?</p>
|
<p>The easiest way is to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="noreferrer"><code>to_datetime</code></a>:</p>
<pre><code>df['col'] = pd.to_datetime(df['col'])
</code></pre>
<p>It also offers a <code>dayfirst</code> argument for European times (but beware <a href="https://github.com/pydata/pandas/issues/3341" rel="noreferrer">this isn't strict</a>).</p>
<p>Here it is in action:</p>
<pre><code>In [11]: pd.to_datetime(pd.Series(['05/23/2005']))
Out[11]:
0 2005-05-23 00:00:00
dtype: datetime64[ns]
</code></pre>
<p>You can pass a specific <a href="https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior" rel="noreferrer">format</a>:</p>
<pre><code>In [12]: pd.to_datetime(pd.Series(['05/23/2005']), format="%m/%d/%Y")
Out[12]:
0 2005-05-23
dtype: datetime64[ns]
</code></pre>
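<p>A minimal sketch of the <code>dayfirst</code> hint mentioned above (note it is a hint, not a strict guarantee):</p>

```python
import pandas as pd

# dayfirst=True treats '01/05/2005' as 1 May rather than 5 January
s = pd.to_datetime(pd.Series(['01/05/2005']), dayfirst=True)
print(s.iloc[0])
```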
| 698
|
pandas
|
python pandas remove duplicate columns
|
https://stackoverflow.com/questions/14984119/python-pandas-remove-duplicate-columns
|
<p>What is the easiest way to remove duplicate columns from a dataframe?</p>
<p>I am reading a text file that has duplicate columns via:</p>
<pre><code>import pandas as pd
df=pd.read_table(fname)
</code></pre>
<p>The column names are:</p>
<pre><code>Time, Time Relative, N2, Time, Time Relative, H2, etc...
</code></pre>
<p>All the Time and Time Relative columns contain the same data. I want:</p>
<pre><code>Time, Time Relative, N2, H2
</code></pre>
<p>All my attempts at dropping, deleting, etc such as:</p>
<pre><code>df=df.T.drop_duplicates().T
</code></pre>
<p>Result in uniquely valued index errors:</p>
<pre><code>Reindexing only valid with uniquely valued index objects
</code></pre>
<p>Sorry for being a Pandas noob. Any Suggestions would be appreciated.</p>
<hr>
<p><strong>Additional Details</strong></p>
<p>Pandas version: 0.9.0<br>
Python Version: 2.7.3<br>
Windows 7<br>
(installed via Pythonxy 2.7.3.0)</p>
<p>data file (note: in the real file, columns are separated by tabs, here they are separated by 4 spaces):</p>
<pre><code>Time Time Relative [s] N2[%] Time Time Relative [s] H2[ppm]
2/12/2013 9:20:55 AM 6.177 9.99268e+001 2/12/2013 9:20:55 AM 6.177 3.216293e-005
2/12/2013 9:21:06 AM 17.689 9.99296e+001 2/12/2013 9:21:06 AM 17.689 3.841667e-005
2/12/2013 9:21:18 AM 29.186 9.992954e+001 2/12/2013 9:21:18 AM 29.186 3.880365e-005
... etc ...
2/12/2013 2:12:44 PM 17515.269 9.991756+001 2/12/2013 2:12:44 PM 17515.269 2.800279e-005
2/12/2013 2:12:55 PM 17526.769 9.991754e+001 2/12/2013 2:12:55 PM 17526.769 2.880386e-005
2/12/2013 2:13:07 PM 17538.273 9.991797e+001 2/12/2013 2:13:07 PM 17538.273 3.131447e-005
</code></pre>
|
<p>Here's a one line solution to remove columns based on duplicate <strong>column names</strong>:</p>
<pre><code>df = df.loc[:,~df.columns.duplicated()].copy()
</code></pre>
<p><strong>How it works:</strong></p>
<p>Suppose the columns of the data frame are <code>['alpha','beta','alpha']</code></p>
<p><code>df.columns.duplicated()</code> returns a boolean array: a <code>True</code> or <code>False</code> for each column. If it is <code>False</code> then the column name is unique up to that point, if it is <code>True</code> then the column name is duplicated earlier. For example, using the given example, the returned value would be <code>[False,False,True]</code>.</p>
<p><code>Pandas</code> allows one to index using boolean values whereby it selects only the <code>True</code> values. Since we want to keep the unduplicated columns, we need the above boolean array to be flipped (ie <code>[True, True, False] = ~[False,False,True]</code>)</p>
<p>Finally, <code>df.loc[:,[True,True,False]]</code> selects only the non-duplicated columns using the aforementioned indexing capability.</p>
<p>The final <code>.copy()</code> is there to copy the dataframe to (mostly) avoid getting errors about trying to modify an existing dataframe later down the line.</p>
<p><strong>Note</strong>: the above only checks columns names, <em>not</em> column values.</p>
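<p>Putting the pieces above together on the <code>['alpha','beta','alpha']</code> example, a minimal runnable sketch:</p>

```python
import pandas as pd

# toy frame with the duplicated column names from the explanation
df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=['alpha', 'beta', 'alpha'])

print(df.columns.duplicated())  # third column is flagged as a duplicate

# keep only the columns whose names have not appeared before
deduped = df.loc[:, ~df.columns.duplicated()].copy()
print(list(deduped.columns))
```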
<h5>To remove duplicated indexes</h5>
<p>Since it is similar enough, do the same thing on the index:</p>
<pre><code>df = df.loc[~df.index.duplicated(),:].copy()
</code></pre>
<h3>To remove duplicates by checking values without transposing</h3>
<p><strong>Update and caveat</strong>: please be careful in applying this. Per the counter-example provided by DrWhat in the comments, this solution may <strong>not</strong> have the desired outcome in all cases.</p>
<pre><code>df = df.loc[:,~df.apply(lambda x: x.duplicated(),axis=1).all()].copy()
</code></pre>
<p>This avoids the issue of transposing. Is it fast? No. Does it work? In some cases. Here, try it on this:</p>
<pre><code># create a large(ish) dataframe
ldf = pd.DataFrame(np.random.randint(0,100,size= (736334,1312)))
#to see size in gigs
#ldf.memory_usage().sum()/1e9 #it's about 3 gigs
# duplicate a column
ldf.loc[:,'dup'] = ldf.loc[:,101]
# take out duplicated columns by values
ldf = ldf.loc[:,~ldf.apply(lambda x: x.duplicated(),axis=1).all()].copy()
</code></pre>
| 699
|
tenserflow
|
pip install tenserflow in command
|
https://stackoverflow.com/questions/54931918/pip-install-tenserflow-in-command
|
<p>I checked them,
python --version
Python 3.7.1</p>
<p>pip --version
pip 18.1</p>
<p>and I'm using Windows 10.</p>
<p>But when I run <code>pip install tenserflow</code> on the command line, the following error comes out.</p>
<p>pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.</p>
<pre class="lang-none prettyprint-override"><code>Could not fetch URL https://pypi.org/simple/pip/:
There was a problem confirming the ssl certificate:
HTTPSConnectionPool(host='pypi.org', port=443):
Max retries exceeded with url: /simple/pip/
(Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
</code></pre>
<p>Why?</p>
|
<p>First, update pip by</p>
<pre><code>pip install --upgrade pip
</code></pre>
<p>Then install TensorFlow using</p>
<pre><code>pip install tensorflow
</code></pre>
| 700
|
tenserflow
|
Installing Tenserflow on GPU NVIDIA GeForce MX230
|
https://stackoverflow.com/questions/67633123/installing-tenserflow-on-gpu-nvidia-geforce-mx230
|
<p>Is it worth installing Tenserflow on a <strong>NVIDIA GeForce MX230</strong> (It is CUDA Supported ). <a href="https://www.nvidia.com/en-us/geforce/gaming-laptops/geforce-mx230/specifications/" rel="nofollow noreferrer">https://www.nvidia.com/en-us/geforce/gaming-laptops/geforce-mx230/specifications/</a> or should I just install it on a <strong>intel i5 10th gen CPU</strong>.</p>
|
<p>TensorFlow is going to perform better with a GPU, but the MX230 isn't that powerful. I also have an MX230, but I use Google Colab instead.</p>
| 701
|
tenserflow
|
Training Tenserflow Model for Speech Recognition in React
|
https://stackoverflow.com/questions/70567700/training-tenserflow-model-for-speech-recognition-in-react
|
<p>I'm using Electron + ReactJS and Tenserflow.</p>
<p>I want to have like a collection of 500-1000 words like 'dog', 'newline', 'cat' be recognized when i talk.</p>
<ol>
<li>How much time can it take for the model to be trained with 500 words? I used 5 words and it took a bit of time. I don't want to have a loader and to take too much time to train on client. Can i train the model on server and fetch it to the user, or do i need to train it everytime he enters the app?</li>
<li>I tried using model training but it doesn't work. Also i'm not even talking and it shows random words. I didn't find much information about model training in tenserflow javascript. If collectExample just transfers the words, how can i train the model with custom words?</li>
</ol>
<p>Also i'm quite new to Tenserflow. Here is the code:</p>
<pre><code>const loadModel = async () => {
setLoading(true);
// start loading model
const recognizer = await speech.create('BROWSER_FFT');
// check if model is loaded
await recognizer.ensureModelLoaded();
const transferRecognizer = recognizer.createTransfer('programming');
await transferRecognizer.collectExample('cat');
await transferRecognizer.collectExample('dog');
await transferRecognizer.collectExample('newline');
await transferRecognizer.collectExample('_background_noise_');
await transferRecognizer.collectExample('newline');
await transferRecognizer.collectExample('dog');
await transferRecognizer.collectExample('cat');
await transferRecognizer.collectExample('_background_noise_');
await transferRecognizer.train({
epochs: 25,
callback: {
onEpochEnd: async (epoch, logs) => {
console.log(`Epoch ${epoch}: loss=${logs.loss}, accuracy=${logs.acc}`);
}
}
});
setModel(transferRecognizer);
// store command word list to state
console.log('transferRecognizer.wordLabels():', transferRecognizer.wordLabels());
setLabels(transferRecognizer.wordLabels());
setLoading(false);
};
</code></pre>
<pre><code>
const recognizeCommands = async () => {
model?.listen(
result => {
// add argMax function
setAction(labels[argMax(Object.values(result.scores))]);
},
{ includeSpectrogram: true, probabilityThreshold: 0.9 }
);
};
</code></pre>
| 702
|
|
tenserflow
|
how to check if tenserflow is using gpu?
|
https://stackoverflow.com/questions/60863574/how-to-check-if-tenserflow-is-using-gpu
|
<p>I am using a Jupyter notebook for training a neural network. I chose tensorflow-gpu in the Anaconda applications, however I don't think it is using the GPU. How can I check whether it is using the GPU for processing?</p>
|
<p>You could use</p>
<pre><code>tf.config.list_physical_devices('GPU')
</code></pre>
<p>for TensorFlow 2.1 and later.
Also check the documentation found <a href="https://www.tensorflow.org/api_docs/python/tf/config/list_physical_devices" rel="nofollow noreferrer">here</a></p>
| 703
|
tenserflow
|
problem while installing tenserflow on EC2 machine the process gets killed
|
https://stackoverflow.com/questions/62701637/problem-while-installing-tenserflow-on-ec2-machine-the-process-gets-killed
|
<p>Whenever I try to install tenserflow on my free tier eligible ec2 machine on AWS it downloads and then the further process gets killed I tried many things but didn't get any solution for the same<a href="https://i.sstatic.net/hxw1Z.png" rel="nofollow noreferrer">as you can see the tenserflow has been downloaded and then there is a keyword "killed" and the process exits </a></p>
|
<p>I found the solution. There were multiple factors; one of them was that the machine's configuration didn't support TensorFlow, which is why the job was getting killed.</p>
| 704
|
tenserflow
|
No MSE in tenserflow trainmodel in JS
|
https://stackoverflow.com/questions/78538556/no-mse-in-tenserflow-trainmodel-in-js
|
<p>I am trying to train a TensorFlow model in JS but I can't get the MSE score. It returns NaN even in the epoch logs, so I am not sure it trains as it should. This is part of my code:</p>
<pre><code>// Check tensors
checkForNaNs(xs, 'Features');
//returns :
//Features does not contains NaNs
//Features does not contains Infinities
checkForNaNs(ys, 'Labels');
//returns :
//Labels does not contains NaNs
//Labels does not contains Infinities
xs = normalizeTensor(xs);
const optimizerType = 'sgd';
const lossFunction = 'meanSquaredError';
let model = createRegressionModel(features[0].length);
const metrics = 'mse';
await trainModel(model, xs, ys, optimizerType, lossFunction, metrics);
const evalOutput = await model.evaluate(xs, ys);
console.log(`Debug: evalOutput: ${evalOutput}`)
// returns debug: evalOutput: Tensor
// NaN, Tensor
// NaN
const mse = evalOutput[0].dataSync()[0]; // Access the first element for MSE
console.log(`Mean Squared Error (MSE): ${mse}`);
//returns Mean Squared Error (MSE): NaN
//Functions I use:
function createRegressionModel(inputShape) {
return tf.sequential({
layers: [
tf.layers.dense({ inputShape: [inputShape], units: 10, activation: 'relu' }),
tf.layers.dense({ units: 1, activation: 'linear' })
]
});
}
async function trainModel(model, xTrain, yTrain, xValidation, yValidation, optimizerType, lossFunction, metrics) {
model.compile({ optimizer: optimizerType, loss: lossFunction, metrics: metrics });
console.log(`Training model using metrix: ${metrics}`);
// returns Training model using metrix: mse
await model.fit(xTrain, yTrain, {
epochs: 10,
validationData: validationData,
callbacks: {
onEpochEnd: (epoch, logs) => {
console.log(logs);
console.log(`Epoch ${epoch + 1}: loss = ${logs.loss}, MSE = ${logs.mse}, val_loss = ${logs.val_loss}, val_MSE = ${logs.val_mse}`);
// returns for example: Epoch 1: loss = NaN, MSE = NaN, val_loss = undefined, val_MSE = undefined
}
}
});
}
function checkForNaNs(tensor, tensorName) {
if (tensor.isNaN().any().dataSync()[0]) {
console.log(`${tensorName} contains NaNs`);
} else {
console.log(`${tensorName} does not contains NaNs`);
}
if (tensor.isInf().any().dataSync()[0]) {
console.log(`${tensorName} contains Infinities`);
} else {
console.log(`${tensorName} does not contains Infinities`);
}
}
function normalizeTensor(tensor) {
const mean = tensor.mean(0);
const std = tensor.sub(mean).square().mean(0).sqrt();
return tensor.sub(mean).div(std);
}
</code></pre>
<p>I think this is all the code that matters. The code before just grabs the data from a csv file and splits it into ys and xs like :</p>
<pre><code>let xs = tf.tensor2d(features, [features.length, features[0].length]);
let ys = tf.tensor2d(labels, [labels.length, 1]);
</code></pre>
|
<p>The problem was in the optimizerType: I changed it to Adam and it worked on the first try!</p>
| 705
|
tenserflow
|
How to switch keras backend from tenserflow to theano?
|
https://stackoverflow.com/questions/65475584/how-to-switch-keras-backend-from-tenserflow-to-theano
|
<p>I need Keras for my work; it needs TensorFlow, but I'm running 32-bit Windows. So I decided to switch the Keras backend. I created a json file in "C/USERS/admin/.keras"; it looks like:</p>
<pre><code>{
"image_data_format":"channels first",
"epsilon": 1e-07,
"floatx": "float32",
"backend": "theano"
}
</code></pre>
<p>I try to import keras but the problem the same.</p>
<p><a href="https://i.sstatic.net/wLaef.png" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>Maybe should I change anything in init.py?</p>
<pre><code>try:
from tensorflow.keras.layers.experimental.preprocessing import RandomRotation
except ImportError:
raise ImportError(
'Keras requires TensorFlow 2.2 or higher. '
'Install TensorFlow via `pip install tensorflow`')
from . import utils
from . import activations
from . import applications
from . import backend
from . import datasets
from . import engine
from . import layers
from . import preprocessing
from . import wrappers
from . import callbacks
from . import constraints
from . import initializers
from . import metrics
from . import models
from . import losses
from . import optimizers
from . import regularizers
# Also importable from root
from .layers import Input
from .models import Model
from .models import Sequential
</code></pre>
| 706
|
tenserflow
|
why I can not use tenserflow and Keras in my Jupyter notebook?
|
https://stackoverflow.com/questions/60728132/why-i-can-not-use-tenserflow-and-keras-in-my-jupyter-notebook
|
<p>I successfully installed TensorFlow and Keras, but when I import them in a Jupyter notebook I get</p>
<pre><code>ModuleNotFoundError
</code></pre>
<p>but in my VS Code it is working! What is the problem?</p>
<p>I use a Mac; I installed them in the terminal like this:</p>
<pre><code>pip3 install --upgrade tensorflow
pip install tensorflow
pip install keras -U
</code></pre>
<p>Then I did this in the terminal:</p>
<p><code>conda create -n keras python=3.7.6</code></p>
<pre><code>conda activate keras
conda install keras
</code></pre>
| 707
|
|
tenserflow
|
Installing/Using tenserflow
|
https://stackoverflow.com/questions/74526250/installing-using-tenserflow
|
<p>I am getting this error when trying to import tensorflow:</p>
<blockquote>
<p>File "/Users/alexkaram/untitled3.py", line 7, in
import tensorflow as tf
ModuleNotFoundError: No module named 'tensorflow'</p>
</blockquote>
<p>My Spyder version is 5.3.3 and anaconda navigator 2.3.1.</p>
<p>I tried to install tensorflow but am unsure if I was successful.</p>
<p>I tried the following:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
</code></pre>
| 708
|
|
tenserflow
|
long() argument must be a string or a number error in tenserflow
|
https://stackoverflow.com/questions/49627073/long-argument-must-be-a-string-or-a-number-error-in-tenserflow
|
<p>I'm trying to run an object detection code using TensorFlow</p>
<pre><code>import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from utils import label_map_util
from utils import visualization_utils as vis_util
# # Model preparation
# Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_CKPT` to point to a new .pb file.
# By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
NUM_CLASSES = 90
# ## Download Model
if not os.path.exists(MODEL_NAME + '/frozen_inference_graph.pb'):
print ('Downloading the model')
opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
file_name = os.path.basename(file.name)
if 'frozen_inference_graph.pb' in file_name:
tar_file.extract(file, os.getcwd())
print ('Download complete')
else:
print ('Model already exists')
# ## Load a (frozen) Tensorflow model into memory.
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
# ## Loading label map
# Label maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
#intializing the web camera device
import cv2
cap = cv2.VideoCapture(0)
# Running the tensorflow session
with detection_graph.as_default():
with tf.Session(graph=detection_graph) as sess:
ret = True
while (ret):
ret,image_np = cap.read()
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represent how level of confidence for each of the objects.
# Score is shown on the result image, together with the class label.
scores = detection_graph.get_tensor_by_name('detection_scores:0')
classes = detection_graph.get_tensor_by_name('detection_classes:0')
num_detections = detection_graph.get_tensor_by_name('num_detections:0')
# Actual detection.
(boxes, scores, classes, num_detections) = sess.run(
[boxes, scores, classes, num_detections],
feed_dict={image_tensor: image_np_expanded})
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
np.squeeze(boxes),
np.squeeze(classes).astype(np.int32),
np.squeeze(scores),
category_index,
use_normalized_coordinates=True,
line_thickness=8)
# plt.figure(figsize=IMAGE_SIZE)
# plt.imshow(image_np)
cv2.imshow('image',cv2.resize(image_np,(1280,960)))
if cv2.waitKey(25) & 0xFF == ord('q'):
cv2.destroyAllWindows()
cap.release()
break
</code></pre>
<p>But it shows me this problem</p>
<pre><code>Traceback (most recent call last):
File "object_detection_webcam.py", line 97, in <module>
feed_dict={image_tensor: image_np_expanded})
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 778, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 954, in _run
np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
File "/usr/lib/python2.7/dist-packages/numpy/core/numeric.py", line 531, in asarray
return array(a, dtype, copy=False, order=order)
TypeError: long() argument must be a string or a number, not 'NoneType'
</code></pre>
<p>We're using TensorFlow on a Raspberry Pi 3 with the Raspberry Pi camera.</p>
<p>We tried importing every library and it works (all imported libraries are installed).</p>
<p>We added the <code>tensorflow/models/research/slim</code> folder to the <code>$PYTHONPATH</code></p>
<p>What could this problem cause be, and how can We solve it?</p>
|
<p>The problem is with this part:</p>
<pre><code>ret = True
while (ret):
ret,image_np = cap.read()
</code></pre>
<p>After the last frame it will read <code>None</code> into image_np, but the inside of your loop still runs and tries to feed that <code>None</code> to the network, and would only stop when it gets to the while again.</p>
<p>You need to restructure similar to this:</p>
<pre><code> while ( True ):
ret, image_np = cap.read()
if not ret:
break
</code></pre>
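<p>The failure mode can be illustrated without OpenCV or a camera at all. The fake reader and frame data below are invented purely to show why the loop must check <code>ret</code> before using the frame:</p>
<pre><code># A hypothetical stand-in for cv2.VideoCapture.read(): returns (ret, frame),
# and (False, None) once the stream is exhausted.
class FakeCapture:
    def __init__(self, frames):
        self.frames = list(frames)

    def read(self):
        if self.frames:
            return True, self.frames.pop(0)
        return False, None

cap = FakeCapture(["frame0", "frame1", "frame2"])

processed = []
while True:
    ret, image = cap.read()
    if not ret:          # guard BEFORE touching the frame
        break
    processed.append(image.upper())  # safe: image is never None here

print(processed)  # -> ['FRAME0', 'FRAME1', 'FRAME2']
</code></pre>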
| 709
|
tenserflow
|
NMT Using Tenserflow
|
https://stackoverflow.com/questions/73656822/nmt-using-tenserflow
|
<p>My Code:</p>
<pre><code>vocab_size = 10000
total_sentences = 25000
maxlen = 10
epochs = 50
validation_split = 0.05
</code></pre>
<pre><code>split = int(0.95 * total_sentences)
X_train = [encoder_inputs[:split], decoder_inputs[:split]]
y_train = decoder_outputs[:split]
# Test data to evaluate our NMT model using BLEU score
X_test = en_data[:split]
y_test = hi_data[:split]
print(X_train[0].shape, X_train[1].shape, y_train.shape)
</code></pre>
<p>This is the main part which I've tried tweaking many times but still getting various kinds of errors:</p>
<pre><code>d_model = 256
inputs = tf.keras.layers.Input(shape=(None,))
x = tf.keras.layers.Embedding(english_vocab_size, d_model, mask_zero=True)(inputs)
_,state_h,state_c = tf.keras.layers.LSTM(d_model,activation='relu',return_state=True)(x)
targets = tf.keras.layers.Input(shape=(None,))
embedding_layer = tf.keras.layers.Embedding(hindi_vocab_size, d_model, mask_zero=True)
x = embedding_layer(targets)
decoder_lstm = tf.keras.layers.LSTM(d_model,activation='relu',return_sequences=True, return_state=True)
x,_,_ = decoder_lstm(x, initial_state=[state_h, state_c])
dense1 = tf.keras.layers.Dense(hindi_vocab_size, activation='softmax')
x = dense1(x)
model = tf.keras.models.Model(inputs=[inputs, targets],outputs=x)
model.summary()
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
</code></pre>
<p>My Model:</p>
<pre><code>Model: "model_3"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_7 (InputLayer) [(None, None)] 0 []
input_8 (InputLayer) [(None, None)] 0 []
embedding_6 (Embedding) (None, None, 256) 2053120 ['input_7[0][0]']
embedding_7 (Embedding) (None, None, 256) 2405120 ['input_8[0][0]']
lstm_6 (LSTM) [(None, 256), 525312 ['embedding_6[0][0]']
(None, 256),
(None, 256)]
lstm_7 (LSTM) [(None, None, 256), 525312 ['embedding_7[0][0]',
(None, 256), 'lstm_6[0][1]',
(None, 256)] 'lstm_6[0][2]']
dense_3 (Dense) (None, None, 9395) 2414515 ['lstm_7[0][0]']
==================================================================================================
Total params: 7,923,379
Trainable params: 7,923,379
Non-trainable params: 0
__________________________________________________________________________________________________
</code></pre>
<p>I get error while trying to fit the above model:</p>
<pre><code>model.fit(X_train, y_train, epochs=epochs, validation_split=validation_split, callbacks=[save_model_callback, tf.keras.callbacks.TerminateOnNaN()])
</code></pre>
<p>ERROR:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-25-b2b3107dbda8> in <module>
----> 1 model.fit(X_train, y_train, epochs=epochs, validation_split=validation_split, callbacks=[save_model_callback, tf.keras.callbacks.TerminateOnNaN()])
1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1145 except Exception as e: # pylint:disable=broad-except
1146 if hasattr(e, "ag_error_metadata"):
-> 1147 raise e.ag_error_metadata.to_exception(e)
1148 else:
1149 raise
ValueError: Shapes (None, 10) and (None, 10, 9395) are incompatible
</code></pre>
<p>My Tries:</p>
<p>I tried changing the loss function:</p>
<pre><code>d_model = 256
inputs = tf.keras.layers.Input(shape=(None,))
x = tf.keras.layers.Embedding(english_vocab_size, d_model, mask_zero=True)(inputs)
_,state_h,state_c = tf.keras.layers.LSTM(d_model,activation='relu',return_state=True)(x)
targets = tf.keras.layers.Input(shape=(None,))
embedding_layer = tf.keras.layers.Embedding(hindi_vocab_size, d_model, mask_zero=True)
x = embedding_layer(targets)
decoder_lstm = tf.keras.layers.LSTM(d_model,activation='relu',return_sequences=False, return_state=True)
x,_,_ = decoder_lstm(x, initial_state=[state_h, state_c])
dense1 = tf.keras.layers.Dense(hindi_vocab_size, activation='softmax')
x = dense1(x)
model = tf.keras.models.Model(inputs=[inputs, targets],outputs=x)
model.summary()
loss = tf.keras.losses.SparseCategoricalCrossentropy()
model.compile(optimizer='rmsprop', loss=loss, metrics=['accuracy'])
</code></pre>
<p>Error then:</p>
<pre><code>InvalidArgumentError: {{function_node __inference_train_function_11558}} logits and labels must have the same first dimension, got logits shape [32,9395] and labels shape [320]
[[{{node sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits}}]]
</code></pre>
<p>However all these errors get removed when I set <code>return_sequences=True</code> but then after fitting and predicting I get:</p>
<pre><code>Input 0 of layer "lstm_1" is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 256)
</code></pre>
| 710
|
|
tenserflow
|
Read .mp4 file from firebase storage using fs to send that video to tenserflow model
|
https://stackoverflow.com/questions/62159550/read-mp4-file-from-firebase-storage-using-fs-to-send-that-video-to-tenserflow-m
|
<p>My graduation project is to convert video into text.
I'm trying to read a video uploaded to Firebase Storage and sent from an Android app, in order to pass it to a TensorFlow model,
but I can't read the video.</p>
<h3>here is my function:</h3>
<pre><code>exports.readVideo = functions.storage
.object()
.onFinalize(async (object) => {
const bucket = admin.storage().bucket(object.bucket);
const tempFilePath = path.join(os.tmpdir(), object.name);
console.log(tempFilePath);
console.log('download');
// note download
await bucket
.file(object.name!)
.download({
destination: tempFilePath,
})
.then()
.catch((err) => {
console.log({
type: 'download',
err: err,
});
});
console.log('read');
// note read
let stream = await bucket
.file(object.name!)
.createReadStream({
start: 10000,
end: 20000,
})
.on('error', function (err) {
console.log('error 1');
console.log({ error: err });
})
await new Promise((resolve, reject) => {
console.log('error 2');
stream.on('finish', resolve);
console.log('error 3');
stream.on('error', reject);
console.log("end!")
stream.on('end', resolve);
}).catch((error) => {
// successMessage is whatever we passed in the resolve(...) function above.
// It doesn't have to be a string, but if it is only a succeed message, it probably will be.
console.log("oups! " + error)
});
console.log('tempFile size2', fs.statSync(tempFilePath).size);
return fs.unlinkSync(tempFilePath);
});
</code></pre>
<p>and I got that error:</p>
<blockquote>
<p>Function execution took 60008 ms, finished with status: 'timeout'</p>
</blockquote>
|
<p>The regular file system on Cloud Functions is read-only. The only place you can write to is <code>/tmp</code>, as also shown in the documentation on <a href="https://cloud.google.com/functions/docs/concepts/exec#file_system" rel="nofollow noreferrer">file system access in Cloud Functions</a>. I'm not sure why <code>os.tmpdir()</code> doesn't give you a location at that path, but you might want to hard-code the directory.</p>
<p>One thing to keep in mind: <code>/tmp</code> is a RAM disk and not a physical disk, so your allocated memory will need to have enough space for the files you write to it.</p>
| 711
|
tenserflow
|
tenserflow kernel keeps dying after installing "pydot (version 1.4.1)" and "python-graphviz (version 0.8.4)"
|
https://stackoverflow.com/questions/71845340/tenserflow-kernel-keeps-dying-after-installing-pydot-version-1-4-1-and-pyth
|
<p>I installed "pydot (version 1.4.1)" and "python-graphviz (version 0.8.4)" to my tensorflow environment in anaconda. Now my tenserflow kernel keeps dying. I did get this warning once when I was trying to import the tensorflow libraries.</p>
<pre><code>C:\Users\lbasnet\Anaconda3\envs\tflow\lib\site-packages\h5py\__init__.py:40: UserWarning: h5py is running against HDF5 1.10.5 when it was built against 1.10.4, this may cause problems '{0}.{1}.{2}'.format(*version.hdf5_built_version_tuple)
</code></pre>
<p>Any idea how I can resolve this?</p>
|
<p>I got it resolved by myself. I uninstalled h5py <code>pip uninstall h5py</code> and reinstalled it <code>pip install h5py</code></p>
<p>If you were trying to get <code>plot_model</code> to work and ended up with the above issue I faced, the following links can be very helpful to get "pydot" and "graphviz" to work.</p>
<ol>
<li><a href="https://github.com/XifengGuo/CapsNet-Keras/issues/7#issuecomment-394536439" rel="nofollow noreferrer">Link 1 for "pydot"</a></li>
<li><a href="https://github.com/XifengGuo/CapsNet-Keras/issues/7#issuecomment-370745440" rel="nofollow noreferrer">Link 2 for "pydot"</a></li>
<li><a href="https://stackoverflow.com/questions/28312534/graphvizs-executables-are-not-found-python-3-4">Link 3 for "graphviz"</a>, refer to the answer by Silvia Bakalova if you are a windows user.</li>
</ol>
| 712
|
tenserflow
|
How to deploy python project with tenserflow to exe file?And is it even possible?
|
https://stackoverflow.com/questions/66656250/how-to-deploy-python-project-with-tenserflow-to-exe-fileand-is-it-even-possible
|
<p>I have a project based on ImageAI (the code is taken directly from the documentation), which uses tenserflow, keras and other dependencies, and it needs to be packed into an exe file.</p>
<p>The problem is that, so far, I haven't been able to do it. I've been using the PyInstaller library, but with it I cannot create a working exe file: the exe file closes in a fraction of a second. All hope is on you.</p>
<p>Now in more detail, here is the code itself:</p>
<pre><code>from imageai.Detection import ObjectDetection
import easygui
import os
path = easygui.fileopenbox();
print(path);
exec_path = os.getcwd()
detector = ObjectDetection()
detector.setModelTypeAsRetinaNet()
detector.setModelPath(os.path.join(exec_path, "resnet50_coco_best_v2.0.1.h5"))
detector.loadModel()
list = detector.detectObjectsFromImage(
input_image = os.path.join(exec_path, path),
output_image_path= os.path.join(exec_path, "new_objects.jpg"),
minimum_percentage_probability = 60)
</code></pre>
<p>I use a virtual environment ("virtualenv") in order to store all dependencies there and nothing extra.
At first I tried packing like this: <code>pyinstaller detect.py</code>, i.e. the most usual packing, but with this I got warnings that the right TensorFlow files were not found, such as these:</p>
<pre><code>166260 WARNING: Hidden import "tensorflow_core._api.v1.compat.v1.keras.mixed_precision" not found!
166440 WARNING: Hidden import "tensorflow_core._api.v1.compat.v2.keras.callbacks" not found!
166615 WARNING: Hidden import "tensorflow_core._api.v1.compat.v1.estimator" not found!
166626 WARNING: Hidden import "tensorflow_core._api.v1.compat.v1.estimator.tpu" not found!
168300 WARNING: Hidden import "tensorflow_core._api.v1.compat.v1.keras.estimator" not found!
168307 WARNING: Hidden import "tensorflow_core._api.v1.compat.v1.keras.applications.nasnet" not found!
168839 WARNING: Hidden import "tensorflow_core._api.v1.compat.v1.keras.activations" not found!
168854 WARNING: Hidden import "tensorflow_core._api.v1.compat.v1.keras.preprocessing.text" not found!
169538 WARNING: Hidden import "tensorflow_core._api.v1.compat.v1.keras.metrics" not found!
169711 WARNING: Hidden import "tensorflow_core._api.v1.compat.v1.keras.applications.inception_v3" not found!
169815 WARNING: Hidden import "tensorflow_core._api.v1.compat.v1.keras.datasets.cifar10" not found!
</code></pre>
<p>And also some problems in the syntax of the library (although the code works correctly):</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 21, in walk_packages
File "c:\users\sleping\appdata\local\programs\python\python37-32\lib\site-packages\notebook\terminal\__init__.py", line 4, in <module>
import terminado
File "c:\users\sleping\appdata\local\programs\python\python37-32\lib\site-packages\terminado\__init__.py", line 7, in <module>
from .websocket import TermSocket
File "c:\users\sleping\appdata\local\programs\python\python37-32\lib\site-packages\terminado\websocket.py", line 18, in <module>
import tornado.web
File "c:\users\sleping\appdata\local\programs\python\python37-32\lib\site-packages\tornado\web.py", line 84, in <module>
from tornado.concurrent import Future, future_set_result_unless_cancelled
File "c:\users\sleping\appdata\local\programs\python\python37-32\lib\site-packages\tornado\concurrent.py", line 28, in <module>
import asyncio
File "c:\users\sleping\appdata\local\programs\python\python37-32\lib\site-packages\asyncio\__init__.py", line 21, in <module>
from .base_events import *
File "c:\users\sleping\appdata\local\programs\python\python37-32\lib\site-packages\asyncio\base_events.py", line 296
future = tasks.async(future, loop=self)
^
SyntaxError: invalid syntax
</code></pre>
<p>And everything in this spirit, after that I realized that it is necessary to connect the hidden imports and immediately connect the model (yolo.h5), which is actually what I did:</p>
<pre><code>pyinstaller --paths ..\env\Lib\site-packages --add-data yolo.h5;. --hidden-import=h5py;. --hidden-import=h5py.defs --hidden-import=h5py.utils --hidden-import=h5py.h5ac --hidden-import=h5py._proxy detect.py
</code></pre>
<p>Unfortunately the result is still disappointing, although there is no message about the lack of tenserflow, and some other errors are also gone, in their place appeared new ones, such as:</p>
<pre><code>241397 WARNING: Hidden import "pkg_resources.py2_warn" not found!
241417 WARNING: Hidden import "pkg_resources.markers" not found!
</code></pre>
<p>or:</p>
<pre><code>255797 WARNING: lib not found: libopenblas.QVLO2T66WEPI7JZ63PS3HMOHFEY472BC.gfortran-win_amd64.dll dependency of C:\bitwise\pyinstaller_demo\env\Lib\site-packages\numpy\core\_multiarray_umath.cp37-win_amd64.pyd
</code></pre>
<p>And the problems with invalid syntax are the same, exactly the same as above.</p>
<p>To sum it up, I'm sorry the question got so big, I apologize for that. But I need this exe file badly, very badly.
What solutions are available? Is it at all possible to pack such a project into an exe file? If yes, by what means is it possible? Or is it possible to solve the problem with pyinstaller?</p>
<p>Here are the dependencies I put in: tensorflow==2.4.0, keras==2.4.3, numpy==1.19.3, pillow==7.0.0, scipy==1.4.1, h5py==2.10.0, matplotlib==3.3.2, opencv-python, keras-resnet==0.2.0, imageai.In the project folder, I have the model "yolo.h5", to run machine learning</p>
|
<p>1.first you need to save your tensorflow model weight i.e (model.pb)</p>
<ol start="2">
<li><p>convert this into tensorflowlite (TF.lite) & you can use this in android phone</p>
<p><a href="https://www.tensorflow.org/tutorials/keras/save_and_load" rel="nofollow noreferrer">load and save model -tensorflow</a></p>
</li>
</ol>
<p>You can also refer to this Coursera course: <a href="https://www.coursera.org/lecture/device-based-models-tensorflow/devices-5LO51" rel="nofollow noreferrer">device-based-models-tensorflow</a></p>
| 713
|
tenserflow
|
No module named tenserflow
|
https://stackoverflow.com/questions/66411278/no-module-named-tenserflow
|
<p>I ran into this problem when I was about to generate tfrecords for my test and training data. Can anyone help me?</p>
<pre><code>C:\Object_detection\models-master\research\object_detection>python generate_tfrecord.py --csv_input=images/test_labels.csv --image_dir=images/test --output_path=test.record
</code></pre>
<pre><code>Traceback (most recent call last):
  File "generate_tfrecord.py", line 17, in <module>
    from tensorflow.python.framework.versions import VERSION
ModuleNotFoundError: No module named 'tensorflow'
</code></pre>
<p>I am really stuck lol. Thank you for the help!</p>
|
<p>Try to run</p>
<pre><code>pip show tensorflow
</code></pre>
<p>to check you tensorflow version.
It's possible that you will need to downgrade or upgrade you tf package to be able to run your tfrecord. Check the documentation of the function to know the version you need.</p>
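<p>A common cause of this error is having tensorflow installed into a different interpreter than the one running the script. You can check programmatically which interpreter you are using and whether it can see the package (a minimal sketch using only the standard library):</p>

```python
import sys
import importlib.util

def is_importable(name):
    """Return True if the current interpreter can import `name`."""
    return importlib.util.find_spec(name) is not None

print(sys.executable)             # the interpreter actually being used
print(is_importable("tensorflow"))
```

<p>If it prints <code>False</code>, install tensorflow into that exact interpreter, e.g. with <code>python -m pip install tensorflow</code>.</p>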
| 714
|
tenserflow
|
Tenserflow issue when tokenizing sentences
|
https://stackoverflow.com/questions/74123446/tenserflow-issue-when-tokenizing-sentences
|
<p>I followed a tutorial about tokenizing sentences using Tensorflow, here's the code I'm trying:</p>
<pre><code>from tensorflow.keras.preprocessing.text import Tokenizer #API for tokenization
t = Tokenizer(num_words=4) #meant to catch most imp _
listofsentences=['Apples are fruits', 'An orange is a tasty fruit', 'Fruits are tasty!']
t.fit_on_texts(listofsentences) #processes words
print(t.word_index)
print(t.texts_to_sequences(listofsentences)) #arranges tokens, returns nested list
</code></pre>
<p>The first print statement shows a dictionary as expected:</p>
<pre><code>{'are': 1, 'fruits': 2, 'tasty': 3, 'apples': 4, 'an': 5, 'orange': 6, 'is': 7, 'a': 8, 'fruit': 9}
</code></pre>
<p>But the last line outputs a list that misses many words:</p>
<pre><code>[[1, 2], [3], [2, 1, 3]]
</code></pre>
<p>Please let me know what I'm doing wrong and how to get the expected list:</p>
<pre><code>[[4,1,2],[5,6,7,8,3,9],[2,1,3]]
</code></pre>
|
<p>With <code>num_words=4</code> the tokenizer keeps only word indices below 4 (i.e. 1&ndash;3) when converting texts to sequences, which is why most words were dropped. To keep an unlimited number of tokens, use:</p>
<pre><code>t = Tokenizer(num_words=None)
</code></pre>
<p>Output:</p>
<pre><code>{'are': 1, 'fruits': 2, 'tasty': 3, 'apples': 4, 'an': 5, 'orange': 6, 'is': 7, 'a': 8, 'fruit': 9}
[[4, 1, 2], [5, 6, 7, 8, 3, 9], [2, 1, 3]]
</code></pre>
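<p>The filtering behaviour can be illustrated without Keras. Below is a simplified pure-Python sketch of what <code>fit_on_texts</code> + <code>texts_to_sequences</code> do (this is not the real Keras implementation; punctuation handling is reduced to stripping <code>!</code>):</p>

```python
from collections import Counter

def fit_and_sequence(texts, num_words=None):
    # Lowercase and strip '!' -- a crude stand-in for Keras' text filtering.
    words_per_text = [t.lower().replace("!", "").split() for t in texts]

    # Rank words by frequency; the stable sort keeps first-seen order for
    # ties, which reproduces the word_index the Tokenizer builds.
    counts = Counter(w for ws in words_per_text for w in ws)
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])
    word_index = {w: i + 1 for i, (w, _) in enumerate(ranked)}

    def keep(index):
        # texts_to_sequences drops indices >= num_words (None keeps all).
        return num_words is None or index < num_words

    seqs = [[word_index[w] for w in ws if keep(word_index[w])]
            for ws in words_per_text]
    return word_index, seqs

sentences = ['Apples are fruits', 'An orange is a tasty fruit', 'Fruits are tasty!']
print(fit_and_sequence(sentences, num_words=4)[1])     # [[1, 2], [3], [2, 1, 3]]
print(fit_and_sequence(sentences, num_words=None)[1])  # [[4, 1, 2], [5, 6, 7, 8, 3, 9], [2, 1, 3]]
```

<p>This shows why your output was <code>[[1, 2], [3], [2, 1, 3]]</code>: with <code>num_words=4</code> only the three most frequent words survive in the sequences, even though the word index itself lists everything.</p>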
| 715
|
tenserflow
|
tenserflow tutorial error or mistake
|
https://stackoverflow.com/questions/42706111/tenserflow-tutorial-error-or-mistake
|
<p>Incorrect code from training examples
<a href="https://www.tensorflow.org/get_started/get_started" rel="nofollow noreferrer">https://www.tensorflow.org/get_started/get_started</a></p>
<pre><code> sess=tf.InteractiveSession()
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b # + provides a shortcut for tf.add(a, b)
print(sess.run(adder_node, {a: 13, b:4.5}))
print(sess.run(adder_node, {a: [1,3], b: [2, 4]}))
</code></pre>
<pre><code>0.0
[ 0. 0.]
</code></pre>
<p>What is the problem?</p>
| 716
|
|
tenserflow
|
element wise multiplication tenserflow Error
|
https://stackoverflow.com/questions/59050126/element-wise-multiplication-tenserflow-error
|
<p>I made my autoencoder this way:</p>
<pre><code>autoencoder = Sequential()
Atac=Atac.iloc[range(2),range(2)]
autoencoder.add(Dense(minFeature, activation='relu',name="encoder4",input_shape=(Atac.shape[1],),kernel_constraint=prova()))
autoencoder.add(Dense(Atac.shape[1], activation='relu',name="decoder4",kernel_constraint=prova()))
autoencoder.compile(optimizer=keras.optimizers.Adam( lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0), loss='mean_squared_error')
autoencoder.summary()
autoencoder_train = autoencoder.fit(Atac, Atac, batch_size=256, epochs=1,validation_data=(Atac, Atac))
</code></pre>
<p>and I'm using the following kernel_constraint function:</p>
<pre><code>class prova(constraints.Constraint):
def __call__(self, w):
#return tf.math.multiply(w,tf.constant(np.asarray(relationMatrix),tf.float32))
return tf.math.multiply(w,tf.transpose(tf.constant(np.asarray(relationMatrix),tf.float32)))
</code></pre>
<p>But then it gets crazy: if I use the first multiply I get the following error:</p>
<blockquote>
<pre><code>ValueError: Dimensions must be equal, but are 795 and 2 for 'training_63/Adam/Mul_17' (op: 'Mul') with input shapes: [795,2],
</code></pre>
<p>[2,795].</p>
</blockquote>
<p>If I use the second multiply:</p>
<blockquote>
<p>ValueError: Dimensions must be equal, but are 2 and 795 for
'training_64/Adam/Mul_6' (op: 'Mul') with input shapes: [2,795],
[795,2].</p>
</blockquote>
<p>I can't understand where the error is. Thank you in advance.</p>
| 717
|
|
tenserflow
|
Tenserflow module is giving errors
|
https://stackoverflow.com/questions/76003183/tenserflow-module-is-giving-errors
|
<p>I am trying to import some modules but I get errors back. These are what I am trying to import and install:</p>
<pre><code>%pip install pandas
%pip install numpy
%pip install requests
%pip install beautifulsoup4
%pip install tensorflow
import requests
import pandas
import numpy
import requests
from bs4 import BeautifulSoup
import tensorflow
</code></pre>
<p>after running the code, what I get is:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[2], line 12
10 import requests
11 from bs4 import BeautifulSoup
---> 12 import tensorflow
13 import keras
File ~/opt/anaconda3/lib/python3.9/site-packages/tensorflow/__init__.py:41
38 import six as _six
39 import sys as _sys
---> 41 from tensorflow.python.tools import module_util as _module_util
42 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader
44 # Make sure code inside the TensorFlow codebase can use tf2.enabled() at import.
File ~/opt/anaconda3/lib/python3.9/site-packages/tensorflow/python/__init__.py:40
31 import traceback
33 # We aim to keep this file minimal and ideally remove completely.
34 # If you are adding a new file with @tf_export decorators,
35 # import it in modules_with_exports.py instead.
36
37 # go/tf-wildcard-import
38 # pylint: disable=wildcard-import,g-bad-import-order,g-import-not-at-top
---> 40 from tensorflow.python.eager import context
41 from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow
...
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
</code></pre>
<p>I am using python 3.9.13 - TensorFlow is 2.12.0, conda 23.3.1 and anaconda custom py29_1</p>
| 718
|
|
tenserflow
|
import error on transformers and tenserflow
|
https://stackoverflow.com/questions/78085996/import-error-on-transformers-and-tenserflow
|
<pre><code>RuntimeError: Failed to import transformers.models.bert.modeling_tf_bert because of the following error (look up to see its traceback):
module 'tensorflow._api.v2.compat.v2.__internal__' has no attribute 'register_load_context_function'
</code></pre>
<pre><code>sentiment_analysis = pipeline("sentiment-analysis", model="ProsusAI/finbert")
@app.route('/news', methods=['POST'])
def analyze_news_sentiment():
data = request.json
news_text = data.get('news_text')
# Perform sentiment analysis
result = sentiment_analysis(news_text)
return jsonify(sentiment=result[0]['label'])
</code></pre>
<p>How do I solve this error?</p>
|
<p>This seems to be a version compatibility problem.</p>
<p>I would try pinning Python to 3.10 and matching tensorflow/keras versions for stability:</p>
<pre><code>conda create -n my_env python=3.10
conda activate my_env
python -m pip install tensorflow==2.15.0 keras==2.15.0
</code></pre>
<p>Then try to run your program with this setup.</p>
| 719
|
tenserflow
|
Where in tenserflow to show elements
|
https://stackoverflow.com/questions/46367159/where-in-tenserflow-to-show-elements
|
<p>This code shows only the indexes of the array elements where the condition holds:</p>
<pre><code>tensor1 = tf.convert_to_tensor(np.array([1536, 2, 5], dtype='float32'))
tf.where(tensor1 > 3).eval().reshape(1, 2)[0]
</code></pre>
<p>Output is:</p>
<blockquote>
<p>array([0, 2], dtype=int64)</p>
</blockquote>
<p>I used a for loop to print the values at those indexes:</p>
<pre><code>for i in tf.where(tensor1 > 3).eval().reshape(1, 2)[0]:
print(tensor1[i].eval())
</code></pre>
<p>Is there any way to do it without a for loop?</p>
|
<p>tf.gather can also be used to index into arrays, so</p>
<pre><code>indices = tf.where(tensor1 > 3)
tf.gather(tensor1, indices)
</code></pre>
<p>should do the right thing</p>
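<p>For clarity, here is what the two ops compute, expressed in plain Python (a sketch of the semantics only, not TensorFlow code):</p>

```python
tensor1 = [1536.0, 2.0, 5.0]

# tf.where(tensor1 > 3): the indices where the condition holds
indices = [i for i, v in enumerate(tensor1) if v > 3]

# tf.gather(tensor1, indices): the values at those indices
values = [tensor1[i] for i in indices]

print(indices)  # [0, 2]
print(values)   # [1536.0, 5.0]
```

<p>In TensorFlow, <code>tf.boolean_mask(tensor1, tensor1 > 3)</code> performs both steps in a single call.</p>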
| 720
|
tenserflow
|
tenserflow quites before it is done installing
|
https://stackoverflow.com/questions/57560411/tenserflow-quites-before-it-is-done-installing
|
<p>When I do</p>
<pre><code>pip install tenserflow
</code></pre>
<p>tensorflow stops before it is finished, with no message saying it was successful
(I tried this several times; I thought maybe it was a network issue).</p>
<pre><code>Successfully installed absl-py-0.7.1 astor-0.8.0 gast-0.2.2 google-pasta-0.1.7 grpcio-1.23.0 h5py-2.9.0 keras-applications-1.0.8 keras-preprocessing-1.1.0 markdown-3.1.1 numpy-1.17.0 protobuf-3.9.1 six-1.12.0 tensorboard-1.14.0 tensorflow-1.14.0 tensorflow-estimator-1.14.0 termcolor-1.1.0 werkzeug-0.15.5 wheel-0.33.6 wrapt-1.11.2
You are using pip version 10.0.1, however version 19.2.2 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.
( no message saying successful)
</code></pre>
<p>When I try to import keras I get a bunch of errors. I'm assuming it is because tensorflow did not complete the installation.</p>
<pre><code>import keras
Using TensorFlow backend.
C:\nur6\venv\lib\site-packages\tensorflow\python\framework\dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\nur6\venv\lib\site-packages\tensorflow\python\framework\dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\nur6\venv\lib\site-packages\tensorflow\python\framework\dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\nur6\venv\lib\site-packages\tensorflow\python\framework\dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\nur6\venv\lib\site-packages\tensorflow\python\framework\dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\nur6\venv\lib\site-packages\tensorflow\python\framework\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
C:\nur6\venv\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\nur6\venv\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\nur6\venv\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\nur6\venv\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\nur6\venv\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\nur6\venv\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
</code></pre>
|
<p>Those are simply warnings (so you can safely ignore them; they are not actual errors): your numpy version is not compatible with the tensorflow version you are using. Tensorflow did install correctly, as seen in the output from pip:</p>
<pre><code>**Successfully installed** absl-py-0.7.1 astor-0.8.0 gast-0.2.2 google-pasta-0.1.7 grpcio-1.23.0 h5py-2.9.0 keras-applications-1.0.8 keras-preprocessing-1.1.0 markdown-3.1.1 numpy-1.17.0 protobuf-3.9.1 six-1.12.0 tensorboard-1.14.0 ***tensorflow-1.14.0*** tensorflow-estimator-1.14.0 termcolor-1.1.0 werkzeug-0.15.5 wheel-0.33.6 wrapt-1.11.2 You are using pip version 10.0.1, however version 19.2.2 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command.
</code></pre>
<p>Either upgrade tensorflow or downgrade numpy and the warnings will disappear. </p>
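<p>If you want to keep the current versions, you can also silence these specific deprecation warnings (a sketch using only the standard library; run it before importing tensorflow/keras):</p>

```python
import warnings

# Hide the numpy "(type, 1) ... synonym of type" FutureWarnings that
# older tensorflow builds trigger against newer numpy versions.
warnings.filterwarnings("ignore", category=FutureWarning,
                        message=r".*synonym of type is deprecated.*")
```

<p>This only hides the messages; the cleaner fix is still to align the numpy and tensorflow versions as described above.</p>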
| 721
|
tenserflow
|
Keras/Tenserflow - Cannot make model.fit() work
|
https://stackoverflow.com/questions/70178206/keras-tenserflow-cannot-make-model-fit-work
|
<p>I am trying to make a CNN network to make predictions on images of mushrooms.</p>
<p>Sadly, I can't even begin to train my model, the fit() method always gives me errors.</p>
<p>There are 10 classes, the tf Datasets correctly found their names based on their subfolders.</p>
<p>With my current code, it says:</p>
<pre><code>InvalidArgumentError: logits and labels must have the same first
dimension, got logits shape [12800,10] and labels shape [32]
</code></pre>
<p>Model summary:</p>
<pre><code> Layer (type) Output Shape Param #
=================================================================
input_5 (InputLayer) [(None, 64, 64, 3)] 0
conv2d_4 (Conv2D) (None, 62, 62, 32) 896
max_pooling2d_2 (MaxPooling (None, 20, 20, 32) 0
2D)
re_lu_2 (ReLU) (None, 20, 20, 32) 0
dense_2 (Dense) (None, 20, 20, 10) 330
=================================================================
</code></pre>
<p>Here's my code:</p>
<pre><code>#Data loading
train_set = keras.preprocessing.image_dataset_from_directory(
data_path,
labels="inferred",
label_mode="int",
batch_size=32,
image_size=(64, 64),
shuffle=True,
seed=1446,
validation_split = 0.2,
subset="training")
validation_set = keras.preprocessing.image_dataset_from_directory(
data_path,
labels="inferred",
label_mode="int",
batch_size=32,
image_size=(64, 64),
shuffle=True,
seed=1446,
validation_split = 0.2,
subset="validation")
#Constructing layers
input_layer = keras.Input(shape=(64, 64, 3))
x = layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu")(input_layer)
x = layers.MaxPooling2D(pool_size=(3, 3))(x)
x = keras.layers.ReLU()(x)
output = layers.Dense(10, activation="softmax")(x)
#Making and fitting the model
model = keras.Model(inputs=input_layer, outputs=output)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=['accuracy'])
model.fit(train_set, epochs=5, validation_data=validation_set)
</code></pre>
|
<p>I think you need to flatten before passing to the <code>Dense</code> layer</p>
<pre><code>input_layer = keras.Input(shape=(64, 64, 3))
x = layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu")(input_layer)
x = layers.MaxPooling2D(pool_size=(3, 3))(x)
x = keras.layers.ReLU()(x)
x = keras.layers.Flatten()(x) # try adding this
output = layers.Dense(10, activation="softmax")(x)
</code></pre>
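<p>The numbers in the error message can be reproduced with simple shape arithmetic (assuming the default <code>'valid'</code> padding and a pool stride equal to the pool size):</p>

```python
batch = 32
conv = 64 - 3 + 1              # Conv2D with a (3, 3) kernel: 64x64 -> 62x62
pool = (conv - 3) // 3 + 1     # MaxPooling2D pool_size=(3, 3): 62x62 -> 20x20
positions = pool * pool        # 400 spatial positions still in the tensor

# Without Flatten, Dense(10) is applied at every spatial position, so the
# loss sees batch * positions logits rows -- the 12800 from the error.
print(conv, pool, batch * positions)  # 62 20 12800
```

<p>Adding <code>Flatten()</code> collapses those 400 positions into one feature vector per image, so the logits become shape (32, 10) and match the (32,) labels.</p>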
| 722
|
tenserflow
|
Exporting a Trained Inference Graph TENSERFLOW
|
https://stackoverflow.com/questions/57563619/exporting-a-trained-inference-graph-tenserflow
|
<p>I have trained my custom model and want to export a trained inference graph.</p>
<p>I ran the following command:</p>
<pre><code>INPUT_TYPE=image_tensor
PIPELINE_CONFIG_PATH= training/ ssd_mobilenet_v1_pets.config
TRAINED_CKPT_PREFIX= training/model.ckpt-2509
EXPORT_DIR= training/new_model
python exporter.py \
--input_type=${INPUT_TYPE} \
--pipeline_config_path=${PIPELINE_CONFIG_PATH} \
--trained_checkpoint_prefix=${TRAINED_CKPT_PREFIX} \
--output_directory=${EXPORT_DIR}
</code></pre>
<p>And I got the following output</p>
<pre><code>W0819 22:08:54.649750 2680 deprecation_wrapper.py:119] From C:\Users\Aleksej\Anaconda3\envs\cocosynth4\lib\site-packages\object_detection-0.1-py3.6.egg\nets\mobilenet\mobilenet.py:397: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.
(cocosynth4) D:\yolo\models\research\object_detection> --input_type=${INPUT_TYPE} \
'--input_type' is not recognized as an internal or external command,
operable program or batch file.
(cocosynth4) D:\yolo\models\research\object_detection> --pipeline_config_path=${PIPELINE_CONFIG_PATH} \
'--pipeline_config_path' is not recognized as an internal or external command,
operable program or batch file.
(cocosynth4) D:\yolo\models\research\object_detection> --trained_checkpoint_prefix=${TRAINED_CKPT_PREFIX} \
'--trained_checkpoint_prefix' is not recognized as an internal or external command,
operable program or batch file.
(cocosynth4) D:\yolo\models\research\object_detection> --output_directory=${EXPORT_DIR}
'--output_directory' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>I am running Windows 10 and Python 3.
Does anyone have any suggestions on how to solve this issue?</p>
|
<p>The original command uses bash syntax (<code>${VAR}</code> variables and trailing <code>\</code> line continuations), which the Windows command prompt does not understand. Fixed with this single-line command:</p>
<pre><code>python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/ssd_mobilenet_v1_pets.config --trained_checkpoint_prefix training/model.ckpt-3970 --output_directory ships_inference_graph
</code></pre>
| 723
|
tenserflow
|
Tenserflow custom layer input doesn't work
|
https://stackoverflow.com/questions/64955811/tenserflow-custom-layer-input-doesnt-work
|
<p>I need to create a custom layer for my model. Here is the code:</p>
<pre><code>class Custom_layer_2(keras.layers.Layer):
def __init__(self, input_neurons_nmber, nn_neurons_number, connections_matrix, inputs_neurons):
super(Custom_layer_2, self).__init__()
self.cn = tf.dtypes.cast(tf.constant(connections_matrix),tf.float32)
self.inputs_neurons = tf.dtypes.cast(tf.constant(inputs_neurons), tf.float32)
self.nn_neurons_len = nn_neurons_number
self.inpt_neurons_len = input_neurons_nmber
self.w = self.add_weight(shape=([self.inpt_neurons_len+self.nn_neurons_len]), initializer="random_normal", trainable=True)
self.b = self.add_weight(shape=([self.nn_neurons_len]), initializer="zeros", trainable=True)
def call(self, inputs_nn_neurons):
self.input_matrix = tf.repeat([tf.concat([self.inputs_neurons, inputs_nn_neurons], 0)], repeats=self.nn_neurons_len, axis=0)
return tf.concat([self.inputs_neurons, inputs_nn_neurons], 0)
</code></pre>
<p>It works perfectly when I use a predefined tensor of some shape as input:</p>
<pre><code>nn_layer = Custom_layer_2(inputs_number, nn_number, conections_matrix, first_input_neurons)
input_tensor = [SOME TENSOR]
layer_1 = nn_layer(input_tensor)
</code></pre>
<p>Unfortunately, to create a model and train it I need to specify an input, so when I use an input layer instead of an exact tensor, the code doesn't work for some reason:</p>
<pre><code>nn_layer = Custom_layer_2(inputs_number, nn_number, conections_matrix, first_input_neurons)
inputs_layer = tf.keras.Input(shape=(SOME SHAPE,))
layer_1 = nn_layer(inputs_layer)
</code></pre>
<p>This code throws ValueError: Shape must be rank 1 but is rank 2. As far as I understand, when I use tf.keras.Input it creates a tensor of shape (None, SOME SHAPE), whereas if I use an exact tensor its shape is fully specified. What should I do in this situation if I want to create a model with custom layers?</p>
| 724
|
|
tenserflow
|
Where does Bazel store TenserFlow build?
|
https://stackoverflow.com/questions/42354675/where-does-bazel-store-tenserflow-build
|
<p>I'm trying to build TensorFlow from sources following this guide: <a href="https://www.tensorflow.org/install/install_sources" rel="nofollow noreferrer">Installing TensorFlow from Sources</a>. The build seems to have worked fine, but then there's the last step:</p>
<blockquote>
<p>Invoke pip install to install that pip package. The filename of the
.whlfile depends on your platform. For example, the following command
will install the pip package for TensorFlow 1.0.0 on Linux: </p>
<pre><code>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.0.0-py2-none-any.whl
</code></pre>
</blockquote>
<p>I suppose that's great if you run Linux, but I would have appreciated the location on Mac OS X as well.</p>
<p>Where is the package stored on Mac? I can't find it in <code>/tmp</code>, nor <code>/Users/Library/Caches</code>. And since search is broken on Mac, I'm out of luck.</p>
|
<p>It sounds like you may have skipped a step. Bazel does not create this file. The program that Bazel builds does.</p>
<p>The prior step on <a href="https://www.tensorflow.org/install/install_sources" rel="nofollow noreferrer">https://www.tensorflow.org/install/install_sources</a> to the one that you mention is to run</p>
<pre><code>$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
</code></pre>
<p>The second argument specified where to put the wheel file. Furthermore, that program logs its output directory:</p>
<pre><code>$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
Mon Feb 20 22:08:08 EST 2017 : === Using tmpdir: /var/folders/yt/9r8s598x603bx58zq85yrx680096yv/T/tmp.XXXXXXXXXX.gp5oIM0Z
Mon Feb 20 22:08:13 EST 2017 : === Building wheel
Mon Feb 20 22:08:45 EST 2017 : === Output wheel file is in: /tmp/tensorflow_pkg
$ ls /tmp/tensorflow_pkg/
tensorflow-1.0.0-cp27-cp27m-macosx_10_12_intel.whl
</code></pre>
| 725
|
tenserflow
|
Jupyter Notebook Kernel dies when importing tenserflow
|
https://stackoverflow.com/questions/68511374/jupyter-notebook-kernel-dies-when-importing-tenserflow
|
<p>I am using Macbook Air with M1 chip. When trying to import tensorflow in Jupyter notebook, the kernel dies and displays a prompt that "Kernel has died and will restart in sometime". Could someone help me fix this?</p>
<p>Tensorflow version - 2.5.0
Python version - 3.8.8</p>
|
<p>Try running the notebook file within VS Code, there are extensions to help with that. Also check this article on how to install tf on M1 <a href="https://towardsdatascience.com/installing-tensorflow-on-the-m1-mac-410bb36b776" rel="nofollow noreferrer">https://towardsdatascience.com/installing-tensorflow-on-the-m1-mac-410bb36b776</a></p>
| 726
|
tenserflow
|
Setting up on Macbook Pro M1 Tenserflow with OpenCV, Scipy, Scikit-learn
|
https://stackoverflow.com/questions/69427204/setting-up-on-macbook-pro-m1-tenserflow-with-opencv-scipy-scikit-learn
|
<p>I think I read pretty much all of the guides on setting up tensorflow, tensorflow-hub, and object detection on a Mac M1 with BigSur v11.6. I managed to figure out most of the errors after more than two weeks, but I am stuck at the OpenCV setup. I tried to compile it from source, but it seems it can't find the modules from its core package, so the make step constantly fails after a successful cmake build. It fails at different stages, complaining about different libraries even though they are there, and reached at most 31% after multiple cmake runs and deletions of the build folder or the cmake cache file. So I am not sure what to do to make the build succeed.
I git cloned and unzipped <strong>opencv-4.5.0</strong> and <strong>opencv_contrib-4.5.0</strong> in my miniforge3 directory. Then I created a folder "<strong>build</strong>" in my opencv-4.5.0 folder, and the cmake command I use in it is (my miniforge conda environment is called silicon, and I made sure I am using arch arm64 in the bash environment):</p>
<pre><code>cmake -DCMAKE_SYSTEM_PROCESSOR=arm64 -DCMAKE_OSX_ARCHITECTURES=arm64 -DWITH_OPENJPEG=OFF -DWITH_IPP=OFF -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=/Users/adi/miniforge3/opencv_contrib-4.5.0/modules -D PYTHON3_EXECUTABLE=/Users/adi/miniforge3/envs/silicon/bin/python3.8 -D BUILD_opencv_python2=OFF -D BUILD_opencv_python3=ON -D INSTALL_PYTHON_EXAMPLES=ON -D INSTALL_C_EXAMPLES=OFF -D OPENCV_ENABLE_NONFREE=ON -D BUILD_EXAMPLES=ON /Users/adi/miniforge3/opencv-4.5.0
</code></pre>
<p>So it fails with errors like:</p>
<pre><code>[ 20%] Linking CXX shared library ../../lib/libopencv_core.dylib
[ 20%] Built target opencv_core
make: *** [all] Error 2
</code></pre>
<p>or, in other tries, it was initially asking for <strong>calib3d</strong> or <strong>dnn</strong>, but those libraries are there in the main folder opencv-4.5.0.</p>
<p><strong>The other way I try to install openCV is with conda:</strong></p>
<pre><code>conda install opencv
</code></pre>
<p>But then when I test with</p>
<pre><code>python -c "import cv2; cv2.__version__"
</code></pre>
<p>it seems like it searches for <strong>ffmpeg</strong> via homebrew (I didn't install any of these via homebrew, but with conda). So it complained:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/adi/miniforge3/envs/silicon/lib/python3.8/site-packages/cv2/__init__.py", line 5, in <module>
from .cv2 import *
ImportError: dlopen(/Users/adi/miniforge3/envs/silicon/lib/python3.8/site-packages/cv2/cv2.cpython-38-darwin.so, 2): Library not loaded: /opt/homebrew/opt/ffmpeg/lib/libavcodec.58.dylib
Referenced from: /Users/adi/miniforge3/envs/silicon/lib/python3.8/site-packages/cv2/cv2.cpython-38-darwin.so
Reason: image not found
</code></pre>
<p>I do have these libs, though; when I searched with <code>find /usr/ -name 'libavcodec.58.dylib'</code> I found many locations:</p>
<pre><code>find: /usr//sbin/authserver: Permission denied
find: /usr//local/mysql-8.0.22-macos10.15-x86_64/keyring: Permission denied
find: /usr//local/mysql-8.0.22-macos10.15-x86_64/data: Permission denied
find: /usr//local/hw_mp_userdata/Internet_Manager/OnlineUpdate: Permission denied
/usr//local/lib/libavcodec.58.dylib
/usr//local/Cellar/ffmpeg/4.4_2/lib/libavcodec.58.dylib
(silicon) MacBook-Pro:opencv-4.5.0 adi$ ln -s /usr/local/Cellar/ffmpeg/4.4_2/lib/libavcodec.58.dylib /opt/homebrew/opt/ffmpeg/lib/libavcodec.58.dylib
ln: /opt/homebrew/opt/ffmpeg/lib/libavcodec.58.dylib: No such file or directory
</code></pre>
<p>One of the guides said to install homebrew also in arm64 env, so I did it with:</p>
<pre><code>/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
export PATH="/opt/homebrew/bin:/usr/local/bin:$PATH"
alias ibrew='arch -x86_64 /usr/local/bin/brew' # create brew for intel (ibrew) and arm/ silicon
</code></pre>
<p>Not sure if that is affecting it, but it seems like it didn't do anything, because it still uses <strong>/opt/homebrew/</strong> instead of <strong>/usr/local/</strong>.
Any help would be highly appreciated if I can make any of these approaches work. Ultimately I want to use TensorFlow Model Zoo object detection models. All the other dependencies seem fine (for now); it's either OpenCV not working, or, if it does work via conda install, then scipy and scikit-learn don't work.</p>
|
<p>In my case I also had a lot of trouble trying to install both modules. I finally managed to do so, but to be honest I'm not really sure how and why. I leave the requirements below in case you want to recreate the environment that worked in my case. You should have conda Miniforge 3 installed:</p>
<pre><code># This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: osx-arm64
absl-py=1.0.0=pypi_0
astunparse=1.6.3=pypi_0
autocfg=0.0.8=pypi_0
blas=2.113=openblas
blas-devel=3.9.0=13_osxarm64_openblas
boto3=1.22.10=pypi_0
botocore=1.25.10=pypi_0
c-ares=1.18.1=h1a28f6b_0
ca-certificates=2022.2.1=hca03da5_0
cachetools=5.0.0=pypi_0
certifi=2021.10.8=py39hca03da5_2
charset-normalizer=2.0.12=pypi_0
cycler=0.11.0=pypi_0
expat=2.4.4=hc377ac9_0
flatbuffers=2.0=pypi_0
fonttools=4.31.1=pypi_0
gast=0.5.3=pypi_0
gluoncv=0.10.5=pypi_0
google-auth=2.6.0=pypi_0
google-auth-oauthlib=0.4.6=pypi_0
google-pasta=0.2.0=pypi_0
grpcio=1.42.0=py39h95c9599_0
h5py=3.6.0=py39h7fe8675_0
hdf5=1.12.1=h5aa262f_1
idna=3.3=pypi_0
importlib-metadata=4.11.3=pypi_0
jmespath=1.0.0=pypi_0
keras=2.8.0=pypi_0
keras-preprocessing=1.1.2=pypi_0
kiwisolver=1.4.0=pypi_0
krb5=1.19.2=h3b8d789_0
libblas=3.9.0=13_osxarm64_openblas
libcblas=3.9.0=13_osxarm64_openblas
libclang=13.0.0=pypi_0
libcurl=7.80.0=hc6d1d07_0
libcxx=12.0.0=hf6beb65_1
libedit=3.1.20210910=h1a28f6b_0
libev=4.33=h1a28f6b_1
libffi=3.4.2=hc377ac9_2
libgfortran=5.0.0=11_1_0_h6a59814_26
libgfortran5=11.1.0=h6a59814_26
libiconv=1.16=h1a28f6b_1
liblapack=3.9.0=13_osxarm64_openblas
liblapacke=3.9.0=13_osxarm64_openblas
libnghttp2=1.46.0=h95c9599_0
libopenblas=0.3.18=openmp_h5dd58f0_0
libssh2=1.9.0=hf27765b_1
llvm-openmp=12.0.0=haf9daa7_1
markdown=3.3.6=pypi_0
matplotlib=3.5.1=pypi_0
mxnet=1.6.0=pypi_0
ncurses=6.3=h1a28f6b_2
numpy=1.21.2=py39hb38b75b_0
numpy-base=1.21.2=py39h6269429_0
oauthlib=3.2.0=pypi_0
openblas=0.3.18=openmp_h3b88efd_0
opencv-python=4.5.5.64=pypi_0
openssl=1.1.1m=h1a28f6b_0
opt-einsum=3.3.0=pypi_0
packaging=21.3=pypi_0
pandas=1.4.1=pypi_0
pillow=9.0.1=pypi_0
pip=22.0.4=pypi_0
portalocker=2.4.0=pypi_0
protobuf=3.19.4=pypi_0
pyasn1=0.4.8=pypi_0
pyasn1-modules=0.2.8=pypi_0
pydot=1.4.2=pypi_0
pyparsing=3.0.7=pypi_0
python=3.9.7=hc70090a_1
python-dateutil=2.8.2=pypi_0
python-graphviz=0.8.4=pypi_0
pytz=2022.1=pypi_0
pyyaml=6.0=pypi_0
readline=8.1.2=h1a28f6b_1
requests=2.27.1=pypi_0
requests-oauthlib=1.3.1=pypi_0
rsa=4.8=pypi_0
s3transfer=0.5.2=pypi_0
scipy=1.8.0=pypi_0
setuptools=58.0.4=py39hca03da5_1
six=1.16.0=pyhd3eb1b0_1
sqlite=3.38.0=h1058600_0
tensorboard=2.8.0=pypi_0
tensorboard-data-server=0.6.1=pypi_0
tensorboard-plugin-wit=1.8.1=pypi_0
tensorflow-deps=2.8.0=0
tensorflow-macos=2.8.0=pypi_0
termcolor=1.1.0=pypi_0
tf-estimator-nightly=2.8.0.dev2021122109=pypi_0
tk=8.6.11=hb8d0fd4_0
tqdm=4.63.1=pypi_0
typing-extensions=4.1.1=pypi_0
tzdata=2021e=hda174b7_0
urllib3=1.26.9=pypi_0
werkzeug=2.0.3=pypi_0
wheel=0.37.1=pyhd3eb1b0_0
wrapt=1.14.0=pypi_0
xz=5.2.5=h1a28f6b_0
yacs=0.1.8=pypi_0
zipp=3.7.0=pypi_0
zlib=1.2.11=h5a0b063_4
</code></pre>
| 727
|
tenserflow
|
AttributeError: 'module' object has no attribute 'contrib'
|
https://stackoverflow.com/questions/44092475/attributeerror-module-object-has-no-attribute-contrib
|
<p>I am using TensorFlow for the first time with Python 2.7. I am following the TensorFlow tutorial, and when I run the line below</p>
<pre><code>features = [tf.contrib.layers.real_valued_column("x", dimension=1)]
</code></pre>
<p>it's throwing the error:</p>
<blockquote>
<p>"AttributeError: 'module' object has no attribute 'contrib'". Please help.</p>
</blockquote>
|
<p>However, it was solved by reinstalling protobuf.</p>
| 728
|
tenserflow
|
Tenserflow hangs when running inference with GPU enabled
|
https://stackoverflow.com/questions/61382445/tenserflow-hangs-when-running-inference-with-gpu-enabled
|
<p>I am new to AI and TensorFlow and I am trying to use the TensorFlow object detection API on windows. <br>
My current goal is to do real time human detection in a video stream. <br>
For this I modified a python example from the TensorFlow Model Garden (<a href="https://github.com/tensorflow/models" rel="nofollow noreferrer">https://github.com/tensorflow/models</a>).<br>
At the moment it detects all objects (not just humans) and shows the bounding boxes using opencv.</p>
<p>It works fine when I have the GPU disabled (os.environ["CUDA_VISIBLE_DEVICES"] = "-1")<br>
But when I enable the GPU and start the script it will hang on the first frame. </p>
<p>Output:</p>
<pre><code>2020-04-22 16:00:53.597492: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-04-22 16:00:56.942141: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-04-22 16:00:56.976635: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 960M computeCapability: 5.0
coreClock: 1.176GHz coreCount: 5 deviceMemorySize: 2.00GiB deviceMemoryBandwidth: 74.65GiB/s
2020-04-22 16:00:56.989129: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-04-22 16:00:57.000622: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-04-22 16:00:57.012247: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-04-22 16:00:57.020575: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-04-22 16:00:57.031536: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-04-22 16:00:57.042564: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-04-22 16:00:57.066289: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-04-22 16:00:57.075760: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2020-04-22 16:00:59.239211: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2020-04-22 16:00:59.256577: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1f3f73cd670 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-04-22 16:00:59.264241: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-04-22 16:00:59.272280: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 960M computeCapability: 5.0
coreClock: 1.176GHz coreCount: 5 deviceMemorySize: 2.00GiB deviceMemoryBandwidth: 74.65GiB/s
2020-04-22 16:00:59.281409: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-04-22 16:00:59.288204: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-04-22 16:00:59.293112: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-04-22 16:00:59.298222: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-04-22 16:00:59.305446: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-04-22 16:00:59.310590: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-04-22 16:00:59.316250: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-04-22 16:00:59.324588: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2020-04-22 16:01:00.831569: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-04-22 16:01:00.839147: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0
2020-04-22 16:01:00.842279: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N
2020-04-22 16:01:00.846140: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1024 MB memory) -> physical GPU (device: 0, name: GeForce GTX 960M, pci bus id: 0000:01:00.0, compute capability: 5.0)
2020-04-22 16:01:00.865546: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1f39174cba0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-04-22 16:01:00.873656: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 960M, Compute Capability 5.0
[<tf.Tensor 'image_tensor:0' shape=(None, None, None, 3) dtype=uint8>]
2020-04-22 16:01:10.876733: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-04-22 16:01:11.814909: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] Internal: Invoking GPU asm compilation is supported on Cuda non-Windows platforms only
Relying on driver to perform ptx compilation.
Modify $PATH to customize ptxas location.
This message will be only logged once.
2020-04-22 16:01:11.852909: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-04-22 16:01:12.149312: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.04GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-04-22 16:01:12.179484: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.04GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-04-22 16:01:12.209036: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.06GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-04-22 16:01:12.237205: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.05GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-04-22 16:01:12.266147: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.09GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-04-22 16:01:12.295182: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.08GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-04-22 16:01:12.325645: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.15GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-04-22 16:01:12.357550: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.15GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-04-22 16:01:12.405332: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.14GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-04-22 16:01:12.436336: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.27GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
</code></pre>
<p>This is the code I am using:</p>
<pre><code>#!/usr/bin/env python
# coding: utf-8
import os
import pathlib
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from PIL import Image
from IPython.display import display
import cv2
cap = cv2.VideoCapture(1)
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1
# Patch the location of gfile
tf.gfile = tf.io.gfile
# os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
def load_model(model_name):
base_url = 'http://download.tensorflow.org/models/object_detection/'
model_file = model_name + '.tar.gz'
model_dir = tf.keras.utils.get_file(
fname=model_name,
origin=base_url + model_file,
untar=True)
model_dir = pathlib.Path(model_dir)/"saved_model"
model = tf.saved_model.load(str(model_dir))
model = model.signatures['serving_default']
return model
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = 'models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
model_name = 'ssd_mobilenet_v1_coco_2017_11_17'
# model_name= 'faster_rcnn_inception_v2_coco_2017_11_08';
detection_model = load_model(model_name)
print(detection_model.inputs)
detection_model.output_dtypes
detection_model.output_shapes
def run_inference_for_single_image(model, image):
image = np.asarray(image)
# The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
input_tensor = tf.convert_to_tensor(image)
# The model expects a batch of images, so add an axis with `tf.newaxis`.
input_tensor = input_tensor[tf.newaxis,...]
# Run inference (it hangs here)
output_dict = model(input_tensor)
# All outputs are batches tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_detections = int(output_dict.pop('num_detections'))
output_dict = {key:value[0, :num_detections].numpy()
for key,value in output_dict.items()}
output_dict['num_detections'] = num_detections
# detection_classes should be ints.
output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)
# Handle models with masks:
if 'detection_masks' in output_dict:
# Reframe the the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(output_dict['detection_masks'], output_dict['detection_boxes'],image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,tf.uint8)
output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()
return output_dict
def show_inference(model):
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
ret, image_np = cap.read()
#percent by which the image is resized
#scale_percent = 30
#calculate the 50 percent of original dimensions
#width = int(image_np.shape[1] * scale_percent / 100)
#height = int(image_np.shape[0] * scale_percent / 100)
# dsize
#dsize = (width, height)
# resize image
#image_np = cv2.resize(image_np, dsize)
# Actual detection.
output_dict = run_inference_for_single_image(model, image_np)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks_reframed', None),
use_normalized_coordinates=True,
line_thickness=8)
cv2.imshow('object detection', cv2.resize(image_np, (800,600)))
while True:
show_inference(detection_model)
if cv2.waitKey(25) & 0xFF == ord('q'):
cv2.destroyAllWindows()
break
</code></pre>
<p>I have installed the following versions:<br>
Python: 3.7 64 bit <br>
Tensorflow: 2.2.0-rc3 <br>
Cuda: 10.1 <br>
cudnn 7.6.5.32<br></p>
<p>I tried this on 2 different machines:<br>
Machine 1:<br>
- CPU: i7-6700HQ<br>
- RAM: 16 GB<br>
- GPU: NVIDIA GeForce GTX 960M<br>
<br>
Machine 2:<br>
- CPU: i5-6400 <br>
- RAM: 16 GB<br>
- GPU: NVIDIA GeForce GTX 960<br>
<br>
I am not sure how to debug this. I tried the same code on two different machine and the result was almost the same. <br>
The only difference was the time it took for it to hang. Machine 1 would hang immediately and machine 2 took roughly 30 seconds. <br>
Machine 2 was able to process the video and detect object up until the hang.<br></p>
<p>I looked into the 'Allocator (GPU_0_bfc) ran out of memory' warnings. <br>
I tried some options that limited the available GPU memory size, but this did not help. <br></p>
<p>There were also multiple posts that suggested reducing the batch size. <br>
My interpretation was that this was only helpful when training your own model. <br>
And because I am using pre-trained models, this was not applicable. <br></p>
<p>I also tried to use different models: ssd_mobilenet_v1_coco_2017_11_17 and faster_rcnn_inception_v2_coco_2017_11_08. Both models have the same result. </p>
<p>The last thing I tried was to reduce the image size before processing it. This also did not help.<br></p>
<p>Any help would be much appreciated<br></p>
<p><strong>Update</strong><br>
I also tried it on a RTX2070 super GPU. There are no warnings about memory allocation. This is also not able to complete a single inference.
Just for completeness, this is the console output [The text 'inference start' is printed before running the inference. If the inference would complete, it would print 'inference end']:</p>
<pre><code>2020-04-24 11:30:16.579805: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-04-24 11:30:18.916146: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-04-24 11:30:18.941805: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce RTX 2070 SUPER computeCapability: 7.5
coreClock: 1.785GHz coreCount: 40 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s
2020-04-24 11:30:18.946134: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-04-24 11:30:18.951172: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-04-24 11:30:18.954809: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-04-24 11:30:18.957258: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-04-24 11:30:18.961662: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-04-24 11:30:18.965553: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-04-24 11:30:18.978671: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-04-24 11:30:18.980998: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-04-24 11:30:18.982226: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2020-04-24 11:30:18.984167: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce RTX 2070 SUPER computeCapability: 7.5
coreClock: 1.785GHz coreCount: 40 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s
2020-04-24 11:30:18.987291: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-04-24 11:30:18.988809: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-04-24 11:30:18.990303: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-04-24 11:30:18.991792: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-04-24 11:30:18.993320: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-04-24 11:30:18.996960: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-04-24 11:30:18.998497: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-04-24 11:30:19.000191: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-04-24 11:30:19.430864: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-04-24 11:30:19.433076: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0
2020-04-24 11:30:19.434566: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N
2020-04-24 11:30:19.436400: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6281 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2070 SUPER, pci bus id: 0000:01:00.0, compute capability: 7.5)
[<tf.Tensor 'image_tensor:0' shape=(None, None, None, 3) dtype=uint8>]
inference start
2020-04-24 11:30:24.728554: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-04-24 11:30:25.608426: W tensorflow/stream_executor/gpu/redzone_allocator.cc:312] Internal: Invoking GPU asm compilation is supported on Cuda non-Windows platforms only
Relying on driver to perform ptx compilation. This message will be only logged once.
2020-04-24 11:30:25.625904: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
</code></pre>
<p><strong>Update 2</strong><br>
When Eager mode is disabled everything runs fine (even on GPU), but then I am not able to retrieve the found objects.<br>
The next thing I tried was running it with sessions (like TensorFlow 1 I think). Here the function session.run() blocks indefinitely on GPU. And again on CPU it works fine.</p>
|
<p>If you are using a GPU, try installing tensorflow-gpu. Based on the documentation, the tensorflow package you are using seems to support GPUs, but you can try specifying it explicitly. Try this in a Python virtual environment first.</p>
<pre><code> pip uninstall tensorflow
</code></pre>
<p>Uninstall tensorflow-gpu (make sure to run this even if you are not sure whether you installed it):</p>
<pre><code> pip uninstall tensorflow-gpu
</code></pre>
<p>Install a specific tensorflow-gpu version:</p>
<pre><code> pip install tensorflow-gpu==2.0.0
</code></pre>
| 729
|
tenserflow
|
Tenserflow model for text classification doesn't predict as expected?
|
https://stackoverflow.com/questions/60015820/tenserflow-model-for-text-classification-doesnt-predict-as-expected
|
<p>I am trying to train a model for sentiment analysis, and it shows an accuracy of 90% when splitting the data into training and testing! But whenever I test it on a new phrase, it has pretty much the same result (usually in the range 0.86 - 0.95)! Here is the code:</p>
Here is the code:</p>
<pre><code>sentences = data['text'].values.astype('U')
y = data['label'].values
sentences_train, sentences_test, y_train, y_test = train_test_split(sentences, y, test_size=0.2, random_state=1000)
tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(sentences_train)
X_train = tokenizer.texts_to_sequences(sentences_train)
X_test = tokenizer.texts_to_sequences(sentences_test)
vocab_size = len(tokenizer.word_index) + 1
maxlen = 100
X_train = pad_sequences(X_train, padding='post', maxlen=maxlen)
X_test = pad_sequences(X_test, padding='post', maxlen=maxlen)
embedding_dim = 50
model = Sequential()
model.add(Embedding(input_dim=vocab_size, output_dim=embedding_dim,input_length=maxlen))
model.add(layers.Flatten())
model.add(layers.Dense(10, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.summary()
history = model.fit(X_train, y_train,
epochs=5,
verbose=True,
validation_data=(X_test, y_test),
batch_size=10)
loss, accuracy = model.evaluate(X_train, y_train, verbose=False)
print("Training Accuracy: {:.4f}".format(accuracy))
loss, accuracy = model.evaluate(X_test, y_test, verbose=False)
print("Testing Accuracy: {:.4f}".format(accuracy))
</code></pre>
<p>The training data is a CSV file with 3 columns: (id, text, labels(0,1)), where 0 is positive and 1 is negative. </p>
<pre><code>Training Accuracy: 0.9855
Testing Accuracy: 0.9013
</code></pre>
<p>Testing it on new sentences like 'This is just a text!' and 'hate preachers!' would predict the same result [0.85],[0.83]. </p>
|
<p>It seems that you're a victim of <code>overfitting</code>. In other words, your model has overfit the <code>training data</code>. Although it's often possible to achieve <em>high accuracy</em> on the training set, as in your case, what we really want is to develop models that generalize well to <em>testing data</em> (data they haven't seen before).</p>
<p>You can follow <a href="https://machinelearningmastery.com/introduction-to-regularization-to-reduce-overfitting-and-improve-generalization-error/" rel="nofollow noreferrer">these</a> steps in order to prevent overfitting.</p>
<p>Also, to improve performance, I suggest you increase the number of <em>neurons</em> in the <code>Dense</code> layer and train for more <code>epochs</code>, so the model does better when tested on new data.</p>
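<p>To make the shrinking effect of regularization concrete outside Keras, here is a NumPy-only sketch (synthetic data; all numbers are made up for illustration). An unregularized degree-9 polynomial interpolates the noise in ten points, while an L2-penalized (ridge) fit keeps the weights small — the same effect weight decay or dropout aim for in a network:</p>

```python
import numpy as np

# Ten noisy samples of an underlying linear trend
x = np.linspace(0.0, 1.0, 10)
noise = 0.3 * np.array([1, -1, 1, -1, 1, -1, 1, -1, 1, -1])
y = x + noise

# Degree-9 polynomial features (Vandermonde matrix)
X = np.vander(x, N=10, increasing=True)

# Unregularized least squares: interpolates the noise (overfits)
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Ridge regression: penalizing ||w||^2 shrinks the weights
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)

print(np.linalg.norm(w_ols), np.linalg.norm(w_ridge))
# The regularized solution has a (much) smaller weight norm
```

<p>The same trade-off applies in Keras: the less the weights are free to memorize the training noise, the better the model tends to do on unseen phrases.</p>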
| 730
|
tenserflow
|
Can we transfer model trained on one PC to another PC
|
https://stackoverflow.com/questions/55080843/can-we-transfer-model-trained-on-one-pc-to-another-pc
|
<p>I have trained a CNN model on a Mac using TensorFlow and Keras. Can I move this trained model to another PC running Windows? If yes, can we use that trained model for prediction without having TensorFlow and Keras installed on it?</p>
|
<p>You can save your trained model with the <code>save()</code> method. To load your trained model again, you can use the function provided by the library:</p>
<pre><code>from keras.models import load_model
model.save('my_model.h5') # creates a HDF5 file 'my_model.h5'
del model # deletes the existing model
# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
</code></pre>
<p>As shown in the documentation: <a href="https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model" rel="nofollow noreferrer">Documentation</a></p>
<p>As for executing your model without Keras or TensorFlow installed, the short answer is no. The long answer is yes, but you would have to use another library, or implement yourself a function that takes the trained model's weights and architecture and performs an ordinary feedforward pass to return the result.</p>
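<p>As a sketch of that "long answer", here is what such a hand-rolled feedforward could look like in plain NumPy. This is only an illustration: the layer shapes and weights below are made up, and in practice you would export the real matrices yourself (e.g. from <code>model.get_weights()</code>):</p>

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feedforward(x, layers):
    """Apply each (W, b, activation) in turn -- a plain dense forward pass."""
    for W, b, act in layers:
        x = act(x @ W + b)
    return x

# Made-up weights for a tiny 3 -> 4 -> 1 network
rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(3, 4)), np.zeros(4), relu),
    (rng.normal(size=(4, 1)), np.zeros(1), sigmoid),
]

out = feedforward(np.ones((2, 3)), layers)
# out has shape (2, 1), with sigmoid outputs strictly between 0 and 1
```

<p>This only covers dense layers; reproducing convolutions and other layer types by hand is considerably more work, which is why shipping the target machine with Keras (or a lighter runtime) is usually the practical choice.</p>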
| 731
|
tenserflow
|
Tenserflow Keras model not accepting my python generator output
|
https://stackoverflow.com/questions/64682863/tenserflow-keras-model-not-accepting-my-python-generator-output
|
<p>I am following some tutorials on setting up my first conv NN for some image classifications.</p>
<p>The tutorials load all images into memory and pass them into model.fit(). I can't do that because my data set is too large.</p>
<p>I wrote this generator to "drip feed" preprocessed images to model.fit(), but I am getting an error, and because I am a newbie I am having trouble diagnosing it.</p>
<p>These are processed only as greyscale images also.</p>
<p>Here is the generator that I made...</p>
<pre><code># need to preprocess image in batches because memory
# tdata expects list of tuples<(string) file_path, (int) class_num)>
def image_generator(tdata, batch_size):
start_from = 0;
while True:
# Slice array into batch data
batch = tdata[start_from:start_from+batch_size]
# Keep track of position
start_from += batch_size
# Create batch lists
batch_x = []
batch_y = []
# Read in each input, perform preprocessing and get labels
for img_path, class_num in batch:
# Read raw img data as np array
# Returns as shape (600, 300, 1)
img_arr = create_np_img_array(img_path)
# Normalize img data (/255)
img_arr = normalize_img_array(img_arr)
# Add to the batch x data list
batch_x.append(img_arr)
# Add to the batch y classification list
batch_y.append(class_num)
yield (batch_x, batch_y)
</code></pre>
<p>Creating an instance of the generator:</p>
<pre><code>img_gen = image_generator(training_data, 30)
</code></pre>
<p>Setting up my model like so:</p>
<pre><code># create the model
model = Sequential()
# input layer has the input_shape param which is the dimentions of the np array
model.add( Conv2D(256, (3, 3), activation='relu', input_shape = (600, 300, 1)) )
model.add( MaxPooling2D( (2,2)) )
# second hidden layer
model.add( MaxPooling2D((2, 2)) )
model.add( Conv2D(256, (3, 3), activation='relu') )
# third hidden layer
model.add( MaxPooling2D((2, 2)))
model.add( Conv2D(256, (3, 3), activation='relu') )
# forth hidden layer
model.add( Flatten() )
model.add( Dense(64, activation='relu') )
# ouput layer
model.add( Dense(2) )
model.summary()
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# pass generator
model.fit(img_gen, epochs=5)
</code></pre>
<p>Then model.fit() fails when it tries to call <code>shape</code> on an int.</p>
<pre><code>~\anaconda3\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py in _get_dynamic_shape(t)
797
798 def _get_dynamic_shape(t):
--> 799 shape = t.shape
800 # Unknown number of dimensions, `as_list` cannot be called.
801 if shape.rank is None:
AttributeError: 'int' object has no attribute 'shape'
</code></pre>
<p>Any suggestions on what I've done wrong??</p>
|
<p>Converting the outputs from the generator to numpy arrays seems to have stopped the error.</p>
<pre><code>np_x = np.array(batch_x)
np_y = np.array(batch_y)
</code></pre>
<p>It seems Keras didn't accept the classifications as a plain Python list of ints.</p>
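<p>Putting the fix in context, the generator's <code>yield</code> just needs to hand back ndarrays instead of Python lists. A minimal sketch — the image-loading step is replaced with a dummy array, since <code>create_np_img_array</code> is specific to the question's setup:</p>

```python
import numpy as np

def image_generator(tdata, batch_size):
    """Yield (x, y) batches as NumPy arrays, which model.fit() accepts."""
    start_from = 0
    while True:
        batch = tdata[start_from:start_from + batch_size]
        start_from += batch_size
        batch_x, batch_y = [], []
        for img_path, class_num in batch:
            # Stand-in for create_np_img_array + normalize_img_array:
            # a normalized greyscale image of shape (600, 300, 1)
            img_arr = np.zeros((600, 300, 1), dtype=np.float32)
            batch_x.append(img_arr)
            batch_y.append(class_num)
        # The key fix: convert the lists to ndarrays before yielding
        yield np.array(batch_x), np.array(batch_y)

gen = image_generator([("a.png", 0), ("b.png", 1)], batch_size=2)
x, y = next(gen)
# x.shape == (2, 600, 300, 1); y.shape == (2,)
```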
| 732
|
tenserflow
|
ERROR: Could not install packages due to an OSError: [WinError 5]
|
https://stackoverflow.com/questions/68133846/error-could-not-install-packages-due-to-an-oserror-winerror-5
|
<p>I was trying to install tensorflow-gpu in PyCharm (<code>pip install tensorflow-gpu</code>), but unfortunately I am getting an error message. How can I install this package in PyCharm? What is wrong here? Should I install it directly with cmd? However, I was able to install TensorFlow version 2.5.0 without any problems; only tensorflow-gpu fails to install. I am using Python version 3.7.9.</p>
|
<p>You need to run the command prompt or terminal as an administrator. This will permit you to install packages.</p>
<p>You also need to upgrade pip to the latest version: <code>python -m pip install --upgrade pip</code> in cmd or terminal.</p>
| 733
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.