Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
8,600
| 66,745,564
|
How to apply a row-wise function to a pandas dataframe and a shifted version of itself
|
<p>I have a pandas dataframe where I would like to apply a simple sign and multiply operation to each row and the row two indices back (shifted by 2). For example if we had</p>
<pre><code>row_a = np.array([0.45, -0.78, 0.92])
row_b = np.array([1.2, -0.73, -0.46])
sgn_row_a = np.sign(row_a)
sgn_row_b = np.sign(row_b)
result = sgn_row_a * sgn_row_b
result
>>> array([1., 1., -1.])
</code></pre>
<p>What I have tried</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(42)
df = pd.DataFrame(np.random.normal(0, 1, (100, 5)), columns=["a", "b", "c", "d", "e"])
def kernel(row_a, row_b):
    """Take the sign of both rows and multiply them"""
    sgn_a = np.sign(row_a)
    sgn_b = np.sign(row_b)
    return sgn_a * sgn_b

def func(data):
    """Apply 'kernel' to the dataframe row-wise, axis=1"""
    out = data.apply(lambda x: kernel(x, x.shift(2)), axis=1)
    return out
</code></pre>
<p>But then when I run the function I get the below as output which is incorrect. It seems to shift the columns rather than the rows. But when I tried different <code>axis</code> in the shift operation, I just got errors (<code>ValueError: No axis named 1 for object type Series</code>)</p>
<pre><code>out = func(df)
out
>>>
a b c d e
0 NaN NaN 1.0 -1.0 -1.0
1 NaN NaN -1.0 -1.0 1.0
2 NaN NaN -1.0 1.0 -1.0
3 NaN NaN -1.0 1.0 -1.0
4 NaN NaN 1.0 1.0 -1.0
.. .. .. ... ... ...
</code></pre>
<p>What I expect is</p>
<pre><code>out = func(df)
out
>>>
a b c d e
0 -1. 1. 1. -1. 1.
1 1. -1. 1. 1. -1.
2 -1. 1. 1. 1. 1.
3 -1. 1. 1. 1. 1.
4 -1. -1. -1. 1. -1.
.. .. .. ... ... ...
</code></pre>
<p>How can I achieve a shifted row-wise operation as I have outlined above?</p>
|
<p>It seems the simplest way to do this particular operation is</p>
<pre><code>df.apply(np.sign) * df.shift(2).apply(np.sign)
>>>
a b c d e
0 NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN
2 -1.0 1.0 1.0 -1.0 1.0
3 1.0 -1.0 1.0 1.0 -1.0
4 -1.0 1.0 1.0 1.0 1.0
.. ... ... ... ... ...
</code></pre>
<p>To shift the other way, just pass a negative value to <code>shift</code>.</p>
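<p>For example, to pair each row with the row two positions ahead instead (a minimal sketch of the same idea):</p>
<pre><code>df.apply(np.sign) * df.shift(-2).apply(np.sign)
</code></pre>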
|
python|pandas|apply
| 2
|
8,601
| 66,604,223
|
Manually find the distance between centroid and labelled data points
|
<p>I have carried out some clustering analysis on some data <code>X</code> and have arrived at both the labels <code>y</code> and the centroids <code>c</code>. Now, I'm trying to calculate the distance between <code>X</code> and <em>their assigned cluster's centroid</em> <code>c</code>. This is easy when we have a small number of points:</p>
<pre><code>import numpy as np
# 10 random points in 3D space
X = np.random.rand(10,3)
# define the number of clusters, say 3
clusters = 3
# give each point a random label
# (in the real code this is found using KMeans, for example)
y = np.asarray([np.random.randint(0,clusters) for i in range(10)]).reshape(-1,1)
# randomly assign location of centroids
# (in the real code this is found using KMeans, for example)
c = np.random.rand(clusters,3)
# calculate distances
distances = []
for i in range(len(X)):
    distances.append(np.linalg.norm(X[i] - c[y[i][0]]))
</code></pre>
<p>Unfortunately, the actual data has many more rows. Is there a way to vectorise this somehow (instead of using a <code>for loop</code>)? I can't seem to get my head around the mapping.</p>
|
<p>Thanks to numpy's <a href="https://numpy.org/doc/stable/reference/arrays.indexing.html" rel="nofollow noreferrer">array indexing</a>, you can actually turn your for loop into a one-liner and avoid explicit looping altogether:</p>
<pre><code>distances = np.linalg.norm(X- np.einsum('ijk->ik', c[y]), axis=1)
</code></pre>
<p>will do the same thing as your original for loop.</p>
<p><strong>EDIT</strong>: Thanks @Kris, I forgot the <code>axis</code> keyword, and since I didn't specify it, numpy automatically computed the norm of the entire flattened matrix, not just along the rows (axis 1). I've updated it now, and it should return an array of distances for each point. Also, einsum was suggested by @Kris for their specific application.</p>
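<p>As a quick sanity check (a small sketch assuming the random <code>X</code>, <code>y</code> and <code>c</code> defined in the question), the vectorised expression can be compared against the original loop:</p>
<pre><code>loop_distances = np.array([np.linalg.norm(X[i] - c[y[i][0]]) for i in range(len(X))])
vec_distances = np.linalg.norm(X - np.einsum('ijk->ik', c[y]), axis=1)
assert np.allclose(loop_distances, vec_distances)
</code></pre>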
|
python|numpy|cluster-analysis|k-means
| 2
|
8,602
| 66,529,964
|
pd.read_sql - Unsupported format character error (0x27)
|
<p>As above, I'm trying to use pd.read_sql to query our mysql database, and getting an error for double/single quotes.</p>
<p>When I remove the % operators from the LIKE clause (lines 84-87) the query runs, but these are needed. I know I need to format the strings but I don't know how within such a big query.</p>
<p>Here's the query:</p>
<pre><code>SELECT
s.offer_id,
s.cap_id,
vi.make,
vi.model,
vi.derivative,
i.vehicle_orders,
s.lowest_offer,
CASE
WHEN f.previous_avg = f.previous_low THEN "n/a"
ELSE FORMAT(f.previous_avg, 2)
END as previous_avg,
f.previous_low,
CASE
WHEN ( ( (s.lowest_offer - f.previous_avg) / f.previous_avg) * 100) = ( ( (s.lowest_offer - f.previous_low) / f.previous_low) * 100) THEN "n/a"
ELSE CONCAT(FORMAT( ( ( (s.lowest_offer - f.previous_avg) / f.previous_avg) * 100), 2), "%")
END as diff_avg,
CONCAT(FORMAT( ( ( (s.lowest_offer - f.previous_low) / f.previous_low) * 100), 2), "%") as diff_low,
s.broker,
CASE
WHEN s.in_stock = '1' THEN "In Stock"
ELSE "Factory Order"
END as in_stock,
CASE
WHEN s.special IS NOT NULL THEN "Already in Specials"
ELSE "n/a"
END as special
FROM
( SELECT o.id as offer_id,
o.cap_id as cap_id,
MIN(o.monthly_payment) as lowest_offer,
b.name as broker,
o.stock as in_stock,
so.id as special
FROM
offers o
INNER JOIN brands b ON ( o.brand_id = b.id )
LEFT JOIN special_offers so ON ( so.cap_id = o.cap_id )
WHERE
( o.date_modified >= DATE_ADD(NOW(), INTERVAL -1 DAY) OR o.date_created >= DATE_ADD(NOW(), INTERVAL -1 DAY) )
AND o.deposit_value = 9
AND o.term = 48
AND o.annual_mileage = 8000
AND o.finance_type = 'P'
AND o.monthly_payment > 100
GROUP BY
o.cap_id
ORDER BY
special DESC) s
INNER JOIN
( SELECT o.cap_id as cap_id,
AVG(o.monthly_payment) as previous_avg,
MIN(o.monthly_payment) as previous_low
FROM
offers o
WHERE
o.date_modified < DATE_ADD(NOW(), INTERVAL -1 DAY)
AND o.date_modified >= DATE_ADD(NOW(), INTERVAL -1 WEEK)
AND o.deposit_value = 9
AND o.term = 48
AND o.annual_mileage = 8000
AND o.finance_type = 'P'
AND o.monthly_payment > 100
GROUP BY
o.cap_id ) f ON ( s.cap_id = f.cap_id )
LEFT JOIN
( SELECT a.cap_id as cap_id,
v.manufacturer as make,
v.model as model,
v.derivative as derivative,
COUNT(*) as vehicle_orders
FROM
( SELECT o.id,
o.name as name,
o.email as email,
o.date_created as date,
SUBSTRING_INDEX(SUBSTRING(offer_serialized, LOCATE("capId", offer_serialized) +12, 10), '"', 1) as cap_id
FROM moneyshake.orders o
WHERE o.name NOT LIKE 'test%'
AND o.email NOT LIKE 'jawor%'
AND o.email NOT LIKE 'test%'
AND o.email NOT LIKE '%moneyshake%'
AND o.phone IS NOT NULL
AND o.date_created > DATE_ADD(NOW(), INTERVAL -1 MONTH)
) a JOIN moneyshake.vehicles_view v ON a.cap_id = v.id
GROUP BY
v.manufacturer,
v.model,
v.derivative,
a.cap_id) i ON ( f.cap_id = i.cap_id )
INNER JOIN
( SELECT v.id as id,
v.manufacturer as make,
v.model as model,
v.derivative as derivative
FROM moneyshake.vehicles_view v
GROUP BY v.id ) vi ON s.cap_id = vi.id
WHERE
( ( s.lowest_offer - f.previous_low ) / f.previous_low) * 100 <= -15
GROUP BY
s.cap_id
</code></pre>
<p>Thanks!</p>
|
<p>That error occurs when the DBAPI layer (e.g., mysqlclient) natively uses the "format" <a href="https://www.python.org/dev/peps/pep-0249/#paramstyle" rel="nofollow noreferrer">paramstyle</a> and the percent sign (<code>%</code>) is misinterpreted as a format character instead of a <code>LIKE</code> wildcard.</p>
<p>The fix is to wrap the SQL statement in a SQLAlchemy <code>text()</code> object. For example, this will fail:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import sqlalchemy as sa
engine = sa.create_engine("mysql+mysqldb://scott:tiger@localhost:3307/mydb")
sql = """\
SELECT * FROM million_rows
WHERE varchar_col LIKE 'record00000%'
ORDER BY id
"""
df = pd.read_sql_query(sql, engine)
</code></pre>
<p>but simply changing the <code>read_sql_query()</code> call to</p>
<pre class="lang-py prettyprint-override"><code>df = pd.read_sql_query(sa.text(sql), engine)
</code></pre>
<p>will work.</p>
|
python|pandas|sqlalchemy|pymysql
| 3
|
8,603
| 66,492,618
|
Pandas fillna based on a condition
|
<p>I'm still new to pandas, but I have a dataframe in the following format:</p>
<pre><code> d_title d_prefix d_header d_country d_subtitles d_season d_episode
0 NaN NaN ##### MOROCCO ##### Morocco NaN NaN NaN
1 title1 AR NaN NaN NaN NaN NaN
2 title2 AR NaN NaN NaN NaN NaN
3 NaN NaN ##### MOROCCO 2 ##### Morocco NaN NaN NaN
4 title3 AR NaN NaN NaN NaN NaN
5 NaN NaN ##### ALGERIA ##### Algeria NaN NaN NaN
6 title4 AR NaN NaN NaN NaN NaN
7 title5 AR NaN NaN NaN NaN NaN
8 title6 IT NaN NaN NaN NaN NaN
9 title7 PL NaN NaN NaN 1.0 1.0
10 title8 UK NaN NaN NaN NaN NaN
11 title9 UK NaN NaN NaN NaN NaN
</code></pre>
<p>and I'm trying to fill all NaN fields in the 'd_header' column using the following conditions:</p>
<ul>
<li>'d_header' column should be set only for rows belonging to the same group</li>
<li>the group should be determined by the 'd_prefix' column value of a row immediately after non-Nan 'd_header' row</li>
</ul>
<p>So in the following example:</p>
<ul>
<li>0: 'd_header' == '##### MOROCCO #####'</li>
<li>1: check 'd_prefix' and set 'd_header' column for all rows going forward to '##### MOROCCO #####' until 'd_prefix' has changed (set value to NaN) OR new 'd_header' found (start over)</li>
</ul>
<pre><code> d_title d_prefix d_header d_country d_subtitles d_season d_episode
0 NaN NaN ##### MOROCCO ##### Morocco NaN NaN NaN
1 title1 AR ##### MOROCCO ##### NaN NaN NaN NaN
2 title2 AR ##### MOROCCO ##### NaN NaN NaN NaN
3 NaN NaN ##### MOROCCO TNT ##### Morocco NaN NaN NaN
4 title3 AR ##### MOROCCO TNT ##### NaN NaN NaN NaN
5 NaN NaN ##### ALGERIA ##### Algeria NaN NaN NaN
6 title4 AR ##### ALGERIA ##### NaN NaN NaN NaN
7 title5 AR ##### ALGERIA ##### NaN NaN NaN NaN
8 title6 IT NaN NaN NaN NaN NaN
9 title7 PL NaN NaN NaN 1.0 1.0
10 title8 UK NaN NaN NaN NaN NaN
11 title9 UK NaN NaN NaN NaN NaN
</code></pre>
<p>but I'm not having any luck with this approach. Would there be a better way to achieve the same result?</p>
|
<ul>
<li><strong>d_prefix</strong> is almost the grouping key you need: <code>bfill</code> it, then <code>groupby()</code> on it</li>
<li>within each group, the fill then reduces to a simple <code>ffill</code> of <code>d_header</code></li>
</ul>
<pre><code>df = df.assign(
    d_header=df.assign(t_prefix=df.d_prefix.fillna(method="bfill"))
               .groupby("t_prefix", as_index=False)
               .apply(lambda dfa: dfa.d_header.fillna(method="ffill"))
               .reset_index(drop=True)
)
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">d_title</th>
<th style="text-align: left;">d_prefix</th>
<th style="text-align: left;">d_header</th>
<th style="text-align: left;">d_country</th>
<th style="text-align: right;">d_subtitles</th>
<th style="text-align: right;">d_season</th>
<th style="text-align: right;">d_episode</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">nan</td>
<td style="text-align: left;">nan</td>
<td style="text-align: left;">##### MOROCCO #####</td>
<td style="text-align: left;">Morocco</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">title1</td>
<td style="text-align: left;">AR</td>
<td style="text-align: left;">##### MOROCCO #####</td>
<td style="text-align: left;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">title2</td>
<td style="text-align: left;">AR</td>
<td style="text-align: left;">##### MOROCCO #####</td>
<td style="text-align: left;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: left;">nan</td>
<td style="text-align: left;">nan</td>
<td style="text-align: left;">##### MOROCCO 2 #####</td>
<td style="text-align: left;">Morocco</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: left;">title3</td>
<td style="text-align: left;">AR</td>
<td style="text-align: left;">##### MOROCCO 2 #####</td>
<td style="text-align: left;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">5</td>
<td style="text-align: left;">nan</td>
<td style="text-align: left;">nan</td>
<td style="text-align: left;">##### ALGERIA #####</td>
<td style="text-align: left;">Algeria</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">6</td>
<td style="text-align: left;">title4</td>
<td style="text-align: left;">AR</td>
<td style="text-align: left;">##### ALGERIA #####</td>
<td style="text-align: left;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">7</td>
<td style="text-align: left;">title5</td>
<td style="text-align: left;">AR</td>
<td style="text-align: left;">##### ALGERIA #####</td>
<td style="text-align: left;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">8</td>
<td style="text-align: left;">title6</td>
<td style="text-align: left;">IT</td>
<td style="text-align: left;">nan</td>
<td style="text-align: left;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">9</td>
<td style="text-align: left;">title7</td>
<td style="text-align: left;">PL</td>
<td style="text-align: left;">nan</td>
<td style="text-align: left;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">10</td>
<td style="text-align: left;">title8</td>
<td style="text-align: left;">UK</td>
<td style="text-align: left;">nan</td>
<td style="text-align: left;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">11</td>
<td style="text-align: left;">title9</td>
<td style="text-align: left;">UK</td>
<td style="text-align: left;">nan</td>
<td style="text-align: left;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
</tr>
</tbody>
</table>
</div>
|
python|pandas|dataframe|conditional-statements|nan
| 1
|
8,604
| 57,417,089
|
How can a tensor in tensorflow be sliced using elements of another array as an index?
|
<p>I'm looking for a similar function to tf.unsorted_segment_sum, but I don't want to sum the segments, I want to get every segment as a tensor.</p>
<p>So for example, I have this code:
(In reality, I have a tensor with shape (10000, 63), and the number of segments would be 2500)</p>
<pre><code> to_be_sliced = tf.constant([[0.1, 0.2, 0.3, 0.4, 0.5],
[0.3, 0.2, 0.2, 0.6, 0.3],
[0.9, 0.8, 0.7, 0.6, 0.5],
[2.0, 2.0, 2.0, 2.0, 2.0]])
indices = tf.constant([0, 2, 0, 1])
num_segments = 3
tf.unsorted_segment_sum(to_be_sliced, indices, num_segments)
</code></pre>
<p>The output would be:</p>
<pre><code>array([sum(row1+row3), row4, row2]
</code></pre>
<p>What I am looking for is 3 tensors with different shapes (maybe a list of tensors): the first containing the first and third rows of the original (shape (2, 5)), the second containing the 4th row (shape (1, 5)), and the third containing the second row, like this:</p>
<pre><code>[array([[0.1, 0.2, 0.3, 0.4, 0.5],
[0.9, 0.8, 0.7, 0.6, 0.5]]),
array([[2.0, 2.0, 2.0, 2.0, 2.0]]),
array([[0.3, 0.2, 0.2, 0.6, 0.3]])]
</code></pre>
<p>Thanks in advance!</p>
|
<p>For your case, you can do Numpy slicing in Tensorflow. So this will work:</p>
<pre><code>sliced_1 = to_be_sliced[:3, :]
# [[0.1 0.2 0.3 0.4 0.5]
#  [0.3 0.2 0.2 0.6 0.3]
#  [0.9 0.8 0.7 0.6 0.5]]
sliced_2 = to_be_sliced[3, :]
# [2. 2. 2. 2. 2.]
</code></pre>
<p>Or a more general option, you can do it in the following way:</p>
<pre><code>to_be_sliced = tf.constant([[0.1, 0.2, 0.3, 0.4, 0.5],
[0.3, 0.2, 0.2, 0.6, 0.3],
[0.9, 0.8, 0.7, 0.6, 0.5],
[2.0, 2.0, 2.0, 2.0, 2.0]])
first_tensor = tf.gather_nd(to_be_sliced, [[0], [2]])
second_tensor = tf.gather_nd(to_be_sliced, [[3]])
third_tensor = tf.gather_nd(to_be_sliced, [[1]])
concat = tf.concat([first_tensor, second_tensor, third_tensor], axis=0)
</code></pre>
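<p>If the goal is a list with one tensor per segment without spelling out the row indices by hand, <code>tf.dynamic_partition</code> does this in one call (a minimal sketch; evaluate the resulting tensors with a session or eager execution as usual):</p>
<pre><code>indices = tf.constant([0, 2, 0, 1])
num_segments = 3
parts = tf.dynamic_partition(to_be_sliced, indices, num_segments)
# parts[0] -> rows 0 and 2, parts[1] -> row 3, parts[2] -> row 1
</code></pre>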
|
python|tensorflow
| 0
|
8,605
| 57,564,307
|
Dynamically appending columns of different length while looping through an empty pandas dataframe of ncols = len(columns)
|
<p>Using for loops, I'm trying to append columns of different lengths to a pre-initialized empty dataframe. Within each iteration, I have to wrangle the data to produce my desired output, but the lengths of these outputs all differ.
I would like to keep all of the data (meaning that shorter columns will be padded with <code>nan</code> values to match the longest column). </p>
<p>However, I realized that the shape (nrows) of the empty dataframe gets determined by the first column returned by the first iterator. </p>
<p>Now, I know that I can modify the shape of the empty dataframe using the nrow count from the column with max length. However, I'm curious to know if there is a pythonic way in python/pandas to modify the length of the dataframe dynamically so that the shape of the dataframe gets determined NOT from the results of the first iterator but from whichever iterator returns the column with the maximum length. </p>
<p>Simplified Version of the Code </p>
<pre><code>column_list = ['File_A', 'File_B', 'File_C']
empty_df = pd.DataFrame(columns=range(len(column_list)))
for i in range(len(column_list)):
    # "Some Code" that returns a modified dataframe of each File
    # Trying to append the `values` column from each modified dataframe into the `empty_df`
    empty_df[i] = modified_df.values
</code></pre>
<p><strong>Wanted Dataframe</strong></p>
<pre><code>_|0 |1 |2
0|839.0 |1163.0 |730.0
1|647.0 |826.0 |878.0
2|851.0 |725.0 |730.0
3|nan |1459.0 |924.0
4|nan |651.0 |279.0
5|nan |1239.0 |nan
6|nan |373.0 |nan
</code></pre>
<p><strong>Resulting Dataframe</strong> </p>
<pre><code>_|0 |1 |2
0|839.0 |1163.0 |730.0
1|647.0 |826.0 |878.0
2|851.0 |725.0 |730.0
</code></pre>
<p>--> Note that <code>Column 1</code> and <code>Column 2</code> have been truncated to match the length of <code>Column 0</code> (which was the first output from the first iterator) </p>
<p>Thanks in advance!</p>
|
<p>Inside the loop, append the <code>Series</code> to a list. Outside the loop, use <code>pd.concat</code> to concatenate the <code>Series</code>:</p>
<pre><code>import numpy as np
import pandas as pd
column_list = ['File_A', 'File_B', 'File_C']
result = []
for i in range(len(column_list)):
    # "Some Code" that returns a modified dataframe of each File
    modified_df = pd.DataFrame({'values': np.random.randint(1, 5, size=np.random.randint(10))})
    # append the `values` column to a list
    result.append(pd.Series(modified_df['values'], name=i))
result = pd.concat(result, axis=1)
print(result)
</code></pre>
<p>prints a result such as</p>
<pre><code> 0 1 2
0 3.0 3 2.0
1 2.0 1 3.0
2 2.0 4 1.0
3 4.0 3 1.0
4 3.0 4 2.0
5 NaN 4 NaN
6 NaN 1 NaN
</code></pre>
<hr>
<ul>
<li><p>The name of the Series will become the column label in the <code>result</code> DataFrame.</p></li>
<li><p>If a DataFrame, <code>df</code>, has a column named <code>values</code>, then it must be accessed with <code>df['values']</code>, NOT <code>df.values</code>. The latter, <a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.DataFrame.values.html" rel="nofollow noreferrer"><code>df.values</code>, returns a NumPy array</a> of all the data in the DataFrame since <code>values</code> is a builtin DataFrame attribute.</p></li>
</ul>
|
python|pandas
| 2
|
8,606
| 57,574,409
|
Applying style to a pandas DataFrame row-wise
|
<p>I'm experimenting/learning Python with a data set containing customers information.</p>
<p>The DataFrame structure is the following (these are made up records):</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({'left_name' : ['James', 'Mary', 'John', 'Patricia'],
'left_age' : [30, 37, 30, 35],
'right_name' : ['Robert', 'Jennifer', 'Michael', 'Linda'],
'right_age' : [30, 31, 38, 35]})
print(df1)
left_name left_age right_name right_age
0 James 30 Robert 30
1 Mary 37 Jennifer 31
2 John 30 Michael 38
3 Patricia 35 Linda 35
</code></pre>
<p>Applying the <code>transpose</code> method to <code>df1</code>, we get the following view: </p>
<pre><code>df2 = df1.T
print(df2)
0 1 2 3
left_name James Mary John Patricia
left_age 30 37 30 35
right_name Robert Jennifer Michael Linda
right_age 30 31 38 35
</code></pre>
<p>My goal is to apply some styling to <code>df2</code>. Specifically,</p>
<ul>
<li>The <code>left_name</code> and <code>right_name</code> rows should be highlighted in yellow;</li>
<li>The <code>left_age</code> and <code>right_age</code> rows should be highlighted in blue.</li>
</ul>
<p>I did some research before posting here and I managed to highlight <strong>one</strong> subset the following way: </p>
<pre><code>df2.style.set_properties(subset = pd.IndexSlice[['left_name', 'right_name'], :], **{'background-color' : 'yellow'})
</code></pre>
<p><a href="https://i.stack.imgur.com/xE5Os.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xE5Os.png" alt="enter image description here"></a></p>
<p>The problem is that I'm unable to combine multiple styles together. If I add an additional blue color for <code>left_age</code> and <code>right_age</code> using the same method as above, I "lose" the previous style. </p>
<p>Ideally, I would like to have a function that takes <code>df2</code> as input and returns the styled DataFrame. </p>
|
<p>You can create DataFrame of styles with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.io.formats.style.Styler.apply.html" rel="nofollow noreferrer"><code>Styler.apply</code></a> and set rows by index value with <code>loc</code>:</p>
<pre><code>def highlight(x):
    c1 = 'background-color: yellow'
    c2 = 'background-color: blue'
    df1 = pd.DataFrame('', index=x.index, columns=x.columns)
    df1.loc[['left_name','right_name'], :] = c1
    df1.loc[['left_age','right_age'], :] = c2
    return df1

df1.T.style.apply(highlight, axis=None)
</code></pre>
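<p>Alternatively, sticking with the <code>set_properties</code> approach from the question, chaining two calls on the same <code>Styler</code> should keep both styles, since each call only touches its own subset (a sketch, not tested on every pandas version):</p>
<pre><code>(df1.T.style
     .set_properties(subset=pd.IndexSlice[['left_name', 'right_name'], :], **{'background-color': 'yellow'})
     .set_properties(subset=pd.IndexSlice[['left_age', 'right_age'], :], **{'background-color': 'blue'}))
</code></pre>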
|
python|pandas|dataframe|formatting|pandas-styles
| 5
|
8,607
| 57,642,019
|
PyTorch not downloading
|
<p>I go to the PyTorch website and select the following options</p>
<p>PyTorch Build: Stable (1.2)</p>
<p>Your OS: Windows</p>
<p>Package: pip</p>
<p>Language: Python 3.7</p>
<p>CUDA: None</p>
<p>(All of these are correct)</p>
<p>Then it displays a command to run</p>
<p><code>pip3 install torch==1.2.0+cpu torchvision==0.4.0+cpu -f https://download.pytorch.org/whl/torch_stable.html</code></p>
<p>I have already tried mixing around the different options, but none of them has worked.</p>
<hr>
<p>ERROR: <code>ERROR: Could not find a version that satisfies the requirement torch==1.2.0+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch==1.2.0+cpu</code></p>
<p>I tried to do pip install pytorch but pytorch doesn't support pypi</p>
|
<p>I've been in the same situation.
My problem was the Python version itself: specifically, whether it was a 32-bit or 64-bit build.</p>
<p>The Python I had installed was 32-bit.
You should check which build you have installed: in the Windows Settings app, search for "python" in the installed apps list and it will show which build you have.</p>
<p>After I installed the 64-bit version of Python, the problem was solved.</p>
<p>I hope you figure it out!</p>
<p>environment: Windows 10</p>
|
python|pip|pytorch
| 1
|
8,608
| 57,538,543
|
Why aren't some dimensions shown in the output even when according to the indexing they should be?
|
<pre><code>b = np.array([[[0, 2, 3], [10, 12, 13]], [[20, 22, 23], [110, 112, 113]]])
print(b[..., -1])
>>>[[3, 13], [23, 113]]
</code></pre>
<p>Why does this output show the first axis but not the second axis (to show the second axis, it would have to show each number in its own list)? Is Numpy trying to minimize unnecessary display of dimensions when there is only one number per each second dimension list being shown? Why doesn’t numpy replicate the dimensions of the original array exactly?</p>
|
<blockquote>
<p>Why does this output show the first axis but not the second axis (to show the second axis, it would have to show each number in its own list)?</p>
</blockquote>
<p>It does show the first and the second axis. Note that you have a 2d array here, and the first and second axis are retained. Only the third axis has "collapsed".</p>
<p>Your indexing is, for a 3d array, equivalent to:</p>
<pre><code>b[:, :, -1]
</code></pre>
<p>It thus means that you create a 2d array <em>c</em> where <em>c<sub>ij</sub> = b<sub>ij-1</sub></em>. <code>-1</code> means the last element, so <em>c<sub>ij</sub>=b<sub>ij2</sub></em>.</p>
<p><code>b</code> has as values:</p>
<pre><code>>>> b
array([[[ 0, 2, <b>3</b>],
[ 10, 12, <b>13</b>]],
[[ 20, 22, <b>23</b>],
[110, 112, <b>113</b>]]])</code></pre>
<p>So that means that our result <em>c</em> has as <em>c<sub>00</sub>=b<sub>002</sub></em> which is <code>3</code>; for <em>c<sub>01</sub>=b<sub>012</sub></em> which is <code>13</code>; for <em>c<sub>10</sub>=b<sub>102</sub></em> which is <code>23</code>; and for <em>c<sub>11</sub>=b<sub>112</sub></em>, which is <code>113</code>.</p>
<p>So the end product is:</p>
<pre><code>>>> b[:,:,-1]
array([[ 3, 13],
[ 23, 113]])
>>> b[...,-1]
array([[ 3, 13],
[ 23, 113]])
</code></pre>
<p>By specifying a single value for a given dimension, that dimension "collapses". Another sensible alternative would have been to keep a dimension of size <code>1</code>, but frequently such subscripting is done to retrieve arrays with a lower number of dimensions.</p>
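<p>If you do want to keep that size-1 dimension, index with a slice or a list instead of a scalar, for example:</p>
<pre><code>>>> b[..., -1:].shape
(2, 2, 1)
>>> b[..., [-1]].shape
(2, 2, 1)
</code></pre>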
|
numpy|multidimensional-array|indexing
| 2
|
8,609
| 57,351,939
|
Pandas groupby error: groupby() takes at least 3 arguments (2 given)
|
<p>I have the dataframe as following:</p>
<p>(cusid means the customer id; product means product id bought by the customer; count means the purchased count of this product.)</p>
<pre><code>cusid product count
1521 30 2
18984 99 1
25094 1 1
2363 36 1
3316 21 1
19249 228 1
13220 78 1
1226 79 4
1117 112 2
</code></pre>
<p>I want to calculate the average number of each product that a customer buys.
It seems I need to group by (cusid, product), then group the resulting counts by product and take the mean.
My expected output:</p>
<pre><code>product mean(count)
30
99
1
36
</code></pre>
<p>Here is my code:</p>
<pre><code>(df.groupby(['product','cusid']).mean().groupby('product')['count'].mean())
</code></pre>
<p>got the error:</p>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-43-0fac990bbd61> in <module>()
----> 1 (df.groupby(['product','cusid']).mean().groupby('product')['count'].mean())
TypeError: groupby() takes at least 3 arguments (2 given)
</code></pre>
<p>have no idea how to fix it</p>
|
<pre><code>df.groupby(['cusid', 'product']).mean().reset_index().groupby('product')['count'].mean()
</code></pre>
<p>After the first <code>groupby(['cusid', 'product']).mean()</code>, <code>product</code> ends up in the index, so <code>reset_index()</code> turns it back into a regular column before the second <code>groupby('product')</code>.</p>
<p>OUTPUT:</p>
<pre><code>product
1 1
21 1
30 2
36 1
78 1
79 4
99 1
112 2
228 1
</code></pre>
<p>python version: <code>3.7.4</code>
pandas version: <code>0.25.0</code></p>
|
python|pandas|machine-learning
| 0
|
8,610
| 43,923,012
|
TensorFlow: remember weights of previous epochs
|
<p>I am experimenting with TensorFlow. I've just posted a <a href="https://stackoverflow.com/questions/43922819/tensorflow-improve-accuracy-on-training-data">question</a> regarding an issue I am facing with it. However, I also have a perhaps more theoretical question, but one with practical consequences.</p>
<p>When training the models I find that the accuracy may vary, so it may happen that the last epoch does not show the best accuracy. For instance, on epoch N I may have an accuracy of 85% whereas on the last epoch the accuracy is 65%. I would like to predict using the weights from epoch N. </p>
<p>I was wondering whether there is a way of remembering the weight values of the epoch with the best accuracy for later use?</p>
<p>The very first and simplest approach would be:</p>
<ol>
<li>Run N epochs</li>
<li>Remember the best accuracy</li>
<li>Re-start the training until we reach an epoch that shows the same accuracy as the one stored in step 2.</li>
<li>Predict using the current weights</li>
</ol>
<p>Is there a better one?</p>
|
<p>Yes! You need to make a <a href="https://www.tensorflow.org/programmers_guide/variables" rel="nofollow noreferrer">saver and save your session periodically</a> through your training process. The pseudo-code implementation looks like:</p>
<pre class="lang-py prettyprint-override"><code>model = my_model()
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(init_op)
    for epoch in range(NUM_EPOCHS):
        for batch in range(NUM_BATCHES):
            # ... train your model ...
            if batch % VALIDATION_FREQUENCY == 0:
                # Periodically test against a validation set.
                error = sess.run(model.error, feed_dict=valid_dict)
                if error < min_error:
                    min_error = error  # store your best error so far
                    saver.save(sess, MODEL_PATH)  # save the best-performing network so far
</code></pre>
<p>Then when you want to test your model against your best-performing iteration:</p>
<pre class="lang-py prettyprint-override"><code>saver.restore(sess, MODEL_PATH)
test_error = sess.run(model.error, feed_dict=test_dict)
</code></pre>
<p>Check out <a href="https://www.tensorflow.org/programmers_guide/meta_graph" rel="nofollow noreferrer">this tutorial</a> on saving and loading metagraphs as well. I found the loading step to be a bit tricky depending on your use case.</p>
|
python|machine-learning|tensorflow|neural-network
| 2
|
8,611
| 43,544,989
|
pandas: map more than 2 columns to one column
|
<p>This is an updated version of <a href="https://stackoverflow.com/questions/43543634/pandas-map-multiple-columns-to-one-column">this question</a>, which dealt with mapping only two columns to a new column.</p>
<p>Now I have three columns that I want to map to a single new column using the same dictionary (and return 0 if there is no matching key in the dictionary).</p>
<pre><code>>> codes = {'2':1,
'31':1,
'88':9,
'99':9}
>> df[['driver_action1','driver_action2','driver_action3']].to_dict()
{'driver_action1': {0: '1',
1: '1',
2: '77',
3: '77',
4: '1',
5: '4',
6: '2',
7: '1',
8: '77',
9: '99'},
'driver_action2': {0: '4',
1: '99',
2: '99',
3: '99',
4: '1',
5: '2',
6: '2',
7: '99',
8: '99',
9: '99'},
'driver_action3': {0: '4',
1: '99',
2: '99',
3: '99',
4: '1',
5: '99',
6: '99',
7: '99',
8: '31',
9: '31'}}
</code></pre>
<p>Expected output:</p>
<pre><code> driver_action1 driver_action2 driver_action3 newcolumn
0 1 4 4 0
1 1 99 99 9
2 77 99 99 9
3 77 99 99 9
4 1 1 1 9
5 4 2 99 1
6 2 2 99 1
7 1 99 99 9
8 77 99 31 1
9 99 99 31 1
</code></pre>
<p>I am not sure how to do this with .applymap() or combine_first().</p>
|
<p>Try this:</p>
<pre><code>In [174]: df['new'] = df.stack(dropna=False).map(codes).unstack() \
...: .iloc[:, ::-1].ffill(axis=1) \
...: .iloc[:, -1].fillna(0)
...:
In [175]: df
Out[175]:
driver_action1 driver_action2 driver_action3 new
0 1 4 4 0.0
1 1 99 99 9.0
2 77 99 99 9.0
3 77 99 99 9.0
4 1 1 1 0.0
5 4 2 99 1.0
6 2 2 99 1.0
7 1 99 99 9.0
8 77 99 31 9.0
9 99 99 31 9.0
</code></pre>
<p>alternative solution:</p>
<pre><code>df['new'] = df.stack(dropna=False).map(codes).unstack().T \
.apply(lambda x: x[x.first_valid_index()]
if x.first_valid_index() else 0)
</code></pre>
<p>Explanation:</p>
<p>stack, map, unstack mapped values:</p>
<pre><code>In [188]: df.stack(dropna=False).map(codes).unstack()
Out[188]:
driver_action1 driver_action2 driver_action3
0 NaN NaN NaN
1 NaN 9.0 9.0
2 NaN 9.0 9.0
3 NaN 9.0 9.0
4 NaN NaN NaN
5 NaN 1.0 9.0
6 1.0 1.0 9.0
7 NaN 9.0 9.0
8 NaN 9.0 1.0
9 9.0 9.0 1.0
</code></pre>
<p>reverse columns order and apply forward fill along <code>columns</code> axis:</p>
<pre><code>In [190]: df.stack(dropna=False).map(codes).unstack().iloc[:, ::-1].ffill(axis=1)
Out[190]:
driver_action3 driver_action2 driver_action1
0 NaN NaN NaN
1 9.0 9.0 9.0
2 9.0 9.0 9.0
3 9.0 9.0 9.0
4 NaN NaN NaN
5 9.0 1.0 1.0
6 9.0 1.0 1.0
7 9.0 9.0 9.0
8 1.0 9.0 9.0
9 1.0 9.0 9.0
</code></pre>
<p>select last column and fill <code>NaN</code>'s with <code>0</code>:</p>
<pre><code>In [191]: df.stack(dropna=False).map(codes).unstack().iloc[:, ::-1].ffill(axis=1).iloc[:, -1].fillna(0)
Out[191]:
0 0.0
1 9.0
2 9.0
3 9.0
4 0.0
5 1.0
6 1.0
7 9.0
8 9.0
9 9.0
Name: driver_action1, dtype: float64
</code></pre>
|
python|pandas
| 1
|
8,612
| 43,921,338
|
Multiplying by pattern matching
|
<p>I have a matrix of the following format:</p>
<pre><code>matrix = np.array([[1, 2, 3, np.nan],
                   [1, np.nan, 3, 4],
                   [np.nan, 2, 3, np.nan]])
</code></pre>
<p>and coefficients I want to selectively multiply element-wise with my matrix:</p>
<pre><code>coefficients = np.array([[0.5, np.nan, 0.2, 0.3],
                         [0.3, 0.3, 0.2, np.nan],
                         [np.nan, 0.2, 0.1, np.nan]])
</code></pre>
<p>In this case, I would want the first row in <code>matrix</code> to be multiplied with the second row in <code>coefficients</code>, while the second row in <code>matrix</code> would be multiplied with the first row in <code>coefficients</code>. In short, I want to select the row in <code>coefficients</code> that matches row in <code>matrix</code> in terms of where <code>np.nan</code> values are located. </p>
<p>The location of <code>np.nan</code> values will be different for each row in <code>coefficients</code>, as they describe the coefficients for different cases of data availability.</p>
<p>Is there a quick way to do this, that doesn't require writing if-statements for all possible cases?</p>
|
<p><strong>Approach #1</strong></p>
<p>A <em>quick</em> way would be with <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow noreferrer"><code>NumPy broadcasting</code></a> -</p>
<pre><code># Mask of NaNs
mask1 = np.isnan(matrix)
mask2 = np.isnan(coefficients)
# Perform comparison between each row of mask1 against every row of mask2
# leading to a 3D array. Look for all-matching ones along the last axis.
# These are the ones that show the row matches between the two input arrays -
# matrix and coefficients. Then, we find the corresponding matching
# indices that give us the pairs of matches between those two arrays
r,c = np.nonzero((mask1[:,None] == mask2).all(-1))
# Index into arrays with those indices and perform elementwise multiplication
out = matrix[r] * coefficients[c]
</code></pre>
<p>Output for given sample data -</p>
<pre><code>In [40]: out
Out[40]:
array([[ 0.3, 0.6, 0.6, nan],
[ 0.5, nan, 0.6, 1.2],
[ nan, 0.4, 0.3, nan]])
</code></pre>
<p><strong>Approach #2</strong></p>
<p>For performance, reduce each row of NaNs mask to its decimal equivalent and then create a storing array in which we can store elements off <code>matrix</code> and then multiply into the elements off <code>coefficients</code> indexed by those decimal equivalents -</p>
<pre><code>R = 2**np.arange(matrix.shape[1])
idx1 = mask1.dot(R)
idx2 = mask2.dot(R)
A = np.empty((idx1.max()+1, matrix.shape[1]))
A[idx1] = matrix
A[idx2] *= coefficients
out = A[idx1]
</code></pre>
|
python|numpy|matrix|pattern-matching
| 2
|
8,613
| 43,541,425
|
Pandas Reordering Dataframe to Time Series
|
<p>I have a dataframe with unlabeled columns with the following structure</p>
<pre><code>0 101 100001 DT23 NaT 1900-01-01 20:00:00 DT24 1900-01-01 20:02:00
1 101 100002 DT24 1900-01-01 20:02:00 1900-01-01 20:04:00 DT23 1900-01-01 20:05:05
2 102 200001 DT23 NaT 1900-01-01 20:05:00 DT24 1900-01-01 20:07:00
3 102 200002 DT24 1900-01-01 20:07:00 1900-01-01 20:09:00 DT23 1900-01-01 20:10:05
</code></pre>
<p>I would like to shape the data to be different time series for each unique first column value. For "101" the data would be:</p>
<pre><code>1900-01-01 20:00:00 DT23
1900-01-01 20:02:00 DT24
1900-01-01 20:04:00 DT24
1900-01-01 20:05:05 DT23
</code></pre>
<p>I have tried to iterate through the column for each unique value and append to a new series, but since the column names are not the same, I end up with a dataframe rather than a time series. How would I go about doing this? Thanks</p>
|
<p>If you are successfully ending with a DataFrame, then this answer can show you how to convert a DF to a time series</p>
<p><a href="https://stackoverflow.com/questions/19914944/convert-pandas-dataframe-to-time-series">Convert Pandas dataframe to time series</a></p>
<p>Hope this helps.</p>
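<p>As a rough sketch of one way to build those per-id series (this assumes the unlabeled columns carry the default integer labels, with the id in column 0 and hypothetical (label, timestamp) pairs in columns (2, 4) and (5, 6); adjust the pairs to your real layout):</p>
<pre><code>import pandas as pd

# hypothetical column layout: 0 = id, (2, 4) and (5, 6) = (label, timestamp) pairs
pairs = [(2, 4), (5, 6)]

series_by_id = {}
for key, grp in df.groupby(0):
    parts = [grp[[t, l]].rename(columns={t: 'time', l: 'label'}) for l, t in pairs]
    long = pd.concat(parts).dropna(subset=['time']).sort_values('time')
    series_by_id[key] = long.set_index('time')['label']  # one time series per id
</code></pre>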
|
python|pandas
| 0
|
8,614
| 43,826,004
|
How to Multiply Matrix Values By a Constant if a Condition is Met?
|
<p>In Python I have a matrix and I need that same matrix returned to me, except with one rule: if there are elements in that matrix that are <0, I multiply their individual values by a constant. I am not sure how to go about doing this though.</p>
<p>Example: a=[[0, 2, 1, 4], [-2, 3, 5, 2]] and let's say my constant is -0.1; then I would get back a=[[0, 2, 1, 4], [0.2, 3, 5, 2]]</p>
|
<p>Demo:</p>
<pre><code>In [55]: a = np.random.randint(-10, 10, size=(10,10))
In [56]: a
Out[56]:
array([[ 7, 6, 0, 2, 3, -9, 2, -2, 9, -10],
[ 8, 4, -10, 5, 7, 6, 7, -3, 1, -3],
[ 5, -10, -8, 4, -2, -9, 0, 8, -1, 7],
[ 6, 7, 6, 2, -3, 3, 0, -7, -6, -4],
[ 8, 0, -7, 7, 9, -4, -5, 7, -5, -9],
[-10, -9, -6, -9, -1, 2, -6, -9, 8, -3],
[ 5, -3, -6, -5, 6, -8, -10, 7, 3, -5],
[ 9, 4, 5, 9, 2, -5, -8, 5, -1, -7],
[ -9, -7, -7, -3, -10, -7, 3, -1, 5, 3],
[ 0, -4, 9, -9, -5, -1, -8, 9, -4, -5]])
In [57]: a[a<0] *= 10
In [58]: a
Out[58]:
array([[ 7, 6, 0, 2, 3, -90, 2, -20, 9, -100],
[ 8, 4, -100, 5, 7, 6, 7, -30, 1, -30],
[ 5, -100, -80, 4, -20, -90, 0, 8, -10, 7],
[ 6, 7, 6, 2, -30, 3, 0, -70, -60, -40],
[ 8, 0, -70, 7, 9, -40, -50, 7, -50, -90],
[-100, -90, -60, -90, -10, 2, -60, -90, 8, -30],
[ 5, -30, -60, -50, 6, -80, -100, 7, 3, -50],
[ 9, 4, 5, 9, 2, -50, -80, 5, -10, -70],
[ -90, -70, -70, -30, -100, -70, 3, -10, 5, 3],
[ 0, -40, 9, -90, -50, -10, -80, 9, -40, -50]])
</code></pre>
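<p>Applied to the example from the question (a minimal sketch; the nested list is converted to a float array first so the in-place multiply does not hit an integer-casting error):</p>
<pre><code>import numpy as np

a = np.array([[0, 2, 1, 4], [-2, 3, 5, 2]], dtype=float)
a[a < 0] *= -0.1
# array([[0. , 2. , 1. , 4. ],
#        [0.2, 3. , 5. , 2. ]])
</code></pre>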
|
python|numpy
| 2
|
8,615
| 43,648,890
|
How to return a specific cell that has color style and font style for iloc[1,1] only?
|
<p>I have a dataframe and below is my color code:</p>
<pre><code>def color(val):
    if final.iloc[1,1] < final.iloc[1,0]:
        return "background-color: green"
    else:
        return "background-color: red"
</code></pre>
<p>I wish only <code>final.iloc[1,1]</code> to have a green background; when the above code is applied, my whole dataframe turns green.</p>
<p>I would also like to be able to change the font style of <code>final.iloc[1,1]</code>. Can anyone share some ideas? </p>
|
<p>You can use:</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(100)
df = pd.DataFrame(np.random.randn(5, 3), columns=list('ABC'))
print (df)
def highlight_col(x):
    # copy df to new - original data are not changed
    df = x.copy()
    # set default values to all values
    df.loc[:,:] = 'background-color: ""'
    # set by condition
    if x.iloc[1,1] < x.iloc[1,0]:
        df.iloc[1,1] = 'background-color: red'
    else:
        df.iloc[1,1] = 'background-color: green'
    return df

df.style.apply(highlight_col, axis=None)
</code></pre>
|
python|pandas|dataframe
| 4
|
8,616
| 43,800,994
|
Pulling certain dates from a dataframe in python
|
<p>I am using pandas to clean a database and I have a list of dates, all in a format like 08-Jun-2017, 12-Jun-2017 etc., within a dataframe. I would like to pull out all the rows where the date is less than 14 days from the current date. Thanks</p>
|
<p>Demo:</p>
<pre><code>In [118]: df = pd.DataFrame({'date': pd.date_range(end='2017-05-05', freq='9D', periods=20)}) \
.sample(frac=1).reset_index(drop=True)
In [119]: df
Out[119]:
date
0 2016-11-15
1 2017-03-30
2 2017-01-17
3 2017-04-17
4 2017-03-12
5 2017-02-22
6 2017-01-08
7 2017-04-26
8 2017-05-05
9 2016-12-03
10 2017-03-03
11 2016-12-21
12 2017-02-04
13 2017-04-08
14 2017-03-21
15 2016-11-24
16 2017-01-26
17 2016-12-30
18 2017-02-13
19 2016-12-12
In [120]: df.loc[df.date > pd.datetime.now() - pd.Timedelta('14 days')]
Out[120]:
date
7 2017-04-26
8 2017-05-05
</code></pre>
<hr>
<p>the same solution, but for dates (as strings):</p>
<pre><code>In [122]: df['dt_str'] = df.date.dt.strftime('%d-%b-%Y')
In [123]: df
Out[123]:
date dt_str
0 2016-11-15 15-Nov-2016
1 2017-03-30 30-Mar-2017
2 2017-01-17 17-Jan-2017
3 2017-04-17 17-Apr-2017
4 2017-03-12 12-Mar-2017
5 2017-02-22 22-Feb-2017
6 2017-01-08 08-Jan-2017
7 2017-04-26 26-Apr-2017
8 2017-05-05 05-May-2017
9 2016-12-03 03-Dec-2016
10 2017-03-03 03-Mar-2017
11 2016-12-21 21-Dec-2016
12 2017-02-04 04-Feb-2017
13 2017-04-08 08-Apr-2017
14 2017-03-21 21-Mar-2017
15 2016-11-24 24-Nov-2016
16 2017-01-26 26-Jan-2017
17 2016-12-30 30-Dec-2016
18 2017-02-13 13-Feb-2017
19 2016-12-12 12-Dec-2016
In [124]: df.loc[pd.to_datetime(df['dt_str'], errors='coerce') >= pd.datetime.now() - pd.Timedelta('14 days')]
Out[124]:
date dt_str
7 2017-04-26 26-Apr-2017
8 2017-05-05 05-May-2017
</code></pre>
|
python|date|pandas
| 3
|
8,617
| 1,377,130
|
How do you deal with missing data using numpy/scipy?
|
<p>One of the things I deal with most in data cleaning is missing values. R deals with this well using its "NA" missing data label. In python, it appears that I'll have to deal with masked arrays which seem to be a major pain to set up and don't seem to be well documented. Any suggestions on making this process easier in Python? This is becoming a deal-breaker in moving into Python for data analysis. Thanks</p>
<p><strong>Update</strong> It's obviously been a while since I've looked at the methods in the numpy.ma module. It appears that at least the basic analysis functions are available for masked arrays, and the examples provided helped me understand how to create masked arrays (thanks to the authors). I would like to see if some of the newer statistical methods in Python (being developed in this year's GSoC) incorporates this aspect, and at least does the complete case analysis.</p>
|
<p>If you are willing to consider a library, pandas (http://pandas.pydata.org/) is a library built on top of numpy which amongst many other things provides:</p>
<blockquote>
<p>Intelligent data alignment and integrated handling of missing data: gain automatic label-based alignment in computations and easily manipulate messy data into an orderly form</p>
</blockquote>
<p>I've been using it for almost one year in the financial industry where missing and badly aligned data is the norm and it really made my life easier.</p>
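<p>For example, pandas treats <code>NaN</code> as the missing-data marker and skips it in most reductions by default (a minimal sketch):</p>
<pre><code>import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])
s.mean()      # 2.0 -- NaN is skipped by default
s.dropna()    # complete-case version of the data
s.fillna(0)   # or impute a value of your choice
</code></pre>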
|
python|numpy|data-analysis
| 4
|
8,618
| 73,128,413
|
Python pandas value error for code conversion
|
<p>I am trying to convert SQL code into equivalent python code.</p>
<p><strong>My SQL code is given below:</strong></p>
<pre><code>select count(*),sum(days) into :_cnt_D_DmRP_1D, :_pd_D_DmRP_1D
from _fw_bfr_wgt
where pk=1 and type=1 and input(zipcode,8.) not in (700001:735999)
and input(zipcode,8.) not in (&cal_list.);
</code></pre>
<p>I was trying to convert this code into python.</p>
<p><strong>My python code is given below:</strong></p>
<pre><code>cnt_D_DmRP_1D,_pd_D_DmRP_1D=df.loc[(df['pk']==1) & (df['type']==1) & (df['zipcode'] not in(list(range(700001,(735999+1))))),'days'].agg(['size','sum']) & (df['zipcode'] not in(cal_list)),df.loc['days'].agg(['size','sum'])
</code></pre>
<p>Here df is the dataframe (called _fw_bfr_wgt in the SQL code) that I have already created.</p>
<p><strong>But I am getting an error:</strong></p>
<blockquote>
<p>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p>
</blockquote>
<p>Unable to resolve this issue as I am a beginner in python. Need help and suggestion to fix this error.</p>
|
<p>The problem is in <code>df['zipcode'] not in (list(range(700001,(735999+1))))</code>: you cannot use a <code>Series</code> with the <code>in</code> operator.</p>
<pre class="lang-py prettyprint-override"><code>>>> pd.Series(range(3)) in list(range(10))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "~/.local/lib/python3.10/site-packages/pandas/core/generic.py", line 1527, in __nonzero__
raise ValueError(
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>Instead you can use <code>Series.isin</code></p>
<pre class="lang-py prettyprint-override"><code>~df['zipcode'].isin(list(range(700001,(735999+1))))
</code></pre>
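<p>Putting it together, a rough pandas equivalent of the whole SQL filter could look like the sketch below; it assumes <code>zipcode</code> can be coerced to integers and that <code>cal_list</code> is a Python list of zip codes:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

zip_int = pd.to_numeric(df['zipcode'], errors='coerce')

mask = (
    (df['pk'] == 1)
    & (df['type'] == 1)
    & ~zip_int.between(700001, 735999)
    & ~zip_int.isin(cal_list)
)

cnt_D_DmRP_1D = int(mask.sum())            # count(*)
pd_D_DmRP_1D = df.loc[mask, 'days'].sum()  # sum(days)
</code></pre>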
|
python|python-3.x|pandas|dataframe
| 0
|
8,619
| 73,009,672
|
How to serialize named tuple with np array to file
|
<p>How to serialize named tuple with np array to file?</p>
<p>I know how to serialize/deserialize an np_array.</p>
<pre><code>import numpy as np
a = [
np.arange(300),
np.arange(200)
]
np.save('output.pkl', a, allow_pickle=True)
</code></pre>
<p>But for my case, I am dealing with a named tuple with a string and a NumPy array.</p>
<pre><code>from collections import namedtuple
Point = namedtuple('Address', 'city embedding')
</code></pre>
<p>So here city is string and embedding is NumPy array.</p>
<p>Not sure how to persist a list of Points to file and load it back.</p>
<p>Any help/pointer would be appreciated.</p>
|
<p>You can use <a href="https://github.com/cloudpipe/cloudpickle" rel="nofollow noreferrer"><code>cloudpickle</code></a>. Cloudpickle makes it possible to serialize Python constructs not supported by the default pickle module from the Python standard library.</p>
<pre><code>import cloudpickle
import numpy as np
from collections import namedtuple

Point = namedtuple('Address', 'city embedding')
points = [Point('CityA', np.arange(300)), Point('CityB', np.arange(200))]  # example list of Points

with open('deleteme.cloudpickle', 'wb') as f:
    cloudpickle.dump(points, f)
</code></pre>
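<p>To load the list back (mirroring the dump above):</p>
<pre><code>with open('deleteme.cloudpickle', 'rb') as f:
    points_loaded = cloudpickle.load(f)
</code></pre>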
|
python|arrays|python-3.x|numpy|pickle
| 0
|
8,620
| 72,964,800
|
What is the proper way to install TensorFlow on Apple M1 in 2022
|
<p>I am facing 4 problems when I tried to install TensorFlow on Apple M1:</p>
<ol>
<li><p><a href="https://www.anaconda.com/blog/new-release-anaconda-distribution-now-supporting-m1" rel="noreferrer">Conda has supported M1 since 2022.05.06</a>, but most of the articles I googled talk about using Miniforge, e.g. the ones below, so I feel they are all somewhat outdated.</p>
<ol>
<li><a href="https://caffeinedev.medium.com/how-to-install-tensorflow-on-m1-mac-8e9b91d93706" rel="noreferrer">How To Install TensorFlow on M1 Mac (The Easy Way)</a></li>
<li><a href="https://makeoptim.com/en/deep-learning/tensorflow-metal" rel="noreferrer">AI - Apple Silicon Mac M1 natively supports TensorFlow 2.8 GPU acceleration</a></li>
<li><a href="https://www.mrdbourke.com/setup-apple-m1-pro-and-m1-max-for-machine-learning-and-data-science/" rel="noreferrer">How to Setup TensorFlow on Apple M1 Pro and M1 Max (works for M1 too)</a></li>
<li><a href="https://betterdatascience.com/install-tensorflow-2-7-on-macbook-pro-m1-pro/" rel="noreferrer">How To Install TensorFlow 2.7 on MacBook Pro M1 Pro With Ease</a></li>
</ol>
</li>
<li><p>I used the latest conda 4.13 to set up my python environment (3.8, 3.9 and 3.10) successfully, but when I tried to install tensorflow I got the error "<strong>No matching distribution found for tensorflow</strong>" (all of them failed).</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none)
ERROR: No matching distribution found for tensorflow
</code></pre>
</li>
</ol>
<p>The answers in <a href="https://stackoverflow.com/questions/48720833/could-not-find-a-version-that-satisfies-the-requirement-tensorflow">Could not find a version that satisfies the requirement tensorflow</a> didn't help. I can't find useful information on <a href="https://www.tensorflow.org/" rel="noreferrer">https://www.tensorflow.org/</a> too, actually <a href="https://www.tensorflow.org/install" rel="noreferrer">https://www.tensorflow.org/install</a> just said <code>pip install tensorflow</code>.</p>
<ol start="3">
<li><p>I tried to run <code>pip install tensorflow-macos</code> and it succeeded.
The above "works for M1 too" article mentioned that "<strong>Apple's fork of TensorFlow is called tensorflow-macos</strong>", although I can't find much information about that. For example, <a href="https://www.tensorflow.org/" rel="noreferrer">https://www.tensorflow.org/</a> does not mention it. I also found from <a href="https://developer.apple.com/forums/thread/686926" rel="noreferrer">https://developer.apple.com/forums/thread/686926</a> that someone hit "<strong>ERROR: No matching distribution found for tensorflow-macos</strong>" (but I didn't).</p>
</li>
<li><p>All the articles I googled, including above 4 articles and this <a href="https://stackoverflow.com/questions/69215644/tensorflow-on-macos-apple-m1">Tensorflow on macOS Apple M1</a>, all say I also need to run the following 2 commands</p>
<p><code>conda install -c apple tensorflow-deps</code></p>
<p><code>pip install tensorflow-metal</code></p>
</li>
</ol>
<p>But do I really need to do that? I can't find this information on <a href="https://www.tensorflow.org/" rel="noreferrer">https://www.tensorflow.org/</a>.
What are these 2 packages <code>tensorflow-deps</code> and <code>tensorflow-metal</code> ?</p>
|
<p>Distilling <a href="https://developer.apple.com/metal/tensorflow-plugin/" rel="nofollow noreferrer">the official directions from Apple</a> (as of 13 July 2022), one would create an environment using the following YAML:</p>
<p><strong>tf-metal-arm64.yaml</strong></p>
<pre class="lang-yaml prettyprint-override"><code>name: tf-metal
channels:
  - apple
  - conda-forge
dependencies:
  - python=3.9  ## specify desired version
  - pip
  - tensorflow-deps

  ## uncomment for use with Jupyter
  ## - ipykernel

  ## PyPI packages
  - pip:
    - tensorflow-macos
    - tensorflow-metal  ## optional, but recommended
</code></pre>
<p>Edit the file to include any additional packages you need.</p>
<h2>Creating environment</h2>
<p>Before creating the environment we need to know what the base architecture is. Check this with <code>conda config --show subdir</code>.</p>
<h3>Native (<strong>osx-arm64</strong>) base</h3>
<p>If you have installed a native <strong>osx-arm64</strong> Miniforge variant (I recommend <a href="https://github.com/conda-forge/miniforge#mambaforge" rel="nofollow noreferrer">Mambaforge</a>), then you can create with:</p>
<pre class="lang-bash prettyprint-override"><code>mamba env create -n my_tf_env -f tf-metal-arm64.yaml
</code></pre>
<p><sub><strong>Note</strong>: If you don't have Mamba, then substitute <code>conda</code> for <code>mamba</code>; or install it for much faster solving: <code>conda install -n base mamba</code>.</sub></p>
<h3>Emulated (<strong>osx-64</strong>) base</h3>
<p>If you do not have a native <strong>base</strong>, then you will need to override the <code>subdir</code> setting:</p>
<pre class="lang-bash prettyprint-override"><code>## create env
CONDA_SUBDIR=osx-arm64 mamba env create -n my_tf_env -f tf-metal-arm64.yaml
## activate
mamba activate my_tf_env
## permanently set the subdir
conda config --env --set subdir osx-arm64
</code></pre>
<p>Be sure to always activate the environment before installing or updating packages.</p>
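<p>Once the environment is active, a quick way to confirm that the install picked up the Metal plugin (a minimal check, assuming the packages above installed cleanly) is:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

print(tf.__version__)
print(tf.config.list_physical_devices('GPU'))  # should list the Apple GPU if tensorflow-metal loaded
</code></pre>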
|
tensorflow|conda|apple-m1
| 4
|
8,621
| 42,817,612
|
optimized way of iterating through dataframe
|
<p>I have a pandas dataframe called Visits2 that contains 20M records. Here is a sample of records from Visits2.</p>
<pre><code>num srv_edt inpt_flag
000423733A 8/15/2016 N
001013135D 7/11/2016 N
001013135D 7/11/2016 N
001047851M 4/29/2016 N
001067291M 2/29/2016 Y
001067291M 8/3/2016 N
001067291M 8/3/2016 N
001067291M 9/4/2016 N
001070817A 5/25/2016 N
001070817A 5/25/2016 Y
001072424A 1/13/2016 N
001072424A 2/17/2016 Y
001072424A 3/21/2016 N
001072424A 3/21/2016 N
001072424A 5/10/2016 N
001072424A 6/6/2016 N
</code></pre>
<p>I'm executing the code below. It assigns <code>N</code> to inpt_any when srv_edt is the first occurrence within its num group; if inpt_flag already has the value <code>Y</code>, then it assigns <code>Y</code> instead.</p>
<p>This runs fine, but at the 20M-record volume it takes hours to run.
Could somebody please suggest a more optimized way of looping through the dataframe?</p>
<pre><code>prev_srv_edt = " "
for vv in Visits2.itertuples():
    inpt_any = 'N'
    if (prev_srv_edt != vv[1]):
        prev_srv_edt = vv[1]
        Visits2.loc[vv[0],'inpt_any'] = 'N'
    if (vv[2] == 'Y'):
        Visits2.loc[vv[0],'inpt_any'] = 'Y'
</code></pre>
<p>I did try with <code>list(zip(visit['srv_edt'],visit['inpt_flag']))</code>, but I see that <code>zip</code> also takes a lot of time to run.</p>
|
<p>IIUC you can do it this way:</p>
<pre><code>In [37]: df.loc[df.groupby('num')['srv_edt'].idxmin(), 'inpt_any'] = 'N'
In [38]: df
Out[38]:
num srv_edt inpt_flag inpt_any
0 000423733A 2016-08-15 N N
1 001013135D 2016-07-11 N N
2 001013135D 2016-07-11 N NaN
3 001047851M 2016-04-29 N N
4 001067291M 2016-02-29 Y N
5 001067291M 2016-08-03 N NaN
6 001067291M 2016-08-03 N NaN
7 001067291M 2016-09-04 N NaN
8 001070817A 2016-05-25 N N
9 001070817A 2016-05-25 Y NaN
10 001072424A 2016-01-13 N N
11 001072424A 2016-02-17 Y NaN
12 001072424A 2016-03-21 N NaN
13 001072424A 2016-03-21 N NaN
14 001072424A 2016-05-10 N NaN
15 001072424A 2016-06-06 N NaN
</code></pre>
|
python|pandas|optimization
| 1
|
8,622
| 42,913,969
|
Python: properly iterating through a dictionary of numpy arrays
|
<p>Given the following <code>numpy</code> arrays:</p>
<pre><code>import numpy
a=numpy.array([[1,1,1],[1,1,1],[1,1,1]])
b=numpy.array([[2,2,2],[2,2,2],[2,2,2]])
c=numpy.array([[3,3,3],[3,3,3],[3,3,3]])
</code></pre>
<p>and this dictionary containing them all:</p>
<pre><code>mydict={0:a,1:b,2:c}
</code></pre>
<p>What is the most efficient way of iterating through <code>mydict</code> so as to compute the average numpy array that has <code>(1+2+3)/3=2</code> as values?</p>
<p>My attempt fails as I am giving it too many values to unpack. It is also extremely inefficient as it has an <code>O(n^3)</code> time complexity:</p>
<pre><code>aver = numpy.empty([a.shape[0], a.shape[1]])
for c, v in mydict.values():
    for i in range(0, a.shape[0]):
        for j in range(0, a.shape[1]):
            aver[i][j] = mydict[c][i][j]  #<-too many values to unpack
</code></pre>
<p>The final result should be:</p>
<pre><code>In[17]: aver
Out[17]:
array([[ 2., 2., 2.],
[ 2., 2., 2.],
[ 2., 2., 2.]])
</code></pre>
<p><strong>EDIT</strong></p>
<p>I am not looking for an average value for each numpy array. I am looking for an average value <em>for each element</em> of my colleciton of numpy arrays. This is a minimal example, but the real thing I am working on has over 120,000 elements per array, and for the same position the values change from array to array. </p>
|
<p>I think you're making this harder than it needs to be. Either sum them and divide by the number of terms:</p>
<pre><code>In [42]: v = mydict.values()
In [43]: sum(v) / len(v)
Out[43]:
array([[ 2., 2., 2.],
[ 2., 2., 2.],
[ 2., 2., 2.]])
</code></pre>
<p>Or stack them into one big array -- which it sounds like is the format they probably should have been in to start with -- and take the mean over the stacked axis:</p>
<pre><code>In [44]: np.array(list(v)).mean(axis=0)
Out[44]:
array([[ 2., 2., 2.],
[ 2., 2., 2.],
[ 2., 2., 2.]])
</code></pre>
|
python|arrays|numpy|dictionary|for-loop
| 1
|
8,623
| 27,186,244
|
compute a xi-xj matrix in numpy without loops (by api calls)
|
<p>How to compute a xi-xj matrix in numpy without loops (by api calls)?</p>
<p>Here's what to start with:</p>
<pre><code>import numpy as np
x = np.random.rand(4)
xij = np.matrix([xi-xj for xj in x for xi in x]).reshape(4,4)
</code></pre>
|
<p>You can take advantage of broadcasting to subtract <code>x</code> as a column vector from <code>x</code> as a flat array and produce the matrix.</p>
<pre><code>>>> x = np.random.rand(4)
</code></pre>
<p>Then:</p>
<pre><code>>>> x - x[:,np.newaxis]
array([[ 0. , 0.89175647, 0.80930233, 0.37955823],
[-0.89175647, 0. , -0.08245415, -0.51219825],
[-0.80930233, 0.08245415, 0. , -0.4297441 ],
[-0.37955823, 0.51219825, 0.4297441 , 0. ]])
</code></pre>
<p>If you want a matrix object (and not the default array object) you could write:</p>
<pre><code>np.matrix(x - x[:,np.newaxis])
</code></pre>
|
python|numpy|matrix|vectorization
| 5
|
8,624
| 27,063,962
|
select rows from dataframe where any of the columns is higher 0.001
|
<p>I would normally write</p>
<pre><code>df[ (df.Col1>0.0001) | (df.Col2>0.0001) | (df.Col3>0.0001) ].index
</code></pre>
<p>to get the labels where the condition holds True. If I have many columns, and say I had a tuple</p>
<pre><code>cols = ('Col1', 'Col2', 'Col3')
</code></pre>
<p><code>cols</code> is a subset of df columns.</p>
<p>Is there a more succinct way of writing the above?</p>
|
<p>You can combine <a href="http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.any.html" rel="nofollow"><code>pandas.DataFrame.any</code></a> and list indexing to create a mask for use in indexing. </p>
<p>Note that <code>cols</code> has to be a list, not a tuple.</p>
<pre><code>import pandas as pd
import numpy as np
N = 10
M = 0.8
df = pd.DataFrame(data={'Col1':np.random.random(N), 'Col2':np.random.random(N),
'Col3':np.random.random(N), 'Col4':np.random.random(N)})
cols = ['Col1', 'Col2', 'Col3']
mask = (df[cols] > M).any(axis=1)
print(df[mask].index)
# Int64Index([0, 1, 4, 5, 6, 7], dtype='int64')
</code></pre>
|
python|pandas
| 1
|
8,625
| 27,296,645
|
Pandas Dataseries get values by level
|
<p>I am dealing with pandas series like the following </p>
<blockquote>
<p>x=pd.Series([1, 2, 1, 4, 2, 6, 7, 8, 1, 1], index=['a', 'b', 'a', 'c', 'b', 'd', 'e', 'f', 'g', 'g'])</p>
</blockquote>
<p>The indices are non unique, but will always map to the same value, for example 'a' always corresponds to '1' in my sample, b always maps to '2' etc. So if I want to see which values correspond to each index value I simply need to write </p>
<pre><code>x.mean(level=0)
a 1
b 2
c 4
d 6
e 7
f 8
g 1
dtype: int64
</code></pre>
<p>The difficulty arises when the values are strings, I can't call 'mean()' on strings but I would still like to return a similar list in this case. Any ideas on a good way to do that?</p>
<blockquote>
<p>x=pd.Series(['1', '2', '1', '4', '2', '6', '7', '8', '1', '1'], index=['a', 'b', 'a', 'c', 'b', 'd', 'e', 'f', 'g', 'g'])</p>
</blockquote>
|
<p>So long as your indices map directly to the values then you can simply call <code>drop_duplicates</code>:</p>
<pre><code>In [83]:
x.drop_duplicates()
Out[83]:
a 1
b 2
c 4
d 6
e 7
f 8
dtype: int64
</code></pre>
<p>example:</p>
<pre><code>In [86]:
x = pd.Series(['XX', 'hello', 'XX', '4', 'hello', '6', '7', '8'], index=['a', 'b', 'a', 'c', 'b', 'd', 'e', 'f'])
x
Out[86]:
a XX
b hello
a XX
c 4
b hello
d 6
e 7
f 8
dtype: object
In [87]:
x.drop_duplicates()
Out[87]:
a XX
b hello
c 4
d 6
e 7
f 8
dtype: object
</code></pre>
<p><strong>EDIT</strong> a roundabout method would be to reset the index so that the index values are a new column, drop duplicates and then set the index back again:</p>
<pre><code>In [100]:
x.reset_index().drop_duplicates().set_index('index')
Out[100]:
0
index
a 1
b 2
c 4
d 6
e 7
f 8
g 1
</code></pre>
|
python|pandas
| 1
|
8,626
| 25,100,046
|
Building 3D arrays in Python to replace loops for optimization
|
<p>I'm trying to better understand python optimization so this is a dummy case, but hopefully outlines my idea...</p>
<p>Say I have a function which takes two variables:</p>
<pre><code>def func(param1, param2):
return some_func(param1) + some_const*(param2/2)
</code></pre>
<p>and I have arrays for param1 and param2 (of different lengths), at which I want the function to be evaluated, (some_func is an arbitrary function of param1) e.g.</p>
<pre><code>param1 = np.array((1,2,3,4,5))
param2 = np.array((5,2,3,1,9, 9, 10))
</code></pre>
<p>I can evaluate over all parameter space by doing:</p>
<pre><code>result = []
for p in param1:
result.append(func(p, param2))
result = np.asarray(result)
</code></pre>
<p>However, loops in Python are slower than array operations. Therefore, I wonder whether there is a way to achieve a 3D array which contains the results of func for all values in both the param1 and param2 arrays?</p>
|
<p><em>Original answer for some_func(param1) x param2</em></p>
<p>Write the <code>some_func</code> in such a way that it can accept and return numpy arrays. Then use;</p>
<pre><code>numpy.outer(some_func(param1), param2)
</code></pre>
<p>This works because in your example, both <code>param1</code> and <code>param2</code> are <em>vectors</em> (1D arrays), so you can use <code>outer</code> and the result will be a 2D array, not 3D.</p>
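<p>For instance, a minimal sketch with the arrays from the question (here <code>some_func</code> is assumed to be <code>np.sin</code> purely for illustration):</p>
<pre><code>import numpy as np

param1 = np.array((1, 2, 3, 4, 5))
param2 = np.array((5, 2, 3, 1, 9, 9, 10))

# result[i, j] == np.sin(param1[i]) * param2[j]
result = np.outer(np.sin(param1), param2)
print(result.shape)  # (5, 7)
</code></pre>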
<p><em>Edit</em></p>
<p>As long as the operation you want to do is a
<a href="http://docs.scipy.org/doc/numpy/reference/ufuncs.html" rel="nofollow">universal function</a> ("ufunc"), you can use its <code>outer</code> method;</p>
<pre><code>In [1]: import numpy as np
In [2]: a = np.arange(10)
In [3]: a
Out[3]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
In [4]: b = np.arange(15)
In [5]: np.add.outer(a, b)
Out[5]:
array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
[ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16],
[ 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17],
[ 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18],
[ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[ 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20],
[ 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21],
[ 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22],
[ 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]])
</code></pre>
|
python|arrays|optimization|numpy|matrix
| 3
|
8,627
| 30,750,782
|
NumPy vs MATLAB
|
<p>I've started to use NumPy instead of MATLAB for a lot of things and for most things it appears to be much faster. I've just tried to replicate a code in Python and it is much slower though. I was wondering if someone who knows both could have a look at it and see why it is the case</p>
<p>NumPy:</p>
<pre><code>longTicker = np.empty([1,len(ticker)],dtype='U15')
genericTicker = np.empty([len(ticker)],dtype='U15')
tickerType = np.empty([len(ticker)],dtype='U10')
tickerList = np.vstack((np.empty([2,len(ticker)],dtype='U30'),np.ones([len(ticker)],dtype='U30')))
tickerListnum = 0
modelList = np.empty([2,9999],dtype='U2')
modelListnum = 0
derivativeType = np.ones(len(ticker))
for l in range(0,len(ticker)):
tickerType[l] = 'Future'
if not modCode[l] in list(modelList[1,:]):
modelList[0,modelListnum] = modelListnum + 1
modelList[1,modelListnum] = modCode[l]
modelListnum += 1
if ticker.item(l).find('3 MONTH') >= 0:
x = list(metalTicks[:,0]).index(ticker[l])
longTicker[0,l] = metalTicks[x,3]
if not longTicker[0,l] in list(tickerList[1,:]):
tickerList[0,tickerListnum] = tickerListnum + 1
tickerList[1,tickerListnum] = longTicker[0,l]
tickerList[2,tickerListnum] = 4
tickerListnum += 1
derivativeType[l] = 4
tickerType[l] = 'Future'
if ticker.item(l).find('CURNCY') >= 0:
if ticker.item(l).find('KRWUSD CURNCY'):
prices[l] = 1/float(prices.item(l))
longTicker[0,l] = ticker[l,0]
if not longTicker[0,l] in list(tickerList[1,:]):
tickerList[0,tickerListnum] = tickerListnum + 1
tickerList[1,tickerListnum] = longTicker[0,l]
tickerList[2,tickerListnum] = 2
tickerListnum += 1
derivativeType[l] = 2
tickerType[l] = 'FX'
if ticker.item(l).find('_') >= 0:
x = ticker[l] == sasTick
longTicker[0,l] = bbgTick[x]
if not longTicker[0,l] in list(tickerList[1,:]):
tickerList[0,tickerListnum] = tickerListnum + 1
tickerList[1,tickerListnum] = longTicker[0,l]
tickerList[2,tickerListnum] = 3
tickerListnum += 1
derivativeType[l] = 3
tickerType[l] = 'Option'
# need convert ticker thing
if not longTicker[0,l] in list(tickerList[1,:]):
tickerList[0,tickerListnum] = tickerListnum + 1
tickerList[1,tickerListnum] = longTicker[0,l]
tickerList[2,tickerListnum] = 1
tickerListnum += 1
</code></pre>
<p>MATLAB Code:</p>
<pre><code>longTicker = cell(size(ticker));
genericTicker = cell(size(ticker));
type = repmat({'Future'},size(ticker));
tickerList = repmat([cell(1);cell(1);{1}],1,9999);
%tickerList = cell(3,9999);
tickerListnum = 0;
modelList = cell(2,9999);
modelListnum = 0;
derivativeType = ones(size(ticker));
for j=1:length(ticker)
if isempty(find(strcmp(modCode{j},modelList(2,:)), 1))
modelListnum = modelListnum+1;
modelList{1,modelListnum}= modelListnum;
modelList(2,modelListnum)= modCode(j);
end
if ~isempty(strfind(ticker{j},'3 MONTH'))
x =strcmp(ticker{j},metalTicks(:,1));
longTicker{j} = metalTicks{x,4};
% genericTicker{j} = metalTicks{x,4};
if isempty(find(strcmp(longTicker(j),tickerList(2,:)), 1))
tickerListnum = tickerListnum+1;
tickerList{1,tickerListnum}= tickerListnum;
tickerList(2,tickerListnum)=longTicker(j);
tickerList{3,tickerListnum}=4;
end
derivativeType(j) = 4;
type{j} = 'Future';
continue;
end
if ~isempty(regexp(ticker{j},'[A-Z]{6}\sCURNCY', 'once'))
if strcmpi('KRWUSD CURNCY',ticker{j})
prices{j}=1/prices{j};
end
longTicker{j} = ticker{j};
% genericTicker{j} = ticker{j};
if isempty(find(strcmp(longTicker(j),tickerList(2,:)), 1))
tickerListnum = tickerListnum+1;
tickerList{1,tickerListnum}= tickerListnum;
tickerList(2,tickerListnum)=longTicker(j);
tickerList{3,tickerListnum}=2;
end
derivativeType(j) = 2;
type{j} = 'FX';
continue;
end
if ~isempty(regexp(ticker{j},'_', 'once'))
z = strcmp(ticker{j},sasTick);
try
longTicker(j) = bbgTick(z);
catch
keyboard; % I did this - Dave
end
% genericTicker(j) = bbgTick(z);
if isempty(find(strcmp(longTicker(j),tickerList(2,:)), 1))
tickerListnum = tickerListnum+1;
tickerList{1,tickerListnum}= tickerListnum;
tickerList(2,tickerListnum)=longTicker(j);
tickerList{3,tickerListnum}=3;
end
derivativeType(j) = 3;
type{j} = 'Option';
continue;
end
try
longTicker{j} = ConvertTicker(ticker{j},'short','long',tradeDate(j));
% genericTicker{j} = ConvertTicker(ticker{j},'short','generic',tradeDate(j));
catch
longTicker{j} = ticker{j};
% genericTicker{j} = ticker{j};
end
if isempty(find(strcmp(longTicker(j),tickerList(2,:)), 1))
tickerListnum = tickerListnum+1;
tickerList{1,tickerListnum}= tickerListnum;
tickerList(2,tickerListnum)=longTicker(j);
tickerList{3,tickerListnum}=1;
end
end
</code></pre>
<p>MATLAB appears to be faster by a factor of around 100 in this case. Are loops much slower in Python or something?</p>
|
<p>Although I can't be sure what is the primary source of the slowdown, I do notice some things that will cause a slowdown, are easy to fix, and will result in cleaner code:</p>
<ol>
<li>You do a lot of conversion from numpy arrays to lists. Type conversions are expensive, so try to avoid them whenever possible. In your case, little you do benefits from numpy. You are better off just using lists in place of 1D arrays or lists of lists in place of 2D arrays in almost all your cases. This is closer to cell arrays in MATLAB, except that they can be dynamically resized with good performance. The only possible exceptions are <code>sasTick</code>, <code>bbgTick</code>, and <code>prices</code>, with the latter two working fine either way. For the others, in cases where you add values incrementally, just create empty lists and use <code>append</code>, and in cases where you need to access an arbitrary element, pre-allocate with <code>None</code> or empty strings <code>''</code>. For <code>tickerList</code> it is probably easier to have two lists.</li>
<li>You assign a lot of integers to unicode arrays. This also involves a type conversion (integer to unicode). This also wouldn't be an issue if you used lists.</li>
<li>You use <code>foo.item(l)</code> a lot. This converts a numpy element to an ordinary python data type. Again, this is a type conversion, so don't do this if you can possibly avoid it. If you follow my suggestion <code>1</code> and use lists, you never need to do this in the current code.</li>
<li>You have <code>continue</code> statements in the MATLAB version but not in the Python version, which means you are doing computation in the Python version that you skip in the MATLAB version. I think you are better off with <code>if..elif</code>, but <code>continue</code> also works in Python.</li>
<li>You loop over <code>range(0,len(ticker))</code>, and then extract that element of ticker multiple times. You are better off just looping over <code>ticker</code> directly, by doing, for example <code>for i, iticker in enumerate(ticker):</code>. Using the <code>enumerate</code> allows you to also keep track of the index. </li>
<li>You use <code>find</code> to determine whether a substring is in a given string. It is faster, clearer, and simpler to just use <code>in</code> for that. Only use <code>find</code> if you care exactly where the substring is found, which you don't.</li>
<li>For both <code>modelListnum</code> and <code>tickerListnum</code>, you add one, assign the value to an array element, then add one and assign it back to itself, doing the same operation twice. In the MATLAB version, you increment first, then assign the already incremented version. This involves doing the same math twice as often in Python as you do in MATLAB.</li>
<li>It is quicker to pre-allocate <code>tickerType</code> to 'Future' like you do in MATLAB, which you can do by using something like <code>tickerType = ['Future']*len(ticker)</code>.</li>
<li>Since <code>tickerListnum</code> and <code>modelListnum</code> are always equal to the index, there is no reason to have those at all. Just get rid of them.</li>
<li>Since there is only ever one instance of each value in the first row of <code>tickerList</code>, it will be faster and easier to use an <code>OrderedDict</code>, or a regular <code>dict</code> if you don't care about order, where the keys are the <code>longTicker</code> value and the value is the type number.</li>
<li>If you don't care about the order of <code>modelList</code>, using a <code>set</code> will be faster.</li>
</ol>
<p>So here is a version that should be faster, assuming <code>metalTicks</code> and <code>tickerList</code> are lists of lists, <code>sasTick</code> is a numpy array, and <code>prices</code> and <code>bbgTick</code> are either lists or arrays, and assuming you care about the order of <code>modelList</code> and <code>tickerList</code>:</p>
<pre><code>from collections import OrderedDict
longTicker = [None]*len(ticker)
tickerType = ['Future']*len(ticker)
tickerList = OrderedDict()
modelList = []
derivativeType = np.ones(len(ticker))
for i, (iticker, imodCode) in enumerate(zip(ticker, modCode)):
if imodCode not in modelList:
modelList.append(imodCode)
if '3 MONTH' in iticker:
x = metalTicks[0].index(iticker)
longTicker[i] = metalTicks[3][x]
derivativeType[i] = 4
elif 'CURNCY' in iticker:
if 'KRWUSD CURNCY' in iticker:
prices[i] = 1/prices[i]
longTicker[i] = iticker
derivativeType[i] = 2
tickerType[i] = 'FX'
elif '_' in iticker:
longTicker[i] = bbgTick[iticker == sasTick]
derivativeType[i] = 3
tickerType[i] = 'Option'
tickerList[longTicker[i]] = derivativeType[i]
</code></pre>
<p>If you don't care about the order of <code>modelList</code> and <code>tickerList</code>, you can do this:</p>
<pre><code>longTicker = [None]*len(ticker)
tickerType = ['Future']*len(ticker)
tickerList = {}
modelList = set()
derivativeType = np.ones(len(ticker))
for i, (iticker, imodCode) in enumerate(zip(ticker, modCode)):
modelList.add(imodCode)
if '3 MONTH' in iticker:
x = metalTicks[0].index(iticker)
longTicker[i] = metalTicks[3][x]
derivativeType[i] = 4
elif 'CURNCY' in iticker:
if 'KRWUSD CURNCY' in iticker:
prices[i] = 1/prices[i]
longTicker[i] = iticker
derivativeType[i] = 2
tickerType[i] = 'FX'
elif '_' in iticker:
longTicker[i] = bbgTick[iticker == sasTick]
derivativeType[i] = 3
tickerType[i] = 'Option'
tickerList[longTicker[i]] = derivativeType[i]
</code></pre>
<p>Or simpler yet:</p>
<pre><code>longTicker = [None]*len(ticker)
tickerType = ['Future']*len(ticker)
derivativeType = np.ones(len(ticker))
for i, iticker in enumerate(ticker):
if '3 MONTH' in iticker:
x = metalTicks[0].index(iticker)
longTicker[i] = metalTicks[3][x]
derivativeType[i] = 4
elif 'CURNCY' in iticker:
if 'KRWUSD CURNCY' in iticker:
prices[i] = 1/prices[i]
longTicker[i] = iticker
derivativeType[i] = 2
tickerType[i] = 'FX'
elif '_' in iticker:
longTicker[i] = bbgTick[iticker == sasTick]
derivativeType[i] = 3
tickerType[i] = 'Option'
modelList = set(modCode)
tickerlist = dict(zip(longTicker, derivativeType))
</code></pre>
|
python|matlab|numpy
| 6
|
8,628
| 39,105,282
|
How to find min value of another column greater than current column Pandas
|
<p>I am sure this is an easy one, but how do I find the minimum value of a column that is greater than the value in the current column? Also, how do I find the maximum value of a column less than the value in the current column?</p>
<pre><code>from io import StringIO
import pandas as pd
text = """Order starttime endtime
1 2016-03-01 14:31:10.777 2016-03-01 14:31:10.803
1 2016-03-01 14:31:10.779 2016-03-01 14:31:10.780
1 2016-03-01 14:31:10.790 2016-03-01 14:31:10.791
1 2016-03-01 14:31:10.806 2016-03-01 14:31:10.863"""
df = pd.read_csv(StringIO(text), sep='\s{2,}', engine='python', parse_dates=[1, 2])
</code></pre>
<p>So, for example: for the endtime column, I want the minimum value of the starttime column that is greater than that value.</p>
<p>The value associated with then endtime 2016-03-01 14:31:10.803 (the first value)
would then be 2016-03-01 14:31:10.806 (the last value of startdatetime).</p>
<p>The value associated with 2016-03-01 14:31:10.780 (the second endtime) should then be 2016-03-01 14:31:10.790</p>
<p>So basically (in pseudocode)</p>
<p>df['nexttime'] = min(df['starttime'])>df['endtime']</p>
<p>Would appreciate any help .. I'm sure this is pretty easy for someone more skilled than I am</p>
|
<p>You can try something like this:</p>
<pre><code>df.endtime.apply(lambda x: min(df.starttime[df.starttime > x]) if len(df.starttime[df.starttime > x]) != 0 else np.nan)
# 0 2016-03-01 14:31:10.806
# 1 2016-03-01 14:31:10.790
# 2 2016-03-01 14:31:10.806
# 3 NaT
# Name: endtime, dtype: datetime64[ns]
</code></pre>
<p>Or slightly more efficient way:</p>
<pre><code>def findMin(x):
larger = df.starttime[df.starttime > x]
if len(larger) != 0:
return min(larger)
else:
return np.nan
df.endtime.apply(findMin)
# 0 2016-03-01 14:31:10.806
# 1 2016-03-01 14:31:10.790
# 2 2016-03-01 14:31:10.806
# 3 NaT
# Name: endtime, dtype: datetime64[ns]
</code></pre>
<p>There is probably a way to avoid the vector scan, but if the performance is not a big issue, this works.</p>
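<p>One possible way to avoid the scan (a sketch only, assuming <code>starttime</code> and <code>endtime</code> are already datetime columns) is to sort <code>starttime</code> once and use <code>np.searchsorted</code> with <code>side='right'</code>, which returns the position of the first start time strictly greater than each end time:</p>
<pre><code>import numpy as np
import pandas as pd

starts = np.sort(df['starttime'].values)
idx = np.searchsorted(starts, df['endtime'].values, side='right')
# positions past the end mean there is no later start time -> NaT
df['nexttime'] = [starts[i] if i < len(starts) else pd.NaT for i in idx]
</code></pre>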
|
python|pandas|dataframe|aggregate|min
| 1
|
8,629
| 39,141,080
|
List most common members in Pandas group?
|
<p>I have a dataframe with columns like this:</p>
<pre><code> id lead_sponsor lead_sponsor_class
02837692 Janssen Research & Development, LLC Industry
02837679 Aarhus University Hospital Other
02837666 Universidad Autonoma de Ciudad Juarez Other
02837653 Universidad Autonoma de Madrid Other
02837640 Beirut Eye Specialist Hospital Other
</code></pre>
<p>I want to find the most common lead sponsors. I can list the size of each group using:</p>
<pre><code>df.groupby(['lead_sponsor', 'lead_sponsor_class']).size()
</code></pre>
<p>which gives me this:</p>
<pre><code>lead_sponsor lead_sponsor_class
307 Hospital of PLA Other 1
3E Therapeutics Corporation Industry 1
3M Industry 4
4SC AG Industry 8
5 Santé Other 1
</code></pre>
<p>But how do I find the top 10 most common groups? If I do:</p>
<pre><code>df.groupby(['lead_sponsor', 'lead_sponsor_class']).size().sort_values(ascending=False).head(10)
</code></pre>
<p>Then I get an error:</p>
<blockquote>
<p>AttributeError: 'Series' object has no attribute 'sort_values'</p>
</blockquote>
|
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.nlargest.html" rel="nofollow"><code>Series.nlargest</code></a>:</p>
<pre><code>print (df.groupby(['lead_sponsor', 'lead_sponsor_class']).size().nlargest(10))
</code></pre>
<p>In the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.nlargest.html" rel="nofollow">docs</a>, under <strong>Notes</strong>:</p>
<blockquote>
<p>Faster than .sort_values(ascending=False).head(n) for small n relative to the size of the Series object.</p>
</blockquote>
<p>Sample:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'id': {0: 2837692, 1: 2837679, 2: 2837666, 3: 2837653, 4: 2837640},
'lead_sponsor': {0: 'a', 1: 'a', 2: 'a', 3: 's', 4: 's'},
'lead_sponsor_class': {0: 'Industry', 1: 'Other', 2: 'Other', 3: 'Other', 4: 'Other'}})
print (df)
id lead_sponsor lead_sponsor_class
0 2837692 a Industry
1 2837679 a Other
2 2837666 a Other
3 2837653 s Other
4 2837640 s Other
print (df.groupby(['lead_sponsor', 'lead_sponsor_class']).size())
lead_sponsor lead_sponsor_class
a Industry 1
Other 2
s Other 2
dtype: int64
print (df.groupby(['lead_sponsor', 'lead_sponsor_class']).size().sort_values(ascending=False).head(2))
lead_sponsor lead_sponsor_class
s Other 2
a Other 2
dtype: int64
print (df.groupby(['lead_sponsor', 'lead_sponsor_class']).size().nlargest(2))
lead_sponsor lead_sponsor_class
a Other 2
s Other 2
dtype: int64
</code></pre>
|
python|sorting|pandas|dataframe|series
| 2
|
8,630
| 39,148,628
|
Feeding dtype np.float32 to TensorFlow placeholder
|
<p>I am trying to feed an numpy ndarray of type : float32 to a TensorFlow placeholder, but it's giving me the following error:</p>
<pre><code>You must feed a value for placeholder tensor 'Placeholder' with dtype float
</code></pre>
<p>My place holders are defined as:</p>
<pre><code>n_steps = 10
n_input = 13
n_classes = 1201
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])
</code></pre>
<p>And the line it's giving me the above error is:</p>
<pre><code>sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
</code></pre>
<p>where my batch_x and batch_y are numpy ndarrays of dtype('float32'). The following are the types that I printed using pdb:</p>
<pre><code>(Pdb)batch_x.dtype
dtype('float32')
(Pdb)x.dtype
tf.float32
</code></pre>
<p>I have also tried type-casting batch_x and batch_y to tf.float32 as it seems like x is of dtype tf.float32 but running the code with type-casting:</p>
<pre><code>sess.run(optimizer, feed_dict={x: tf.to_float(batch_x), y: tf.to_float(batch_y)})
</code></pre>
<p>gives the following error:</p>
<pre><code>TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, or numpy ndarrays.
</code></pre>
<p>How should I feed the placeholders? of what type should I use?
Any help/advice will be much appreciated!</p>
|
<p>For your first problem, are you sure that <code>batch_y</code> is also <code>float32</code>? You only provide the trace of the <code>batch_x</code> type, and <code>batch_y</code> is more likely to be integer, since it appears to be a one-hot encoding of your classes.</p>
<p>For the second problem, what you are doing wrong is using <code>tf.to_float</code>, which is a tensor operation, on a regular numpy array. You should use a numpy cast instead:</p>
<pre><code>sess.run(optimizer, feed_dict={x: batch_x.astype(np.float32), y: batch_y.astype(np.float32)})
</code></pre>
|
numpy|tensorflow
| 2
|
8,631
| 39,044,434
|
Use qcut pandas for multiple valuable categorizing
|
<p>I am trying to use two values from two columns from a dataframe and perform <code>qcut</code> categorization. </p>
<p>Categorizing on a single value is quite simple, but categorizing on two variables as a pair is what I am trying to get. </p>
<p>Input: </p>
<pre><code>date,startTime,endTime,day,c_count,u_count
2004-01-05,22:00:00,23:00:00,Mon,18944,790
2004-01-05,23:00:00,00:00:00,Mon,17534,750
2004-01-06,00:00:00,01:00:00,Tue,17262,747
2004-01-06,01:00:00,02:00:00,Tue,19072,777
2004-01-06,02:00:00,03:00:00,Tue,18275,785
2004-01-06,03:00:00,04:00:00,Tue,13589,757
2004-01-06,04:00:00,05:00:00,Tue,16053,735
2004-01-06,05:00:00,06:00:00,Tue,11440,636
2004-01-06,06:00:00,07:00:00,Tue,5972,513
2004-01-06,07:00:00,08:00:00,Tue,3424,382
2004-01-06,08:00:00,09:00:00,Tue,2696,303
2004-01-06,09:00:00,10:00:00,Tue,2350,262
2004-01-06,10:00:00,11:00:00,Tue,2309,254
</code></pre>
<p>This is the code in pure Python, but I am trying to do the same in pandas. </p>
<pre><code>for row in csv.reader(inp):
if int(row[1])>(0.80*c_count) and int(row[2])>(0.80*u_count):
val='highly active'
elif int(row[1])>=(0.60*c_count) and int(row[2])<=(0.60*u_count):
val='active'
elif int(row[1])<=(0.40*c_count) and int(row[2])>=(0.40*u_count):
val='event based'
elif int(row[1])<(0.20*c_count) and int(row[2])<(0.20*u_count):
val ='situational'
else:
val= 'viewers'
</code></pre>
<p>What I am trying to find is:</p>
<ol>
<li><code>c_count</code> and <code>u_count</code> both </li>
<li>Like in the above code <code>c_count</code> vs <code>u_count</code> </li>
</ol>
|
<p>You can create a Series for each quantile group:</p>
<pre><code>q = df[['c_count', 'u_count']].apply(lambda x: pd.qcut(x, np.linspace(0, 1, 6),
labels=np.arange(5)))
q
Out:
c_count u_count
0 4 4
1 3 3
2 3 2
3 4 4
4 4 4
5 2 3
6 2 2
7 2 2
8 1 1
9 1 1
10 0 0
11 0 0
12 0 0
</code></pre>
<p>0 is for the first 20%, 1 is for 20%-40% and goes on. </p>
<p>Now the if logic works a little different here. For the else part, first populate the column:</p>
<pre><code>df['val'] = 'viewers'
</code></pre>
<p>Anything we do afterwards will overwrite the values in this column where the condition is satisfied, so operations applied later take precedence over earlier ones. From bottom to top:</p>
<pre><code>df.loc[(q['c_count'] < 1) & (q['u_count'] < 1), 'val'] = 'situational'
df.loc[(q['c_count'] < 2) & (q['u_count'] > 1), 'val'] = 'event_based'
df.loc[(q['c_count'] > 2) & (q['u_count'] < 2), 'val'] = 'active'
df.loc[(q['c_count'] > 3) & (q['u_count'] > 3), 'val'] = 'highly active'
</code></pre>
<p>The first condition checks whether both c_count and u_count are in the first 20%. If so, changes the corresponding rows at 'val' column to situational. The remaining ones work in a similar manner. You might need to adjust comparison operators a little bit (greater vs greater than or equal to).</p>
|
python|validation|csv|pandas
| 1
|
8,632
| 22,883,149
|
Can I configure Theano's x divided by 0 behavior
|
<p>I have a little problem using Theano. It seems that a <code>division by 0</code> results in <code>inf</code> not as using e.g. Numpy this results in 0 (at least the inverse function do behave like that). Take a look:</p>
<pre><code>from theano import function, sandbox, Out, shared
import theano.tensor as T
import numpy as np
reservoirSize = 7
_eye = np.eye(reservoirSize)
gpu_I = shared( np.asarray(_eye, np.float32 ) )
simply_inverse = function(
[],
Out(sandbox.cuda.basic_ops.gpu_from_host(
T.inv( gpu_I )
),
borrow=True
)
)
gpu_wOut = simply_inverse()
Wout = np.linalg.inv(_eye)
print "gpu_wOut:\n"
print np.asarray(gpu_wOut)
print "\nWout:\n"
print np.asarray(Wout)
diff_wOut = np.asarray(gpu_wOut) - Wout
diff_wOut = [ diff_wOut[0][i] if diff_wOut[0][i] > epsilon else 0 for i in range(reservoirSize)]
print "\n\nDifference of output weights: (only first row)\n"
print np.asarray(diff_wOut)
</code></pre>
<p>Results:</p>
<pre><code>gpu_wOut:
[[ 1. inf inf inf inf inf inf]
[ inf 1. inf inf inf inf inf]
[ inf inf 1. inf inf inf inf]
[ inf inf inf 1. inf inf inf]
[ inf inf inf inf 1. inf inf]
[ inf inf inf inf inf 1. inf]
[ inf inf inf inf inf inf 1.]]
Wout:
[[ 1. 0. 0. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 0. 0.]
[ 0. 0. 1. 0. 0. 0. 0.]
[ 0. 0. 0. 1. 0. 0. 0.]
[ 0. 0. 0. 0. 1. 0. 0.]
[ 0. 0. 0. 0. 0. 1. 0.]
[ 0. 0. 0. 0. 0. 0. 1.]]
Difference of output weights (only first row):
[ 0. inf inf inf inf inf inf]
</code></pre>
<p>This is a problem for some of my calculations I want to perform in my GPU and I don't want to get back the data from it to replace <code>inf</code> by <code>0</code> to continue my calculations of course since this would slow down the process considerably.</p>
|
<p><code>theano.tensor</code> calculates the <em>elementwise</em> inverse</p>
<p><code>np.linalg.inv</code> calculates the inverse <em>matrix</em></p>
<p>These are not the same thing mathematically</p>
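<p>A quick NumPy illustration of the difference (the off-diagonal zeros will trigger a divide-by-zero warning in the elementwise case):</p>
<pre><code>import numpy as np

A = np.eye(3)
elementwise = 1.0 / A              # per-element reciprocal, like T.inv -> inf off the diagonal
matrix_inverse = np.linalg.inv(A)  # matrix inverse -> the identity again
</code></pre>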
<hr>
<p>You're probably looking for the <strong>experimental</strong> <a href="http://deeplearning.net/software/theano/library/sandbox/linalg.html#theano.sandbox.linalg.ops.MatrixInverse" rel="nofollow"><code>theano.sandbox.linalg.ops.MatrixInverse</code></a></p>
|
python|numpy|theano
| 1
|
8,633
| 13,308,131
|
Retrieving array elements with an array of frequencies in NumPy
|
<p>I have an array of numbers, <code>a</code>. I have a second array, <code>b</code>, specifying how many times I want to retrieve the corresponding element in <code>a</code>. How can this be achieved? The ordering of the output is not important in this case.</p>
<pre><code>import numpy as np
a = np.arange(5)
b = np.array([1,0,3,2,0])
# desired output = [0,2,2,2,3,3]
# i.e. [a[0], a[2], a[2], a[2], a[3], a[3] ]
</code></pre>
|
<p>That's exactly what <code>np.arange(5).repeat([1,0,3,2,0])</code> does.</p>
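<p>For example:</p>
<pre><code>>>> import numpy as np
>>> np.arange(5).repeat([1, 0, 3, 2, 0])
array([0, 2, 2, 2, 3, 3])
</code></pre>
<p>More generally, <code>a.repeat(b)</code> works for any array <code>a</code> and matching array of counts <code>b</code>.</p>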
|
python|numpy
| 6
|
8,634
| 29,592,634
|
timestamp from a txt file into an array
|
<p>I have a txt file with the following structure:</p>
<pre><code>"YYYY/MM/DD HH:MM:SS.SSS val1 val2 val3 val4 val5'
</code></pre>
<p>The first line look like:</p>
<pre><code>"2015/02/18 01:05:46.004 13.737306807 100.526088432 -22.2937 2 5"
</code></pre>
<p>I am having trouble to put the time stamp into the array. The time values are used to compare data with same timestamp from different files, parse the data for a specific time interval, and plotting purposes.</p>
<p>This is what I have right now ... except the time information:</p>
<pre><code>dt=np.dtype([('lat', float), ('lon', float), ('height', float), ('Q', int), ('ns', int)])
a=np.loadtxt('tmp.pos', dt)
</code></pre>
<p>Any suggestion how to extend the <em>dt</em> to include the date and the time columns? Or is there a better way than using <em>loadtxt</em> from <em>numpy</em>?</p>
<p>An example of the file can be found here: <a href="https://www.dropbox.com/s/j69l8oeqdm73q8y/tmp.pos" rel="nofollow">https://www.dropbox.com/s/j69l8oeqdm73q8y/tmp.pos</a></p>
<p><strong>Edit 1</strong></p>
<p>It turns out that the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html" rel="nofollow">numpy.loadtxt</a> takes a parameter called <em>converters</em> that may does the job:</p>
<pre><code>a = np.loadtxt(fname='tmp.pos', converters={0: strpdate2num('%Y/%m/%d'), 1: strpdate2num('%H:%M:%S.%f')})
</code></pre>
<p>This means that the first two columns of a are 'date' and 'time' expressed as floats. To get back the time string, I can do something like this (though perhaps a bit clumsy):</p>
<pre><code>In [441]: [datetime.strptime(num2date(a[i,0]).strftime('%Y-%m-%d')+num2date(a[i,1]).strftime('%H:%M:%S.%f'), '%Y-%m-%d%H:%M:%S.%f') for i in range(len(a[:,0]))]
</code></pre>
<p>which gives:</p>
<pre><code>Out[441]: [datetime.datetime(2015, 2, 18, 1, 5, 46)]
</code></pre>
<p>However, the decimal part of the seconds is not preserved. What am I doing wrong?</p>
|
<p>If this is coming from a text file, it may be simpler to parse this as text unless you want it all to end up in a numpy array. For example:</p>
<pre><code>>>> my_line = "2015/02/18 01:05:46.004 13.737306807 100.526088432 -22.2937 2 5"
>>> datestamp, timestamp, val1, val2, val3, val4, val5 = [v.strip() for v in my_line.split()]
>>> datestamp
'2015/02/18'
>>> timestamp
'01:05:46.004'
</code></pre>
<p>So if you want to iterate over a file of these lines and obtain a native datetime object for each line:</p>
<pre><code>from datetime import datetime
with open('path_to_file', 'r') as my_file:
    for line in my_file:
        d_stamp, t_stamp, val1, val2, val3, val4, val5 = [v.strip() for v in line.split()]
        dt_obj = datetime.strptime(' '.join([d_stamp, t_stamp]), '%Y/%m/%d %H:%M:%S.%f')
</code></pre>
|
python|numpy
| 0
|
8,635
| 62,187,119
|
Proper way to extract value from DataFrame with composite index?
|
<p>I have a dataframe, call it current_data. This dataframe is generated via running statistical functions over another dataframe, current_data_raw. It has a compound index on columns "Method" and "Request.Name"</p>
<p><code>current_data = current_data_raw.groupby(['Name', 'Request.Method']).size().reset_index().set_index(['Name', 'Request.Method'])</code></p>
<p>I then run a bunch of statistical functions over <code>current_data_raw</code> adding new columns to <code>current_data</code></p>
<p>I then need to query that dataframe for specific values of columns. I would love to do something like:</p>
<p><code>val = df['Request.Name' == some_name, 'Method' = some_method]['Average']</code></p>
<p>However, this isn't working, nor are the variants I have attempted above. <code>.xs</code> is returning a series. I could grab the only row in the series but that doesn't seem proper.</p>
|
<p>If you want to select from a <code>MultiIndex</code>, it is possible to use a tuple in the order of the levels; with this approach you do not specify an index level name such as <code>'Request.Name'</code>:</p>
<pre><code>val = df.loc[(some_name, some_method), 'Average']
</code></pre>
<p>Another way is to use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.query.html" rel="nofollow noreferrer"><code>DataFrame.query</code></a>, but if the level names contain spaces or <code>.</code> it is necessary to use backticks:</p>
<pre><code>val = df.query("`Request.Name`=='some_name' & `Request.Method`=='some_method'")['Average']
</code></pre>
<p>If the level names are single words:</p>
<pre><code>val = df.query("Name=='some_name' & Method=='some_method'")['Average']
</code></pre>
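<p>As a self-contained illustration (the sample values here are made up to mirror the question):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Name': ['a', 'a', 'b'],
                   'Request.Method': ['GET', 'POST', 'GET'],
                   'Average': [1.5, 2.0, 3.2]}).set_index(['Name', 'Request.Method'])

val = df.loc[('a', 'POST'), 'Average']   # 2.0
</code></pre>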
|
python|pandas
| 1
|
8,636
| 62,116,123
|
Streamlit Panda Query Function Syntax Error When Finding Column in CSV Dataframe
|
<p>When using Streamlit to build a data interface, I am getting a syntax error. My downloaded CSV dataframe has a column 'NUMBER OF PERSONS INJURED'; after converting it into a dataframe with pandas and trying to use the query function to reference it, I'm getting errors like the ones below. I converted the text to lower case in the dataframe. I've attached the error message and a screenshot of the sample CSV. GitHub has the code and a sample CSV file. My questions are: </p>
<p><strong>1.</strong> how to fix this error? <strong>2.</strong> What's the underlying cause of it? </p>
<p>Code in Question:</p>
<pre><code>injured_people = st.slider("", 0, 19)
st.map(data.query("number of persons injured > @injured_people")[["latitude", "longitude"]].dropna(how="any"))
</code></pre>
<p><a href="https://i.stack.imgur.com/VjJeU.jpg" rel="nofollow noreferrer">Error Message</a></p>
<p><a href="https://i.stack.imgur.com/nMP0A.jpg" rel="nofollow noreferrer">csv sample shot of number of persons injured</a></p>
<p>Things I've tried:</p>
<ul>
<li><p>adding <code>''</code> to number of persons injured to convert to <code>string</code>. But
then get error about st.slider being <code>int</code> and unable to operate with
<code>></code> between str & int. </p></li>
<li><p>hacking the csv by converting number of persons injured with
underscore <code>number_of_persons_injured</code> but that throws undefined
error.</p></li>
<li><p>Converting @injured_people to a string. Yes stupid I know. String() undefined error.</p>
<blockquote>
<p>injured_people = st.slider("", 0, 19)</p>
<p>injured_people = string(injured_people)</p>
</blockquote></li>
</ul>
<p>Git File: <a href="https://github.com/petersun825/Bike_Crash_Dashboard_NYC/blob/master/app.py" rel="nofollow noreferrer">https://github.com/petersun825/Bike_Crash_Dashboard_NYC/blob/master/app.py</a></p>
|
<p>You can try editing the column name to something simpler, like <code>injured_person</code>, then restart and run the Streamlit app again.</p>
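<p>The underlying issue is that <code>DataFrame.query()</code> parses the expression like Python code, so a column name containing spaces is read as several separate, undefined names. A minimal sketch of one way around it, assuming the CSV has already been loaded into <code>data</code> with pandas: normalise the column names once, then reference the normalised name:</p>
<pre><code>import pandas as pd
import streamlit as st

# make every column name query-friendly: lower-case, spaces -> underscores
data.columns = data.columns.str.lower().str.replace(' ', '_')

injured_people = st.slider("", 0, 19)
st.map(data.query("number_of_persons_injured >= @injured_people")[["latitude", "longitude"]].dropna(how="any"))
</code></pre>
<p>On recent pandas versions you can alternatively keep the original column name and wrap it in backticks inside the query string, e.g. <code>data.query("`number of persons injured` >= @injured_people")</code>.</p>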
|
python|pandas|csv|dataframe|streamlit
| 0
|
8,637
| 62,240,625
|
AttributeError: 'int' object has no attribute 'plot' in pandas
|
<p>I tried to visualize the null values in each column but got the error: <code>AttributeError: 'int' object has no attribute 'plot'.</code></p>
<pre><code>
columns_with_null = ['ACCTAGE', 'PHONE', 'POS','INV','INVBAL','POSAMT', 'CC', 'CCBAL','HMOWN',
'CCPURC', 'INCOME', 'LORES', 'HMVAL', 'AGE','CRSCORE']
for col in columns_with_null:
print('COLUMN:', col)
print('percent of nulls:', df[col].isna().sum()/len(df))
# Viz the value counts
df[col].isna().sum()/len(df).plot(kind='barh')
plt.show()
</code></pre>
|
<p>You need to pass all columns with nulls instead of the <code>col</code> variable, and also add parentheses or use <code>div</code> for the division:</p>
<pre><code>(df[columns_with_null].isna().sum() / len(df)).plot(kind='barh')
df[columns_with_null].isna().sum().div(len(df)).plot(kind='barh')
</code></pre>
<p>If you want to plot all columns:</p>
<pre><code>(df.isna().sum() / len(df)).plot(kind='barh')
df.isna().sum().div(len(df)).plot(kind='barh')
</code></pre>
<hr>
<p>The problem with your solution is this part:</p>
<pre><code>len(df).plot(kind='barh')
</code></pre>
<p>because <code>len(df)</code> returns the length of the DataFrame as an integer, and you end up calling <code>plot</code> on that integer. Another problem is that the first part also returns an integer, because it processes only one column - <code>df[col].isna().sum()</code>.</p>
|
python|pandas
| 0
|
8,638
| 62,204,288
|
Python Pandas: Merging one column to another data frame does not return the same number of rows
|
<p>I have two data frames: first data frame (let say df1) has 389 rows with 5 columns, the second data frame (let say df2) has 10025 rows with 10 columns. I want to merge one of the columns (let say column name is 'description') to the first data frame. I was using pd.merge() command to merge column like below:</p>
<pre><code>pd.merge(df1,df2[['ID','description']],on='ID',how='left')
</code></pre>
<p>However, above command returns 22338 rows. When I searched on stackoverflow, I found one thread where it was asking to use the drop_duplicates with the second dataframe. So I changed my code like below:</p>
<pre><code>pd.merge(df1,df2[['ID','description']].drop_duplicates(),on='ID',how='left')
</code></pre>
<p>When I ran the above command it returned 751 rows. So still I am not getting the desired number of rows i.e. 389. Could anyone guide me how to fix the issue?</p>
|
<p>It looks like you have either a "<a href="https://en.wikipedia.org/wiki/One-to-many_(data_model)" rel="nofollow noreferrer">many-to-one</a>" or "<a href="https://en.wikipedia.org/wiki/Many-to-many_(data_model)" rel="nofollow noreferrer">many-to-many</a>" relationship. To eliminate this, you can do the following:</p>
<pre><code>pd.merge(
df1.drop_duplicates(subset=['ID']),
df2[['ID','description']].drop_duplicates(subset=['ID']),
on='ID',
how='left'
)
</code></pre>
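<p>To see which side introduces the extra rows before merging, a quick check of duplicate keys on each frame can help:</p>
<pre><code>print(df1['ID'].duplicated().sum())   # duplicate IDs in df1
print(df2['ID'].duplicated().sum())   # duplicate IDs in df2
</code></pre>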
|
python|python-3.x|pandas
| 1
|
8,639
| 62,091,420
|
Pandas Create multiindex pivot table with year month and day from single time column
|
<p>I have a single column with time (time since epoch if it matters) from an sql query.</p>
<pre><code>time value
1000000 10
1000001 15
1000002 20
...
</code></pre>
<p>I want to create a multiindex pivot table in Pandas automatically like so based on those values.</p>
<pre><code> Value
2018 Jan 10
Feb 15
March 20
2019 Jan 25
Feb 30
March 35
</code></pre>
<p>Is there an easy way to do this automatically for a variety of incoming time columns? And in some cases if the user specifies I would like to add day as the third multiindex level</p>
|
<p>If the column is a pandas.DateTime column, you can use the <a href="https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html#basics-dt-accessors" rel="nofollow noreferrer"><code>.dt</code> datetime accessor</a> to access attributes such as <code>df[col].dt.year</code>, <code>df[col].dt.month</code>, etc. You could assign these to new columns, then use pivot or set the index as you describe.</p>
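<p>A minimal sketch of that approach, assuming the epoch values are in seconds and that the mean per period is the aggregation you want:</p>
<pre><code>import pandas as pd

df['time'] = pd.to_datetime(df['time'], unit='s')
df['year'] = df['time'].dt.year
df['month'] = df['time'].dt.month_name()

out = df.groupby(['year', 'month'])['value'].mean().to_frame('Value')
</code></pre>
<p>Adding <code>df['time'].dt.day</code> as a third grouping key gives the optional day level.</p>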
|
python|pandas|pivot-table
| 1
|
8,640
| 51,150,199
|
Gcloud FileNotFound - ML Engine
|
<p>I'm trying to do a predict on Google Cloud ML Engine. I have the input uploaded in a bucket at Google Cloud Storage. I'm using the following flag:</p>
<pre><code>--file='gs://MyBucket/Photo/example3.jpg'
</code></pre>
<p>I've also tried: </p>
<pre><code>--file=gs://MyBucket/Photo/example3.jpg
</code></pre>
<p>In my python app I'm opening the image in this way:</p>
<pre><code>image = misc.imread(filename)
</code></pre>
<p>But the task produce the following error:</p>
<pre><code> "FileNotFoundError: [Errno 2] No such file or directory: 'gs://MyBucket/Photo/example3.jpg'"
</code></pre>
<p>I don't know if it is a problem with the Google Cloud Storage permissions or the way I'm opening the image.</p>
<p>Thanks in advance! </p>
|
<p>To read image files from Google cloud storage, use tensorflow code like this, since native Python can not read from blob stores:</p>
<pre><code>image_contents = tf.read_file(filename)
image = tf.image.decode_jpeg(image_contents, channels=3)
image = tf.image.convert_image_dtype(image, dtype=tf.float32) # 0-1
</code></pre>
|
tensorflow|google-cloud-storage|google-cloud-ml
| 0
|
8,641
| 51,139,450
|
How to use numpy to operate on tensors in tensorflow lite
|
<p>I have a tensorflow graph that attempts to split an image into three single channel images.</p>
<pre><code>input_image = tf.placeholder(name="input_image", dtype=tf.float32, shape=[512 * 512 *3])
feed_dict ={input_image:resized_image_data}
channel_image = tf.reshape(input_image, (512, 512, 3))
</code></pre>
<p>I slice the tensor, compute blur and then use numpy's dstack on the "evaluated" tensors.
This works in tensorflow but tensorflow lite fails with "Can not allocate memory" error.</p>
<pre><code>r,g,b = channel_image[:,:,0],channel_image[:,:,1],channel_image[:,:,2]
rtensor = tf.convert_to_tensor(r)
gtensor = tf.convert_to_tensor(g)
btensor = tf.convert_to_tensor(b)
rbatch = tf.expand_dims(tf.expand_dims(rtensor, axis=2), axis=0)
gbatch = tf.expand_dims(tf.expand_dims(gtensor, axis=2), axis=0)
bbatch = tf.expand_dims(tf.expand_dims(btensor, axis=2), axis=0)
rblur = tf.squeeze(blur(one_chan_kernel, strides, rbatch, 2), name="rblur")
gblur = tf.squeeze(blur(one_chan_kernel, strides, gbatch, 2), name="gblur")
bblur = tf.squeeze(blur(one_chan_kernel, strides, bbatch, 2), name="bblur")
result_np = np.dstack((rblur.eval(feed_dict=feed_dict), gblur.eval(feed_dict=feed_dict), bblur.eval(feed_dict=feed_dict)))
result = tf.expand_dims(tf.convert_to_tensor(result_np, name="result_tensor"), axis=0)
</code></pre>
<p>How can I use numpy to operate on the tensor results?
I cannot use tf.unstack and tf.stack since they are not currently implemented in TensorFlow Lite.</p>
<p>The logcat is:</p>
<blockquote>
<pre><code> --------- beginning of crash
</code></pre>
<p>07-02 11:20:36.486 6553-6553/camerafragment E/AndroidRuntime: FATAL EXCEPTION: main
Process: camerafragment, PID: 6553
java.lang.RuntimeException: Unable to resume activity {camerafragment/camerafragment.MainActivity}: java.lang.NullPointerException: Can not allocate memory for the interpreter
at android.app.ActivityThread.performResumeActivity(ActivityThread.java:3773)
at android.app.ActivityThread.handleResumeActivity(ActivityThread.java:3805)
at android.app.servertransaction.ResumeActivityItem.execute(ResumeActivityItem.java:51)
at android.app.servertransaction.TransactionExecutor.executeLifecycleState(TransactionExecutor.java:145)
at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:70)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1797)
at android.os.Handler.dispatchMessage(Handler.java:106)
at android.os.Looper.loop(Looper.java:193)
at android.app.ActivityThread.main(ActivityThread.java:6642)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:493)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:858)
Caused by: java.lang.NullPointerException: Can not allocate memory for the interpreter
at org.tensorflow.lite.NativeInterpreterWrapper.createInterpreter(Native Method)
at org.tensorflow.lite.NativeInterpreterWrapper.(NativeInterpreterWrapper.java:63)
at org.tensorflow.lite.NativeInterpreterWrapper.(NativeInterpreterWrapper.java:51)
at org.tensorflow.lite.Interpreter.(Interpreter.java:90)
at camerafragment.MainActivity.initializeInference(MainActivity.java:230)
at camerafragment.MainActivity.onResume(MainActivity.java:91)
at android.app.Instrumentation.callActivityOnResume(Instrumentation.java:1412)
at android.app.Activity.performResume(Activity.java:7287)
at android.app.ActivityThread.performResumeActivity(ActivityThread.java:3765)
at android.app.ActivityThread.handleResumeActivity(ActivityThread.java:3805)
at android.app.servertransaction.ResumeActivityItem.execute(ResumeActivityItem.java:51)
at android.app.servertransaction.TransactionExecutor.executeLifecycleState(TransactionExecutor.java:145)
at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:70)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1797)
at android.os.Handler.dispatchMessage(Handler.java:106)
at android.os.Looper.loop(Looper.java:193)
at android.app.ActivityThread.main(ActivityThread.java:6642)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:493)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:858) </p>
</blockquote>
|
<p>Until recently, stack and unstack weren't supported by TensorFlow Lite. The "cannot allocate memory" error is likely because the conversion failed. If you try again using TensorFlow's nightly build, it might work.</p>
|
tensorflow|tensorflow-lite
| 0
|
8,642
| 51,536,111
|
How to perform a two sample t test in python using any statistics library similar to R?
|
<p>I can do this in R for 2 sample T-test:</p>
<pre><code>t.test(x, y = NULL, alternative = c("two.sided", "less", "greater"), mu = 0,
paired = FALSE, var.equal = FALSE, conf.level = 0.95)
</code></pre>
<p>Is there some function in Python where I can pass this mu (difference in means) parameter to a t-test?</p>
|
<p>R has one function <code>t.test()</code> to perform the Student's T test, while Python employs more methods.</p>
<p>If you want to perform a <strong>one sample t-test</strong> with mu as the true mean μ of the population from which the data is sampled, you should use <code>scipy.stats.ttest_1samp</code> and pass the parameter through <em>popmean</em>.
Docs are <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_1samp.html" rel="nofollow noreferrer">here</a>. </p>
<p>If you want to perform a <strong>two sample t-test</strong> with mu as the difference in means, <code>statsmodels.stats.weightstats.ttest_ind</code> from the <code>statsmodels</code> module is the right function, with mu passed through <em>value</em>.
Docs are <a href="http://www.statsmodels.org/stable/generated/statsmodels.stats.weightstats.ttest_ind.html" rel="nofollow noreferrer">here</a>, as well as a <a href="https://stackoverflow.com/questions/20682795/is-there-a-python-t-test-for-difference">link</a> to a useful answer.</p>
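<p>A short sketch of both calls on randomly generated data:</p>
<pre><code>import numpy as np
from scipy import stats
from statsmodels.stats.weightstats import ttest_ind

x = np.random.normal(5.0, 1.0, 50)
y = np.random.normal(5.5, 1.0, 50)

# one sample: test whether the mean of x equals mu = 5
t, p = stats.ttest_1samp(x, popmean=5.0)

# two sample: test whether mean(x) - mean(y) equals mu = 0
t, p, dof = ttest_ind(x, y, value=0.0, alternative='two-sided', usevar='unequal')
</code></pre>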
<p>Other links you may find useful are these: </p>
<ul>
<li><a href="https://plot.ly/python/t-test/" rel="nofollow noreferrer">https://plot.ly/python/t-test/</a> - T-tests with plotly</li>
<li><a href="https://towardsdatascience.com/inferential-statistics-series-t-test-using-numpy-2718f8f9bf2f" rel="nofollow noreferrer">https://towardsdatascience.com/inferential-statistics-series-t-test-using-numpy-2718f8f9bf2f</a>
T-test using Python and Numpy</li>
<li><a href="https://pythonfordatascience.org/independent-t-test-python/" rel="nofollow noreferrer">https://pythonfordatascience.org/independent-t-test-python/</a> - T-test
| Python for Data Science</li>
</ul>
|
python|numpy|scipy|t-test
| 1
|
8,643
| 51,129,506
|
Converting numpy array to picture
|
<p>So I have got a string of characters and I am representing it by a number between 1-5 in a numpy array. Now I want to convert it to a pictorial form by first repeating the string of numbers downwards so the picture becomes broad enough to be visible (since single string will give a thin line of picture). My main problem is how do I convert the array of numbers to a picture?</p>
|
<p>This would be a minimal working example to visualize with matplotlib:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
# generate 256 by 1 vector of values 1-5
img = np.random.randint(1,6, 256)
# transpose for visualization
img = np.expand_dims(img, 1).T
# force aspect ratio
plt.imshow(img, aspect=100)
# or, alternatively use aspect='auto'
plt.show()
</code></pre>
<p>You can force the aspect ratio of the plotted figure by simply setting the <code>aspect</code> option of <code>imshow()</code></p>
|
python|python-3.x|image|numpy
| 1
|
8,644
| 51,259,166
|
pandas how to compare rows of 2 dataframes regardless of order
|
<pre><code>import pandas as pd
df1 = pd.DataFrame(index=[1,2,3,4])
df1['A'] = [1,2,5,4]
df1['B'] = [5,6,9,8]
df1['C'] = [9,10,1,12]
>>> df1
A B C
1 1 5 9
2 2 6 10
3 5 9 1
4 4 8 12
</code></pre>
<p>I want to compare rows of df1 and get a result of row1(1,5,9) == row3(5,9,1).</p>
<p>That is, I only care about the items contained in each row and ignore their order. </p>
|
<p>I think you need to sort each row with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.sort.html" rel="nofollow noreferrer"><code>np.sort</code></a>:</p>
<pre><code>df2 = pd.DataFrame(np.sort(df1.values, axis=1), index=df1.index, columns=df1.columns)
print (df2)
A B C
1 1 5 9
2 2 6 10
3 1 5 9
4 4 8 12
</code></pre>
<p>And then remove duplicates by inverted <code>(~)</code> boolean mask created by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.duplicated.html" rel="nofollow noreferrer"><code>duplicated</code></a>:</p>
<pre><code>df2 = pd.DataFrame(np.sort(df1.values, axis=1), index=df1.index)
print (df2)
0 1 2
1 1 5 9
2 2 6 10
3 1 5 9
4 4 8 12
df1 = df1[~df2.duplicated()]
print (df1)
A B C
1 1 5 9
2 2 6 10
4 4 8 12
</code></pre>
|
python|pandas
| 2
|
8,645
| 48,371,188
|
Django - how to make a complex math annotation (k Nearest Neighbors)
|
<p>I have this model:</p>
<pre><code>class Image(models.Model):
title = models.CharField(max_length=200)
image = models.ImageField(upload_to='img/')
signature = models.TextField(null = True)
</code></pre>
<p>The signature is a numpy monodimensional vector encoded in JSON. In order to make my query, I have to decode each object's signature into an nparray, take the dot product between each object's signature and a given vector, then annotate it as a float field (named "score") beside each row. Lastly I have to order from max to min.</p>
<p>I tried this in view.py</p>
<pre><code>def image_sorted(request):
query_signature = extract_feat(settings.MEDIA_ROOT + "/cache" + "/003_ant_image_0003.jpg") # a NParray object
image_list = Image.objects.annotate(score=np.dot(
JSONVectConverter.json_to_vect(F('signature')), query_signature.T
).astype(float)).order_by('score') #JSONVectConverter is a class of mine
return render(request, 'images/sorted.html', {'image_sorted': image_list})
</code></pre>
<p>of course it doesn't work. I think "F()" operator is out of scope...</p>
<p>If you're wondering, I'm writing an image retrieval webapp for my university thesis. </p>
<p>Thank you.</p>
<p>EDIT:
I found <a href="https://stackoverflow.com/questions/43748805/django-dot-product-with-postgres-arrayfields">this</a>, which is quite the same problem (they use Postgres instead of MySQL). </p>
<p>EDIT2: I just remembered the last solution I adopted! First I pull every vector out of the DB and keep it in RAM, then I do some simple computation to find the K-Nearest Neighbors. Then I retrieve the respective images from the DB using their index (primary key). So I decouple this task from the Django ORM. Here's the code (from the REST API)</p>
<pre><code>def query_over_db(query_signature, page):
query_signature = np.array(query_signature)
t0 = time.time()
descriptor_matrix = cache.get('descriptor_matrix')
id_vector = cache.get('id_vector')
if not descriptor_matrix:
id_vector = []
descriptor_matrix = []
images_dict = Image.objects.all().values('id', 'signature')
for image in images_dict:
s = image['signature']
descriptor = np.array(s)
descriptor_matrix.append(descriptor)
id_vector.append(image['id'])
cache.set('id_vector', id_vector)
cache.set('descriptor_matrix', descriptor_matrix)
t1 = time.time()
print("time to pull out the descriptors : " + str(t1 - t0))
t1 = time.time()
#result = np.abs(np.dot(descriptor_matrix, query_signature.T))
#result = np.sum((descriptor_matrix - query_signature)**2, axis=1)
result = ne.evaluate('sum((descriptor_matrix - query_signature)**2, axis=1)')
t2 = time.time()
print("time to calculate similarity: " + str(t2 - t1))
perm = np.argsort(result)[(page - 1) * 30:page * 30]
print(perm.shape)
print(len(id_vector))
perm_id = np.array(id_vector)[perm]
print(len(perm_id))
print("printing sort")
print(np.sort(result)[0])
t4 = time.time()
print("time to order the result: " + str(t4 - t2))
qs = Image.objects.defer('signature').filter(id__in=perm_id.tolist())
qs_new = []
for i in range(len(perm_id)):
qs_new.append(qs.get(id=perm_id[i]))
t3 = time.time()
print("time to get the results from the DB : " + str(t3 - t2))
print("total time : " + str(t3 - t0))
print(result[perm])
return qs_new
</code></pre>
|
<p>So that's the final working code:</p>
<pre><code>def image_sorted(request):
query_signature = extract_feat(settings.MEDIA_ROOT + "/cache" + "/001_accordion_image_0001.jpg") # a NParray object
#query_signature = extract_feat(settings.MEDIA_ROOT + "/cache" + "/003_ant_image_0003.jpg") # a NParray object
value_dict = {}
for image in Image.objects.all():
S = image.signature
value_dict[image.signature] = np.dot(
JSONVectConverter.json_to_vect(S),
query_signature.T
).astype(float)
whens = [
When(signature=k, then=v) for k, v in value_dict.items()
]
qs = Image.objects.all().annotate(
score=Case(
*whens,
default=0,
output_field=FloatField()
)
).order_by('-score')
for image in qs:
print(image.score)
return render(request, 'images/sorted.html', {'image_sorted': qs})
</code></pre>
<p>Thanks to Omar for helping me! Of course I'm still here if there are finer solutions.</p>
|
python|django|numpy|math|data-retrieval
| 0
|
8,646
| 48,017,713
|
Boolean Comparison across multiple dataframes
|
<p>I have an issue where I want to compare values across multiple dataframes. Here is a snippet example:</p>
<pre><code>data0 = [[1,'01-01'],[2,'01-02']]
data1 = [[11,'02-30'],[12,'02-25']]
data2 = [[8,'02-30'],[22,'02-25']]
data3 = [[7,'02-30'],[5,'02-25']]
df0 = pd.DataFrame(data0,columns=['Data',"date"])
df1 = pd.DataFrame(data1,columns=['Data',"date"])
df2 = pd.DataFrame(data2,columns=['Data',"date"])
df3 = pd.DataFrame(data3,columns=['Data',"date"])
result=(df0['Data']| df1['Data'])>(df2['Data'] | df3['Data'])
</code></pre>
<p>What I would like to do, as I hope can be seen, is: if a value in <code>df0</code> <code>rowX</code> or <code>df1</code> <code>rowX</code> is greater than <code>df2</code> <code>rowX</code> or <code>df3</code> <code>rowX</code>, return <code>True</code>, else <code>False</code>. In the code above, 11 in <code>df1</code> is greater than both 8 and 7 (df2 and df3 respectively) so the result should be True, and for the second row neither 2 nor 12 is greater than 22 (df2) so it should be False. However, result gives me</p>
<pre><code>False,False
</code></pre>
<p>instead of</p>
<pre><code>True,False
</code></pre>
<p>any thoughts or help?</p>
|
<h2>Problem</h2>
<p>For your data:</p>
<pre><code>>>> df0['Data']
0 1
1 2
Name: Data, dtype: int64
>>> df1['Data']
0 11
1 12
Name: Data, dtype: int64
</code></pre>
<p>you are doing a <em>bitwise or</em> with <code>|</code>:</p>
<pre><code>>>> df0['Data']| df1['Data']
0 11
1 14
Name: Data, dtype: int64
>>> df2['Data']| df3['Data']
0 15
1 23
Name: Data, dtype: int64
</code></pre>
<p>Do this for the single numbers:</p>
<pre><code>>>> 1 | 11
11
>>> 2 | 12
14
</code></pre>
<p>This is not what you want.</p>
<h2>Solution</h2>
<p>You can use <code>np.maximum</code> for find the biggest values from each series:</p>
<pre><code>>>> np.maximum(df0['Data'], df1['Data']) > np.maximum(df2['Data'], df3['Data'])
0 True
1 False
Name: Data, dtype: bool
</code></pre>
|
python|python-3.x|pandas|dataframe
| 2
|
8,647
| 48,281,285
|
Selective parse with BeautifulSoup
|
<p>I want to parse data from a drug website. This parse needs to be selective, and this is the code I used:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
def get_details(url):
    print('details:', url)
    # get subpage
    r = requests.get(url)
    soup = BeautifulSoup(r.text, "lxml")
    # get data on subpage
    dts = soup.findAll('dt')
    dds = soup.findAll('dd')
    # display details
    for dt, dd in zip(dts, dds):
        print(dt.text)
        print(dd.text)
        print('---')
    print('---------------------------')

def drug_data():
    url = 'https://www.drugbank.ca/drugs/'
    while url:
        print(url)
        r = requests.get(url)
        soup = BeautifulSoup(r.text, "lxml")
        # get links to subpages
        links = soup.select('strong a')
        for link in links:
            # execute function to get subpage
            get_details('https://www.drugbank.ca' + link['href'])
        # next page url
        url = soup.findAll('a', {'class': 'page-link', 'rel': 'next'})
        print(url)
        if url:
            url = 'https://www.drugbank.ca' + url[0].get('href')
        else:
            break

drug_data()
</code></pre>
<p>This is working well. But what about a deeper and more selective parse? Let's say for this drug: <a href="https://www.drugbank.ca/drugs/DB01614" rel="nofollow noreferrer">https://www.drugbank.ca/drugs/DB01614</a>. When I parse "PATENT" using my code, it concatenates all the information of "PATENT" (represented as a sub-table) into one paragraph.</p>
<p>Ideally, I would like to parse PATENTS but extract only the "patent number", "approved" date and the country (represented by a flag) into separate columns.
Some help?</p>
<p>Here is the patent screen shot:
<a href="https://i.stack.imgur.com/hsmCr.jpg" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>If you are looking for <code>Accession Number</code> and <code>Groups</code>, you can do the following: </p>
<pre><code>def get_details(url):
    print('Details:', url)
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    accession_dt = soup.find('dt', text='Accession Number')
    accession_number = accession_dt.nextSibling.string
    groups_dt = soup.find('dt', text='Groups')
    groups = groups_dt.nextSibling.string
    print('Accession number: ' + accession_number)
    print('Groups: ' + groups)
</code></pre>
<p>For the url that you provided, the output is as follows: </p>
<pre><code>>>> get_details('https://www.drugbank.ca/drugs/DB01614')
Details: https://www.drugbank.ca/drugs/DB01614
Accession number: DB01614
Groups: Approved, Vet Approved
</code></pre>
<hr>
<p>If you want to generalize this, you can define a function that returns the text of the key that you pass as a parameter: </p>
<pre><code>def get_value(soup, key):
    key_dt = soup.find('dt', text=key)
    return key_dt.nextSibling.string
</code></pre>
<p>To use this function, you can do this: </p>
<pre><code>def get_details(url):
    print('Details:', url)
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    accession_number = get_value(soup, 'Accession Number')
    groups = get_value(soup, 'Groups')
    print('Accession number: ' + accession_number)
    print('Groups: ' + groups)
</code></pre>
<p>Which gives the same output as shown above.</p>
<hr>
<p><strong>EDIT: The answer to the question</strong></p>
<p>This will give directly what you wanted.</p>
<pre><code>def get_details(url):
    print('Details:', url)
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    patents = soup.find('dt', text='Patents').nextSibling
    if patents.string == 'Not Available':
        print('Patent: Not Available')
    else:
        for i, row in enumerate(patents.find('tbody').findAll('tr')):
            print('\nPatent entry %d:' % (i+1))
            patent_number = row.find('a').text
            patent_approved = row.findAll('td')[2].text
            patent_country = row.find('img')['alt']
            print('Patent number: ' + patent_number)
            print('Approved: ' + patent_approved)
            print('Country: ' + patent_country)
</code></pre>
<p>For the drug: <a href="https://www.drugbank.ca/drugs/DB00639" rel="nofollow noreferrer">https://www.drugbank.ca/drugs/DB00639</a>, the output is </p>
<pre><code>Details: https://www.drugbank.ca/drugs/DB00639
Patent entry 1:
Patent number: US5266329
Approved: 1993-11-30
Country: Us
Patent entry 2:
Patent number: US5993856
Approved: 1997-11-17
Country: Us
</code></pre>
|
python|python-2.7|pandas|parsing|beautifulsoup
| 1
|
8,648
| 48,348,882
|
Faster way to remove punctuations and special characters in pandas dataframe column
|
<p>I'm using the code below to remove special characters and punctuation from a column in a pandas dataframe, but this approach with re.sub is not time efficient. Are there other options I could try for better time efficiency? Or is the way I'm removing special characters and writing them back to the column, row by row, causing the major computational burden? </p>
<pre><code>for n, s in data['text'].iteritems():
    data.loc[n, 'text'] = re.sub(f'([{string.punctuation}“”¨«»®´·º½¾¿¡§£₤‘’])', '', s)
</code></pre>
|
<p>One way would be to keep only alphanumeric. Consider this dataframe</p>
<pre><code>df=pd.DataFrame({'Text':['#^#346fetvx@!.,;:', 'fhfgd54@!#><?']})
Text
0 #^#346fetvx@!.,;:
1 fhfgd54@!#><?
</code></pre>
<p>You can use </p>
<pre><code>df['Text'] = df['Text'].str.extract('(\w+)', expand = False)
Text
0 346fetvx
1 fhfgd54
</code></pre>
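<p>If instead the goal is to strip punctuation everywhere in the string (rather than keep only the first alphanumeric run), a vectorised <code>str.replace</code> is another option. This is just a sketch reusing the example <code>df</code> above; the extra character set is an example and should be extended to whatever symbols you need removed:</p>
<pre><code>import re
import string

# build one character class from the punctuation to strip, escaping regex metacharacters
pattern = '[' + re.escape(string.punctuation + '“”¨«»®´·º½¾¿¡§£₤‘’') + ']'
df['Text'] = df['Text'].str.replace(pattern, '', regex=True)
</code></pre>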
|
python|regex|pandas
| 7
|
8,649
| 70,983,208
|
How to fill in gaps of duplicate indices in dataframe?
|
<p>I have a dataframe like as shown below</p>
<pre><code>tdf = pd.DataFrame({'grade': np.random.choice(list('AAAD'),size=(5)),
'dash': np.random.choice(list('PPPS'),size=(5)),
'dumeel': np.random.choice(list('QWRR'),size=(5)),
'dumma': np.random.choice((1234),size=(5)),
'target': np.random.choice([0,1],size=(5))
})
</code></pre>
<p>I am trying to create a multi-index dataframe using some of the input columns</p>
<p>So, I tried the below</p>
<pre><code>tdf.set_index(['grade','dumeel'],inplace=True)
</code></pre>
<p>However, this results in gaps in the displayed index for duplicate entries (highlighted in red)</p>
<p><a href="https://i.stack.imgur.com/zFXqj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zFXqj.png" alt="enter image description here" /></a></p>
<p>How can I avoid that and show my dataframe with all indices (whether they are duplicated or not)?</p>
<p>I would like my output to have all rows with their corresponding indices, based on the original dataframe.</p>
|
<p>It is only a display issue:</p>
<pre><code>tdf.set_index(['grade','dumeel'],inplace=True)
print (tdf)
dash dumma target
grade dumeel
A W S 855 1
R P 498 1
R P 378 0
W P 211 0
W P 12 0
with pd.option_context("display.multi_sparse", False):
print (tdf)
dash dumma target
grade dumeel
A W S 855 1
A R P 498 1
A R P 378 0
A W P 211 0
A W P 12 0
</code></pre>
|
python|pandas|dataframe|series|multi-index
| 1
|
8,650
| 70,928,543
|
What are the steps involved in creating a Tensorflow.lite model?
|
<p><em><strong>What are the steps involved in creating and training a Tensorflow model to use in an Android app ?</strong></em></p>
<p>Below is what I think needs to be done in my scenario (based on what I've researched)</p>
<ul>
<li>I know I need to gather training images of various car parts and label them using labelImg . Then split into training and testing folders.</li>
<li>Create and train the model .</li>
<li>Export it to a .tflite format to later be used in the Android app.</li>
</ul>
<p>Questions.</p>
<ul>
<li>What resources can/should I use ?</li>
<li>The biggest one would be, what coding examples or coding resources should I follow to create the TF Model ?</li>
</ul>
<p><strong>I would highly appreciate being pointed in the right direction as to where I should start.</strong></p>
|
<p>To answer your questions:</p>
<ul>
<li><em>Can I write the above training model in Python locally on my PC or do I have to use Google Colab?</em> You can try both separately; both work.</li>
<li><em>What resources can/should I use?</em> If you're referring to a GPU, you can use one through Colab (Runtime -> Change runtime type -> Hardware accelerator -> GPU).</li>
<li><em>What coding examples or coding resources should I follow to create the TF Model?</em> Please refer to the official examples and let us know: <a href="https://www.tensorflow.org/lite/examples" rel="nofollow noreferrer">https://www.tensorflow.org/lite/examples</a></li>
</ul>
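<p>As a rough sketch of the export step (the exact API depends on your TensorFlow version; this assumes TF 2.x and a trained Keras model object called <code>model</code>, which is a placeholder name):</p>
<pre><code>import tensorflow as tf

# convert the trained Keras model to TensorFlow Lite
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# write the .tflite file that the Android app will load
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
</code></pre>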
|
python|android|tensorflow|google-colaboratory
| 1
|
8,651
| 71,003,520
|
Find the index number of where a variable fits between in pandas column
|
<p>I have the following dataframe:</p>
<pre><code>import pandas as pd
#Create DF
d = {
'Category': ['A','B','C','D','E','F','G'],
'Value':[10,20,30,40,50,60,70],
}
df = pd.DataFrame(data=d)
df
</code></pre>
<p><a href="https://i.stack.imgur.com/EH7GF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EH7GF.png" alt="enter image description here" /></a></p>
<p>I then have a defined variable of a number that may change:
e.g <code>val = 45</code></p>
<p>How do I search the dataframe column <code>Value</code> and return the index number of the row where <code>val</code> would fit?</p>
<p>Expected value would be:</p>
<p>for <code>val = 45</code> the index i would like returned is 4</p>
<p>for <code>val = 22</code> the index i would like returned is 2</p>
<p>for <code>val = 60</code> the index i would like returned is 6 (would go after if it has a match)</p>
<p>Any help would be greatly appreciated!</p>
|
<p>You can just use <code>argmax</code></p>
<pre><code>(df['Value'] > 45).argmax() # 4
(df['Value'] > 22).argmax() # 2
(df['Value'] > 60).argmax() # 6
</code></pre>
<p>This assumes <code>'Value'</code> is sorted, but it works because the result of the comparison is a boolean array, so it is returning the index of the <em>first</em> <code>True</code> value.</p>
<h2>Edit</h2>
<p>To be more rigorous, we can support numbers that are greater than any value in the array:</p>
<pre><code>tmp = (df['Value'] > 100)
index = tmp.argmax() if tmp.any() else len(df)
</code></pre>
<p>In this case, we correctly get 7 whereas using <code>argmax</code> alone returns 0.</p>
<p>If you <strong>hate</strong> the extra one line, it looks like you can use the <a href="https://docs.python.org/3/whatsnew/3.8.html" rel="nofollow noreferrer">walrus operator</a> in Python 3.8+:</p>
<pre><code> tmp.argmax() if (tmp := (df['Value'] > 100)).any() else len(df)
</code></pre>
<h2>Correction</h2>
<p>Thanks to @Bill, I can see that <code>argmax</code> is in fact deprecated, and <code>idxmax</code> is the correct way to go:</p>
<pre><code>(df['Value'] > 45).idxmax() # 4
(df['Value'] > 22).idxmax() # 2
(df['Value'] > 60).idxmax() # 6
tmp.idxmax() if (tmp := (df['Value'] > 100)).any() else len(df) # 7
tmp.idxmax() if (tmp := (df['Value'] > 22)).any() else len(df) # 2
</code></pre>
|
python|pandas
| 7
|
8,652
| 51,855,777
|
Remove column without headers and data
|
<p>I have a CSV file, and when I bring it into Python as a dataframe, it creates a new Unnamed: 1 column in the dataframe. How can I remove or filter it out?
<a href="https://i.stack.imgur.com/rLTGd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rLTGd.png" alt="Here, whats my csv looks like"></a></p>
<p>So I need only the Title and Date columns in my dataframe, not column B of the CSV. The dataframe looks like:</p>
<pre><code> Title Unnamed: 1 Date
0 Đồng Nai Province makes it easier for people w... NaN 18/07/2018
1 Ex-NBA forward Washington gets six-year prison... NaN 10/07/2018
2 Helicobacter pylori NaN 10/07/2018
3 Paedophile gets prison term for sexual assault NaN 03/07/2018
4 Immunodeficiency burdens families NaN 28/06/2018
</code></pre>
|
<p>Drop that column from your dataframe: </p>
<p><code>df.drop(columns=["Unnamed: 1"], inplace=True)</code></p>
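<p>Alternatively, you could avoid reading the empty column in the first place by selecting only the columns you need when loading the CSV. A small sketch (the file name is a placeholder, and the column names are taken from the question):</p>
<pre><code>import pandas as pd

# read only the columns that matter; the stray empty column is never loaded
df = pd.read_csv('file.csv', usecols=['Title', 'Date'])
</code></pre>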
|
python|pandas|csv|dataframe
| 2
|
8,653
| 51,722,531
|
Pandas bar plot with both categorical and numerical data
|
<p>I have a pandas dataframe named 'data' with data similar to the following table. I want to plot them in python as bar plots. </p>
<pre><code>Name X Y Z Activity
AAA1 0.0 0.0 2.0 Low
AAA2 0.0 2.0 6.0 Medium
AAA3 1.0 2.0 3.0 High
AAA4 2.0 1.0 4.0 High
</code></pre>
<p>What I tried is, with a bit of setting color and style,</p>
<pre><code>sns.set(style="white", context="talk", palette="husl")
data.plot.bar(x='Name', width=.8)
</code></pre>
<p>While this plots a bar diagram showing X, Y, and Z for each Name (9 bars in total), I cannot figure out a way of showing the 'Activity' corresponding to each 'Name' in the bar diagrams. It would be nice if I could somehow show this categorical column graphically, either by color or by some style, on top of the bar charts. I am using Python 3.7.0 with
Pandas 0.23.3,
matplotlib 2.2.2, and
seaborn version 0.9.0 .</p>
|
<p>I don't know if this solution fits your needs, but first I group the data by <em>Activity</em> and <em>Name</em>, and then I plot a bar plot. Example (using your DataFrame):</p>
<pre><code>import matplotlib.pyplot as plt
df.groupby(['Activity','Name']).sum().plot(kind='bar')
</code></pre>
<p>And the result
<a href="https://i.stack.imgur.com/YOS1i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YOS1i.png" alt="Barplot"></a></p>
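<p>Another option, if you want to keep one group of bars per <em>Name</em> and still show the <em>Activity</em>, is to colour the x-tick labels by category. A small sketch, assuming the dataframe is called <code>data</code> as in the question (the colour mapping is arbitrary):</p>
<pre><code>import matplotlib.pyplot as plt

colors = {'Low': 'green', 'Medium': 'orange', 'High': 'red'}
ax = data.plot.bar(x='Name', width=.8)
# colour each Name label according to its Activity category
for tick, activity in zip(ax.get_xticklabels(), data['Activity']):
    tick.set_color(colors[activity])
plt.show()
</code></pre>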
|
python|pandas|dataframe
| 2
|
8,654
| 51,820,952
|
Convert 2D numpy array into pandas pivot table
|
<p>I have a 2D numpy array representing depth on a grid of coordinates.</p>
<pre><code>z = np.array([[100, 101, 102, 103],
[101, 102, 103, 104],
[102, 103, 104, 105],
[103, 104, 105, 106],
[104, 105, 106, 107]])
</code></pre>
<p>I also have a 1D numpy array listing the vertical coordinates and another 1D numpy array listing the horizontal coordinates.</p>
<pre><code>x = np.array([10, 11, 12, 13])
y = np.array([20, 21, 22, 23, 24])
</code></pre>
<p>In some cases, this data is provided as a list of 'x y z' data e.g.:</p>
<pre><code>10 20 100
10 21 101
10 22 102
10 23 103
10 24 104
11 20 101
11 21 102
...
12 23 105
12 24 106
13 20 103
13 21 104
13 22 105
13 23 106
13 24 107
</code></pre>
<p>In this case creating a pivot table is trivial....</p>
<pre><code>data = pd.read_csv(file, header=None, names=['x', 'y', 'z'], delim_whitespace=True)
pvt = data.pivot_table(values='z', index='y', columns='x', fill_value=-100000)
</code></pre>
<p>How can I create a pivot table with the same labels, format etc starting with the data in three arrays?</p>
|
<p>I think I've found a way using numpy array functions to get the data in the correct format. It's a valid answer but I hoped there was a more elegant way to do it with pandas.</p>
<p>Since I can already pivot from the DataFrame returned by <code>read_csv()</code>, the simplest option is to get the data in the same format as the read DataFrame...</p>
<pre><code>zf = z.flatten()
xt = np.tile(x, y.size)      # x varies fastest in z.flatten() (row-major order)
yr = np.repeat(y, x.size)    # each y value is repeated once per x value
d = np.stack((xt, yr, zf), axis=-1)
data = pd.DataFrame(data=d, columns=['x', 'y', 'z'])
pvt = data.pivot_table(values='z', index='y', columns='x')
</code></pre>
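<p>For what it's worth, since <code>z</code> is already laid out on the grid, a shorter sketch (using the arrays from the question) is to build the labelled DataFrame directly from <code>z</code>, which gives the same shape as the pivot table:</p>
<pre><code>pvt = pd.DataFrame(z, index=pd.Index(y, name='y'), columns=pd.Index(x, name='x'))
</code></pre>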
|
python|arrays|pandas|numpy|pivot-table
| 0
|
8,655
| 51,687,727
|
How to plot pandas DataFrame with date (Year/Month)?
|
<p>I've got a pandas dataframe populated with the following data : </p>
<pre><code> Date val count
0 2013-01 A 1
1 2013-01 M 1
2 2013-02 M 2
3 2013-03 B 3
4 2013-03 M 5
5 2014-05 B 1
</code></pre>
<p>I'm new to matplotlib and couldn't figure how to plot this data using as Y axis (count), as X axis (Date) and having three separate curves, one for each value of val. </p>
|
<p>Using <strong><code>pivot</code></strong> and <strong><code>plot</code></strong> (<code>A</code> isn't showing up because it only has a single point and is getting hidden by the first point of <code>M</code>). You also have to convert your <code>Date</code> column to <code>datetime</code> in order to accurately display the X-Axis:</p>
<pre><code>df.Date = pd.to_datetime(df.Date)
df.pivot(index='Date', columns='val', values='count').plot(marker='o')
</code></pre>
<p><a href="https://i.stack.imgur.com/ddmGr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ddmGr.png" alt="enter image description here"></a></p>
<p>If you'd like to show <code>NaN</code> values as zero instead, just use <strong><code>fillna</code></strong>:</p>
<pre><code>df.pivot(index='Date', columns='val', values='count').fillna(0).plot(marker='o')
</code></pre>
|
python|pandas
| 2
|
8,656
| 41,771,992
|
HDF5 adding numpy arrays slow
|
<p>This is my first time using HDF5, so could you help me figure out what is wrong and why adding 3D numpy arrays is slow?
Preprocessing takes 3 s, while adding one 3D numpy array (100x512x512) takes 30 s, and this rises with each sample.</p>
<p>First I create hdf with:</p>
<pre><code>def create_h5(fname_):
"""
Run only once
to create h5 file for dicom images
"""
f = h5py.File(fname_, 'w', libver='latest')
dtype_ = h5py.special_dtype(vlen=bytes)
num_samples_train = 1397
num_samples_test = 1595 - 1397
num_slices = 100
f.create_dataset('X_train', (num_samples_train, num_slices, 512, 512),
dtype=np.int16, maxshape=(None, None, 512, 512),
chunks=True, compression="gzip", compression_opts=4)
f.create_dataset('y_train', (num_samples_train,), dtype=np.int16,
maxshape=(None, ), chunks=True, compression="gzip", compression_opts=4)
f.create_dataset('i_train', (num_samples_train,), dtype=dtype_,
maxshape=(None, ), chunks=True, compression="gzip", compression_opts=4)
f.create_dataset('X_test', (num_samples_test, num_slices, 512, 512),
dtype=np.int16, maxshape=(None, None, 512, 512), chunks=True,
compression="gzip", compression_opts=4)
f.create_dataset('y_test', (num_samples_test,), dtype=np.int16, maxshape=(None, ), chunks=True,
compression="gzip", compression_opts=4)
f.create_dataset('i_test', (num_samples_test,), dtype=dtype_,
maxshape=(None, ),
chunks=True, compression="gzip", compression_opts=4)
f.flush()
f.close()
print('HDF5 file created')
</code></pre>
<p>Then I run code updating hdf file:</p>
<pre><code>num_samples_train = 1397
num_samples_test = 1595 - 1397
lbl = pd.read_csv(lbl_fldr + 'stage1_labels.csv')
patients = os.listdir(dicom_fldr)
patients.sort()
f = h5py.File(h5_fname, 'a') #r+ tried
train_counter = -1
test_counter = -1
for sample in range(0, len(patients)):
sw_start = time.time()
pat_id = patients[sample]
print('id: %s sample: %d \t train_counter: %d test_counter: %d' %(pat_id, sample, train_counter+1, test_counter+1), flush=True)
sw_1 = time.time()
patient = load_scan(dicom_fldr + patients[sample])
patient_pixels = get_pixels_hu(patient)
patient_pixels = select_slices(patient_pixels)
if patient_pixels.shape[0] != 100:
raise ValueError('Slices != 100: ', patient_pixels.shape[0])
row = lbl.loc[lbl['id'] == pat_id]
if row.shape[0] > 1:
raise ValueError('Found duplicate ids: ', row.shape[0])
print('Time preprocessing: %0.2f' %(time.time() - sw_1), flush=True)
sw_2 = time.time()
#found test patient
if row.shape[0] == 0:
test_counter += 1
f['X_test'][test_counter] = patient_pixels
f['i_test'][test_counter] = pat_id
f['y_test'][test_counter] = -1
#found train
else:
train_counter += 1
f['X_train'][train_counter] = patient_pixels
f['i_train'][train_counter] = pat_id
f['y_train'][train_counter] = row.cancer
print('Time saving: %0.2f' %(time.time() - sw_2), flush=True)
sw_el = time.time() - sw_start
sw_rem = sw_el* (len(patients) - sample)
print('Elapsed: %0.2fs \t rem: %0.2fm %0.2fh ' %(sw_el, sw_rem/60, sw_rem/3600), flush=True)
f.flush()
f.close()
</code></pre>
|
<p>The slowness is almost certainly due to the compression and chunking. It's hard to get this right. In my past projects I often had to turn off compression because it was too slow, although I have not given up on the idea of compression in HDF5 in general.</p>
<p>First you should try to confirm that compression and chunking are the cause of the performance issues. Turn off chunking and compression (i.e. leave out the <code>chunks=True, compression="gzip", compression_opts=4</code> parameters) and try again. I suspect it will be a lot faster. </p>
<p>If you want to use compression you must understand how chunking works, because HDF compresses the data chunk-by-chunk. Google it, but at least read the <a href="http://docs.h5py.org/en/latest/high/dataset.html#chunked-storage" rel="nofollow noreferrer">section on chunking from the h5py docs</a>. The following quote is crucial:</p>
<blockquote>
<p>Chunking has performance implications. It’s recommended to keep the total size of your chunks between 10 KiB and 1 MiB, larger for larger datasets. <strong>Also keep in mind that when any element in a chunk is accessed, the entire chunk is read from disk.</strong></p>
</blockquote>
<p>By setting <code>chunks=True</code> you let h5py determine the chunk sizes for you automatically (print the <code>chunks</code> property of the dataset to see what they are). Let's say the chunk size in the first dimension (your <code>sample</code> dimension) is 5 . This would mean that when you add one sample, the underlying HDF library will read all the chunks that contain that sample from disk (so in total it will read the 5 samples completely). For every chunk HDF will read it, uncompress it, add the new data, compress it, and write it back to disk. Needless to say, this is slow. This is mitigated by the fact that HDF has a chunk cache, so that uncompressed chunks can reside in memory. However the chunk cache seems to be rather small (see <a href="https://support.hdfgroup.org/HDF5/doc/H5.user/Chunking.html" rel="nofollow noreferrer">here</a>), so I think all the chunks are swapped in and out of the cache in every iteration of your for-loop. I couldn't find any setting in h5py to alter the chunk cache size.</p>
<p>You can explicitly set the chunk size by assigning a tuple to the <code>chunks</code> keyword parameter. With all this in mind you can experiment with different chunk sizes. My first experiment would be to set the chunk size in the first (sample) dimension to 1, so that individual samples can be accessed without reading other samples into the cache. Let me know if this helped, I'm curious to know.</p>
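<p>Concretely, that first experiment could look like the following sketch, reusing the names from your <code>create_h5</code> function (the chunk shape here is just a starting point to measure against, not a recommendation):</p>
<pre><code># one sample per chunk, so writing a sample never touches other samples' chunks
f.create_dataset('X_train', (num_samples_train, num_slices, 512, 512),
                 dtype=np.int16, maxshape=(None, None, 512, 512),
                 chunks=(1, num_slices, 512, 512),
                 compression="gzip", compression_opts=4)
</code></pre>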
<p>Even if you find a chunk size that works well for writing the data, it may still be slow when reading, depending on which slices you read. When choosing the chunk size, keep in mind on how your application typically reads the data. You may have to adapt your file-creation routines to these chunk sizes (e.g. fill your data sets chunk by chunk). Or you can decide that it's simply not worth the effort and create uncompressed HDF5 files.</p>
<p>Finally, I would set <code>shuffle=True</code> in the <code>create_dataset</code> calls. This may get you a better compression ratio. It shouldn't influence the performance however.</p>
|
python|numpy|hdf5|h5py
| 2
|
8,657
| 64,610,026
|
CUDNN_STATUS_INTERNAL_ERROR in tensorflow 2.1 c++
|
<p>I face a problem, as the title says, when I load a pre-trained model (a .pb model of YOLOv3) and run inference with it in TensorFlow 2.1 C++. The error messages are as follows:</p>
<pre class="lang-sh prettyprint-override"><code>2020-10-30 21:36:20.245492: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2020-10-30 21:36:20.269906: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
[InferCC] Model infer failed(2 root error(s) found.
(0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node yolov3/yolo_darknet/conv2d/Conv2D}}]]
[[StatefulPartitionedCall/_791]]
(1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node yolov3/yolo_darknet/conv2d/Conv2D}}]]
0 successful operations.
</code></pre>
<p>Here is my configuration:</p>
<pre class="lang-sh prettyprint-override"><code>Ubuntu 18.04
Tensorflow 2.1 c++
Cuda 10.1
cuDNN 7.6.5
GPU memory ~6G
</code></pre>
<pre class="lang-sh prettyprint-override"><code>$ nvidia-smi
Fri Oct 30 21:50:24 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.57 Driver Version: 450.57 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 2060 Off | 00000000:01:00.0 On | N/A |
| N/A 46C P8 6W / N/A | 553MiB / 5931MiB | 1% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1242 G /usr/lib/xorg/Xorg 18MiB |
| 0 N/A N/A 1886 G /usr/bin/gnome-shell 50MiB |
| 0 N/A N/A 9708 G /usr/lib/xorg/Xorg 321MiB |
| 0 N/A N/A 9876 G /usr/bin/gnome-shell 128MiB |
| 0 N/A N/A 12591 G /usr/lib/firefox/firefox 3MiB |
| 0 N/A N/A 14848 G ...s/QtCreator/bin/qtcreator 3MiB |
| 0 N/A N/A 15696 G ...AAAAAAAAA= --shared-files 21MiB |
+-----------------------------------------------------------------------------+
</code></pre>
<p>I searched the internet and it seems that my GPU memory has run out (I'm not sure about it). So I added the following code to set GPU memory growth before loading the model:</p>
<pre class="lang-cpp prettyprint-override"><code>tensorflow::ConfigProto config;
config.mutable_gpu_options()->set_allow_growth(true);
</code></pre>
<p>But with no luck, errors are still there.</p>
<p>Or is it indeed a lack of GPU memory (is ~6 GB not enough for a YOLOv3 model)?
Please, someone help me out. Thanks.</p>
|
<p>Finally, I found a way to correct that error. Just pass a SessionOptions variable (with set_allow_growth called on it) into the LoadSavedModel function:</p>
<pre class="lang-cpp prettyprint-override"><code> if(tensorflow::MaybeSavedModelDirectory(modelName.toStdString()))
{
// set gpu memory growth true here
tensorflow::SessionOptions sessionOpt;
sessionOpt.config.mutable_gpu_options()->set_allow_growth(true);
// then pass that variable into LoadSavedModel
tensorflow::Status status = tensorflow::LoadSavedModel(
sessionOpt,
tensorflow::RunOptions(),
modelName.toStdString(),
{tensorflow::kSavedModelTagServe},
_model);
if(!status.ok())
{
qCritical("[InferCC] Load %s model failed - %s.", modelName.toStdString().c_str(), status.error_message().c_str());
_loaded = false;
return false;
}
}
</code></pre>
<p>Then run the program while repeatedly calling <code>nvidia-smi</code> on the command line; it shows GPU memory increasing, with the last state (5278 MiB used) as:</p>
<pre class="lang-sh prettyprint-override"><code>+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.57 Driver Version: 450.57 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 2060 Off | 00000000:01:00.0 On | N/A |
| N/A 58C P0 63W / N/A | 5832MiB / 5931MiB | 37% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 656 C ...VisTrack/release/VisTrack 5278MiB |
| 0 N/A N/A 1242 G /usr/lib/xorg/Xorg 18MiB |
| 0 N/A N/A 1886 G /usr/bin/gnome-shell 50MiB |
| 0 N/A N/A 4843 G /usr/lib/firefox/firefox 3MiB |
| 0 N/A N/A 5879 G /usr/lib/firefox/firefox 3MiB |
| 0 N/A N/A 9708 G /usr/lib/xorg/Xorg 293MiB |
| 0 N/A N/A 9876 G /usr/bin/gnome-shell 143MiB |
| 0 N/A N/A 12591 G /usr/lib/firefox/firefox 3MiB |
| 0 N/A N/A 13952 G /usr/lib/firefox/firefox 3MiB |
| 0 N/A N/A 14848 G ...s/QtCreator/bin/qtcreator 3MiB |
| 0 N/A N/A 15696 G ...AAAAAAAAA= --shared-files 21MiB |
+-----------------------------------------------------------------------------+
</code></pre>
|
c++|tensorflow
| 0
|
8,658
| 64,451,989
|
Grouping results of a groupby with too numerous cases into a "trash bin" level
|
<p>For reporting purposes, I need to "simplify" rare events in a pandas DataFrame resulting from a group-by operation.</p>
<p>Let's take, for example, this DataFrame, where I use <code>colA</code> to count the occurrences of values in <code>colB</code>:</p>
<pre><code>df = pd.DataFrame(data={'colA':['a','b','c','d','e','a','a','b','b','a'],'colB':[1,2,3,4,5,1,1,2,2,1]})
df_grouped = df.groupby(['colA']).agg('count')
</code></pre>
<p>The result is:</p>
<pre><code> colB
colA
a 4
b 3
c 1
d 1
e 1
</code></pre>
<p>from this grouped dataframe I want to obtain a new dataframe, where the least frequent values, namely those corresponding to <code>colA={'c','d','e'}</code> are grouped in a new value of the <code>colA</code> level called <code>'other'</code>, that contains the total number of all these, like the following:</p>
<pre><code> colB
colA
a 4
b 3
other 3
</code></pre>
<p>Is there a simple way to perform this "put-rare-stuff-in-the-trashbin" operation?
Moreover, how can I do it in the presence of a MultiIndex?</p>
|
<p>Try this</p>
<pre><code>df_final = df_grouped.rename({k: 'Other' for k, v in df_grouped.colB.eq(1).items()
if v == True}).sum(level=0)
Out[671]:
colB
colA
a 4
b 3
Other 3
</code></pre>
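<p>A more general sketch, in case you would rather lump together everything below a frequency threshold instead of hard-coding counts of 1 (the cutoff of 2 below is just an example):</p>
<pre><code>threshold = 2
# labels whose count falls below the cutoff get renamed to 'other', then re-aggregated
rare = df_grouped.index[df_grouped['colB'] < threshold]
df_final = df_grouped.rename(index={k: 'other' for k in rare}).groupby(level=0).sum()
</code></pre>
<p>For a MultiIndex, the same idea should apply if you rename the values in the relevant index level to 'other' and then group on all index levels before summing.</p>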
|
python|pandas|pandas-groupby
| 2
|
8,659
| 64,554,908
|
How to count number of elements in a row greater than zero
|
<p>I need to count the number of values in each row that are greater than zero and store them in a new column</p>
<p>The df bellow:</p>
<pre><code> team goals goals_against games_in_domestic_league
0 juventus 1 0 0
1 barcelona 0 1 1
2 santos 2 1 2
</code></pre>
<p>Should become:</p>
<pre><code> team goals goals_against games_in_domestic_league total
0 juventus 1 0 0 1
1 barcelona 0 1 1 2
2 santos 2 1 2 3
</code></pre>
|
<p>First idea is select numeric columns, test if greater like <code>0</code> and count <code>True</code>s by <code>sum</code>:</p>
<pre><code>df['total'] = df.select_dtypes(np.number).gt(0).sum(axis=1)
</code></pre>
<p>If want specify columns by list:</p>
<pre><code>cols = ['goals','goals_against','games_in_domestic_league']
df['total'] = df[cols].gt(0).sum(axis=1)
</code></pre>
|
python|pandas
| 5
|
8,660
| 49,090,915
|
tensorflow installation(both 1.5.0 and 1.6.0) doesn't work on mac osx yosemite
|
<p>1.5.0 installs fine, but when I import tensorflow, I get this error:</p>
<pre class="lang-none prettyprint-override"><code>RuntimeError: module compiled against API version 0xa but this version
of numpy is 0x9 RuntimeError: module compiled against API version 0xa
but this version of numpy is 0x9 Traceback (most recent call last):
File "<stdin>", line 1, in <module> File
"/Library/Python/2.7/site-packages/tensorflow/__init__.py", line 24,
in <module>
from tensorflow.python import * File "/Library/Python/2.7/site-packages/tensorflow/python/__init__.py",
line 63, in <module>
from tensorflow.python.framework.framework_lib import * File "/Library/Python/2.7/site-packages/tensorflow/python/framework/framework_lib.py",
line 81, in <module>
from tensorflow.python.framework.sparse_tensor import SparseTensor File
"/Library/Python/2.7/site-packages/tensorflow/python/framework/sparse_tensor.py",
line 25, in <module>
from tensorflow.python.framework import tensor_util File "/Library/Python/2.7/site-packages/tensorflow/python/framework/tensor_util.py",
line 34, in <module>
from tensorflow.python.framework import fast_tensor_util File "__init__.pxd", line 163, in init
tensorflow.python.framework.fast_tensor_util ValueError: numpy.dtype
has the wrong size, try recompiling. Expected 88, got 96
</code></pre>
<p>1.6.0 fails to install with this error:</p>
<pre class="lang-none prettyprint-override"><code>DEPENDENCY ERROR
The target you are trying to run requires an OpenSSL implementation.
Your system doesn't have one, and either the third_party directory
doesn't have it, or your compiler can't build BoringSSL.
Please consult INSTALL to get more information.
If you need information about why these tests failed, run:
make run_dep_checks
make: Circular /private/tmp/pip-build-Lth8PD/grpcio/libs/opt/libares.a <- /private/tmp/pip-build-Lth8PD/grpcio/libs/opt/libz.a dependency dropped.
make: *** [stop] Error 1
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/tmp/pip-build-Lth8PD/grpcio/setup.py", line 311, in <module>
cmdclass=COMMAND_CLASS,
File "/Library/Python/2.7/site-packages/setuptools/__init__.py", line 129, in setup
return distutils.core.setup(**attrs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 151, in setup
dist.run_commands()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/Library/Python/2.7/site-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/command/install.py", line 573, in run
self.run_command('build')
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/command/build.py", line 127, in run
self.run_command(cmd_name)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/Library/Python/2.7/site-packages/setuptools/command/build_ext.py", line 78, in run
_build_ext.run(self)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/command/build_ext.py", line 337, in run
self.build_extensions()
File "/private/tmp/pip-build-Lth8PD/grpcio/src/python/grpcio/commands.py", line 278, in build_extensions
raise Exception("make command failed!")
Exception: make command failed!
----------------------------------------
Command "/usr/bin/python -u -c "import setuptools,
tokenize;__file__='/private/tmp/pip-build-Lth8PD/grpcio/setup.py';f=getattr(tokenize, 'open', open)
(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code,
__file__, 'exec'))" install --record /tmp/pip-eSD2il-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/tmp/pip-build-Lth8PD/grpcio/
</code></pre>
|
<p>I ran into the same issue. The installation error is because the new version of tensorflow requires new dependencies (grpcio). Here is how I handled my problem.</p>
<p>Force installing the binary wheels.</p>
<pre><code>$ pip install --no-cache-dir --only-binary :all: grpcio==1.10.1
</code></pre>
<p>Then I can upgrade my tensorflow.</p>
<pre><code>$ pip install --upgrade tensorflow # for Python 2.7
$ pip3 install --upgrade tensorflow # for Python 3.n
</code></pre>
<p>Hope it helps.</p>
|
python-2.7|numpy|tensorflow
| 0
|
8,661
| 58,898,253
|
transfer learning - trying to retrain efficientnet-B07 on RTX 2070 out of memory
|
<p>This is the training code I am trying to run. It works when training on a CPU with <code>64 GB RAM</code>,
but crashes on an <code>RTX 2070</code>.</p>
<pre><code>config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.7
tf.keras.backend.set_session(tf.Session(config=config))
model = efn.EfficientNetB7()
model.summary()
# create new output layer
output_layer = Dense(5, activation='sigmoid', name="retrain_output")(model.get_layer('top_dropout').output)
new_model = Model(model.input, output=output_layer)
new_model.summary()
# lock previous weights
for i, l in enumerate(new_model.layers):
    if i < 228:
        l.trainable = False
# lock probs weights
new_model.compile(loss='mean_squared_error', optimizer='adam')
batch_size = 5
samples_per_epoch = 30
epochs = 20
# generate train data
train_datagen = ImageDataGenerator(
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
validation_split=0)
train_generator = train_datagen.flow_from_directory(
train_data_input_folder,
target_size=(input_dim, input_dim),
batch_size=batch_size,
class_mode='categorical',
seed=2019,
subset='training')
validation_generator = train_datagen.flow_from_directory(
validation_data_input_folder,
target_size=(input_dim, input_dim),
batch_size=batch_size,
class_mode='categorical',
seed=2019,
subset='validation')
new_model.fit_generator(
train_generator,
samples_per_epoch=samples_per_epoch,
epochs=epochs,
validation_steps=20,
validation_data=validation_generator,
nb_worker=24)
new_model.save(model_output_path)
</code></pre>
<p>Exception:</p>
<blockquote>
<p>2019-11-17 08:52:52.903583: I
tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA
library libcublas.so.10.0 locally .... ... 2019-11-17 08:53:24.713020:
I tensorflow/core/common_runtime/bfc_allocator.cc:641] 110 Chunks of
size 27724800 totalling 2.84GiB 2019-11-17 08:53:24.713024: I
tensorflow/core/common_runtime/bfc_allocator.cc:641] 6 Chunks of size
38814720 totalling 222.10MiB 2019-11-17 08:53:24.713027: I
tensorflow/core/common_runtime/bfc_allocator.cc:641] 23 Chunks of size
54000128 totalling 1.16GiB 2019-11-17 08:53:24.713031: I
tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size
73760000 totalling 70.34MiB 2019-11-17 08:53:24.713034: I
tensorflow/core/common_runtime/bfc_allocator.cc:645] Sum Total of
in-use chunks: 5.45GiB 2019-11-17 08:53:24.713040: I
tensorflow/core/common_runtime/bfc_allocator.cc:647] Stats: Limit:
5856749158 InUse: 5848048896 MaxInUse: 5848061440 NumAllocs: 6140
MaxAllocSize: 3259170816</p>
<p>2019-11-17 08:53:24.713214: W
tensorflow/core/common_runtime/bfc_allocator.cc:271]
**************************************************************************************************** 2019-11-17 08:53:24.713232: W
tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at
cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating
tensor with shape[5,1344,38,38] and type float on
/job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last): File
"/home/naort/Desktop/deep-learning-data-preparation-tools/EfficientNet-Transfer-Learning-Boiler-Plate/model_retrain.py",
line 76, in nb_worker=24) File
"/usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py",
line 91, in wrapper return func(*args, **kwargs) File
"/usr/local/lib/python3.6/dist-packages/keras/engine/training.py",
line 1732, in fit_generator initial_epoch=initial_epoch) File
"/usr/local/lib/python3.6/dist-packages/keras/engine/training_generator.py",
line 220, in fit_generator reset_metrics=False) File
"/usr/local/lib/python3.6/dist-packages/keras/engine/training.py",
line 1514, in train_on_batch outputs = self.train_function(ins) File
"/home/naort/.local/lib/python3.6/site-packages/tensorflow/python/keras/backend.py",
line 3076, in call run_metadata=self.run_metadata) File
"/home/naort/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py",
line 1439, in call run_metadata_ptr) File
"/home/naort/.local/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py",
line 528, in exit c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM
when allocating tensor with shape[5,1344,38,38] and type float on
/job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node
training/Adam/gradients/AddN_387-0-TransposeNHWCToNCHW-LayoutOptimizer}}]]
Hint: If you want to see a list of allocated tensors when OOM happens,
add report_tensor_allocations_upon_oom to RunOptions for current
allocation info.</p>
<p>[[{{node Mean}}]] Hint: If you want to see a list of allocated tensors
when OOM happens, add report_tensor_allocations_upon_oom to RunOptions
for current allocation info.</p>
</blockquote>
|
<p>Despite the EfficientNet models having lower parameter counts than comparative ResNe(X)t models, they still consume significant amounts of GPU memory. What you're seeing is an out of memory error for your GPU (8GB for an RTX 2070), not the system (64GB).</p>
<p>A B7 model, especially at full resolution, is beyond what you'd want to use for training with a single RTX 2070 card. Even if freezing a lot of layers. </p>
<p>Something that may help, is running the model in FP16, which will also leverage the TensorCores of your RTX card. From <a href="https://medium.com/@noel_kennedy/how-to-use-half-precision-float16-when-training-on-rtx-cards-with-tensorflow-keras-d4033d59f9e4" rel="nofollow noreferrer">https://medium.com/@noel_kennedy/how-to-use-half-precision-float16-when-training-on-rtx-cards-with-tensorflow-keras-d4033d59f9e4</a>, try this:</p>
<pre><code>import keras.backend as K
dtype='float16'
K.set_floatx(dtype)
# default is 1e-7 which is too small for float16. Without adjusting the epsilon, we will get NaN predictions because of divide by zero problems
K.set_epsilon(1e-4)
</code></pre>
|
python|tensorflow|keras|deep-learning|efficientnet
| 4
|
8,662
| 58,783,573
|
Pandas running division for a column
|
<p>I'm new to Pandas and would love some help. I'm trying to take:</p>
<pre><code>factor
1
1
2
1
1
3
1
2
</code></pre>
<p>and produce:</p>
<pre><code>factor running_div
1 1
1 1
2 0.5
1 0.5
1 0.5
3 0.1666667
1 0.1666667
2 0.0833333
</code></pre>
<p>I can do it by looping through with <code>.iloc</code>, but I'm trying to use vector math for efficiency. I have looked at rolling windows and at using <code>.shift(1)</code>, but can't get it working. I would appreciate any guidance anyone could provide.</p>
|
<p>Use numpy <code>ufunc.accumulate</code></p>
<pre><code>df['cum_div'] = np.divide.accumulate(df.factor.to_numpy(dtype=float))
factor cum_div
0 1 1.000000
1 1 1.000000
2 2 0.500000
3 1 0.500000
4 1 0.500000
5 3 0.166667
6 1 0.166667
7 2 0.083333
</code></pre>
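<p>A pandas-only alternative is also possible (a sketch; it relies on the identity that the running quotient equals the first factor squared divided by the cumulative product):</p>
<pre><code># factor[0] / (factor[1] * ... * factor[n]) == factor[0]**2 / cumprod(factor)[n]
df['running_div'] = df['factor'].iloc[0] ** 2 / df['factor'].cumprod()
</code></pre>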
|
python|pandas
| 2
|
8,663
| 58,992,619
|
Display red channel of image with NumPy and Matplotlib only
|
<p>I'm trying to display the red channel of an image using Matplotlib.pyplot and NumPy only. Can someone explain why I get different images for the following two code snippets?</p>
<p>Code #1:</p>
<pre><code>R = numpy.copy(img) # copy image into new array
R[:,:,1]=0 # set green channel to 0
R[:,:,2]=0 # set blue channel to 0
matplotlib.pyplot.imshow(R)
matplotlib.plyplot.show() # display new image
</code></pre>
<p>Code #2:</p>
<pre><code>R = numpy.zeros(img.shape) # create array of 0-s with same dimensions as image
R[:,:,0]=img[:,:,0] # copy red channel values into array of 0-s
matplotlib.pyplot.imshow(R)
matplotlib.plyplot.show() # display new image
</code></pre>
|
<p>You don't specify a data type in your <code>numpy.zeros()</code> call:</p>
<pre class="lang-py prettyprint-override"><code>numpy.zeros(img.shape)
</code></pre>
<p>That way, <code>R</code> is of type <code>float64</code>, and most of its values are greater or equal <code>1</code>, such that you see "clipping" in your plots.</p>
<p>The easiest way to fix that, would be to set up the proper data type:</p>
<pre class="lang-py prettyprint-override"><code>numpy.zeros(img.shape, numpy.uint8)
</code></pre>
<p>Then, both versions produce equal results.</p>
<p>The <code>numpy.copy(img)</code> in the first version uses the data type from the copy source, such that <code>R</code> has the proper data type <code>numpy.uint8</code> right from the start.</p>
<p>Alternatively, you could also modify your <code>matplotlib.pyplot.imshow()</code> call:</p>
<pre class="lang-py prettyprint-override"><code>matplotlib.pyplot.imshow(R / 255)
</code></pre>
<p>So, all values in <code>R</code> are properly mapped to <code>[0.0 ... 1.0]</code>.</p>
<p>Hope that helps!</p>
|
python|image|numpy|matplotlib|matrix
| 2
|
8,664
| 58,938,177
|
subset a series of pandas df based on index
|
<p>I have a series of dataframes.</p>
<pre><code>ind = [78, 87, 677, 900]
df = pd.Series(data = [pd.DataFrame(np.arange(12).reshape(3, 4), index = [0, 1, 2], columns = ['a', 'b', 'c', 'd']) for _ in range(4)],
index = ind)
</code></pre>
<p>Each df in the series looks like this:</p>
<pre><code>df[78]:
a b c d
0 0 1 2 3
1 4 5 6 7
2 8 9 10 11
</code></pre>
<p>I want to get the sum of the first 3 dfs in this series at column <code>b</code>/index 0 and column <code>c</code>/index 2, and make a new df,
so the expected output would be</p>
<pre><code>sum:
a b c d
0 0 3 2 3
1 4 5 6 7
2 8 9 30 11
</code></pre>
<p>I also want to get another df which would be the sum of the first 3 dfs in the series for all values in the df</p>
<pre><code>sum_full:
a b c d
0 0 3 6 9
1 12 15 18 21
2 24 27 30 33
</code></pre>
<p>I am not sure how to subset the series based on the index and get the sum of only the two values. Thanks.</p>
|
<p>Try this:</p>
<pre><code>pd.concat(df.head(3).tolist()).sum(level=0)
</code></pre>
<p>Output:</p>
<pre><code> a b c d
0 0 3 6 9
1 12 15 18 21
2 24 27 30 33
</code></pre>
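<p>For the first requested output (summing only column <code>b</code> at index 0 and column <code>c</code> at index 2), a small sketch, assuming the untouched cells should come from the first dataframe in the series:</p>
<pre><code>partial = df[ind[0]].copy()
# overwrite only the two requested cells with the sum across the first 3 dataframes
partial.loc[0, 'b'] = sum(d.loc[0, 'b'] for d in df.head(3))
partial.loc[2, 'c'] = sum(d.loc[2, 'c'] for d in df.head(3))
</code></pre>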
|
python|pandas|dataframe
| 0
|
8,665
| 58,996,458
|
Count occurrences in a list for each row and specific column in a dataframe
|
<p>I've been trying to use <code>collections.Counter</code> or <code>value_counts</code> in <strong>Python 3.7</strong> to do something like the df below, but I had no success. So far, this is an example of what I'm trying to get: </p>
<pre><code> IDs Col2 Col3
0 123 [A, A, B, B, C] {A:2, B:2, C:1}
1 456 [A, B, C, C] {A:1, B:1, C:2}
2 789 [A, A, A, D, D] {A:3, D:2}
</code></pre>
<p>Then, for each corresponding row, I need to get the maximum value in <code>Col3</code> and, if there's a tie, show it in a new column with only the keys that tied. Something like this:</p>
<pre><code> IDs Col2 Col3 Max
0 123 [A, A, B, B, C] {A:2, B:2, C:1} {A:2, B:2}
1 456 [A, B, C, C] {A:1, B:1, C:2} {C:2}
2 789 [A, A, A, D, D] {A:3, D:2} {A:3}
</code></pre>
|
<p>Use dict comprehension with test if value is <code>max</code>:</p>
<pre><code>from collections import Counter
df = pd.DataFrame({'Col1':[123,456,789],
'Col2':[list('AABBC'), list('ABCC'), list('AAADD')]})
df['Col3'] = df['Col2'].apply(Counter)
df['Max'] = df['Col3'].apply(lambda x: {k:v for k, v in x.items() if max(x.values()) == v})
</code></pre>
<p>Thank you @Keyur Potdar for another idea use <a href="https://docs.python.org/2/library/collections.html#collections.Counter.most_common" rel="nofollow noreferrer"><code>most_common</code></a>:</p>
<pre><code>f = lambda x: {k:v for k, v in x.items() if x.most_common(1)[0][1] == v}
df['Max'] = df['Col3'].apply(f)
print (df)
Col1 Col2 Col3 Max
0 123 [A, A, B, B, C] {'A': 2, 'B': 2, 'C': 1} {'A': 2, 'B': 2}
1 456 [A, B, C, C] {'A': 1, 'B': 1, 'C': 2} {'C': 2}
2 789 [A, A, A, D, D] {'A': 3, 'D': 2} {'A': 3}
</code></pre>
|
python|python-3.x|pandas|dictionary|counter
| 4
|
8,666
| 70,287,631
|
How best to randomly select a number of non zero elements from an array with many duplicate integers
|
<p>I need to randomly select x non-zero integers from an unsorted 1D numpy array containing y integer elements including an unknown number of zeros as well as duplicate integers. The output should include duplicate integers if required by this random selection. What is the best way to achieve this?</p>
|
<p>One option is to select the non-zero elements first then use <code>random.choice()</code> (with the <code>replace</code> parameter set to either True or False) to select a given number of elements.</p>
<p>Something like this:</p>
<pre><code>import numpy as np
rng = np.random.default_rng() # doing this is recommended by numpy
n = 4 # number of non-zero samples
arr = np.array([1,2,0,0,4,2,3,0,0,4,2,1])
non_zero_arr = arr[arr!=0]
rng.choice(non_zero_arr, n, replace=True)
</code></pre>
|
python|numpy
| 1
|
8,667
| 70,120,222
|
Use multiple dates in pd.date_range
|
<p>I have a range of dates in the date column of a dataframe. The dates are scattered, e.g. 1st Feb, 5th Feb, 11th Feb, etc.</p>
<p>I want to use pd.date_range with a frequency of one minute on every date in this column, so my start argument will be the date and the end argument will be the date + datetime.timedelta(days=1).</p>
<p>I'm struggling to use the apply function for this; can someone help me with it? Or can I use some other function here?</p>
<p>I don't want to use a for loop because the length of my dates will be HUGE.</p>
<p>I tried this:</p>
<p><code>df.date.apply(lambda x: pd.date_range(start=df['date'], end=df['date'] + datetime.timedelta(days=1), freq="1min"), axis=1)</code></p>
<p>but I'm getting an error.</p>
<p>Thanks in advance</p>
|
<p>Use <code>x</code> in the lambda function instead of <code>df['date']</code> and remove <code>axis=1</code>:</p>
<pre><code>df = pd.DataFrame({'date':pd.date_range('2021-11-26', periods=3)})
print (df)
date
0 2021-11-26
1 2021-11-27
2 2021-11-28
s = df['date'].apply(lambda x:pd.date_range(start=x,end=x+pd.Timedelta(days=1),freq="1min"))
print (s)
0 DatetimeIndex(['2021-11-26 00:00:00', '2021-11...
1 DatetimeIndex(['2021-11-27 00:00:00', '2021-11...
2 DatetimeIndex(['2021-11-28 00:00:00', '2021-11...
Name: date, dtype: object
</code></pre>
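<p>If you want to avoid a Python-level call per row entirely, a sketch of a fully vectorised alternative is to build one flat Series of minute-level timestamps (note that, like the <code>apply</code> version, it includes both midnights for each date):</p>
<pre><code>import numpy as np

# one set of minute offsets for a full day, reused for every date
minutes = pd.timedelta_range(start='0h', end='24h', freq='1min')
expanded = pd.Series(np.repeat(df['date'].values, len(minutes)) + np.tile(minutes, len(df)))
</code></pre>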
|
python|pandas|date-range
| 0
|
8,668
| 70,177,871
|
Convert a NumPy array to a binary array with the condition of each element existing in a list
|
<p>Is there any way to convert an array to a binary array such that any element that exists within a defined list is 1, and any element not in the list is 0?</p>
<p>For example, if I define a NumPy array as so:</p>
<pre><code>a = np.array([[23,43,1],[43,5,0],[5,0,0]])
</code></pre>
<p>and a list as so:</p>
<pre><code>l = [5,43]
</code></pre>
<p>I want a function that converts the array to/creates this:</p>
<pre><code>array([[0, 1, 0],
[1, 1, 0],
[1, 0, 0]])
</code></pre>
<p>I have already tried <code>np.where(a in l, 1, 0)</code>, and it gives me this error:</p>
<blockquote>
<p>ValueError: The truth value of an array with more than one element is
ambiguous. Use a.any() or a.all()</p>
</blockquote>
|
<p>Check out <a href="https://numpy.org/doc/stable/reference/generated/numpy.isin.html" rel="nofollow noreferrer">https://numpy.org/doc/stable/reference/generated/numpy.isin.html</a></p>
<pre><code>arr=np.array([[23,43,1],[43,5,0],[5,0,0]])
l = [5,43]
np.isin(arr, l).astype(int)
#array([[0, 1, 0],
# [1, 1, 0],
# [1, 0, 0]])
</code></pre>
|
python|numpy|numpy-ndarray
| 1
|
8,669
| 70,073,499
|
How to keep leading zeroes from a panda column post operation?
|
<p>I have a column which has data as :</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">Date</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">'2021-01-01'</td>
</tr>
<tr>
<td style="text-align: center;">'2021-01-10'</td>
</tr>
<tr>
<td style="text-align: center;">'2021-01-09'</td>
</tr>
<tr>
<td style="text-align: center;">'2021-01-11'</td>
</tr>
</tbody>
</table>
</div>
<p>I need to get only the "year and month" as one column and have it as an integer instead of a string; e.g. '2021-01-01' should be saved as 202101. (I don't need the day part.)</p>
<p>When I try to clean the data I am able to do it but it removes the leading zeroes.</p>
<pre><code>df['period'] = df['Date'].str[:4] + df['Date'].str[6:7]
</code></pre>
<p>This gives me:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">Date</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">20211</td>
</tr>
<tr>
<td style="text-align: center;">202110</td>
</tr>
<tr>
<td style="text-align: center;">20219</td>
</tr>
<tr>
<td style="text-align: center;">202111</td>
</tr>
</tbody>
</table>
</div>
<p>As you can see, for months Jan to Sept, it returns only 1 to 9 instead of 01 to 09, which creates a discrepancy. If I add a zero manually as part of the merge, it will turn '2021-10' into 2021010. I want it simply as the year and month, without the hyphen, keeping the leading zeros for the months. See below how I would want it to appear in the new column.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">Date</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;"><strong>202101</strong></td>
</tr>
<tr>
<td style="text-align: center;">202110</td>
</tr>
<tr>
<td style="text-align: center;"><strong>202109</strong></td>
</tr>
<tr>
<td style="text-align: center;">202111</td>
</tr>
</tbody>
</table>
</div>
<p>I can do it using a loop, but that's not efficient. Is there a better way to do it in Python?</p>
|
<p>The leading zeros are being dropped because of a misunderstanding about the use of <a href="https://stackoverflow.com/questions/509211/understanding-slice-notation">slice notation</a> in Python.</p>
<p>Try changing your code to:</p>
<pre><code>df['period'] = df['Date'].str[:4] + df['Date'].str[5:7]
</code></pre>
<p>Note the change from [6:7] to [5:7].</p>
|
python|pandas|string|data-cleaning
| 2
|
8,670
| 56,268,220
|
Df Headers: Insert a full year of header rows at end of month and fill non populated months with zero
|
<p>Afternoon All,</p>
<p>Test Data as at 30 Mar 2019:</p>
<pre><code>Test_Data = [
('Index', ['Year_Month','Done_RFQ','Not_Done_RFQ','Total_RFQ']),
('0', ['2019-01',10,20,30]),
('1', ['2019-02', 10, 20, 30]),
('2', ['2019-03', 20, 40, 60]),
]
df = pd.DataFrame(dict(Test_Data))
print(df)
Index 0 1 2
0 Year_Month 2019-01 2019-02 2019-03
1 Done_RFQ 10 10 20
2 Not_Done_RFQ 20 20 40
3 Total_RFQ 30 30 60
</code></pre>
<p>Desired output as at 31 Mar 2019 </p>
<p><a href="https://i.stack.imgur.com/VZZiu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VZZiu.png" alt="enter image description here"></a></p>
<p>Desired output as at 30 Apr 2019 </p>
<p><a href="https://i.stack.imgur.com/E3OmR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E3OmR.png" alt="enter image description here"></a></p>
<p>As each month progresses the unformatted df will have an additional column of data </p>
<p>I'd like to:</p>
<p>a. Replace headers in the existing df, note there will only be four columns in March, then 5 in April....13 in Dec: </p>
<pre><code>df.columns = ['Report_Mongo','Month_1','Month_2','Month_3','Month_4','Month_5','Month_6','Month_7','Month_8','Month_9','Month_10','Month_11','Month_12']
</code></pre>
<p>b. As we progress through the year, zero values would be replaced with data. The challenge is to determine how many months have passed and only update non-populated columns with data </p>
|
<p>You can assign columns by length of original columns and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>DataFrame.reindex</code></a>:</p>
<pre><code>c = ['Report_Mongo','Month_1','Month_2','Month_3','Month_4','Month_5','Month_6',
'Month_7','Month_8','Month_9','Month_10','Month_11','Month_12']
df.columns = c[:len(df.columns)]
df = df.reindex(c, axis=1, fill_value=0)
print (df)
Report_Mongo Month_1 Month_2 Month_3 Month_4 Month_5 Month_6 \
0 Year_Month 2019-01 2019-02 2019-03 0 0 0
1 Done_RFQ 10 10 20 0 0 0
2 Not_Done_RFQ 20 20 40 0 0 0
3 Total_RFQ 30 30 60 0 0 0
Month_7 Month_8 Month_9 Month_10 Month_11 Month_12
0 0 0 0 0 0 0
1 0 0 0 0 0 0
2 0 0 0 0 0 0
3 0 0 0 0 0 0
</code></pre>
<p>Alternative is create header with months periods, advantage is only numeric data in all rows:</p>
<pre><code>#set columns by first row
df.columns = df.iloc[0]
#remove first row and create index by first column
df = df.iloc[1:].set_index('Year_Month')
#convert columns to month periods
df.columns = pd.to_datetime(df.columns).to_period('m')
#reindex to full year
df = df.reindex(pd.period_range(start='2019-01',end='2019-12',freq='m'),axis=1,fill_value=0)
print (df)
2019-01 2019-02 2019-03 2019-04 2019-05 2019-06 2019-07 \
Year_Month
Done_RFQ 10 10 20 0 0 0 0
Not_Done_RFQ 20 20 40 0 0 0 0
Total_RFQ 30 30 60 0 0 0 0
2019-08 2019-09 2019-10 2019-11 2019-12
Year_Month
Done_RFQ 0 0 0 0 0
Not_Done_RFQ 0 0 0 0 0
Total_RFQ 0 0 0 0 0
</code></pre>
|
python|pandas|dataframe
| 1
|
8,671
| 56,034,981
|
How to fix the image preprocessing difference between tensorflow and android studio?
|
<p>I'm trying to build a classification model with keras and deploy the model to my Android phone. I use the code from <a href="https://medium.com/@elye.project/applying-tensorflow-in-android-in-4-steps-to-recognize-superhero-f224597eb055" rel="nofollow noreferrer">this website</a> to deploy my own converted model, which is a .pb file, to my Android phone. I load an image from my phone and everything works fine, but the prediction result is totally different from the result I got on my PC.</p>
<p>The procedure of testing on my PC are:</p>
<ol>
<li><p>load the image with cv2, and convert to np.float32</p></li>
<li><p>use the keras resnet50 'preprocess_input' python function to preprocess the image</p></li>
<li><p>expand the image dimension for batching (batch size is 1)</p></li>
<li><p>forward the image to model and get the result</p></li>
</ol>
<p>Relevant code:</p>
<pre class="lang-py prettyprint-override"><code>img = cv2.imread('./my_test_image.jpg')
x = preprocess_input(img.astype(np.float32))
x = np.expand_dims(x, axis=0)
net = load_model('./my_model.h5')
prediction_result = net.predict(x)
</code></pre>
<p>And I noticed that the image preprocessing part of Android is different from the method I used in keras, which mode is caffe(convert the images from RGB to BGR, then zero-center each color channel with respect to the ImageNet dataset). It seems that the original code is for mode tf(will scale pixels between -1 to 1). </p>
<p>So I modified the following code of 'preprocessBitmap' to what I think it should be, and used a 3-channel RGB image with pixel value [127,127,127] to test it. The code predicted the same result as the .h5 model did. But when I load an image to classify, the prediction result is different from the .h5 model's. </p>
<p>Does anyone has any idea? Thank you very much.</p>
<p><strong>I have tried the following:</strong></p>
<ol>
<li><p>Load a 3 channel RGB image in my Phone with pixel value [127,127,127], and use the modified code below, and it will give me a prediction result that is same as prediction result using .h5 model on PC.</p></li>
<li><p>Test the converted .pb model <em>on PC</em> using tensorflow gfile module with a image, and it give me a correct prediction result (compare to .h5 model). So I think the converted .pb file does not have any problem. </p></li>
</ol>
<p><strong>Entire section of preprocessBitmap</strong></p>
<pre class="lang-java prettyprint-override"><code>// code of 'preprocessBitmap' section in TensorflowImageClassifier.java
TraceCompat.beginSection("preprocessBitmap");
// Preprocess the image data from 0-255 int to normalized float based
// on the provided parameters.
bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
for (int i = 0; i < intValues.length; ++i) {
// this is a ARGB format, so we need to mask the least significant 8 bits to get blue, and next 8 bits to get green and next 8 bits to get red. Since we have an opaque image, alpha can be ignored.
final int val = intValues[i];
// original
/*
floatValues[i * 3 + 0] = (((val >> 16) & 0xFF) - imageMean) / imageStd;
floatValues[i * 3 + 1] = (((val >> 8) & 0xFF) - imageMean) / imageStd;
floatValues[i * 3 + 2] = ((val & 0xFF) - imageMean) / imageStd;
*/
// what I think it should be to do the same thing in mode caffe when using keras
floatValues[i * 3 + 0] = (((val >> 16) & 0xFF) - (float)123.68);
floatValues[i * 3 + 1] = (((val >> 8) & 0xFF) - (float)116.779);
floatValues[i * 3 + 2] = (((val & 0xFF)) - (float)103.939);
}
TraceCompat.endSection();
</code></pre>
|
<p>This question is old, but remains the top Google result for preprocess_input for ResNet50 on Android. I could not find an answer for implementing <code>preprocess_input</code> for Java/Android, so I came up with the following based on the original python/keras code:</p>
<pre><code>/*
Preprocesses RGB bitmap IAW keras/imagenet
Port of https://github.com/tensorflow/tensorflow/blob/v2.3.1/tensorflow/python/keras/applications/imagenet_utils.py#L169
with data_format='channels_last', mode='caffe'
Convert the images from RGB to BGR, then will zero-center each color channel with respect to the ImageNet dataset, without scaling.
Returns 3D float array
*/
static float[][][] imagenet_preprocess_input_caffe( Bitmap bitmap ) {
// https://github.com/tensorflow/tensorflow/blob/v2.3.1/tensorflow/python/keras/applications/imagenet_utils.py#L210
final float[] imagenet_means_caffe = new float[]{103.939f, 116.779f, 123.68f};
float[][][] result = new float[bitmap.getHeight()][bitmap.getWidth()][3]; // assuming rgb
for (int y = 0; y < bitmap.getHeight(); y++) {
for (int x = 0; x < bitmap.getWidth(); x++) {
final int px = bitmap.getPixel(x, y);
// rgb-->bgr, then subtract means. no scaling
result[y][x][0] = (Color.blue(px) - imagenet_means_caffe[0] );
result[y][x][1] = (Color.green(px) - imagenet_means_caffe[1] );
result[y][x][2] = (Color.red(px) - imagenet_means_caffe[2] );
}
}
return result;
}
</code></pre>
<p>Usage with a 3D tensorflow-lite input with shape (1,224,224,3):</p>
<pre><code>Bitmap bitmap = <your bitmap of size 224x224x3>;
float[][][][] imgValues = new float[1][bitmap.getHeight()][bitmap.getWidth()][3];
imgValues[0]=imagenet_preprocess_input_caffe(bitmap);
... <prep tfInput, tfOutput> ...
tfLite.run(tfInput, tfOutput);
</code></pre>
|
android|tensorflow|keras|image-preprocessing
| 0
|
8,672
| 55,779,039
|
Pandas: Merge 2 dataframes based on a column values; for mulitple rows containing same column value, append those to different columns
|
<p>I have two dataframes, dataframe1 and dataframe2. They both share the same data in a particular column for both, lets call this column 'share1' and 'share2' for dataframe1 and dataframe2 respectively. </p>
<p>The issue is, there are instances where in dataframe1 there is only one row in 'share1' with a particular value (let's call it 'c34z'), but in dataframe2 there are multiple rows with the value 'c34z' in the 'share2' column. </p>
<p>What I would like to do is, in the new merged dataframe, when there are new values, I would just like to place them in a new column. </p>
<p>So the number of columns in the new dataframe will be the maximum number of duplicates for a particular value in 'share2' . And for rows where there was only a unique value in 'share2', the rest of the added columns will be blank, for that row. </p>
|
<p>Loading Data:</p>
<pre><code>import pandas as pd
df1 = {'key': ['c34z', 'c34z_2'], 'value': ['x', 'y']}
df2 = {'key': ['c34z', 'c34z_2', 'c34z_2'], 'value': ['c34z_value', 'c34z_2_value', 'c34z_2_value']}
df1 = pd.DataFrame(df1)
df2 = pd.DataFrame(df2)
</code></pre>
<p>Convert df2 by grouping and pivoting</p>
<pre><code>df2_pivot = df2.groupby('key')['value'].apply(lambda df: df.reset_index(drop=True)).unstack().reset_index()
</code></pre>
<p>merge df1 and df2_pivot</p>
<pre><code>df_merged = pd.merge(df1, df2_pivot, on='key')
</code></pre>
|
python|pandas
| 1
|
8,673
| 55,688,491
|
Replace the value inside a csv column by value inside parentheses of the same column using python pandas
|
<p>I got the following csv file with sample data:
<a href="https://i.stack.imgur.com/MeDQy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MeDQy.png" alt="Small part of the csv file with sample data"></a></p>
<p>Now I want to replace the columns 'SIFT' and 'PolyPhen' values with the data inside the parentheses of these columns. So for row 1 the SIFT value will replace to 0.82, and for row 2 the SIFT value will be 0.85. Also I want the part before the parentheses, tolerated/deleterious, inside a new column named 'SIFT_prediction'. </p>
<p>This is what I tried so far:</p>
<pre><code>import pandas as pd
import re
testfile = 'test_sift_columns.csv'
df = pd.read_csv(testfile)
df['SIFT'].re.search(r'\((.*?)\)',s).group(1)
</code></pre>
<p>This code will take everything inside the parentheses of the column SIFT. But this does not replace anything. I probably need a for loop to read and replace every row but I don't know how to do it correctly. Also I am not sure if using a regular expression is necessary with pandas. Maybe there is a smarter way to resolve my problem.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.extract.html" rel="nofollow noreferrer"><code>Series.str.extract</code></a>:</p>
<pre><code>df = pd.DataFrame({'SIFT':['tol(0.82)','tol(0.85)','tol(1.42)'],
'PolyPhen':['beg(0)','beg(0)','beg(0)']})
pat = r'(.*?)\((.*?)\)'
df[['SIFT_prediction','SIFT']] = df['SIFT'].str.extract(pat)
df[['PolyPhen_prediction','PolyPhen']] = df['PolyPhen'].str.extract(pat)
print(df)
SIFT_prediction SIFT PolyPhen_prediction PolyPhen
0 tol 0.82 beg 0
1 tol 0.85 beg 0
2 tol 1.42 beg 0
</code></pre>
<p>Alternative:</p>
<pre><code>df[['SIFT_prediction','SIFT']] = df['SIFT'].str.rstrip(')').str.split('(', expand=True)
df[['PolyPhen_prediction','PolyPhen']] = df['PolyPhen'].str.rstrip(')').str.split('(', expand=True)
</code></pre>
|
python|python-3.x|pandas|csv|dataframe
| 2
|
8,674
| 55,845,173
|
sklearn.confusion_matrix - TypeError: 'numpy.ndarray' object is not callable
|
<p>I am trying to build a sklearn confusion matrix using the below</p>
<p>test_Y:</p>
<pre><code> Target
0 0
1 0
2 1
</code></pre>
<p>the data type of test_Y is</p>
<pre><code>Target int64
dtype: object
</code></pre>
<p>and my y_pred is</p>
<pre><code>array([0,0,1])
</code></pre>
<p>i then do my confusion matrix as</p>
<pre><code>cm = confusion_matrix(test_Y,y_pred)
sns.heatmap(cm,annot=True)
</code></pre>
<p>but i get the error</p>
<p><strong>TypeError: 'numpy.ndarray' object is not callable</strong></p>
|
<p>You have reused the name <code>confusion_matrix</code>. You need to rebind it back to your function; this is one way:</p>
<pre><code>from sklearn.metrics import confusion_matrix
cm = confusion_matrix(test_Y, y_pred)
sns.heatmap(cm, annot=True)
</code></pre>
|
python|pandas|numpy|scikit-learn|confusion-matrix
| 1
|
8,675
| 64,992,133
|
How to implement a velocity Verlet integrator which works for the harmonic oscillator in python?
|
<p>I am new to Python and I am trying to implement a velocity Verlet integrator which works for the harmonic oscillator.
As you can see from my notebook below (taken from: <a href="http://hplgit.github.io/prog4comp/doc/pub/._p4c-solarized-Python022.html" rel="nofollow noreferrer">http://hplgit.github.io/prog4comp/doc/pub/._p4c-solarized-Python022.html</a>), Euler's method works, but the Verlet integrator does not. What am I missing?</p>
<pre><code>## Euler's method
from numpy import zeros, linspace, pi, cos, array
import matplotlib.pyplot as plt
import numpy as np
omega = 1
m=1
N=500
dt=0.08
t = linspace(0, N*dt, N+1)
u = zeros(N+1)
vi = zeros(N+1)
X_0 = 2
u[0] = X_0
vi[0] = 0
for n in range(N):
u[n+1] = u[n] + dt*vi[n]
vi[n+1] = vi[n] - dt*omega**2*u[n]
fig = plt.figure()
l1, l2 = plt.plot(t, u, 'b-', t, X_0*cos(omega*t), 'r--')
fig.legend((l1, l2), ('numerical', 'exact'), 'upper left')
plt.xlabel('t')
plt.show()
## Velocity Verlet
for n in range(N):
v_next = vi[n] - 0.5*omega**2*u[n]*dt
u[n+1]= u[n]+ dt*v_next
vi[n+1] = v_next - 0.5*omega**2*u[n]*dt
np.append(u, x_next)
plt.plot(t, X_0*cos(omega*t), label = 'Analytical')
plt.plot(t, u, 'r--', label = 'Verlet')
plt.title('Verlet', fontweight = 'bold', fontsize = 1)
plt.xlabel('t', fontweight = 'bold', fontsize = 14)
plt.ylabel('X', fontweight = 'bold', fontsize = 14)
plt.legend()
plt.show()
</code></pre>
|
<p>You did not implement the velocity Verlet step correctly: the second velocity update must use the newly computed position, not the old one,</p>
<pre><code>vi[n+1] = v_next - 0.5*omega**2*u[n+1]*dt
</code></pre>
<p>This small change should restore second order and energy/amplitude preservation.</p>
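<p>For reference, a minimal corrected version of the loop, keeping the variable names and constants from the question (a sketch, not the exact code):</p>
<pre><code>## Velocity Verlet (corrected)
for n in range(N):
    # half-step velocity, full-step position, then the second half-step velocity
    v_half = vi[n] - 0.5*omega**2*u[n]*dt
    u[n+1] = u[n] + dt*v_half
    vi[n+1] = v_half - 0.5*omega**2*u[n+1]*dt
</code></pre>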
<hr />
<p>Also remove the leftover line from a previous attempt; there is no <code>x_next</code> to append.</p>
<p><a href="https://i.stack.imgur.com/IJa6E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IJa6E.png" alt="enter image description here" /></a></p>
|
python|numpy|numerical-methods|verlet-integration
| 0
|
8,676
| 39,926,162
|
TensorFlow Casting Internal Tensor Pointer
|
<p>I <a href="https://stackoverflow.com/questions/39797095/tensorflow-custom-allocator-and-accessing-data-from-tensor">previously asked</a> how to get at the pointer within a Tensor. I would now like to figure out which datatype is stored and then be able to cast <code>void*</code>'s to this data type.</p>
<p>Tensors have a function,
<code>
DataType dtype() const { return shape_.data_type(); }
</code></p>
<p><code>DataType</code> is a simple enum and doesn't actually help me cast. I would like to be able <code>static_cast<type></code> things to the result of <code>dtype()</code>. So I am looking for a macro or template that can do this for me.</p>
<p>Does such exist?</p>
|
<p>The framework is built in <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/types.h#L142" rel="nofollow"><code>types.h</code></a> and is used in <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/queue_base.cc#L31" rel="nofollow"><code>queue_base.cc</code></a>.</p>
<p>Unfortunately, because it is a template, you need to use macros so that each distinct instantiation gets compiled.</p>
<p>First, write a function that takes the data type as a template parameter:</p>
<pre><code>template <DataType DT>
Status HandleSliceToElement(const Tensor& parent, Tensor* element,
int64 index) {
typedef typename EnumToDataType<DT>::Type T;
...
}
</code></pre>
<p>Then use macros to handle each datatype. Trimmed for brevity (See full <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/queue_base.cc#L361" rel="nofollow">list of macros</a>)</p>
<pre><code>Status QueueBase::CopySliceToElement(const Tensor& parent, Tensor* element,
int64 index) {
#define HANDLE_TYPE(DT) \
if (parent.dtype() == DT) { \
TF_RETURN_IF_ERROR(HandleSliceToElement<DT>(parent, element, index)); \
return Status::OK(); \
}
HANDLE_TYPE(DT_FLOAT);
HANDLE_TYPE(DT_HALF);
HANDLE_TYPE(DT_DOUBLE);
...
#undef HANDLE_TYPE
return errors::Unimplemented("CopySliceToElement Unhandled data type: ",
parent.dtype());
}
</code></pre>
|
c++|templates|macros|tensorflow
| 0
|
8,677
| 44,259,578
|
Faster RCNN: how to translate coordinates
|
<p>I'm trying to understand and use the <a href="https://arxiv.org/pdf/1506.01497.pdf" rel="nofollow noreferrer">Faster R-CNN</a> algorithm on my own data.</p>
<p>My question is about ROI coordinates: what we have as labels, and what we want in the end, are ROI coordinates in the input image. However, if I understand it correctly, anchor boxes are given in the convolutional feature map, then the ROI regression gives ROI coordinates relatively to an anchor box (so easily translatable to coordinates in conv feature map coordinates), and then the <a href="https://arxiv.org/pdf/1504.08083.pdf" rel="nofollow noreferrer">Fast-RCNN</a> part does the ROI pooling using the coordinates in the convolutional feature map, and itself (classifies and) regresses the bounding box coordinates.</p>
<p>Considering that between the raw image and the convolutional features, some convolutions and poolings occured, possibly with strides <code>>1</code> (subsampling), how do we associate coordinates in the raw images to coordinates in feature space (in both ways) ?</p>
<p>How are we supposed to give anchor boxes sizes: relatively to the input image size, or to the convolutional feature map ?</p>
<p>How is the bounding box regressed by Fast-RCNN expressed ? (I would guess: relatively to the ROI proposal, similarly to the encoding of the proposal relatively to the anchor box; but I'm not sure)</p>
|
<p>It looks like this is actually an implementation question; the method itself does not prescribe an answer. </p>
<p>A good way to do it, and the one used by the <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" rel="nofollow noreferrer">Tensorflow Object Detection API</a>, is to always express coordinates and ROI sizes relative to the layer's input size. That is, all coordinates and sizes are real numbers between <code>0</code> and <code>1</code>.</p>
<p>This handles the downsampling problem nicely and makes ROI coordinate computations easy.</p>
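<p>To make the idea concrete, here is a small sketch with hypothetical helper functions (not part of any API) showing how the same normalized box maps onto both the raw image and a downsampled feature map:</p>
<pre><code>def to_normalized(box, w, h):
    # pixel-space (x_min, y_min, x_max, y_max) -> fractions of the layer size
    x_min, y_min, x_max, y_max = box
    return (x_min / w, y_min / h, x_max / w, y_max / h)

def to_pixels(box, w, h):
    # fractions -> pixel coordinates on a layer of size (w, h)
    x_min, y_min, x_max, y_max = box
    return (x_min * w, y_min * h, x_max * w, y_max * h)

# a box on an 800x600 input image...
norm = to_normalized((120, 60, 360, 300), 800, 600)
# ...and the same box on a 50x37 conv feature map
print(to_pixels(norm, 50, 37))   # (7.5, 3.7, 22.5, 18.5)
</code></pre>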
|
machine-learning|tensorflow|computer-vision|deep-learning
| 1
|
8,678
| 69,546,079
|
Python : DataFrame.to_excel should write the table vertically
|
<p>I am trying to write 2 Sheets in a new Excel file</p>
<pre class="lang-py prettyprint-override"><code>with pd.ExcelWriter(outputDetailsFile) as writer:
df1.to_excel(writer, sheet_name='FA',index = False)
df2.to_excel(writer, sheet_name='TA', index = False)
</code></pre>
<p>The above code is working fine.
There is only 1 row in that in "df1"
Therefore, I need the First Sheet "FA" to be written Vertically instead of Horizontally for better readability</p>
<p>At present it is writing like this :
<a href="https://i.stack.imgur.com/7F4GS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7F4GS.png" alt="enter image description here" /></a></p>
<p>It should Write like this :</p>
<p><a href="https://i.stack.imgur.com/caA0B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/caA0B.png" alt="enter image description here" /></a></p>
<p>Please suggest</p>
|
<p>Try transposing the index and columns using the <code>T</code> accessor. After the transpose the original column names become the index, so keep the index (and drop the now redundant header row) to get the field/value pairs written vertically:</p>
<p><code>df1.T.to_excel(writer, sheet_name='FA', header=False)</code></p>
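<p>In the context of the writer block from the question, that would look roughly like this (a sketch; the second sheet is left unchanged):</p>
<pre><code>with pd.ExcelWriter(outputDetailsFile) as writer:
    # one-row frame: field names go down the first column, values down the second
    df1.T.to_excel(writer, sheet_name='FA', header=False)
    df2.to_excel(writer, sheet_name='TA', index=False)
</code></pre>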
|
python|pandas|dataframe
| 0
|
8,679
| 41,103,119
|
Excel merge cells, from 2 sheets using Python Pandas
|
<p>I have two Excel sheets, <code>sheet1</code>, and <code>sheet2</code>. Sheet1 has the <code>row id</code>, <code>First name</code>, <code>Last name</code>, <code>Description</code> columns, etc. Sheet2 has also a column that stores the <code>First name</code>, <code>Last name</code>, and also two other columns, <code>column D</code>, and <code>column E</code>, that need to be merged in the Description column.</p>
<p>The combination of <code>First name</code>, <code>Last name</code>, exists only once in both sheets. </p>
<p>How could I merge the contents of columns D and E from sheet 2 into the column named Description in sheet 1, based on rows where First name and Last name match between sheet 1 and sheet 2, using Python Pandas? </p>
<p>Sheet 1:</p>
<pre><code>ID | columnB | column C | Column D
1 | John | Hingins | Somedescription
</code></pre>
<p>Sheet 2: </p>
<pre><code>ID | column Z | column X | Column Y | Column W
1 | John | Hingins | description2 | Somemoredescription
</code></pre>
<p>Output:
Sheet 1:</p>
<pre><code>ID | columnB | column C | Column D
1 | John | Hingins | description2-separator-Someotherdescription-separator-Somedescription
</code></pre>
|
<p>I think you should look at this. But that's mostly for context. </p>
<p><a href="http://pbpython.com/excel-file-combine.html" rel="nofollow noreferrer">http://pbpython.com/excel-file-combine.html</a></p>
<p>I think your issue actually boils down to this.</p>
<pre><code>>>> !cat scores3.csv
ID,JanSales,FebSales
1,100,200
2,200,500
3,300,400
>>> !cat scores4.csv
ID,CreditScore,EMMAScore
2,good,Watson
3,okay,Thompson
4,not-so-good,NA
</code></pre>
<p>We could read these into objects called DataFrames (think of them sort of like Excel sheets):</p>
<pre><code>>>> import pandas as pd
>>> s3 = pd.read_csv("scores3.csv")
>>> s4 = pd.read_csv("scores4.csv")
>>> s3
ID JanSales FebSales
0 1 100 200
1 2 200 500
2 3 300 400
>>> s4
ID CreditScore EMMAScore
0 2 good Watson
1 3 okay Thompson
2 4 not-so-good NaN
</code></pre>
<p>And then we can merge them on the ID column:</p>
<pre><code>>>> merged = s3.merge(s4, on="ID", how="outer")
>>> merged
ID JanSales FebSales CreditScore EMMAScore
0 1 100 200 NaN NaN
1 2 200 500 good Watson
2 3 300 400 okay Thompson
3 4 NaN NaN not-so-good NaN
</code></pre>
<p>After which we could save it to a csv file or to an Excel file:</p>
<pre><code>>>> merged.to_csv("merged.csv")
>>> merged.to_excel("merged.xlsx")
</code></pre>
<p>From...here...</p>
<p><a href="https://stackoverflow.com/questions/17661836/looking-to-merge-two-excel-files-by-id-into-one-excel-file-using-python-2-7">Looking to merge two Excel files by ID into one Excel file using Python 2.7</a></p>
|
python|excel|pandas
| 1
|
8,680
| 53,977,693
|
Pandas to_sql in django
|
<p>I am trying to use Django's db connection variable to insert a pandas dataframe to Postgres database. The code I use is</p>
<pre><code>df.to_sql('forecast',connection,if_exists='append',index=False)
</code></pre>
<p>And I get the following error</p>
<blockquote>
<p>Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': relation "sqlite_master" does not exist
LINE 1: SELECT name FROM sqlite_master WHERE type='table' AND name=?...</p>
</blockquote>
<p>I think this happens because the Django connection object is not an sqlalchemy object and therefor Pandas assumes I am using sqlite. Is there any way to use .to_sql other than make another connection to the database?</p>
|
<p>It is possible to create the db configuration in the settings.py file:</p>
<pre><code>DATABASES = {
'default': env.db('DATABASE_URL_DEFAULT'),
'other': env.db('DATABASE_URL_OTHER')
}
DB_URI_DEFAULT=env.str('DATABASE_URL_DEFAULT')
DB_URI_OTHER=env.str('DATABASE_URL_OTHER')
</code></pre>
<p>If you want to create sql_alchemy connection you should use DB_URI_DEFAULT or DB_URI_OTHER</p>
<p>In the <code>__init__</code> method of the class in which you will use the <code>.to_sql</code> method, you should write:</p>
<pre><code>from you_project_app import settings
from sqlalchemy import create_engine
import pandas as pd
class Example:
def __init__(self):
self.conn_default = create_engine(settings.DB_URI_DEFAULT).connect()
</code></pre>
<p>And when you use the <code>.to_sql</code> method of pandas, it should look like this:</p>
<pre><code>df_insert.to_sql(table, if_exists='append',index=False,con=self.conn_default)
</code></pre>
|
django|pandas
| 2
|
8,681
| 54,207,221
|
Tensorflow does not see gpu on pycharm
|
<p>Specifications:
System: Ubuntu 18.0.4
Tensorflow:1.9.0,
cudnn=7.2.1</p>
<p>Interpreter project: anaconda environment.</p>
<p>When I run the script on terminal with the same anaconda env, it works fine. Using pycharm, it does not work!! What is the issue ?</p>
|
<p>Go to <strong>File -> Settings -> Project Interpreter</strong> and set the same python environment used by Anaconda.</p>
|
python-3.x|tensorflow|pycharm
| 0
|
8,682
| 53,917,608
|
Is it beneficial to use OOP on large datasets in Python?
|
<p>I'm implementing a Kalman filter on two types of measurements. I have a GPS measurement every second (1 Hz) and 100 measurements of acceleration per second (100 Hz).
So basically I have two huge tables and they have to be fused at some point. My aim is: I really want to write readable and maintainable code. </p>
<p>My first approach was: there is a class for both of the datatables (so an object is a datatable), and I do bulk calculations in the class methods (so almost all of my methods include a for loop), until I get to the actual filter. I found this approach a bit too stiff. It works, but there is so much data-type transformation, and it is just not that convenient.</p>
<p>Now I want to change my code. If I wanted to stick to OOP, my second try would be: every single measurement is an object of either GPS_measurement or acceleration_measurement. This approach seems better, but this way thousands of objects would be created.</p>
<p>My third try would be a data-driven design, but I'm not really familiar with this approach.</p>
<p>Which paradigm should I use? Or perhaps it should be solved by some kind of mixture of the above paradigms? Or should I just use procedural programming with the use of pandas dataframes? </p>
|
<p>It sounds like you would want to use <code>pandas</code>. OOP is a design concept, by the way, not something you have to apply rigidly. Generally speaking, you only want to define your own classes if you plan on extending them or encapsulating specific behaviour. <code>pandas</code> and <code>numpy</code> are two modules that already do almost everything you can ask for with regard to data, and their vectorised operations are much faster than looping over thousands of per-measurement objects; fusing the two tables is a single call, as sketched below. </p>
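<p>A minimal sketch of that idea, assuming made-up column names and synthetic timestamped data (not the asker's actual tables):</p>
<pre><code>import numpy as np
import pandas as pd

# 1 Hz GPS fixes and 100 Hz accelerometer samples (synthetic data)
gps = pd.DataFrame({'t': pd.to_datetime(np.arange(5), unit='s'),
                    'lat': np.linspace(52.0, 52.0004, 5)})
accel = pd.DataFrame({'t': pd.to_datetime(np.arange(500) * 10, unit='ms'),
                      'a': np.random.randn(500)})

# attach the most recent GPS fix to every accelerometer sample, fully vectorised
fused = pd.merge_asof(accel.sort_values('t'), gps.sort_values('t'),
                      on='t', direction='backward')
print(fused.head())
</code></pre>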
|
python|pandas|oop|data-driven
| 0
|
8,683
| 66,017,881
|
Logistic Regression Model (binary) crosstab error = shape of passed values issue
|
<p>I am currently trying to run logistic regression for a data set. I dummy encoded my cat variables and normalized my continuous variables, and I fill null values with -1 (which works for my dataset). I am going through the steps and I am not getting any errors until I try to run my crosstab where its complaining about the shape of my the values passed. I'm getting the same error for both LogR w/ and w/out CV. I have included my code below, I did not include the encoding because that does not seem to be the issue or the code LogR w/out CV because it is basically identical except it excluding the CV.</p>
<pre><code># read in the df w/ encoded variables
allyrs=pd.read_csv("C:/Users/cyrra/OneDrive/Documents/Pythonread/HDS805/CS1W1/modelready_working.csv")
# Find locations of where I need to trim the data down selecting only the encoded variables
allyrs.columns.get_loc("BMI_C__-1.0")
23
allyrs.columns.get_loc("N_BMIR")
152
# Finding the location of the Y col
allyrs.columns.get_loc("CM")
23
#create new X and y for binary LR
y_bi = allyrs[["CM"]]
X_bi = allyrs.iloc[0:1305720, 23:152]
</code></pre>
<p>I then went ahead and checked the lengths of both variables and checked for all the columns in the X set, everything was there. The values are as followed: y_bi = 1305720 rows x 1 col , X_bi = 1305720 rows × 129 columns</p>
<pre><code># Create test/train
# Create test/train for bi column
from sklearn.model_selection import train_test_split
Xbi_train, Xbi_test, ybi_train, ybi_test = train_test_split(X_bi, y_bi,
train_size=0.8,test_size = 0.2)
</code></pre>
<p>again I check the size of Xbi_train and & Ybi_train: Xbi_train=1044576 rows × 129 columns, ybi_train= 1044576 rows × 1 columns</p>
<pre><code># LRw/CV for the binary col
from sklearn.linear_model import LogisticRegressionCV
logitbi_cv = LogisticRegressionCV(cv=2, random_state=0).fit(Xbi_train, ybi_train)
# Set predicted (checking to see if its an array)
logitbi_cv.predict(Xbi_train)
array([0, 0, 0, ..., 0, 0, 0], dtype=int64)
# Set predicted to its own variable
[IN]:pred_logitbi_cv =logitbi_cv.predict(Xbi_train)
# Cross tab LR w/0ut
from sklearn.metrics import confusion_matrix
ct_bi_cv=pd.crosstab(ybi_train, pred_logitbi_cv)
</code></pre>
<p>The error:</p>
<pre><code>[OUT]:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~\anaconda3\lib\site-packages\pandas\core\internals\managers.py in create_block_manager_from_arrays(arrays, names, axes)
1701 blocks = _form_blocks(arrays, names, axes)
-> 1702 mgr = BlockManager(blocks, axes)
1703 mgr._consolidate_inplace()
~\anaconda3\lib\site-packages\pandas\core\internals\managers.py in __init__(self, blocks, axes, do_integrity_check)
142 if do_integrity_check:
--> 143 self._verify_integrity()
144
~\anaconda3\lib\site-packages\pandas\core\internals\managers.py in _verify_integrity(self)
322 if block.shape[1:] != mgr_shape[1:]:
--> 323 raise construction_error(tot_items, block.shape[1:], self.axes)
324 if len(self.items) != tot_items:
ValueError: Shape of passed values is (1, 2), indices imply (1044576, 2)
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-121-c669b17c171f> in <module>
1 # LR W/ CV
2 # Cross tab LR w/0ut
----> 3 ct_bi_cv=pd.crosstab(ybi_train, pred_logitbi_cv)
~\anaconda3\lib\site-packages\pandas\core\reshape\pivot.py in crosstab(index, columns, values, rownames, colnames, aggfunc, margins, margins_name, dropna, normalize)
596 **dict(zip(unique_colnames, columns)),
597 }
--> 598 df = DataFrame(data, index=common_idx)
599 original_df_cols = df.columns
600
~\anaconda3\lib\site-packages\pandas\core\frame.py in __init__(self, data, index, columns, dtype, copy)
527
528 elif isinstance(data, dict):
--> 529 mgr = init_dict(data, index, columns, dtype=dtype)
530 elif isinstance(data, ma.MaskedArray):
531 import numpy.ma.mrecords as mrecords
~\anaconda3\lib\site-packages\pandas\core\internals\construction.py in init_dict(data, index, columns, dtype)
285 arr if not is_datetime64tz_dtype(arr) else arr.copy() for arr in arrays
286 ]
--> 287 return arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype)
288
289
~\anaconda3\lib\site-packages\pandas\core\internals\construction.py in arrays_to_mgr(arrays, arr_names, index, columns, dtype, verify_integrity)
93 axes = [columns, index]
94
---> 95 return create_block_manager_from_arrays(arrays, arr_names, axes)
96
97
~\anaconda3\lib\site-packages\pandas\core\internals\managers.py in create_block_manager_from_arrays(arrays, names, axes)
1704 return mgr
1705 except ValueError as e:
-> 1706 raise construction_error(len(arrays), arrays[0].shape, axes, e)
1707
1708
ValueError: Shape of passed values is (1, 2), indices imply (1044576, 2)
</code></pre>
<p>I realize this is saying that the number of rows being passed in to the cross tab doesn't match but can someone tell me why this is happening or where I am going wrong? I am copying the example code with my own data exactly as it was provided in the book I am working from .</p>
<p>Thank you so much!</p>
|
<p>Your target variable should be of shape (n,) not (n,1) as is your case when you call <code>y_bi = allyrs[["CM"]]</code> . See the relevant <a href="https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegressionCV.html#sklearn.linear_model.LogisticRegressionCV.fit" rel="nofollow noreferrer">help page</a>. There should be a warning about this because the fit will not work but I guess this was missed somehow.</p>
<p>If you call <code>y_bi = allyrs["CM"]</code>, for example, if I set up some dummy data:</p>
<pre><code>import numpy as np
import pandas as pd
np.random.seed(111)
allyrs = pd.DataFrame(np.random.binomial(1,0.5,(100,4)),columns=['x1','x2','x3','CM'])
X_bi = allyrs.iloc[:,:4]
y_bi = allyrs["CM"]
</code></pre>
<p>Then run the train test split followed by the fit:</p>
<pre><code>from sklearn.model_selection import train_test_split
Xbi_train, Xbi_test, ybi_train, ybi_test = train_test_split(X_bi, y_bi,
train_size=0.8,test_size = 0.2)
from sklearn.linear_model import LogisticRegressionCV
logitbi_cv = LogisticRegressionCV(cv=2, random_state=0).fit(Xbi_train, ybi_train)
pred_logitbi_cv =logitbi_cv.predict(Xbi_train)
pd.crosstab(ybi_train, pred_logitbi_cv)
col_0 0 1
CM
0 39 0
1 0 41
</code></pre>
|
python|pandas|scikit-learn|logistic-regression|crosstab
| 1
|
8,684
| 52,870,521
|
pandas.pivot_table : How to name functions for aggregation
|
<p>I am trying to pivot pandas DataFrame using several aggregate functions, some of which are lambda. There has to be a distinct name for each column in order to have aggregations by several lambda functions. I tried a few ideas I found online but none worked. This is the minimal example:</p>
<pre><code>df = pd.DataFrame({'col1': [1, 1, 2, 3], 'col2': [4, 4, 5, 6], 'col3': [7, 10, 8, 9]})
pivoted_df = df.pivot_table(index = ['col1', 'col2'], values = 'col3', aggfunc=[('lam1', lambda x: np.percentile(x, 50)), ('lam2', np.percentile(x, 75)]).reset_index()
</code></pre>
<p>The error is </p>
<pre><code>AttributeError: 'SeriesGroupBy' object has no attribute 'lam1'
</code></pre>
<p>I tried with <code>dictionary</code>, it also results in error. Can someone help? Thanks!</p>
|
<p>Name the functions explicitly:</p>
<pre><code>def lam1(x):
return np.percentile(x, 50)
def lam2(x):
return np.percentile(x, 75)
pivoted_df = df.pivot_table(index = ['col1', 'col2'], values = 'col3',
aggfunc=[lam1, lam2]).reset_index()
</code></pre>
<p>Your aggregation series will then be appropriately named:</p>
<pre><code>print(pivoted_df)
col1 col2 lam1 lam2
0 1 4 8.5 9.25
1 2 5 8.0 8.00
2 3 6 9.0 9.00
</code></pre>
<p>The <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow noreferrer">docs</a> for <code>pd.pivot_table</code> explain why:</p>
<blockquote>
<p><strong>aggfunc</strong> : function, list of functions, dict, default numpy.mean</p>
<p>If list of functions passed, <strong>the resulting pivot table will have
hierarchical columns whose top level are the function names</strong> (<em>inferred
from the function objects themselves</em>) If dict is passed, the key is
column to aggregate and value is function or list of functions</p>
</blockquote>
|
python|pandas|lambda|pivot-table
| 4
|
8,685
| 52,503,035
|
How to find out index in numpy array python
|
<p>I have saved the Urdu text in <code>numpy</code> array and I want to find out the index number, however I am not able to do that. This is my code</p>
<pre><code>import numpy as np
wordsList = np.load('urduwords.npy')
wordIndex= list(wordsList).index("آئندہ")
</code></pre>
<p>When I print the <code>wordsList</code> I can see the word exist there</p>
<pre><code> print(wordsList)
['\ufeffکرخت', 'آؤٹ', 'آؤں', 'آئرلینڈ', 'آئرن']
</code></pre>
|
<p>Numpy has a <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer">where</a> function which will give you the index value of the output. By the way, if you are using the latest numpy version then it will automatically detect the type.</p>
<p>Try this</p>
<pre><code>import numpy as np
wordsList =np.array(['\ufeffکرخت', 'آؤٹ', 'آؤں', 'آئرلینڈ', 'آئرن',"آئندہ"])
arr_index = np.where(wordsList == 'آئندہ')
print(arr_index)
</code></pre>
|
python|arrays|numpy
| 0
|
8,686
| 52,574,040
|
Min-max scaling along rows in numpy array
|
<p>I have a numpy array and I want to rescale values along each row to values between 0 and 1 using the following procedure:</p>
<p>If the maximum value along a given row is <code>X_max</code> and the minimum value along that row is <code>X_min</code>, then the rescaled value (<code>X_rescaled</code>) of a given entry (<code>X</code>) in that row should become:</p>
<pre><code>X_rescaled = (X - X_min)/(X_max - X_min)
</code></pre>
<p>As an example, let's consider the following array (<code>arr</code>):</p>
<pre><code>arr = np.array([[1.0,2.0,3.0],[0.1, 5.1, 100.1],[0.01, 20.1, 1000.1]])
print arr
array([[ 1.00000000e+00, 2.00000000e+00, 3.00000000e+00],
[ 1.00000000e-01, 5.10000000e+00, 1.00100000e+02],
[ 1.00000000e-02, 2.01000000e+01, 1.00010000e+03]])
</code></pre>
<p>Presently, I am trying to use <code>MinMaxscaler</code> from <code>scikit-learn</code> in the following way:</p>
<pre><code>from sklearn.preprocessing import MinMaxScaler
result = MinMaxScaler(arr)
</code></pre>
<p>But, I keep getting my initial array, i.e. <code>result</code> turns out to be the same as <code>arr</code> in the aforementioned method. What am I doing wrong? </p>
<p>How can I scale the array <code>arr</code> in the manner that I require (min-max scaling along each axis?) Thanks in advance.</p>
|
<p><code>MinMaxScaler</code> is a bit clunky to use; <code>sklearn.preprocessing.minmax_scale</code> is more convenient. This operates along columns, so use the transpose:</p>
<pre><code>>>> import numpy as np
>>> from sklearn import preprocessing
>>>
>>> a = np.random.random((3,5))
>>> a
array([[0.80161048, 0.99572497, 0.45944366, 0.17338664, 0.07627295],
[0.54467986, 0.8059851 , 0.72999058, 0.08819178, 0.31421126],
[0.51774372, 0.6958269 , 0.62931078, 0.58075685, 0.57161181]])
>>> preprocessing.minmax_scale(a.T).T
array([[0.78888024, 1. , 0.41673812, 0.10562126, 0. ],
[0.63596033, 1. , 0.89412757, 0. , 0.314881 ],
[0. , 1. , 0.62648851, 0.35384099, 0.30248836]])
>>>
>>> b = np.array([(4, 1, 5, 3), (0, 1.5, 1, 3)])
>>> preprocessing.minmax_scale(b.T).T
array([[0.75 , 0. , 1. , 0.5 ],
[0. , 0.5 , 0.33333333, 1. ]])
</code></pre>
|
python|arrays|numpy|scikit-learn
| 10
|
8,687
| 52,596,609
|
Efficient way to go from iris dataset in Pandas form to sk-learn form?
|
<p>How can I transform the Pandas version of the iris dataset, to the form used by <code>sk-learn</code>?</p>
<pre><code>#Seaborn dataset
import seaborn as sns
iris_seaborn = sns.load_dataset("iris")
sepal_length sepal_width petal_length petal_width species
0 5.1 3.5 1.4 0.2 setosa
1 4.9 3.0 1.4 0.2 setosa
2 4.7 3.2 1.3 0.2 setosa
3 4.6 3.1 1.5 0.2 setosa
4 5.0 3.6 1.4 0.2 setosa
</code></pre>
<p>Sci-kit Learn: </p>
<pre><code>#sk-learn dataset
from sklearn.datasets import load_iris
iris_sklearn = load_iris()
[Out] array([[5.1, 3.5, 1.4, 0.2],
[4.9, 3. , 1.4, 0.2],
[4.7, 3.2, 1.3, 0.2],
[4.6, 3.1, 1.5, 0.2],
[5. , 3.6, 1.4, 0.2]])
iris_sklearn.target[0:5]
[Out] array([0, 0, 0, 0, 0])
</code></pre>
<p>I know that the steps are normalizing the columns using <code>sklearn.preprocessing.MinMaxScaler</code> and <code>sklearn.preprocessing.LabelEncoder</code> for the numeric and categorical data respectively. But I don't know a more efficent way other than doing it to each column and then putting them together with <code>zip()</code>.</p>
<p>Any help is appreciated!</p>
|
<p>You can <code>factorize</code> the labels, then use the underlying <code>numpy</code> array for the rest of the data:</p>
<pre><code>target = pd.factorize(iris_seaborn.species)[0]
# alternatively:
# target = pd.Categorical(iris_seaborn.species).codes
# or
# target = iris_seaborn.species.factorize()[0]
data = iris_seaborn.iloc[:,:-1].values
# look at start of data:
>>> data[:5,:]
array([[5.1, 3.5, 1.4, 0.2],
[4.9, 3. , 1.4, 0.2],
[4.7, 3.2, 1.3, 0.2],
[4.6, 3.1, 1.5, 0.2],
[5. , 3.6, 1.4, 0.2]])
# and of target:
>>> target[:5]
array([0, 0, 0, 0, 0])
</code></pre>
|
python|pandas|scikit-learn
| 2
|
8,688
| 46,449,721
|
Splitting a dataframe based on the index
|
<p>I would like to split the DF_input below based on the index. That is, from the DF below, how do I obtain: </p>
<pre><code> measurement value
0 0 13
1 1 3
2 2 4
0 0 8
1 1 12
2 2 34
3 5 54
</code></pre>
<p>DF_output1</p>
<pre><code> measurement value
0 0 13
1 1 3
2 2 4
</code></pre>
<p>DF_output2</p>
<pre><code> measurement value
0 0 8
1 1 12
2 2 34
3 5 54
</code></pre>
<p>What I did is the following:</p>
<pre><code> df_input.reset_index(inplace=True)
shifted = df_dataset['index'].shift()
m = shifted.diff(-1).ne(0.000000)
a = m.cumsum()
aa = df_dataset.groupby([df_dataset.uuid,a])
for k, gp in aa:
print(gp)
</code></pre>
<p>What am I doing wrong? Any help would be much appreciated.
Best Regards, Carlo</p>
|
<p>Use a <code>groupby</code> to partition the index into separate dataframes of increasing subsequence:</p>
<pre><code>for _, g in df.groupby((df.index.to_series().diff().fillna(1) < 0).cumsum()):
print(g, '\n')
measurement value
0 0 13
1 1 3
2 2 4
measurement value
0 0 8
1 1 12
2 2 34
3 5 54
</code></pre>
<p>This solution is a little more flexible because it does not define groups based on whether they begin with <code>0</code>, but rather finds increasing subsequences in the index.</p>
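<p>For reference, you can inspect the grouping key itself; on the sample index it marks where a new increasing run starts:</p>
<pre><code>key = (df.index.to_series().diff().fillna(1) < 0).cumsum()
print(key.tolist())   # [0, 0, 0, 1, 1, 1, 1]
</code></pre>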
|
python|pandas|dataframe|indexing
| 1
|
8,689
| 46,228,276
|
Create a Pandas daily aggregate time series from a DataFrame with date ranges
|
<p>I have a Pandas DataFrame of subscriptions, each with a start datetime (timestamp) and an optional end datetime (if they were canceled).</p>
<p>For simplicity, I have created string columns for the date (e.g. "20170901") based on start and end datetimes (timestamps). It looks like this:</p>
<p><code>df = pd.DataFrame([('20170511', None), ('20170514', '20170613'), ('20170901', None), ...], columns=["sd", "ed"])
</code></p>
<p>The end result should be a time series of how many subscriptions were active on any given date in a range.</p>
<p>To that end, I created an Index for all the days within a range:</p>
<p><code>days = df.groupby(["sd"])["sd"].count()</code></p>
<p>I am able to create what I am interested in with a loop each executing a query over the entire DataFrame <code>df</code>.</p>
<p><code>count_by_day = pd.DataFrame([
len(df.loc[(df.sd <= i) & (df.ed.isnull() | (df.ed > i))])
for i in days.index], index=days.index)
</code></p>
<p>Note that I have values for each day in the original dataset, so there are no gaps. I'm sure getting the date range can be improved.</p>
<p>The actual question is: is there an efficient way to compute this for a large initial dataset df, with multiple thousands of rows? It seems the method I used is quadratic in complexity. I've also tried df.query() but it's 66% slower than the Pythonic filter and does not change the complexity.</p>
<p>I tried to search the Pandas docs for examples but I seem to be using the wrong keywords. Any ideas?</p>
|
<p>It's an interesting problem; here's how I would do it. I'm not sure about the performance.</p>
<p>EDIT: My first answer was incorrect; I didn't read the question fully.</p>
<pre><code># Initial data, columns as Timestamps
df = pd.DataFrame([('20170511', None), ('20170514', '20170613'), ('20170901', None)], columns=["sd", "ed"])
df['sd'] = pd.DatetimeIndex(df.sd)
df['ed'] = pd.DatetimeIndex(df.ed)
# Range input and related index
beg = pd.Timestamp('2017-05-15')
end = pd.Timestamp('2017-09-15')
idx = pd.DatetimeIndex(start=beg, end=end, freq='D')
# We filter data for records out of the range and then clip the
# subscriptions start/end to the range bounds.
fdf = df[(df.sd <= beg) | ((df.ed >= end) | (pd.isnull(df.ed)))]
fdf['ed'].fillna(end, inplace=True)
fdf['ps'] = fdf.sd.apply(lambda x: max(x, beg))
fdf['pe'] = fdf.ed.apply(lambda x: min(x, end))
# We run a conditional count
idx.to_series().apply(lambda x: len(fdf[(fdf.ps<=x) & (fdf.pe >=x)]))
</code></pre>
|
python|pandas|datetime|filter|aggregate
| 2
|
8,690
| 46,246,960
|
Enumerate rows for each dataframe group based on conditions
|
<p>I would like to reenumerate rows in given <code>df</code> using some conditions. My question is an extension of this <a href="https://stackoverflow.com/questions/17228215/enumerate-each-row-for-each-group-in-a-dataframe">question</a>.</p>
<p>Example of <code>df</code>:</p>
<pre><code> ind seq status
0 1 2 up
1 1 3 mid
2 1 5 down
3 2 1 up
4 2 2 mid
5 2 3 down
6 3 1 up
7 3 2 mid
8 3 3 oth
</code></pre>
<p>The <code>df</code> contains an <code>ind</code> column which represents a <strong>group</strong>. The <code>seq</code> column might have some bad data. That's why I would like to add another column <code>seq_corr</code> to correct the <code>seq</code> enumeration based on some conditions:</p>
<ul>
<li>the first value in a group in <code>status</code> column equals <code>up</code></li>
<li>the last value in a group in <code>status</code> column equals <code>down</code> OR <code>oth</code></li>
<li>in all other cases copy number from <code>seq</code> column. </li>
</ul>
<p>I know the logical way to do this, but I have some trouble converting it to <code>Python</code>, especially when it comes to proper slicing and accessing the first and last elements of each group.</p>
<p>Below you can find my not working code:</p>
<pre><code> def new_id(x):
if (x.loc['status',0] == 'up') and ((x.loc['status',-1]=='down') or (x['status',-1]=='oth')):
x['ind_corr'] = np.arange(1, len(x) + 1)
else:
x['seq_corr']= x['seq']
return x
df.groupby('ind', as_index=False).apply(new_id)
</code></pre>
<p>Expected result:</p>
<pre><code> ind seq status seq_corr
0 1 2 up 1
1 1 3 mid 2
2 1 5 down 3
3 2 1 up 1
4 2 2 mid 2
5 2 3 down 3
6 3 5 up 1
7 3 2 mid 2
8 3 7 oth 3
</code></pre>
<p>Hoping that someone would be able to point me to a solution.</p>
|
<p>Let's try <code>df.groupby</code> followed by an <code>apply</code> and <code>np.concatenate</code>.</p>
<pre><code>vals = df.groupby('ind').apply(
lambda g: np.where(g['status'].iloc[0] == 'up'
or g['status'].iloc[-1] in {'down', 'oth'},
np.arange(1, len(g) + 1), g['seq'])
).values
df['seq_corr'] = np.concatenate(vals)
</code></pre>
<hr>
<pre><code>df
ind seq status seq_corr
0 1 2 up 1
1 1 3 mid 2
2 1 5 down 3
3 2 1 up 1
4 2 2 mid 2
5 2 3 down 3
6 3 1 up 1
7 3 2 mid 2
8 3 3 oth 3
</code></pre>
|
python|pandas|dataframe|group-by
| 2
|
8,691
| 58,359,645
|
A question about modifying values of one array based on values in another array
|
<p>Consider a 2D numpy array, a 1D numpy array, and a constant:</p>
<pre><code>arr1 = [[ 4 4] arr2 = [ 1 7] k = 2
[ 3 6]
[ 7 10]
[-2 6]
[-1 6]
[-8 8]]
</code></pre>
<p>Here's what I need to do: If the absolute value of the values in arr1[:,0] are in arr2, then I need to subtract k from the corresponding values in arr1[:,1]. The final output should be:</p>
<pre><code>arr1 = [[ 4 4]
[ 3 6]
[ 7 8]
[-2 6]
[-1 4]
[-8 8]]
</code></pre>
<p>Thank you.</p>
|
<p>Still learning, but this seems to work: build a boolean row mask with <code>np.in1d</code> on the absolute values of the first column, then subtract <code>k</code> from the second column of the selected rows:</p>
<pre><code>arr1[np.in1d(np.abs(arr1[:, 0]), arr2), 1] -= k
print(arr1)
</code></pre>
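<p>A self-contained check with the arrays from the question:</p>
<pre><code>import numpy as np

arr1 = np.array([[4, 4], [3, 6], [7, 10], [-2, 6], [-1, 6], [-8, 8]])
arr2 = np.array([1, 7])
k = 2

# rows whose |first column| value is in arr2 get k subtracted from their second column
arr1[np.in1d(np.abs(arr1[:, 0]), arr2), 1] -= k
print(arr1)
# [[ 4  4]
#  [ 3  6]
#  [ 7  8]
#  [-2  6]
#  [-1  4]
#  [-8  8]]
</code></pre>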
|
python-3.x|numpy-ndarray
| 0
|
8,692
| 58,194,497
|
Custom Early Stop Function - Stop When Cost Value Starts Accelerating Upward After Convergence?
|
<p>I am training a model using Tensorflow in Python 3, and have set up my own separate early stopping function. My model keeps the cost value fairly low for most of the training run, but then like normal, it reaches a certain point where not only does it no longer improve/minimize the cost function, but it gets exponentially worse and accelerates up. I have attached the values of my costs below.</p>
<p>I'm wondering if someone has an idea (pseudo-code, brainstorm, or link that I haven't found yet) of a way to improve my early stopping function to catch when this acceleration happens, and enforce the early stop. I don't necessarily want to have just a static number (like > 1.000) in case it hits that number but isn't done searching below. Maybe have some sort of an acceleration monitoring? A moving average? As you will see from the values and image, the acceleration is generally quite extreme at the end and will happen eventually without fail every training run. I'd like to be able to catch it as soon as possible, but still ensure that the move is drastic enough to enforce the stop. Thanks!</p>
<p><a href="https://i.stack.imgur.com/Wsjry.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Wsjry.png" alt="Image of Cost Acceleration"></a></p>
<pre><code>epoch: 1 cost: 0.032336
epoch: 2 cost: 0.015083
epoch: 3 cost: 0.003783
epoch: 4 cost: 0.011579
epoch: 5 cost: 0.00436
epoch: 6 cost: 0.003667
epoch: 7 cost: 0.000973
epoch: 8 cost: 0.002916
epoch: 9 cost: 0.016516
epoch: 10 cost: 0.00094
epoch: 11 cost: 0.000656
epoch: 12 cost: 0.001112
epoch: 13 cost: 0.000761
epoch: 14 cost: 0.002976
epoch: 15 cost: 0.004531
epoch: 16 cost: 0.00247
epoch: 17 cost: 0.005809
epoch: 18 cost: 0.011614
epoch: 19 cost: 0.004681
epoch: 20 cost: 0.002704
epoch: 21 cost: 0.001122
epoch: 22 cost: 0.109581
epoch: 23 cost: 0.001352
epoch: 24 cost: 0.000767
epoch: 25 cost: 0.009472
epoch: 26 cost: 0.003918
epoch: 27 cost: 0.007462
epoch: 28 cost: 0.002033
epoch: 29 cost: 0.004985
epoch: 30 cost: 0.006285
epoch: 31 cost: 0.004838
epoch: 32 cost: 0.008076
epoch: 33 cost: 0.008414
epoch: 34 cost: 0.008761
epoch: 35 cost: 0.002719
epoch: 36 cost: 0.002752
epoch: 37 cost: 0.00355
epoch: 38 cost: 0.012253
epoch: 39 cost: 0.052947
epoch: 40 cost: 0.005952
epoch: 41 cost: 0.012556
epoch: 42 cost: 0.018322
epoch: 43 cost: 0.042715
epoch: 44 cost: 0.045315
epoch: 45 cost: 0.051732
epoch: 46 cost: 0.072919
epoch: 47 cost: 0.013907
epoch: 48 cost: 0.088789
epoch: 49 cost: 0.045083
epoch: 50 cost: 0.038073
epoch: 51 cost: 0.033848
epoch: 52 cost: 0.022773
epoch: 53 cost: 0.198873
epoch: 54 cost: 0.020925
epoch: 55 cost: 0.02264
epoch: 56 cost: 0.039353
epoch: 57 cost: 0.055266
epoch: 58 cost: 0.057254
epoch: 59 cost: 0.048848
epoch: 60 cost: 0.072187
epoch: 61 cost: 0.066818
epoch: 62 cost: 0.111698
epoch: 63 cost: 0.121994
epoch: 64 cost: 0.216178
epoch: 65 cost: 0.4132
epoch: 66 cost: 0.243138
epoch: 67 cost: 0.628117
epoch: 68 cost: 0.349325
epoch: 69 cost: 0.413678
epoch: 70 cost: 0.376448
epoch: 71 cost: 0.931199
epoch: 72 cost: 5.495036
epoch: 73 cost: 2.914621
epoch: 74 cost: 7.160439
epoch: 75 cost: 13.324359
epoch: 76 cost: 22.426832
epoch: 77 cost: 116.921036
epoch: 78 cost: 285.824371
</code></pre>
|
<p>You can do this by keeping a window of the last n cost values and computing its range (max minus min of the window). Then set a threshold: if the range becomes bigger than m times the minimum of the window, stop training. A small sketch is shown below.</p>
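<p>A minimal sketch of that idea (the <code>costs</code> iterable and the values of <code>n</code> and <code>m</code> are placeholders you would tune for your training loop):</p>
<pre><code>from collections import deque

def make_early_stop(n=10, m=3.0):
    """Stop when the spread (max - min) of the last n costs exceeds m times the window minimum."""
    window = deque(maxlen=n)
    def should_stop(cost):
        window.append(cost)
        if len(window) < n:           # not enough history yet
            return False
        lo, hi = min(window), max(window)
        return (hi - lo) > m * lo     # cost is blowing up relative to the recent best
    return should_stop

should_stop = make_early_stop(n=10, m=3.0)
for epoch, cost in enumerate(costs, start=1):   # costs: your per-epoch cost values
    if should_stop(cost):
        print('early stop at epoch', epoch)
        break
</code></pre>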
|
python|python-3.x|tensorflow|reinforcement-learning
| 1
|
8,693
| 58,568,218
|
Datetime conversion format
|
<p>I converted the datetime from a format '2018-06-22T09:38:00.000-04:00'
to pandas datetime format</p>
<p>I tried to convert using pandas and got output, but the output is</p>
<p>o/p: 2018-06-22 09:38:00-04:00</p>
<pre><code>date = '2018-06-22T09:38:00.000-04:00'
dt = pd.to_datetime(date)
</code></pre>
<p>expected result: 2018-06-22 09:38 </p>
<p>actual result: 2018-06-22 09:38:00-04:00</p>
|
<p>The timestamp has a timezone, so if you convert to UTC with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Timestamp.tz_convert.html" rel="nofollow noreferrer"><code>Timestamp.tz_convert</code></a>, the time is shifted:</p>
<pre><code>date = '2018-06-22T09:38:00.000-04:00'
dt = pd.to_datetime(date).tz_convert(None)
print (dt)
2018-06-22 13:38:00
</code></pre>
<p>So a possible solution is to remove the last 6 characters (the UTC offset) from the datetime string:</p>
<pre><code>dt = pd.to_datetime(date[:-6])
print (dt)
2018-06-22 09:38:00
</code></pre>
|
python-3.x|pandas
| 2
|
8,694
| 58,412,910
|
Extract key and value from json to new dataframe
|
<p>I have a dataframe that has JSON values in its columns. Those are nested over multiple levels. I would like to extract the final keys and values into a new dataframe. I will give you sample column values below.</p>
<blockquote>
<p>{'shipping_assignments': [{'shipping': {'address': {'address_type':
'shipping', 'city': 'Calder', 'country_id': 'US',
'customer_address_id': 1, 'email': 'roni_cost@example.com',
'entity_id': 1, 'firstname': 'Veronica', 'lastname': 'Costello',
'parent_id': 1, 'postcode': '49628-7978', 'region': 'Michigan',
'region_code': 'MI', 'region_id': 33, 'street': ['6146 Honey Bluff
Parkway'], 'telephone': '(555) 229-3326'}, 'method':
'flatrate_flatrate', 'total': {'base_shipping_amount': 5,
'base_shipping_discount_amount': 0,
'base_shipping_discount_tax_compensation_amnt': 0,
'base_shipping_incl_tax': 5, 'base_shipping_invoiced': 5,
'base_shipping_tax_amount': 0, 'shipping_amount': 5,
'shipping_discount_amount': 0,
'shipping_discount_tax_compensation_amount': 0, 'shipping_incl_tax':
5, 'shipping_invoiced': 5, 'shipping_tax_amount': 0}}, 'items':
[{'amount_refunded': 0, 'applied_rule_ids': '1',
'base_amount_refunded': 0, 'base_discount_amount': 0,
'base_discount_invoiced': 0, 'base_discount_tax_compensation_amount':
0, 'base_discount_tax_compensation_invoiced': 0,
'base_original_price': 29, 'base_price': 29, 'base_price_incl_tax':
31.39, 'base_row_invoiced': 29, 'base_row_total': 29, 'base_row_total_incl_tax': 31.39, 'base_tax_amount': 2.39,
'base_tax_invoiced': 2.39, 'created_at': '2019-09-27 10:03:45',
'discount_amount': 0, 'discount_invoiced': 0, 'discount_percent': 0,
'free_shipping': 0, 'discount_tax_compensation_amount': 0,
'discount_tax_compensation_invoiced': 0, 'is_qty_decimal': 0,
'item_id': 1, 'name': 'Iris Workout Top', 'no_discount': 0,
'order_id': 1, 'original_price': 29, 'price': 29, 'price_incl_tax':
31.39, 'product_id': 1434, 'product_type': 'configurable', 'qty_canceled': 0, 'qty_invoiced': 1, 'qty_ordered': 1,
'qty_refunded': 0, 'qty_shipped': 1, 'row_invoiced': 29, 'row_total':
29, 'row_total_incl_tax': 31.39, 'row_weight': 1, 'sku':
'WS03-XS-Red', 'store_id': 1, 'tax_amount': 2.39, 'tax_invoiced':
2.39, 'tax_percent': 8.25, 'updated_at': '2019-09-27 10:03:46', 'weight': 1, 'product_option': {'extension_attributes':
{'configurable_item_options': [{'option_id': '141', 'option_value':
167}, {'option_id': '93', 'option_value': 58}]}}}]}],
'payment_additional_info': [{'key': 'method_title', 'value': 'Check /
Money order'}], 'applied_taxes': [{'code': 'US-MI-*-Rate 1', 'title':
'US-MI-*-Rate 1', 'percent': 8.25, 'amount': 2.39, 'base_amount':
2.39}], 'item_applied_taxes': [{'type': 'product', 'applied_taxes': [{'code': 'US-MI-*-Rate 1', 'title': 'US-MI-*-Rate 1', 'percent':
8.25, 'amount': 2.39, 'base_amount': 2.39}]}], 'converting_from_quote': True}</p>
</blockquote>
<p>Above is single row value of the dataframe column df['x']</p>
<p>My conversion code is below:</p>
<pre><code>sample = data['x'].tolist()
data = json.dumps(sample)
df = pd.read_json(data)
</code></pre>
<p>it gives new dataframe with columns </p>
<blockquote>
<p>Index(['applied_taxes', 'converting_from_quote', 'item_applied_taxes',
'payment_additional_info', 'shipping_assignments'],
dtype='object')</p>
</blockquote>
<p>When I tried to do the same above to convert the column which has row values </p>
<pre><code>m_df = df['applied_taxes'].apply(lambda x : re.sub('.?\[|$.|]',"", str(x)))
m_sample = m_df.tolist()
m_data = json.dumps(m_sample)
c_df = pd.read_json(m_data)
</code></pre>
<p>It doesn't work</p>
<p>Check this link to get the <a href="https://jsonbeautifier.org/?id=5316ed51cbf545ca8b83f947377ce2a2" rel="nofollow noreferrer">beautified_json</a></p>
|
<p>I came across a nice ETL package in Python called <code>petl</code>. Convert the list of dicts into a petl table with the <code>fromdicts()</code> function:</p>
<pre><code>from petl import fromdicts, unpackdict

order_table = fromdicts(data_list)
</code></pre>
<p>If any of the columns contains a nested dict, use <code>unpackdict(order_table, 'nested_col')</code> to unpack it. In my case I need to unpack the <code>applied_taxes</code> column; the code below unpacks it and appends the keys and values as columns and rows in the same table.</p>
<pre><code>order_table = unpackdict(order_table, 'applied_taxes')
</code></pre>
<p>If you want to know more, see the <a href="https://petl.readthedocs.io/en/v0.10.1/" rel="nofollow noreferrer">petl documentation</a>.</p>
|
python|json|pandas|dataframe
| 2
|
8,695
| 58,203,166
|
How to draw plots on Specific pandas columns
|
<p>So I have the df.head() displayed below. I wanted to display the progression of salaries across time spans. As you can see, the teams get repeated across the years, and the idea is to
display how their salaries changed over time. So for teamID='ATL' I will have a graph that starts at 1985 and goes all the way to the present time.
I think I will need to select teams by their team ID and have the x axis display time (year) and the Y axis display salary. I don't know how to do that with Pandas for each team in my data frame.</p>
<pre><code> teamID yearID lgID payroll_total franchID Rank W G win_percentage
0 ATL 1985 NL 14807000.0 ATL 5 66 162 40.740741
1 BAL 1985 AL 11560712.0 BAL 4 83 161 51.552795
2 BOS 1985 AL 10897560.0 BOS 5 81 163 49.693252
3 CAL 1985 AL 14427894.0 ANA 2 90 162 55.555556
4 CHA 1985 AL 9846178.0 CHW 3 85 163 52.147239
5 ATL 1986 NL 17800000.0 ATL 4 55 181 41.000000
</code></pre>
|
<p>You can use <code>seaborn</code> for this:</p>
<pre><code>import seaborn as sns
sns.lineplot(data=df, x='yearID', y='payroll_total', hue='teamID')
</code></pre>
<p>To get different plot for each team:</p>
<pre><code>for team, d in df.groupby('teamID'):
d.plot(x='yearID', y='payroll_total', label='team')
</code></pre>
|
python|pandas|data-visualization
| 1
|
8,696
| 58,405,825
|
How to replace particular values in dataframe column from a dictionary?
|
<p>So, I have a table of the following manner:</p>
<pre><code>Col1 Col2
ABS 45
CDC 23
POP 15
</code></pre>
<p>Now, I have a dictionary <code>aa = {'A':'AD','P':'PL','C':'LC'}</code>. So for the matching key parts only I want the values in the column to change. For the other letters which do not match the dictionary keys should remain the same.</p>
<p>The final table should look like:</p>
<pre><code>Col1 Col2
ADBS 45
LCDLC 23
PLOPL 15
</code></pre>
<p>I am trying to use the following code but it is not working.</p>
<pre class="lang-py prettyprint-override"><code>df['Col1'].str.extract[r'([A-Z]+)'].map(aa)
</code></pre>
|
<h1>Solution</h1>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'Col1': ['ABS', 'CDC', 'POP'],
'Col2': [45, 23, 15],
})
keys = aa.keys()
df.Col1 = [''.join([aa.get(e) if (e in keys) else e for e in list(ee)]) for ee in df.Col1.tolist()]
df
</code></pre>
<p><strong>Output</strong>: </p>
<p><a href="https://i.stack.imgur.com/jm8aZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jm8aZ.png" alt="enter image description here"></a></p>
<blockquote>
<h2>Unpacking the Condensed List Comprehension</h2>
</blockquote>
<p>Let us write down the list comprehension in a more readable form. We create a function <code>do_something</code> to understand what is happening in the first part of the list-comprehension. The second part (<code>for ee in df.Col1.tolist()</code>) essentially iterates over each row in the column <code>'Col1'</code> of the dataframe <code>df</code>. </p>
<pre class="lang-py prettyprint-override"><code>def do_something(x):
# here x is like 'ABS'
    xx = ''.join([aa.get(e) if (e in keys) else e for e in list(x)])
return xx
df.Col1 = [do_something(ee) for ee in df.Col1.tolist()]
</code></pre>
<h3>Unpacking <code>do_something(x)</code></h3>
<p>The function <code>do_something(x)</code> does the following. It will be easier if you try it with <code>x = 'ABS'</code>. The <code>''.join(some_list)</code> in <code>do_something</code> joins the list produced. The following code block will illustrate that. </p>
<pre class="lang-py prettyprint-override"><code>x = 'ABS'
print(do_something(x))
[aa.get(e) if (e in keys) else e for e in list(x)]
</code></pre>
<p><strong>Output</strong>: </p>
<pre><code>ADBS
['AD', 'B', 'S']
</code></pre>
<blockquote>
<h3>So what is the core logic?</h3>
<p>The following code-block shows you step-by-step how the logic works. Obviously, the <code>list comprehension</code> introduced at the beginning of the solution compresses the <code>nested for loops</code> into a single line, and hence should be preferred over the following. </p>
</blockquote>
<pre><code>keys = aa.keys()
packlist = list()
for ee in df.Col1.tolist():
# Here we iterate over each element of
# the dataframe's column (df.Col1)
# make a temporary list
templist = list()
for e in list(ee):
# here e is a single character of the string ee
# example: list('ABS') = ['A', 'B', 'S']
if e in keys:
# if e is one of the keys in the dict aa
# append the corresponding value to templist
templist.append(aa.get(e))
else:
# if e is not a key in the dict aa
# append e itself to templist
templist.append(e)
    # join the characters back into a string and append it to packlist
    packlist.append(''.join(templist))
# Finally assign the list: packlist to df.Col1
# to update the column values
df.Col1 = packlist
</code></pre>
<h1>References</h1>
<p>List and dict comprehensions are some very powerful tools any python programmer would find handy and nifty while coding. They have the ability to neatly compress an otherwise elaborate code-block into merely a line or two. I would suggest that you take a look at the following. </p>
<ol>
<li><a href="https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions" rel="nofollow noreferrer">List Comprehensions: <code>python.org</code></a></li>
<li><a href="https://docs.python.org/3/tutorial/datastructures.html#dictionaries" rel="nofollow noreferrer">Dict Comprehensions: <code>python.org</code></a></li>
<li><a href="https://medium.com/better-programming/list-comprehension-in-python-8895a785550b" rel="nofollow noreferrer">List Comprehension in Python: <code>medium.com</code></a></li>
</ol>
|
python|regex|pandas|dictionary
| 1
|
8,697
| 69,181,026
|
filter a dataframe with vectorization
|
<p>I have the followings:</p>
<pre><code>df = pd.DataFrame({"value":['A','B','B','C','B'],'my_list':[['J1','J4'],['J2','J9','J1'],['J0','J9','J2'],['J2'],['V13','X9','J1']]})
  value        my_list
0     A       [J1, J4]
1     B   [J2, J9, J1]
2     B   [J0, J9, J2]
3     C           [J2]
4     B  [V13, X9, J1]
</code></pre>
<p>and I wish to find all rows with value B that do not have J2 in their list.
Using a lambda I can do:</p>
<pre><code>df.apply(lambda x: x['value']=='B' and 'J2' not in x['my_list'], axis=1)
0 False
1 False
2 False
3 False
4 True
</code></pre>
<p>But I wish to do something like this:</p>
<pre><code>df[(df['value']=='B') & ('J2' not in df['my_list'])]
</code></pre>
<p>doable?</p>
|
<p>Storing lists, dicts, etc. in a pandas dataframe loses the benefit of vectorization. If keeping these values with these dtypes is mandatory, a list comprehension works faster:</p>
<pre><code>df['value'].eq("B")&['J2' not in i for i in df['my_list']]
</code></pre>
<p>Other methods:</p>
<p>Converting to dataframe looks like:</p>
<pre><code>df['value'].eq('B') & ~pd.DataFrame(df['my_list'].tolist()).isin(['J2']).any(axis=1)
</code></pre>
<p>Or converting to string :</p>
<pre><code>df['value'].eq('B')& ~df['my_list'].astype(str).str.contains("J2")
</code></pre>
<p>And:</p>
<pre><code>df['value'].eq("B")&df['my_list'].astype(str).str.count("J2").ne(1)
</code></pre>
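<p>Whichever variant you choose, the result is a boolean mask you can index with directly. A quick usage sketch with the first approach:</p>
<pre><code>mask = df['value'].eq("B") & ['J2' not in i for i in df['my_list']]
print(df[mask])
#   value        my_list
# 4     B  [V13, X9, J1]
</code></pre>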
|
python|pandas
| 2
|
8,698
| 68,996,540
|
Chaquopy Android Wrapper
|
<p><strong>Chaquopy Android:</strong> I have to call a method in a Python file with array data. The Python file then computes the ECG peaks (PQRST) using <strong>neurokit2</strong>, and I got this error.</p>
<p><a href="https://i.stack.imgur.com/oOnKx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oOnKx.png" alt="enter image description here" /></a></p>
<p>code is properly work in pycharm. there is no TypeError
<a href="https://i.stack.imgur.com/7PzAq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7PzAq.png" alt="enter image description here" /></a></p>
|
<p>Pandas added support for the <code>string</code> dtype in <a href="https://pandas.pydata.org/pandas-docs/stable/whatsnew/v1.0.0.html#dedicated-string-data-type" rel="nofollow noreferrer">version 1.0</a>. So change the <code>pip</code> section of your build.gradle file to install <code>pandas==1.3.2</code>, which we released for Chaquopy a few days ago.</p>
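<p>After syncing with the new version, a minimal sanity check (this snippet is only an illustration, not part of the original fix) is to confirm that the dedicated string dtype the answer refers to is actually available:</p>
<pre><code>import pandas as pd  # needs pandas >= 1.0 for the dedicated "string" dtype

s = pd.Series(["P", "Q", "R", "S", "T"], dtype="string")
print(pd.__version__, s.dtype)  # e.g. "1.3.2 string"
</code></pre>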
|
python|android|pandas
| 1
|
8,699
| 68,966,864
|
Batchsize in DataLoader
|
<p>I have two tensors:</p>
<pre><code>x[train], y[train]
</code></pre>
<p>And their shapes are</p>
<pre><code>(311, 3, 224, 224), (311) # 311 Has No Information
</code></pre>
<p>I want to use DataLoader to load them batch by batch; the code I wrote is:</p>
<pre><code>import torch.utils.data as Data
from torch.utils.data import Dataset
class KD_Train(Dataset):
def __init__(self,a,b):
self.imgs = a
self.index = b
def __len__(self):
return len(self.imgs)
def __getitem__(self,index):
return self.imgs, self.index
kdt = KD_Train(x[train], y[train])
train_data_loader = Data.DataLoader(
kdt,
batch_size = 64,
shuffle = True,
num_workers = 0)
for step, (a,b) in enumerate (train_data_loader):
print(a.shape)
break
</code></pre>
<p>But it shows:</p>
<pre><code>(64, 311, 3, 224, 224)
</code></pre>
<p>The DataLoader just adds a dimension instead of selecting batches. Does anyone know what I should do?</p>
|
<p>Your dataset's <code>__getitem__</code> method should return a single element:</p>
<pre><code>def __getitem__(self, index):
return self.imgs[index], self.index[index]
</code></pre>
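<p>With that change, each iteration yields one batch of samples rather than the whole dataset. A quick self-contained sketch with dummy tensors standing in for <code>x[train]</code> and <code>y[train]</code>:</p>
<pre><code>import torch
from torch.utils.data import Dataset, DataLoader

class KD_Train(Dataset):
    def __init__(self, a, b):
        self.imgs = a
        self.index = b
    def __len__(self):
        return len(self.imgs)
    def __getitem__(self, index):
        # return a single sample, not the whole tensor
        return self.imgs[index], self.index[index]

x_train = torch.rand(311, 3, 224, 224)   # dummy images
y_train = torch.randint(0, 2, (311,))    # dummy labels

loader = DataLoader(KD_Train(x_train, y_train), batch_size=64, shuffle=True)
a, b = next(iter(loader))
print(a.shape, b.shape)  # torch.Size([64, 3, 224, 224]) torch.Size([64])
</code></pre>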
|
pytorch|dataloader
| 1
|