Unnamed: 0
| id
| title
| question
| answer
| tags
| score
|
|---|---|---|---|---|---|---|
3,400
| 73,446,860
|
Can tensorflow's tf.function be used with methods of dataclasses?
|
<p>Can methods of <code>dataclass</code>es be decorated with <code>@tf.function</code>? A straight-forward test</p>
<pre><code>@dataclass
class Doubler:
    @tf.function
    def double(self, a):
        return a*2
</code></pre>
<p>gives an error</p>
<pre><code>d = Doubler()
d.double(2)
</code></pre>
<p>saying that <code>Doubler</code> is not hashable (<code>TypeError: unhashable type: 'Doubler'</code>), which I believe is because hashing is disabled by default for dataclasses. Is this a general limitation or can it be made to work? I found <a href="https://stackoverflow.com/a/68924806/143931">this answer</a> that seems to indicate that it doesn't work.</p>
|
<p>I think the official recommendation from Tensorflow is to use <code>tf.experimental.ExtensionType</code>:</p>
<pre><code>import tensorflow as tf

class Doubler(tf.experimental.ExtensionType):
    @tf.function
    def double(self, a):
        return a*2

d = Doubler()
d.double(2)
</code></pre>
<p>According to the <a href="https://www.tensorflow.org/guide/extension_type#extension_types_2" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>The tf.experimental.ExtensionType base class works similarly to
typing.NamedTuple and @dataclasses.dataclass from the standard Python
library. In particular, it automatically adds a constructor and
special methods (such as <code>__repr__</code> and <code>__eq__</code>) based on the field type
annotations.</p>
</blockquote>
<p>If you read further down in the docs, you will see what features are provided by default.</p>
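<p>For context, the <code>TypeError</code> arises because <code>@dataclass</code> (with the default <code>eq=True</code> and <code>frozen=False</code>) sets <code>__hash__</code> to <code>None</code>, and <code>tf.function</code> hashes the instance to cache traced functions per object. As a hedged sketch (not an official recommendation), keeping the default identity-based hash via <code>eq=False</code> may be enough to avoid that particular error:</p>
<pre><code>import tensorflow as tf
from dataclasses import dataclass

@dataclass(eq=False)  # eq=False leaves object.__hash__ in place, so instances stay hashable
class Doubler:
    @tf.function
    def double(self, a):
        return a*2

d = Doubler()
print(d.double(tf.constant(2)))  # expect a tensor containing 4
</code></pre>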
|
python|tensorflow|python-dataclasses
| 2
|
3,401
| 67,341,689
|
How to convert all images in one folder to numpy files?
|
<p>I need to do Semantic image segmentation based on Unet.</p>
<p>I have to work with the Pascal VOC 2012 dataset, but I don't know how to do it. Do I manually select images for the train &amp; val sets, convert them into numpy, and then load them into the model? Or is there another way?</p>
<p>If this is the first one I would like to know how to convert all the images present in a folder into .npy.</p>
|
<p>If I understood correctly, you just need to go through all the files in the folder and collect them into a numpy array:</p>
<pre><code>from os import listdir
from os.path import isfile, join

numpyArrays = [yourfunc(file_name) for file_name in listdir(mypath) if isfile(join(mypath, file_name))]
</code></pre>
<p><code>yourfunc</code> is the function you need to write to convert one file from the dataset format to a numpy array.</p>
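<p>For illustration only (<code>yourfunc</code>, <code>mypath</code> and the output folder are placeholders from the answer, not a fixed API), one possible <code>yourfunc</code> that reads an image with Pillow, returns it as an array, and also saves it as <code>.npy</code> might look like this:</p>
<pre><code>import os
import numpy as np
from PIL import Image

out_dir = 'npy_out'              # placeholder output folder
os.makedirs(out_dir, exist_ok=True)

def yourfunc(file_name):
    # read one image file and return it as a numpy array
    arr = np.asarray(Image.open(os.path.join(mypath, file_name)))
    # optionally also save it to disk as &lt;name&gt;.npy
    np.save(os.path.join(out_dir, os.path.splitext(file_name)[0] + '.npy'), arr)
    return arr
</code></pre>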
|
python|numpy
| 0
|
3,402
| 67,246,859
|
How to convert rows into columns and filter using the ID
|
<p>I have a CSV file that looks like this:</p>
<pre><code>customer_id | key_id | quantity |
1 | 777 | 3 |
1 | 888 | 2 |
1 | 999 | 3 |
2 | 777 | 6 |
2 | 888 | 1 |
</code></pre>
<p>and I would like to use simple python or pandas to:</p>
<ol>
<li>Make each unique customer id in a separate row</li>
<li>convert key_id to the columns titles and the values are the quantity</li>
</ol>
<p>The output table should look like this:</p>
<pre><code> | 777 | 888 | 999 |
1 | 3 | 2 | 3 |
2 | 6 | 1 | 0 |
</code></pre>
<p>I have been struggling to find a good data structure to do this, but I couldn't. And using pandas I also couldn't filter using the two ids. Any tips?</p>
|
<p>You can pivot into <code>key_id</code> columns using <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pivot_table.html" rel="nofollow noreferrer"><strong><code>pivot_table()</code></strong></a>:</p>
<pre class="lang-py prettyprint-override"><code>df.pivot_table(index='customer_id', columns='key_id', values='quantity').fillna(0)
# key_id 777 888 999
# customer_id
# 1 3.0 2.0 3.0
# 2 6.0 1.0 0.0
</code></pre>
<hr />
<p>To handle duplicates, <code>pivot_table()</code> averages them by default. To override this aggregation method, you can set the <code>aggfunc</code> param (<code>max</code>, <code>min</code>, <code>first</code>, <code>last</code>, <code>sum</code>, etc.):</p>
<pre class="lang-py prettyprint-override"><code>df.pivot_table(
index='customer_id',
columns='key_id',
values='quantity',
aggfunc='max',
).fillna(0)
</code></pre>
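<p>If you want the integer layout from the question rather than floats, you can cast after filling the missing combinations:</p>
<pre class="lang-py prettyprint-override"><code>df.pivot_table(index='customer_id', columns='key_id', values='quantity').fillna(0).astype(int)
</code></pre>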
|
python|python-3.x|pandas|dataframe|csv
| 2
|
3,403
| 60,200,831
|
Replace value one last valid item - Pandas
|
<p>I want to replace a value in a column based off another column in a <code>pandas</code> df. Specifically, where <code>col B == X</code>, I want to change the value in <code>col C</code>, but for the last <code>X</code> in a given sequence. I can change all respective <code>X's</code> in <code>C</code>. But I only want to replace the last valid <code>X</code>. Would <code>mask</code> and last valid value be ideal here?</p>
<p><code>X's</code> appear in random order and groupings</p>
<pre><code>df = pd.DataFrame({
'A' : [1,1,1,1,1,1,1,1],
'B' : ['X','X','X','D','A','A','X','D'],
'C' : [1,1,1,1,1,1,1,1],
})
df.loc[df['B'] == 'X', ['C']] = 'str'
mask = df['B'] == 'X'
</code></pre>
<p>Intended Output:</p>
<pre><code> A B C
0 1 X 1
1 1 X 1
2 1 X str
3 1 D 1
4 1 A 1
5 1 A 1
6 1 X str
7 1 D 1
</code></pre>
|
<p>You can use <code>shift</code> along with <code>numpy.where</code></p>
<pre><code>import numpy as np
b1 = df["B"].shift(-1)
df["C"] = np.where((df["B"]=="X") & (b1!="X"), "str" , df["C"])
</code></pre>
<p>Output:</p>
<pre><code> A B C
0 1 X 1
1 1 X 1
2 1 X str
3 1 D 1
4 1 A 1
5 1 A 1
6 1 X str
7 1 D 1
</code></pre>
|
python|pandas|dataframe
| 3
|
3,404
| 65,319,496
|
Days calculation in DataFrame in Python Pandas?
|
<p>I have DataFrame with clients' agreements like below:</p>
<pre><code>rng = pd.date_range('2020-12-01', periods=5, freq='D')
df = pd.DataFrame({ "ID" : ["1", "2", "1", "2", "2"], "Date": rng})
</code></pre>
<p>And I need to create new DataFrame with calculation based on above df:</p>
<ol>
<li>New1 = amount of days from the first agreement until today (16.12)</li>
<li>New2 = amount of days from the last agreement until today (16.12)</li>
</ol>
<p>To be more precision I need to create df like below:</p>
<p><a href="https://i.stack.imgur.com/STf6z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/STf6z.png" alt="enter image description here" /></a></p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.rsub.html" rel="nofollow noreferrer"><code>Series.rsub</code></a> for subtract from right side with today and convert timedeltas to days by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.days.html" rel="nofollow noreferrer"><code>Series.dt.days</code></a> and then aggregate by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.agg.html" rel="nofollow noreferrer"><code>GroupBy.agg</code></a> for <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.first.html" rel="nofollow noreferrer"><code>GroupBy.first</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.last.html" rel="nofollow noreferrer"><code>GroupBy.last</code></a> values per groups:</p>
<pre><code>now = pd.to_datetime('today')

df = (df.assign(new = df['Date'].rsub(now).dt.days)
        .groupby('ID').agg(New1 = ('new', 'first'),
                           New2 = ('new', 'last'))
        .reset_index())

print (df)
ID New1 New2
0 1 15 13
1 2 14 11
</code></pre>
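<p>Note that <code>pd.to_datetime('today')</code> includes the current time, so the day counts are truncated by the partial day. If you want whole calendar days, you can use <code>now = pd.Timestamp('today').normalize()</code> instead.</p>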
|
python|pandas|dataframe|date
| 0
|
3,405
| 65,212,488
|
Mapping graph/relationship based values in a DataFrame in python
|
<p>I have an input DataFrame in the below format:</p>
<pre><code>input_data = [[1000, 1002], [1002, 1003], [1004, 1000],[1010,1050],[1060,1002],[1050,1100],[1200,1250],[1300,1200]]
input_df = pd.DataFrame(input_data, columns = ['Value1', 'Value2'])
print(input_df)
</code></pre>
<p>Ignored the index for readability,</p>
<pre><code>Value1 Value2
1000 1002
1002 1003
1004 1000
1010 1050
1060 1002
1050 1100
1200 1250
1300 1200
</code></pre>
<p>The output I am expecting is shown below. I need to map all related values (be it value1 -> value2 or value2 -> value1) and collect them all to ordered indexes(starting from 1) like below:</p>
<pre><code>Index Value
1 1000
1 1002
1 1003
1 1004
1 1060
2 1010
2 1050
2 1100
3 1200
3 1250
3 1300
3 1200
</code></pre>
<p>What have I tried?
I did try looping over the rows of the input. I'm able to relate values within a single row, but I'm finding it hard to apply this logic when the relationships span multiple rows and both columns (Value1 and Value2).</p>
|
<p>Use <a href="https://networkx.github.io/documentation/stable/reference/generated/networkx.convert_matrix.from_pandas_edgelist.html" rel="nofollow noreferrer"><code>convert_matrix.from_pandas_edgelist</code></a> with <a href="https://networkx.github.io/documentation/networkx-1.10/reference/generated/networkx.algorithms.components.connected.connected_components.html" rel="nofollow noreferrer"><code>connected_components</code></a> first, then create a dictionary for the mapping, reshape with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.melt.html" rel="nofollow noreferrer"><code>DataFrame.melt</code></a>, map the values per group with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a>, remove duplicates with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>DataFrame.drop_duplicates</code></a>, and sort at the end:</p>
<pre><code>import networkx as nx

# Create the graph from the dataframe
g = nx.from_pandas_edgelist(input_df, 'Value1', 'Value2')

connected_components = nx.connected_components(g)

# Find the component id of the nodes
node2id = {}
for cid, component in enumerate(connected_components):
    for node in component:
        node2id[node] = cid

df = input_df.melt()
df['g'] = df['value'].map(node2id)
df = df.drop_duplicates(['value','g']).sort_values(['g','value'])
print (df)
variable value g
0 Value1 1000 0
1 Value1 1002 0
9 Value2 1003 0
2 Value1 1004 0
4 Value1 1060 0
3 Value1 1010 1
5 Value1 1050 1
13 Value2 1100 1
6 Value1 1200 2
14 Value2 1250 2
7 Value1 1300 2
</code></pre>
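<p>If you want the exact layout from the question (a 1-based <code>Index</code> column next to <code>Value</code>), a small follow-up step on the result above could be:</p>
<pre><code>out = (df.assign(Index=df['g'] + 1)
         .rename(columns={'value': 'Value'})[['Index', 'Value']]
         .reset_index(drop=True))
print(out)
</code></pre>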
|
python|python-3.x|pandas|dataframe
| 2
|
3,406
| 49,919,300
|
Tensorflow vs OpenCV
|
<p>I'm new to the AI world. I've started doing some work using Python &amp; OpenCV for face detection and so on. I know that with the implementation of some algorithms I can develop an AI system using Python &amp; OpenCV. So my question is: what is the position of TensorFlow here? Can I say TensorFlow is an alternative to OpenCV, in the same way that Python is an alternative programming language to Java (for example)?</p>
|
<p>The main difference is that TensorFlow is a framework for machine learning, while OpenCV is a library for computer vision. A good starting point is the link below, which explains the difference between a framework and a library: <a href="https://stackoverflow.com/questions/148747/what-is-the-difference-between-a-framework-and-a-library">What is the difference between a framework and a library?</a></p>
<p>You can do image recognition with TensorFlow, though it is suited for more general problems as well, such as classification, clustering and regression.</p>
<p>I guess people downvoted because this question might be more relevant to: <a href="https://datascience.stackexchange.com/">https://datascience.stackexchange.com/</a> </p>
|
opencv|tensorflow|artificial-intelligence
| 67
|
3,407
| 50,069,340
|
Create new pandas column with same list as every row?
|
<p>I would like to create a new column in a dataframe that has a list at every row. I'm looking for something that will accomplish the following:</p>
<pre><code>df = pd.DataFrame(data={'A': [1, 2, 3], 'B': ['x', 'y', 'z']})
list_=[1,2,3]
df['new_col'] = list_
A B new_col
0 1 x [1,2,3]
1 2 y [1,2,3]
2 3 z [1,2,3]
</code></pre>
<p>Does anyone know how to accomplish this? Thank you! </p>
|
<pre><code>df = pd.DataFrame(data={'A': [1, 2, 3], 'B': ['x', 'y', 'z']})
list_=[1,2,3]
df['new_col'] = [list_]*len(df)
</code></pre>
<p>Output: </p>
<pre><code> A B new_col
0 1 x [1, 2, 3]
1 2 y [1, 2, 3]
2 3 z [1, 2, 3]
</code></pre>
<p>Tip: <code>list</code> as a variable name is not advised. <code>list</code> is a built in type like <code>str</code>, <code>int</code> etc.</p>
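<p>One caveat worth knowing: <code>[list_]*len(df)</code> puts the <em>same</em> list object in every row, so mutating it later changes all rows. If each row should get an independent copy, a small variation is:</p>
<pre><code>df['new_col'] = [list_.copy() for _ in range(len(df))]
</code></pre>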
|
python|list|pandas|dataframe
| 2
|
3,408
| 63,947,336
|
A question that involves permutations of pairs of row elements
|
<p>Consider two numpy arrays of integers. U has 2 columns and shows all (p,q) where p<q. For this question, I'll restrict myself to 0<=p,q<=5. The cardinality of U is C(6,2) = 15.</p>
<pre><code>U = [[0,1],
[0,2],
[0,3],
[0,4],
[0,5],
[1,2],
[1,3],
[1,4],
[1,5],
[2,3],
[2,4],
[2,5],
[3,4],
[3,5],
[4,5]]
</code></pre>
<p>The 2nd array, V, has 6 columns. I formed it by finding the cartesian product UxUxU. So, the first row of V is [0,1,0,1,0,1], and the last row is [4,5,4,5,4,5]. The cardinality of V is C(6,2)^3 = 3375.</p>
<p>A SMALL SAMPLE of V, used in my question, is shown below. <em><strong>The elements of each row should be thought of as 3 pairs. The rationale follows</strong></em>.</p>
<pre><code>V = [[0,1, 2,5, 2,4],
[0,1, 2,5, 2,5],
[0,1, 2,5, 3,4],
[0,1, 2,5, 3,5],
[0,1, 2,5, 4,0],
[0,1, 2,5, 4,1]]
</code></pre>
<p>Here's why the row elements should be thought of as a set of 3 pairs: Later in my code, I will loop through <strong>each row of V, using the pair values to 'swap' columns of a matrix M</strong>. (M is not shown because it isn't needed for this question) When we get to row <code>[0,1, 2,5, 2,4]</code>, for example, <strong>we will swap the columns of M having indices 0 & 1, THEN swap the columns having indices 2 & 5, and finally, swap the columns having indices 2 & 4</strong>.</p>
<p>I'm currently wasting a lot of time because many of the rows of V could be eliminated.</p>
<p>The easiest case to understand involves V rows like <code>[0,1, 2,5, 3,4]</code> where <em><strong>all values are unique</strong></em>. This row has 6 pair permutations, but they all have the same net effect on M. Their values are unique, so none of the swaps will encounter 'interference' from another swap.</p>
<p><strong>Question 1</strong>: How can I efficiently eliminate rows that have unique elements in unneeded permutations?
I would keep, say, <code>[0,1, 2,5, 3,4]</code>, but drop:</p>
<pre><code>[0,1, 3,4, 2,5],
[2,5, 0,1, 3,4],
[2,5, 3,4, 0,1],
[3,4, 0,1, 2,5],
[3,4, 2,5, 0,1]
</code></pre>
<p>I'm guessing a solution would involve np.sort and np.unique, but I'm struggling with getting a good result.</p>
<p><strong>Question 2</strong>: (I don't think it's reasonable to expect an answer to this question, but I'd certainly appreciate any pointers or tips re resources that I could study) The question involves <strong>rows of V having one or more common elements</strong>, like <code>[0,1, 2,5, 2,4]</code> or <code>[0,5, 2,5, 2,4]</code> or <code>[0,5, 2,5, 3,5]</code>. All of these have 6 pair permutations, but they don't all have the same effect of M. The row <code>[0,1, 2,5, 2,4]</code>, for example, has 3 permutations that produce one M outcome, and 3 permutations that produce another. Ideally, I would like to keep two of the rows but eliminate the other four. The two other rows I showed are even more 'pathological'.</p>
<p>Does anyone see a path forward here that would allow more eliminations of V rows? If not, I'll continue what I'm currently doing even though it's really inefficient - screening the code's final outputs for doubles.</p>
|
<p>To get rows of an array, without repetitions (in your sense), you can run:</p>
<pre><code>import numpy as np

VbyRows = V[np.lexsort(V[:, ::-1].T)]
sorted_data = np.sort(VbyRows, axis=1)
result = VbyRows[np.append([True], np.any(np.diff(sorted_data, axis=0), 1))]
</code></pre>
<p>Details:</p>
<ul>
<li><code>VbyRows = V[np.lexsort(V[:, ::-1].T)]</code> - sort rows by all columns.
I used <em>::-1</em> as the column index to sort first on the first column,
then by the second, and so on.</li>
<li><code>sorted_data = np.sort(VbyRows, axis=1)</code> - sort each row from <em>VbyRows</em>
(and save it as a separate array).</li>
<li><code>np.diff(sorted_data, axis=0)</code> - compute "vertical" differences between
previous and current row (in <em>sorted_data</em>).</li>
<li><code>np.any(...)</code> - A <em>bool</em> vector - "cumulative difference indicator" for
each row from <em>sorted_data</em> but the first (does it differ from the
previous row on any position).</li>
<li><code>np.append([True], ...)</code> - prepend the above result with <em>True</em> (an
indicator that the first row should be included in the result).
The result is also a <em>bool</em> vector, this time for all rows. Each element
of this row answers the question: Should the respective row from <em>VbyRows</em>
be included in the result.</li>
<li><code>result = VbyRows[np.append([True], np.any(np.diff(sorted_data, axis=0), 1))]</code> -
the final result.</li>
</ul>
<p>To test the above code I prepared <em>V</em> as follows:</p>
<pre><code>array([[ 0, 1, 2, 5, 3, 4],
[ 0, 1, 3, 4, 2, 5],
[ 2, 5, 0, 1, 3, 4],
[ 2, 5, 3, 4, 0, 1],
[ 3, 4, 0, 1, 2, 5],
[13, 14, 12, 15, 10, 11],
[ 3, 4, 2, 5, 0, 1]])
</code></pre>
<p>(the last but one row is "other", all remaining rows contain the same
numbers in various order).</p>
<p>The result is:</p>
<pre><code>array([[ 0, 1, 2, 5, 3, 4],
[13, 14, 12, 15, 10, 11]])
</code></pre>
<p>Note that <em>lexsort</em> as the first step provides that from rows with
the same set of numbers the returned row will be the first from rows
sorted by consecutive columns.</p>
|
arrays|numpy
| 1
|
3,409
| 63,927,648
|
Can't iterate through PyTorch DataLoader
|
<p>I am trying to learn PyTorch and create my first neural network. I am using a custom dataset, here is a sample of the data:</p>
<pre><code>ID_REF cg00001854 cg00270460 cg00293191 cg00585219 cg00702638 cg01434611 cg02370734 cg02644867 cg02879967 cg03036557 cg03123104 cg03670302 cg04146801 cg04570540 cg04880546 cg07044749 cg07135408 cg07303143 cg07475178 cg07553761 cg07917901 cg08016257 cg08548498 cg08715791 cg09334636 cg11153071 cg11441796 cg11642652 cg12256803 cg12352902 cg12541127 cg13313833 cg13500819 cg13975075 cg14061946 cg14086922 cg14224196 cg14530143 cg15456742 cg16230982 cg16734549 cg17166941 cg17290213 cg17292667 cg18266594 cg18335535 cg18584803 cg19273773 cg19378199 cg19523692 cg20115827 cg20558024 cg20608895 cg20899581 cg21186299 cg22115892 cg22454769 cg22549547 cg23098693 cg23193759 cg23500537 cg23606718 cg24079702 cg24888989 cg25090514 cg25344401 cg25635000 cg25726357 cg25743481 cg26019498 cg26647566 cg26792755 cg26928195 cg26940620 Age
0 0.252486 0.284724 0.243242 0.200685 0.904132 0.102795 0.473919 0.264084 0.367480 0.671434 0.075955 0.329343 0.217375 0.210861 1.000000 0.356048 0.577945 0.557148 0.249014 0.847134 0.254539 0.319858 0.220589 0.796789 0.361994 0.296101 0.105965 0.239796 0.169738 0.357586 0.365674 0.132575 0.250932 0.283227 1.000000 0.262259 0.208146 0.290623 0.113049 0.255710 0.555382 0.281046 0.168826 0.492007 0.442871 0.509569 0.219183 0.641244 0.339088 0.164062 0.227678 0.340220 0.541491 0.423010 0.621303 0.243750 0.869947 0.124120 0.317660 0.985243 0.645869 0.590888 0.841485 0.825372 0.904037 0.407343 0.223722 0.352113 0.855653 0.289593 0.428849 0.719758 0.800240 0.473586 68
1 0.867671 0.606590 0.803673 0.845942 0.086222 0.996915 0.871998 0.791823 0.877639 0.095326 0.857108 0.959701 0.688322 0.650640 0.062329 0.920434 0.687537 0.193038 0.891809 0.273775 0.583457 0.793486 0.798427 0.102910 0.773496 0.658568 0.759050 0.754877 0.787817 0.585895 0.792240 0.734543 0.854528 0.735642 0.389495 0.736709 0.600386 0.775989 0.819579 0.696350 0.110374 0.878199 0.659849 0.716714 0.771206 0.870711 0.919629 0.359592 0.677752 0.693433 0.683448 0.792423 0.933971 0.170669 0.249908 0.879879 0.111498 0.623053 0.626821 0.000000 0.157429 0.197567 0.160809 0.183031 0.202754 0.597896 0.826429 0.886736 0.086038 0.844088 0.761793 0.056548 0.270670 0.940083 21
2 0.789439 0.594060 0.857086 0.633195 0.000000 0.953293 0.832107 0.692119 0.641294 0.169303 0.935807 0.674698 0.789146 0.796555 0.208590 0.791318 0.777537 0.221895 0.804405 0.138006 0.738616 0.758083 0.749127 0.180998 0.769312 0.592938 0.578885 0.896125 0.553588 0.781393 0.898768 0.705339 0.861029 0.966552 0.274496 0.575738 0.490313 0.951172 0.833724 0.901890 0.115235 0.651489 0.619196 0.760758 0.902768 0.835082 0.610065 0.294962 0.907979 0.703284 0.775867 0.910324 0.858090 0.190595 0.041909 0.792941 0.146005 0.615639 0.761822 0.254161 0.101765 0.343289 0.356166 0.088915 0.114347 0.628616 0.697758 0.910687 0.133282 0.775737 0.809420 0.129848 0.126485 0.875580 20
3 0.615803 0.710968 0.874037 0.771136 0.199428 0.861378 0.861346 0.695713 0.638599 0.158479 0.903668 0.758718 0.581146 0.857357 0.307756 0.977337 0.805049 0.188333 0.788991 0.312119 0.706578 0.782006 0.793232 0.288111 0.691131 0.758102 0.829221 1.000000 0.742666 0.897607 0.797869 0.803221 0.912101 0.736800 0.315636 0.760577 0.609101 0.733923 0.578598 0.796944 0.096960 0.924135 0.612601 0.727117 0.905177 0.776481 0.727865 0.429820 0.666803 0.924595 0.567474 0.752196 0.742709 0.303662 0.168286 0.720899 0.099313 0.595328 0.734024 0.268583 0.293437 0.244840 0.311726 0.213415 0.418673 0.819981 0.816660 0.684730 0.146797 0.686270 0.777680 0.087826 0.335125 1.000000 23
4 0.847329 0.735766 0.858018 0.896453 0.186994 0.831964 0.762522 0.840186 0.830930 0.199264 0.788487 0.912629 0.702284 0.838771 0.065271 0.959230 0.912387 0.377203 0.794480 0.207909 0.766246 0.582117 0.902944 0.301144 0.765401 0.715115 0.646735 0.812084 0.697886 0.714310 0.890658 0.826644 0.944022 0.729517 0.530379 0.756268 0.764899 0.914573 0.825766 0.673394 0.017316 0.949335 0.614375 0.650553 0.898788 0.685396 0.823348 0.210175 0.831852 0.829067 0.858212 0.916433 0.778864 0.241186 0.144072 0.889536 0.058360 0.703567 0.852496 0.094223 0.341236 0.284903 0.231957 0.125196 0.333207 0.752592 0.899356 0.839006 0.174601 0.937948 0.716135 0.000000 0.114062 0.969760 22
</code></pre>
<p>I split the data into train/test/val data like this:</p>
<pre><code>train_df, rest_df = train_test_split(df, test_size=0.4)
test_df, val_df = train_test_split(rest_df, test_size=0.5)
x_train_tensor = torch.tensor(train_df.drop('Age', axis=1).to_numpy(), requires_grad=True)
y_train_tensor = torch.tensor(train_df['Age'].to_numpy())
x_test_tensor = torch.tensor(test_df.drop('Age', axis=1).to_numpy(), requires_grad=True)
y_test_tensor = torch.tensor(test_df['Age'].to_numpy())
x_val_tensor = torch.tensor(val_df.drop('Age', axis=1).to_numpy(), requires_grad=True)
y_val_tensor = torch.tensor(val_df['Age'].to_numpy())
bs = len(train_df.index)//10
train_dl = DataLoader(train_df, bs, shuffle=True)
test_dl = DataLoader(test_df, len(test_df), shuffle=False)
val_dl = DataLoader(val_df, bs, shuffle=False)
</code></pre>
<p>And here is the Network so far (very basic, just to test if it works):</p>
<pre><code>class Net(nn.Module):
    def __init__(self):
        super().__init__()
        input_size = len(df.columns)-1
        self.fc1 = nn.Linear(input_size, input_size//2)
        self.fc2 = nn.Linear(input_size//2, input_size//4)
        self.fc3 = nn.Linear(input_size//4, 1)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        return x

net = Net()
print(net)
</code></pre>
<p>Here is where I get the error, on the last line:</p>
<pre><code>loss = torch.nn.MSELoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)
EPOCHS = 3
STEPS_PER_EPOCH = len(train_dl.dataset)//bs
iterator = iter(train_dl)
print(train_dl.dataset)
for epoch in range(EPOCHS):
    for s in range(STEPS_PER_EPOCH):
        print(iterator)
        iterator.next()
</code></pre>
<pre><code>ID_REF cg00001854 cg00270460 cg00293191 ... cg26928195 cg26940620 Age
29 0.781979 0.744825 0.744579 ... 0.242138 0.854054 19
44 0.185400 0.299145 0.160084 ... 0.638449 0.413286 69
21 0.085470 0.217421 0.277675 ... 0.863455 0.512334 75
4 0.847329 0.735766 0.858018 ... 0.114062 0.969760 22
20 0.457293 0.462984 0.323835 ... 0.584259 0.481060 68
33 0.784562 0.845031 0.958335 ... 0.122210 0.854005 19
25 0.258434 0.354822 0.405620 ... 0.677245 0.540463 70
27 0.737131 0.768188 0.897724 ... 0.203228 0.831175 20
37 0.002051 0.202403 0.134198 ... 0.753844 0.302229 70
10 0.737427 0.537413 0.614343 ... 0.464244 0.723953 23
0 0.252486 0.284724 0.243242 ... 0.800240 0.473586 68
32 0.927260 1.000000 0.853864 ... 0.261990 0.892503 18
7 0.035825 0.271602 0.236109 ... 1.000000 0.471256 69
17 0.000000 0.202986 0.132144 ... 0.874550 0.342981 79
18 0.282112 0.479775 0.218852 ... 0.908217 0.426143 79
11 0.708797 0.536074 0.721171 ... 0.048768 0.699540 27
15 0.686921 0.639198 0.858981 ... 0.305142 0.978350 24
38 0.246031 0.186011 0.235928 ... 0.754013 0.342380 70
30 0.814767 0.771483 0.437789 ... 0.000000 0.658354 18
43 0.247471 0.399231 0.271619 ... 0.895016 0.468336 72
46 0.000428 0.263164 0.163303 ... 0.567005 0.252806 76
3 0.615803 0.710968 0.874037 ... 0.335125 1.000000 23
5 0.777925 0.821814 0.636676 ... 0.233359 0.753266 20
34 0.316262 0.307535 0.203090 ... 0.570755 0.351226 73
23 0.133038 0.000000 0.208442 ... 0.631202 0.459593 76
6 0.746102 0.585211 0.626580 ... 0.311914 0.753994 25
1 0.867671 0.606590 0.803673 ... 0.270670 0.940083 21
47 0.444606 0.502357 0.207560 ... 0.987106 0.446959 71
[28 rows x 75 columns]
<torch.utils.data.dataloader._SingleProcessDataLoaderIter object at 0x7f166241c048>
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
2645 try:
-> 2646 return self._engine.get_loc(key)
2647 except KeyError:
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 13
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
6 frames
/usr/local/lib/python3.6/dist-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
2646 return self._engine.get_loc(key)
2647 except KeyError:
-> 2648 return self._engine.get_loc(self._maybe_cast_indexer(key))
2649 indexer = self.get_indexer([key], method=method, tolerance=tolerance)
2650 if indexer.ndim > 1 or indexer.size > 1:
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 13
</code></pre>
<p>I really have no idea what the error means or where to look.
I'd greatly appreciate some guidance, thank you!</p>
|
<p>Use a <code>NumPy</code> array instead of a <code>DataFrame</code>. You can use <code>to_numpy()</code> to convert the dataframe to a numpy array.</p>
<pre class="lang-py prettyprint-override"><code>train_dl = DataLoader(train_df.to_numpy(), bs, shuffle=True)
test_dl = DataLoader(test_df.to_numpy(), len(test_df), shuffle=False)
val_dl = DataLoader(val_df.to_numpy(), bs, shuffle=False)
</code></pre>
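<p>Alternatively, since the question already builds <code>x_train_tensor</code> / <code>y_train_tensor</code>, a sketch of wrapping those tensors in a <code>TensorDataset</code> (so the DataLoader yields feature/label pairs directly) could look like this:</p>
<pre class="lang-py prettyprint-override"><code>from torch.utils.data import TensorDataset, DataLoader

# pair up features and targets; the DataLoader slices both together
train_ds = TensorDataset(x_train_tensor.float(), y_train_tensor.float())
train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True)

for xb, yb in train_dl:
    print(xb.shape, yb.shape)
    break
</code></pre>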
|
python|pandas|deep-learning|iterator|pytorch
| 2
|
3,410
| 46,766,048
|
How does pandas argsort work? How do I interpret the result?
|
<p>I have the following pandas series:</p>
<pre><code>>>>ser
num let
0 a 12
b 11
c 18
1 a 10
b 8
c 5
2 a 8
b 9
c 6
3 a 15
b 10
c 11
</code></pre>
<p>When I use argsort, I get this:</p>
<pre><code>>>>ser.argsort()
num let
0 a 5
b 8
c 4
1 a 6
b 7
c 3
2 a 10
b 1
c 11
3 a 0
b 9
c 2
</code></pre>
<p>Which I don't really understand. Shouldn't ser[(1, 'c')] get the lowest value from argsort? </p>
<p>I am further confused by how ordering ser according to ser.argsort() works like a charm:</p>
<pre><code>>>>ser.iloc[ser.argsort()]
num let
1 c 5
2 c 6
1 b 8
2 a 8
b 9
1 a 10
3 b 10
0 b 11
3 c 11
0 a 12
3 a 15
0 c 18
</code></pre>
<p>Will appreciate any input to help me sort this out.</p>
|
<p>Per the documentation: <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.argsort.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.argsort.html</a></p>
<p><code>pd.Series.argsort()</code> </p>
<p>does the same job as <code>np.ndarray.argsort()</code>, namely (<a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html#numpy-argsort" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html#numpy-argsort</a>) </p>
<p>"Returns the indices that would sort an array."</p>
<p>So it returns the Series with the values replaced by the order the index should be in to see the Series sorted. This is why when you call <code>ser.iloc[ser.argsort()]</code>, you get a sorted Series.</p>
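<p>A minimal illustration with a plain array may help: the result holds <em>positions</em>, not ranks.</p>
<pre><code>import numpy as np

a = np.array([30, 10, 20])
np.argsort(a)       # array([1, 2, 0]): a[1], a[2], a[0] is the sorted order
a[np.argsort(a)]    # array([10, 20, 30])
</code></pre>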
<p>If you're looking for a simple way to sort the series by values, why not just use <code>ser.sort_values()</code>?</p>
<p>The confusion over what <code>ser.argsort()[(1, 'c')]</code> returns is understandable. </p>
<p>You might expect it to return the position of <code>ser[(1,'c')]</code> after the sort, but that's not what it's trying to do.</p>
<p>What <code>ser.argsort()[(1, 'c')]</code> is doing is:</p>
<ul>
<li><p>once we've performed the argsort, what was the old <em>positional</em> index of the value which now resides at the <em>location</em> index (1,'c').</p></li>
<li><p>After sorting the series, the value which would sit where (1,'c') was previously comes from (1,'a'), which is ser.iloc[3], hence you get 3. </p></li>
</ul>
<p>It's not at all intuitive, but that's what it is!</p>
<p><code>argsort</code> returns a series with the same index as the initial series (so you can use .iloc as you have), but with the values replaced by the prior position of the sorted value. </p>
|
python|pandas
| 3
|
3,411
| 62,995,703
|
Transform pd.DataFrame() to narrower, longer dataframe
|
<p>I have a pandas data frame in which each case contains multiple sets of interesting information. In short, I want the columns to decrease and the data frame to become longer according to pre-specified relationships.</p>
<p>My old data frame looks like this:</p>
<pre><code>old = pd.DataFrame(columns=['index', 'residency', 'rating_NYC', 'dist_NYC', 'rating_PAR', 'dist_PAR',
'rating_LON', 'dist_LON', 'rating_MUM', 'dist_MUM', 'gen_rating'],
data = [[0, 'New York', 9, 2, 5, 8, 4, 9, 3, 8, 6],
[1, 'Paris', 5, 9, 7, 1, 6, 2, 4, 6, 7]])
</code></pre>
<p>Each line is one individual stating her <code>residency</code>, rating a city (<code>rating_xxx</code>), stating her geographical distance to that city's centre <code>dist_xxx</code>, and giving a general rating of living in a city (each range <code>0</code>-<code>10</code>).</p>
<p>I now want to create a new df with fewer columns and more rows. Each row in the old df yields information for multiple rows in the new one: I want one line per <code>rating_xxx</code> / <code>dist_xxx</code> combination in the old df (i.e. multiple lines per individual). The new df should contain: the <code>old_index</code>, the <code>rating</code> of and <code>distance</code> to that particular city, whether the individual is a <code>resident</code> of that city, and the general rating (<code>gen_rating</code>).</p>
<p>For example, the first line in the new df would contain the first individual's ratings of/ distance to NYC and that she is NYC resident (and her general rating); the second line would contain the first individual's rating of/ distance to PAR etc.</p>
<p>Based on the above data frame, the desired output is:</p>
<pre><code>pd.DataFrame(columns=['index', 'old_index', 'rating', 'dist', 'resident', 'gen_rating'],
data = [ [0, 0, 9, 2, 1, 6], # NYC -> NYC
[1, 0, 5, 8, 0, 6], # NYC -> PAR
[2, 0, 4, 9, 0, 6], # NYC -> LON
[3, 0, 3, 8, 0, 6], # NYR -> MUM
[4, 1, 5, 9, 0, 7], # PAR -> NYC
[5, 1, 7, 1, 1, 7], # PAR -> PAR
[6, 1, 6, 2, 0, 7], # PAR -> LON
[7, 1, 4, 6, 0, 7]])# PAR -> MUM
</code></pre>
<p>Can someone point me to the correct function I need for this and the most efficient way of achieving this? (The actual data frame is a bit larger ;) ) Many thanks!</p>
|
<p>You can first set the columns which remain single for each index as the index, then split the column names to create a MultiIndex, and then use <code>stack</code>:</p>
<pre><code>old_ = old.set_index(['index','residency','gen_rating'])
old_.columns = old_.columns.str.split('_',expand=True)
(old_.stack().reset_index(['index','gen_rating']).reset_index(drop=True)
.rename_axis('New_Index'))
</code></pre>
<hr />
<pre><code> index gen_rating dist rating
New_Index
0 0 6 9 4
1 0 6 8 3
2 0 6 2 9
3 0 6 8 5
4 1 7 2 6
5 1 7 6 4
6 1 7 9 5
7 1 7 1 7
</code></pre>
<p>Or if you want the reference later you can retain the stacked indexes :</p>
<pre><code>old_.stack().reset_index(['index','gen_rating'])
index gen_rating dist rating
residency
New York LON 0 6 9 4
MUM 0 6 8 3
NYC 0 6 2 9
PAR 0 6 8 5
Paris LON 1 7 2 6
MUM 1 7 6 4
NYC 1 7 9 5
PAR 1 7 1 7
</code></pre>
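<p>The question also asks for a <code>resident</code> flag. That needs a mapping from the <code>residency</code> names to the city codes used in the column suffixes; a sketch with a hypothetical mapping dict (adjust it to your real data), building on <code>old_</code> from above:</p>
<pre><code>city_map = {'New York': 'NYC', 'Paris': 'PAR'}  # hypothetical mapping, extend as needed

out = (old_.stack()
           .rename_axis(['index', 'residency', 'gen_rating', 'city'])
           .reset_index())
out['resident'] = (out['residency'].map(city_map) == out['city']).astype(int)
print(out)
</code></pre>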
|
python|pandas|dataframe
| 1
|
3,412
| 62,947,466
|
The gradient cannot be calculated automatically
|
<p>I am a beginner in deep learning and I am trying to make a discriminator that judges cats/non-cats.
<br>
But when I run the following code, the runtime error below occurs.</p>
<p>I know that "requires_grad" must be set to True in order to calculate the gradient automatically, <br>but since X_train and Y_train are variables for reading, they are set to False.</p>
<p>I would be grateful if you could modify this code.</p>
<pre><code>X_train = torch.tensor(train_set_x, dtype=dtype,requires_grad=False)
Y_train = torch.tensor(train_set_y, dtype=dtype,requires_grad=False)

def train_model(X_train, Y_train, X_test, Y_test, n_h, num_iterations=10000, learning_rate=0.5, print_cost=False):
    """
    Arguments:
    X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
    Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
    X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
    Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
    n_h -- size of the hidden layer
    num_iterations -- number of iterations in gradient descent loop
    learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
    print_cost -- if True, print the cost every 200 iterations

    Returns:
    d -- dictionary containing information about the model.
    """

    n_x = X_train.size(1)
    n_y = Y_train.size(1)

    # Create model
    model = nn.Sequential(
        nn.Linear(n_x, n_h),
        nn.ReLU(),
        nn.Linear(n_h, n_y),
        nn.ReLU()
    )

    # Initialize parameters
    for name, param in model.named_parameters():
        if name.find('weight') != -1:
            torch.nn.init.orthogonal_(param)
        elif name.find('bias') != -1:
            torch.nn.init.constant_(param, 0)

    # Cost function
    cost_fn = nn.BCELoss()

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: compute predicted labels by passing input data to the model.
        Y_predicted = model(X_train)
        A2 = (Y_predicted > 0.5).float()

        # Cost function. Inputs: predicted and true values. Outputs: "cost".
        cost = cost_fn(A2, Y_train)

        # Print the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print("Cost after iteration %i: %f" % (i, cost.item()))

        # Zero the gradients before running the backward pass. See hint in problem description
        model.zero_grad()

        # Backpropagation. Compute gradient of the cost function with respect to all the
        # learnable parameters of the model. Use autograd to compute the backward pass.
        cost.backward()

        # Gradient descent parameter update.
        with torch.no_grad():
            for param in model.parameters():
                # Your code here !!
                param -= learning_rate * param.grad

    d = {"model": model,
         "learning_rate": learning_rate,
         "num_iterations": num_iterations}

    return d
</code></pre>
<pre><code>RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
</code></pre>
|
<p>I believe your problem is that you are mixing numpy arrays and torch tensors. PyTorch tensors are a bit like numpy arrays, but they are also kept in a computational graph that is responsible for the backward pass.</p>
<p>The description of your received variables <code>X_train, Y_train, X_test, Y_test</code> says they are numpy arrays. You should convert them all to torch tensors:</p>
<pre><code>x = torch.tensor(x)
</code></pre>
<p>I also noticed that you are manually performing the gradient updates. Unless that was your intention, I would recommend using one of PyTorch's optimizers.</p>
<pre><code>from torch.optim import SGD

model = nn.Sequential(
    nn.Linear(n_x, n_h),
    nn.ReLU(),
    nn.Linear(n_h, n_y),
    nn.Sigmoid()  # You are using BCELoss, you should give it an input from 0 to 1
)

optimizer = SGD(model.parameters(), lr=learning_rate)
cost_fn = nn.BCELoss()

optimizer.zero_grad()
y = model(x)
cost = cost_fn(y, target)
cost.backward()
optimizer.step()  # << updates the model parameters using the accumulated gradients
</code></pre>
<p>Notice that it is recommended to use <code>torch.nn.BCEWithLogitsLoss</code> instead of <code>BCELoss</code>. The former implements the sigmoid and the binary cross entropy together, with some math tricks to make it more numerically stable. Your model should then look something like:</p>
<pre><code>model = nn.Sequential(
    nn.Linear(n_x, n_h),
    nn.ReLU(),
    nn.Linear(n_h, n_y)
)
</code></pre>
|
pytorch
| 0
|
3,413
| 62,899,856
|
groupby column in pandas
|
<p>I am trying to group by a column's values in pandas, but I'm not getting the result I need.</p>
<p>Example:</p>
<pre><code>Col1 Col2 Col3
A 1 2
B 5 6
A 3 4
C 7 8
A 11 12
B 9 10
-----
result needed grouping by Col1
Col1 Col2 Col3
A 1,3,11 2,4,12
B 5,9 6,10
c 7 8
</code></pre>
<p>but I getting this ouput</p>
<p><pandas.core.groupby.generic.DataFrameGroupBy object at 0x0000025BEB4D6E50></p>
<p>I can get this in Excel Power Query using Group By with "count all rows", but I can't get the same with Python and pandas. Any help?</p>
|
<p>Try this</p>
<pre><code>(
df
.groupby('Col1')
.agg(lambda x: ','.join(x.astype(str)))
.reset_index()
)
</code></pre>
<p>it outputs</p>
<pre><code> Col1 Col2 Col3
0 A 1,3,11 2,4,12
1 B 5,9 6,10
2 C 7 8
</code></pre>
|
python-3.x|pandas|pandas-groupby
| 1
|
3,414
| 63,088,861
|
How to transform survey pandas dataframe into a different format usable with BI tools in Python?
|
<p>I need to convert survey results into something that is usable in a BI tool like Tableau.</p>
<p>The survey is in the format of the following dataframe</p>
<pre><code>df = pd.DataFrame({'Respondent': ['Sally', 'Tony', 'Fred'],
'What project did you work on with - Chris?': ['Project A','Project B', np.nan],
'What score would you give - Chris': [9,7,np.nan],
'Any other feedback for - Chris': ['Random Comment','Okay performance',np.nan],
'What project did you work on with - Matt?': [np.nan,'Project C', 'Project X'],
'What score would you give - Matt': [np.nan,9,8],
'Any other feedback for - Matt': [np.nan, 'Great to work with Matt', 'Work was just okay'],
'What project did you work on with - Luke?': ['Project B','Project D', 'Project Y'],
'What score would you give - Luke': [10,8,7],
'Any other feedback for - Luke': ['Work was excellent', 'Was a bit technical', 'Another Random Comment'],
})
</code></pre>
<p>I need this to be transformed into a format like below:</p>
<pre><code>df = pd.DataFrame({'Name': ['Chris','Chris','Matt','Matt','Luke','Luke','Luke'],
'Assessor': ['Sally','Tony','Tony','Fred','Sally','Tony','Fred'],
'Project Name': ['Project A', 'Project B', 'Project C', 'Project X', 'Project B', 'Project D', 'Project Y'],
'NPS Score': [9,7,9,8,10,8,7],
'Feedback': ['Random Comment','Okay performance','Great to work with Matt','Work was just okay','Work was excellent','Was a bit technical','Another Random Comment']
})
</code></pre>
<p>As you can see, it needs to be able to pull the names from the columns. The real data is actually much larger so I need the code to work with any size and not just for this example.</p>
|
<pre><code>new_data = pd.DataFrame(columns = ["Assessor", "Project Name", "NPS Score", "Feedback", "Name"])

# every person occupies 3 consecutive columns (project, score, feedback),
# so walk the columns in steps of 3, starting after the 'Respondent' column
i = 1
while i < (len(df.columns)):
    data = df.iloc[:, [0, i, i+1, i+2]]
    # the person's name is the last word of the feedback column header
    data["Name"] = str(data.columns[-1].split(" ")[-1])
    data.columns = ["Assessor", "Project Name", "NPS Score", "Feedback", "Name"]
    new_data = new_data.append(data)
    i = i + 3

new_data = new_data.reset_index(drop = True)
new_data
</code></pre>
|
python|pandas|dataframe|transformation|unpivot
| 1
|
3,415
| 63,229,611
|
Learning rate setting when calling the function tff.learning.build_federated_averaging_process
|
<p>I'm carrying out a federated learning process and use the function tff.learning.build_federated_averaging_process to create an iterative process of federated learning. As mentioned in the TFF tutorial, this function has two arguments called client_optimizer_fn and server_optimizer_fn, which in my opinion, represent the optimizer for client and server, respectively. But in the FedAvg paper, it seems that only clients carry out the optimization while the server only do the averaging operation, so what exactly is the server_optimizer_fn doing and what does its learning rate mean?</p>
|
<p>In <a href="https://arxiv.org/abs/1602.05629" rel="nofollow noreferrer">McMahan et al., 2017</a>, the clients communicate the <em>model weights</em> after local training to the server, which are then averaged and re-broadcast for the next round. No server optimizer needed, the averaging step updates the global/server model.</p>
<p><a href="https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_averaging_process" rel="nofollow noreferrer"><code>tff.learning.build_federated_averaging_process</code></a> takes a slight different approach: the <em>delta</em> of the model weights the client received and the model weights after local training is sent back to the server. This <em>delta</em> can be though of as a pseudo-gradient, allowing the server to apply it to the global model using standard optimization techniques. <a href="https://arxiv.org/abs/2003.00295" rel="nofollow noreferrer">Reddi et al., 2020</a> delves into this formulation and how adaptive optimizers (Adagrad, Adam, Yogi) on the server can greatly improve convergence rates. Using SGD without momentum as the server optimizer, with a learning rate of <code>1.0</code>, exactly recovers the method described in McMahan et al., 2017.</p>
|
tensorflow-federated
| 3
|
3,416
| 63,299,860
|
python, count unique list values of a list inside a data frame
|
<p>I have a dataframe which contains two columns of user feedback.</p>
<p>The first column is from a multi-choice answer of the survey. In each row of the column is a list of the answers they selected.</p>
<p>The next column is a category of age range, so one row will contain a list of the user's colour preferences and an age range.</p>
<p>e.g.</p>
<pre><code>what colours do you like? age
['yellow','orange','green'] 18-25
['yellow'] 18-25
['blue','green','red','orange'] 26-30
['blue','red'] 26-30
</code></pre>
<p>I'm looking to get individual counts for each colour in the list and then split them by age range.
Desired output:</p>
<pre><code>age colour count
18-25 yellow 2
18-25 orange 1
18-25 green 1
26-30 blue 2
26-30 green 1
26-30 red 2
26-30 orange 1
</code></pre>
<p>thanks in advance!</p>
|
<p>Set the index of the dataframe to <code>age</code>, then use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.explode.html" rel="nofollow noreferrer"><code>Series.explode</code></a> on the column <code>'what colours do you like?'</code>, then use <code>groupby</code> on <code>level=0</code> and aggregate the series using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>value_counts</code></a>:</p>
<pre><code>df1 = (
df.set_index('age')['what colours do you like?'].explode()
.rename('color').groupby(level=0).value_counts().reset_index(name='count')
)
</code></pre>
<p>Result:</p>
<pre><code>print(df1)
age color count
0 18-25 yellow 2
1 18-25 green 1
2 18-25 orange 1
3 26-30 blue 2
4 26-30 red 2
5 26-30 green 1
6 26-30 orange 1
</code></pre>
|
python|pandas|list|split|count
| 3
|
3,417
| 63,022,461
|
Pandas: Sort DataFrame Cells across columns if unordered
|
<p>Looks like my last question was closed, but I forgot to mention the update below the first time. Only modifying a few of the columns and not all.</p>
<p>What is the best way to modify (sort) a Series of data in a Pandas DataFrame?
For example, after importing some data, the values across the columns should be in ascending order, but I need to reorder them if they are not. Data is being imported from a <code>csv</code> into a <code>pandas.df</code>.</p>
<pre><code> num_1 num_2 num_3
date
2020-02-03 17 22 36
2020-02-06 52 22 14
2020-02-10 5 8 29
2020-02-13 10 14 30
2020-02-17 7 8 19
</code></pre>
<p>I would ideally find the second row (a pandas Series) in the DataFrame as the record to be fixed:</p>
<pre><code> num_1 num_2 num_3 num_4 num_5
date
2020-02-06 52 22 14 25 27
</code></pre>
<p>And modify it to be: <em><strong>(Only sorting nums 1-3 and not touching columns 4 & 5)</strong></em></p>
<pre><code> num_1 num_2 num_3 num_4 num_5
date
2020-02-06 14 22 52 25 27
</code></pre>
<p>I could iterate over the DataFrame and check for indexes that have Series data out of order by comparing each column to the column to it's right. Then write a custom sorter and rewrite that record back into the Dataframe, but that seems clunky.</p>
<p>I have to imagine there's a more Pythonic (Pandas) way to do this type of thing. I just can't find it in the pandas documentation. I don't want to reorder the rows just make sure the values are in the appropriate order within the columns.</p>
<p><em><strong>Update</strong></em>: I forgot to mention one of the most critical aspects. There are other columns in the DataFrame that should not be touched. So in the example below, only <code>sort</code> (<code>num_1, num_2, num_3</code>) not the others. I'm guessing I can use the solutions posed already, split the DataFrame, sort the first part and re-merge them together. Is there an alternative?</p>
|
<p>One way is to use the <code>sort_values()</code> function row by row and only allow it to work on the columns which require sorting:</p>
<pre><code>for index in df.index:
    df.loc[index, ['num_1', 'num_2', 'num_3']] = df.loc[index, ['num_1', 'num_2', 'num_3']].sort_values().values
</code></pre>
<p>This loops through every row, puts the values in the desired columns into ascending order, and writes them back, leaving <code>num_4</code> and <code>num_5</code> untouched.</p>
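<p>As a vectorized alternative sketch (not from the original answer), <code>numpy.sort</code> can sort each row across just those three columns in one shot, since a plain ascending sort of the values is all that is needed:</p>
<pre><code>import numpy as np

cols = ['num_1', 'num_2', 'num_3']
df[cols] = np.sort(df[cols].to_numpy(), axis=1)  # sorts within each row; other columns untouched
</code></pre>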
|
python|pandas|dataframe
| 2
|
3,418
| 67,894,649
|
ValueError with NERDA model import
|
<p>I'm trying to import the NERDA library in order to use it for a named-entity recognition task in Python. I initially tried importing the library in a Jupyter notebook and got the following error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\oefel\AppData\Local\Programs\Python\Python38\lib\site-packages\NERDA\models.py", line 13, in <module>
from .networks import NERDANetwork
File "C:\Users\oefel\AppData\Local\Programs\Python\Python38\lib\site-packages\NERDA\networks.py", line 4, in <module>
from transformers import AutoConfig
File "C:\Users\oefel\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\__init__.py", line 43, in <module>
from . import dependency_versions_check
File "C:\Users\oefel\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\dependency_versions_check.py", line 36, in <module>
from .file_utils import is_tokenizers_available
File "C:\Users\oefel\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\file_utils.py", line 51, in <module>
from huggingface_hub import HfApi, HfFolder, Repository
File "C:\Users\oefel\AppData\Local\Programs\Python\Python38\lib\site-packages\huggingface_hub\__init__.py", line 31, in <module>
from .file_download import cached_download, hf_hub_url
File "C:\Users\oefel\AppData\Local\Programs\Python\Python38\lib\site-packages\huggingface_hub\file_download.py", line 37, in <module>
if tuple(int(i) for i in _PY_VERSION.split(".")) < (3, 8, 0):
File "C:\Users\oefel\AppData\Local\Programs\Python\Python38\lib\site-packages\huggingface_hub\file_download.py", line 37, in <genexpr>
if tuple(int(i) for i in _PY_VERSION.split(".")) < (3, 8, 0):
ValueError: invalid literal for int() with base 10: '6rc1'
</code></pre>
<p>I then tried globally installing using pip in gitbash and got the same error. The library appeared to install without error but when I try the following import, I get that same ValueError:</p>
<pre><code>from NERDA.models import NERDA
</code></pre>
<p>I've also tried some of the pre-cooked model imports and gotten the same ValueError.</p>
<pre><code>from NERDA.precooked import EN_ELECTRA_EN
from NERDA.precooked import EN_BERT_ML
</code></pre>
<p>I can't find anything on this error online and am hoping someone may be able to lend some insight? Thanks so much!</p>
|
<p>Take a look at the <a href="https://github.com/huggingface/huggingface_hub/blob/59ea9998ee2331acf1c50a9fe2f93e5606c5fefb/src/huggingface_hub/file_download.py#L35-L37" rel="nofollow noreferrer">source code of the huggingface_hub lib being used</a>. They compare your Python version to decide between different imports.<br />
But you are using a <a href="https://devguide.python.org/devcycle/#rc" rel="nofollow noreferrer">release candidate</a> Python version (that is where the value <code>'6rc1'</code> that caused the error comes from). Because they didn't expect/handle this, you get the int-parse ValueError.</p>
<hr />
<p><strong>Solution 1:</strong><br />
Update your Python version to a stable release (not a release candidate), so that the version number contains only integers.</p>
<p><strong>Solution 2:</strong><br />
Monkeypatch <a href="https://docs.python.org/3/library/sys.html#sys.version" rel="nofollow noreferrer"><code>sys.version</code></a>, before you import the <code>NERDA</code> libs.</p>
<pre><code>import sys

sys.version = '3.8.0'
</code></pre>
|
python|huggingface-transformers|named-entity-recognition
| 1
|
3,419
| 61,263,787
|
Folium FeatureGroup in Python
|
<p>I am trying to create maps using Folium FeatureGroups. Each feature group will be built from a pandas dataframe row. I am able to achieve this when there is one row in the dataframe, but when there is more than one row and I loop through it in a for loop, I am not able to achieve what I want. Please find the code in Python below.</p>
<pre class="lang-py prettyprint-override"><code>from folium import Map, FeatureGroup, Marker, LayerControl
mapa = Map(location=[35.11567262307692,-89.97423444615382], zoom_start=12,
tiles='Stamen Terrain')
feature_group1 = FeatureGroup(name='Tim')
feature_group2 = FeatureGroup(name='Andrew')
feature_group1.add_child(Marker([35.035075, -89.89969], popup='Tim'))
feature_group2.add_child(Marker([35.821835, -90.70503], popup='Andrew'))
mapa.add_child(feature_group1)
mapa.add_child(feature_group2)
mapa.add_child(LayerControl())
mapa
</code></pre>
<p>My dataframe contains the following: </p>
<pre><code>Name Address
0 Dollar Tree #2020 3878 Goodman Rd.
1 Dollar Tree #2020 3878 Goodman Rd.
2 National Guard Products Inc 4985 E Raines Rd
3 434 SAVE A LOT C MID WEST 434 Kelvin 3240 Jackson Ave
4 WALGREENS 06765 108 E HIGHLAND DR
5 Aldi #69 4720 SUMMER AVENUE
6 Richmond, Christopher 1203 Chamberlain Drive
City State Zipcode Group
0 Horn Lake MS 38637 Johnathan Shaw
1 Horn Lake MS 38637 Tony Bonetti
2 Memphis TN 38118 Tony Bonetti
3 Memphis TN 38122 Tony Bonetti
4 JONESBORO AR 72401 Josh Jennings
5 Memphis TN 38122 Josh Jennings
6 Memphis TN 38119 Josh Jennings
full_address Color sequence \
0 3878 Goodman Rd.,Horn Lake,MS,38637,USA blue 1
1 3878 Goodman Rd.,Horn Lake,MS,38637,USA cadetblue 1
2 4985 E Raines Rd,Memphis,TN,38118,USA cadetblue 2
3 3240 Jackson Ave,Memphis,TN,38122,USA cadetblue 3
4 108 E HIGHLAND DR,JONESBORO,AR,72401,USA yellow 1
5 4720 SUMMER AVENUE,Memphis,TN,38122,USA yellow 2
6 1203 Chamberlain Drive,Memphis,TN,38119,USA yellow 3
Latitude Longitude
0 34.962637 -90.069019
1 34.962637 -90.069019
2 35.035367 -89.898428
3 35.165115 -89.952624
4 35.821835 -90.705030
5 35.148707 -89.903760
6 35.098829 -89.866838
</code></pre>
<p>When I try the same thing by looping through the dataframe in a for loop, I am not able to achieve what I need:</p>
<pre class="lang-py prettyprint-override"><code>from folium import Map, FeatureGroup, Marker, LayerControl, plugins

mapa = Map(location=[35.11567262307692,-89.97423444615382], zoom_start=12, tiles='Stamen Terrain')
#mapa.add_tile_layer()

for i in range(0, len(df_addresses)):
    feature_group = FeatureGroup(name=df_addresses.iloc[i]['Group'])
    feature_group.add_child(Marker([df_addresses.iloc[i]['Latitude'], df_addresses.iloc[i]['Longitude']],
                                   popup=('Address: ' + str(df_addresses.iloc[i]['full_address']) + '<br>'
                                          'Tech: ' + str(df_addresses.iloc[i]['Group'])),
                                   icon=plugins.BeautifyIcon(
                                       number=str(df_addresses.iloc[i]['sequence']),
                                       border_width=2,
                                       iconShape='marker',
                                       inner_icon_style='margin-top:2px',
                                       background_color=df_addresses.iloc[i]['Color'],
                                   )))
    mapa.add_child(feature_group)

mapa.add_child(LayerControl())
</code></pre>
|
<p>This is an example dataset because I didn't want to format your df. That said, I think you'll get the idea.</p>
<pre><code>print(df_addresses)
Latitude Longitude Group
0 34.962637 -90.069019 B
1 34.962637 -90.069019 B
2 35.035367 -89.898428 A
3 35.165115 -89.952624 B
4 35.821835 -90.705030 A
5 35.148707 -89.903760 A
6 35.098829 -89.866838 A
</code></pre>
<p>After I create the map object (mapa), I perform a groupby on the Group column and then iterate through each group. I first create a FeatureGroup with the grp_name (A or B). Then, for each group, I iterate through that group's dataframe, create Markers and add them to the FeatureGroup.</p>
<pre><code>import folium

mapa = folium.Map(location=[35.11567262307692,-89.97423444615382], zoom_start=12,
                  tiles='Stamen Terrain')

for grp_name, df_grp in df_addresses.groupby('Group'):
    feature_group = folium.FeatureGroup(grp_name)
    for row in df_grp.itertuples():
        folium.Marker(location=[row.Latitude, row.Longitude]).add_to(feature_group)
    feature_group.add_to(mapa)

folium.LayerControl().add_to(mapa)
mapa
</code></pre>
<p><a href="https://i.stack.imgur.com/l6GBD.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/l6GBD.jpg" alt="enter image description here"></a></p>
|
python|pandas|folium
| 10
|
3,420
| 61,242,103
|
every time when I train the model I receive a different result - why?
|
<ul>
<li>I don't understand what I do wrong - every time when I launch this code I receive a <a href="https://drive.google.com/drive/folders/1iSOJ3ZE5OAY6Lqg1Qz-jpXhaGaZdU040?usp=sharing" rel="nofollow noreferrer">different result</a>. </li>
<li>I figured out that the result will be different when I change the batch size, but the accuracy should not depend on batch size. </li>
<li>And the charts look completely different from what I expected </li>
</ul>
<p><strong>Could somebody point me on my mistake?</strong></p>
<pre class="lang-py prettyprint-override"><code>%reload_ext autoreload
%autoreload 2
%matplotlib inline
## Load dataset
import tensorflow as tf
tf.config.set_soft_device_placement(True)
tf.debugging.set_log_device_placement(True)
import tensorflow_datasets as tfds # must be 2.1
import matplotlib.pyplot as plt
builder = tfds.builder('beans')
info = builder.info
builder.download_and_prepare()
datasets = builder.as_dataset()
raw_train_dataset, raw_test_dataset = datasets['train'], datasets['test']
## Build a model
def get_model(image_width:int, image_height:int, num_classes:int):
    model = tf.keras.Sequential()

    # layer 1
    model.add(tf.keras.layers.Conv2D(filters=96, kernel_size=(11,11), strides=4, padding='valid',
                                     activation=tf.keras.activations.relu, input_shape=(image_width, image_height, 3)))
    model.add(tf.keras.layers.MaxPooling2D(pool_size=(3, 3), strides=(2,2), padding='valid'))

    # layer 2
    model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=(5,5), strides=1, padding='same',
                                     activation=tf.keras.activations.relu))
    model.add(tf.keras.layers.MaxPooling2D(pool_size=(3, 3), strides=(2,2), padding='valid'))

    # layer 3
    model.add(tf.keras.layers.Conv2D(filters=384, kernel_size=(3,3), strides=1, padding='same', activation=tf.keras.activations.relu))
    model.add(tf.keras.layers.Conv2D(filters=384, kernel_size=(3,3), strides=1, padding='same', activation=tf.keras.activations.relu))
    model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=(3,3), strides=1, padding='same', activation=tf.keras.activations.relu))
    model.add(tf.keras.layers.MaxPooling2D(pool_size=(3, 3), strides=(2,2), padding='valid'))

    # layer 4
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(4096, activation=tf.keras.activations.relu))
    model.add(tf.keras.layers.Dropout(0.5))

    # layer 5
    model.add(tf.keras.layers.Dense(4096, activation=tf.keras.activations.relu))
    model.add(tf.keras.layers.Dropout(0.5))

    # layer 6
    model.add(tf.keras.layers.Dense(1000, activation='relu'))
    model.add(tf.keras.layers.Dropout(0.5))

    # output layer
    model.add(tf.keras.layers.Dense(num_classes, activation='softmax'))

    return model
IMAGE_width = 500
IMAGE_height = 500
NUM_CLASSES = info.features["label"].num_classes
CLASS_NAMES = info.features["label"].names
model = get_model(IMAGE_width, IMAGE_height, NUM_CLASSES)
result = model.compile( loss=tf.keras.losses.CategoricalCrossentropy(),
optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
metrics=['accuracy'])
## Train
def prepare_record(record):
image = record['image']
# image = tf.image.resize(image, (image_width,image_height))
# image = tf.cast(image, tf.int32)
label = record['label']
return image, label
train_dataset = raw_train_dataset.map(prepare_record, num_parallel_calls=tf.data.experimental.AUTOTUNE).shuffle(1034).batch(517).prefetch(tf.data.experimental.AUTOTUNE)
for train_image_batch, train_label_batch in train_dataset:
train_one_hot_y = tf.one_hot(train_label_batch, NUM_CLASSES )
history = model.fit(train_image_batch, train_one_hot_y, epochs=10, verbose=0,validation_split=0.2)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
</code></pre>
|
<p>In the <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit" rel="nofollow noreferrer">fit</a> function there is a parameter called <code>shuffle</code>:</p>
<blockquote>
<p>Boolean (whether to shuffle the training data before each epoch) or str (for 'batch'). 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when <code>steps_per_epoch</code> is not <code>None</code>.</p>
</blockquote>
<p>If you set it to <code>False</code>, the results should be equal.</p>
<p>Another, probably preferable, way would be to use <code>tf.random.set_seed(seed)</code>, so that the shuffling is always performed in the same way (see <a href="https://www.tensorflow.org/api_docs/python/tf/random/set_seed" rel="nofollow noreferrer">docs</a>).</p>
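<p>For example, a minimal sketch of both suggestions applied to the code in the question (the seed value is arbitrary):</p>
<pre><code>import tensorflow as tf

tf.random.set_seed(42)   # fixes weight initialization and shuffling across runs

# ... build and compile the model exactly as before ...

history = model.fit(train_image_batch, train_one_hot_y,
                    epochs=10,
                    shuffle=False,        # or keep shuffle=True once the seed is set
                    validation_split=0.2)
</code></pre>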
|
tensorflow|tensorflow2.0
| 0
|
3,421
| 61,321,028
|
operate only on filtered elements in an array in python
|
<p>Consider this simple problem: in a list of integers I need to multiply all even numbers by 10.
I can certainly do an element-wise operation such as:</p>
<pre><code>[x*10 if x%2==0 else x for x in arr]
</code></pre>
<p>But what if the operation has to be performed at the array level? The trouble I am having is that, after operating on the filtered array, how do I nicely put the results back into the original array?</p>
<p>Code example:</p>
<pre><code>arr=np.arange(1,10) # the original array array([1, 2, 3, 4, 5, 6, 7, 8, 9])
filter1 = arr%2==0 # the filter
arr1=arr[filter1] # the filtered array array([2, 4, 6, 8])
arr1=arr1*10 # the 'array'-wise operation array([20, 40, 60, 80])
# this is the part I am trying to improve
i=0
j=0
arr2=[]
for f in filter1:
if f:
arr2.append(arr1[i])
i=i+1
else:
arr2.append(arr[j])
j=j+1
# output arr2: [1, 20, 3, 40, 5, 60, 7, 80, 9]
</code></pre>
|
<pre><code>>>> a = np.arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> a[a%2 == 0] *= 10
>>> a
array([ 0, 1, 20, 3, 40, 5, 60, 7, 80, 9])
</code></pre>
|
python|arrays|numpy|if-statement
| 1
|
3,422
| 61,325,958
|
scaler.inverse_transform() gives an error when converting LSTM NN predictions back to real data values
|
<p>I have scaled my data to between 0 and 1 and fed it to an LSTM NN. The predictions also stay between 0 and 1, and to get proper output I need to convert them back to the scale of my original data values.</p>
<p>But</p>
<pre><code>scaler=MinMaxScaler(feature_range=(0,1))
scaler.inverse_transform(result)
</code></pre>
<p>outputs an error. My code is below. Here I have loaded the saved Data, Target & trained LSTM weights.</p>
<pre><code>import numpy as np
data=np.load('data_2.npy')
target=np.load('target_2.npy')
train_data=data[:120]
train_target=target[:120]
test_data=data[120:]
test_target=target[120:]
from keras.models import Sequential
from keras.layers import LSTM,Dense,Dropout
model=Sequential()
model.add(LSTM(units=172,return_sequences=True,input_shape=(data.shape[1:])))
model.add(Dropout(0.2))
model.add(LSTM(units=940,return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=2510,return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=50,return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(units=1,activation='linear'))
model.compile(loss='mse',optimizer='adam',metrics=['accuracy'])
model.load_weights('AirlineLSTMweights.h5')
result=model.predict(test_data)
print(result)
scaler=MinMaxScaler(feature_range=(0,1))
scaler.inverse_transform(result)
</code></pre>
<p>NotFittedError: This MinMaxScaler instance is not fitted yet. Call 'fit' with appropriate arguments before using this method.</p>
<p>Can anyone help me here please? </p>
<pre><code>my print(results) = [[0.6232013 ]
[0.67273337]
[0.7892405 ]
</code></pre>
|
<p>It is just what the error is saying. scikit-learn modules usually have fit(), transform(), or, in the case of classifiers, predict() methods. After making an instance of a class like MinMaxScaler(), you need to fit it on some data: call its fit method and pass your training examples as an argument. In your code you made an instance, but it never saw your data, so it has no internal statistics to use when you call transform or inverse_transform. </p>
<pre><code>scaler=MinMaxScaler(feature_range=(0,1))
x_train = scaler.fit_transform(x_train) # you missed this line
# rest of your code for training Neural network with x_train
...
# now convert the result
scaler.inverse_transform(result)
</code></pre>
<p>The scikit-learn <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html?highlight=minmaxscaler" rel="nofollow noreferrer">docs</a> are the best documentation you will ever see; check them out if you still feel stuck.</p>
|
python|tensorflow|machine-learning|keras|lstm
| 0
|
3,423
| 61,348,293
|
Python Retain function. Use value from previous row in calculation
|
<pre><code>In [10]: df
Out[10]:
PART AVAILABLE_INVENTORY DEMAND
1 A 12 6
2 A 12 2
3 A 12 1
4 B 24 1
5 B 24 1
6 B 24 4
7 B 24 3
</code></pre>
<p>Output wanted:</p>
<pre><code> PART AVAILABLE_INVENTORY DEMAND AI AI_AFTER
1 A 12 6 12 6
2 A 12 2 6 4
3 A 12 1 4 3
4 B 24 1 24 23
5 B 24 1 23 22
6 B 24 4 22 18
7 B 24 3 18 15
</code></pre>
<p>The code I have so far is below but it is not giving the output I am looking for:</p>
<pre><code>def retain(df):
df['PREV_PART'] = df['PART'].shift()
df['PREV_AI_AFTER'] = df['AI'].shift() - df['DEMAND'].shift()
df['AI'] = np.where(df['PART'] != df['PREV_PART'], df['AI'], df['PREV_AI_AFTER'])
df['AI_AFTER'] = df['AI'] - df['DEMAND']
df['AI'] = df['AVAILABLE_INVENTORY']
retain(df)
</code></pre>
<p>What is the fastest way to do this with performance in mind?</p>
|
<p>You can do it with <code>groupby</code> plus <code>cumsum</code> on the 'DEMAND' column, and <code>shift</code> on the 'AI_AFTER' column just created:</p>
<pre><code>df['AI_AFTER'] = df['AVAILABLE_INVENTORY'] - df.groupby('PART')['DEMAND'].cumsum()
df['AI'] = df.groupby('PART')['AI_AFTER'].shift().fillna(df['AVAILABLE_INVENTORY'])
print (df)
PART AVAILABLE_INVENTORY DEMAND AI_AFTER AI
1 A 12 6 6 12.0
2 A 12 2 4 6.0
3 A 12 1 3 4.0
4 B 24 1 23 24.0
5 B 24 1 22 23.0
6 B 24 4 18 22.0
7 B 24 3 15 18.0
</code></pre>
|
python|pandas|retain
| 1
|
3,424
| 68,543,621
|
How to flat in one row Dataframe and concatenate them
|
<p>I have several Dataframes with the same structure :</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>0</strong></td>
<td>TITLE</td>
<td>TITLE1</td>
</tr>
<tr>
<td><strong>1</strong></td>
<td>A</td>
<td>A1</td>
</tr>
<tr>
<td><strong>2</strong></td>
<td>B</td>
<td>B1</td>
</tr>
<tr>
<td><strong>3</strong></td>
<td>C</td>
<td>C1</td>
</tr>
<tr>
<td><strong>4</strong></td>
<td>D</td>
<td>D1</td>
</tr>
<tr>
<td><strong>5</strong></td>
<td>E</td>
<td>E1</td>
</tr>
</tbody>
</table>
</div><div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>0</strong></td>
<td>TITLE</td>
<td>TITLE2</td>
</tr>
<tr>
<td><strong>1</strong></td>
<td>A</td>
<td>A2</td>
</tr>
<tr>
<td><strong>2</strong></td>
<td>B</td>
<td>B2</td>
</tr>
<tr>
<td><strong>3</strong></td>
<td>C</td>
<td>C2</td>
</tr>
<tr>
<td><strong>4</strong></td>
<td>D</td>
<td>D2</td>
</tr>
<tr>
<td><strong>5</strong></td>
<td>E</td>
<td>E2</td>
</tr>
</tbody>
</table>
</div>
<p>My goal is to have :</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>TITLE</th>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
<th>E</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>TITLE1</strong></td>
<td>A1</td>
<td>B1</td>
<td>C1</td>
<td>D1</td>
<td>E1</td>
</tr>
<tr>
<td><strong>TITLE2</strong></td>
<td>A2</td>
<td>B2</td>
<td>C2</td>
<td>D2</td>
<td>E2</td>
</tr>
</tbody>
</table>
</div>
<p>How can I transform my Dataframes to flatten them and concatenate them like this?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a> with transpose and then <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a>:</p>
<pre><code>df11 = df1.set_index(0).T
df22 = df2.set_index(0).T
df = pd.concat([df11,df22]).set_index('TITLE')
print (df)
0 A B C D E
TITLE
TITLE1 A1 B1 C1 D1 E1
TITLE2 A2 B2 C2 D2 E2
</code></pre>
<p>Or transpose after <code>concat</code> with <code>axis=1</code>:</p>
<pre><code>df11 = df1.set_index(0)
df22 = df2.set_index(0)
df = pd.concat([df11,df22], axis=1).T.set_index('TITLE')
</code></pre>
|
python|python-3.x|pandas|dataframe
| 1
|
3,425
| 68,796,689
|
How to get values return by Tuple Object in Maskcrnn libtorch
|
<p>I’m new to C++ and libtorch. I am trying to load a model via TorchScript and run inference; the code is below:</p>
<pre class="lang-cpp prettyprint-override"><code> torch::jit::script::Module module;
try {
module = torch::jit::load("../../weights/card_extraction/pytorch/2104131340/best_model_27_mAP=0.9981_torchscript.pt");
}
catch (const c10::Error& e) {
std::cerr << "Error to load model\n";
return -1;
}
std::cout << "Load model successful!\n";
torch::DeviceType device_type;
device_type = torch::kCPU;
torch::Device device(device_type, 0);
module.to(device);
torch::Tensor sample = torch::zeros({3, 800, 800});
std::vector<torch::jit::IValue> inputs;
std::vector<torch::Tensor> images;
images.push_back(sample);
/* images.push_back(torch::ones({3, 224, 224})); */
inputs.push_back(images);
auto t1 = std::chrono::high_resolution_clock::now();
auto output = module.forward(inputs);
auto t2 = std::chrono::high_resolution_clock::now();
int duration = std::chrono::duration_cast<std::chrono::milliseconds> (t2 - t1).count();
std::cout << "Inference time: " << duration << " ms" << std::endl;
std::cout << output << std::endl;
</code></pre>
<p>And the result like this:</p>
<pre class="lang-cpp prettyprint-override"><code>Load model successful!
[W mask_rcnn.py:86] Warning: RCNN always returns a (Losses, Detections) tuple in scripting (function )
Inference time: 2321 ms
({}, [{boxes: [ CPUFloatType{0,4} ], labels: [ CPULongType{0} ], scores: [ CPUFloatType{0} ], masks: [ CPUFloatType{0,1,800,800} ]}])
</code></pre>
<p>How do I get the boxes, labels, scores and masks values from the returned output object using C++?
I tried many ways but compilation always fails with a “c10::IValue” error.</p>
<p>One more question: why is inference slower when I convert the model to TorchScript and run it from C++ than it is in Python?
Many thanks</p>
|
<p>You can get access to the elements like this: parse the tuple arguments and access each one, e.g. in Tensor format. It may help you.<br></p>
<pre><code>auto output1_t = output.toTuple()->elements()[0].toTensor();
auto output2_t = output.toTuple()->elements()[1].toTensor();
</code></pre>
<p><a href="https://discuss.pytorch.org/t/how-can-i-get-access-to-first-and-second-tensor-from-tuple-returned-from-forward-method-in-libtorch-c-c/139741" rel="nofollow noreferrer">https://discuss.pytorch.org/t/how-can-i-get-access-to-first-and-second-tensor-from-tuple-returned-from-forward-method-in-libtorch-c-c/139741</a></p>
|
c++|pytorch|libtorch|torchscript
| 2
|
3,426
| 68,500,441
|
Groupby() and bfill() with condition
|
<p>I have a DataFrame with ID, Tenure, and several variables:</p>
<pre><code> ID Tenure var1 var2
A 1 NaN NaN
A 2 NaN 30
A 3 40 50
A 4 NaN 60
B 1 NaN NaN
B 2 NaN NaN
B 3 40 50
B 4 NaN 60
B 5 50 NaN
</code></pre>
<p>I would like to use bfill() to fill the values for rows with Tenure less than 3 within each ID, and am expecting the output below.</p>
<p>I tried <code>df[['var1','var2']] = np.where(df['Tenure']<=3, df[['ID','var1','var2']].groupby('ID').bfill(), df[['var1','var2']])</code>, but it reports an error.</p>
<pre><code> ID Tenure var1 var2
A 1 40 30
A 2 40 30
A 3 40 50
A 4 NaN 60
B 1 40 50
B 2 40 50
B 3 40 50
B 4 NaN 60
B 5 50 NaN
</code></pre>
|
<p>Try using <code>loc</code> assignment:</p>
<pre><code>df.loc[df['Tenure']<3 ,['var1','var2']] = df[['ID','var1','var2']].groupby('ID').bfill()
df
Out[146]:
ID Tenure var1 var2
0 A 1 40.0 30.0
1 A 2 40.0 30.0
2 A 3 40.0 50.0
3 A 4 NaN 60.0
4 B 1 40.0 50.0
5 B 2 40.0 50.0
6 B 3 40.0 50.0
7 B 4 NaN 60.0
8 B 5 50.0 NaN
</code></pre>
|
python|pandas
| 1
|
3,427
| 68,696,629
|
Resampling in pandas to split a datetime series into "n" minute buckets & counts for each
|
<p>I want to break down a list of datetimes into 15 (or 10 or 30 maybe) minute buckets, and count how many objects are in each bucket.</p>
<p>The ideal output is a list of integers, each item being a count for a 15 minute bucket, the list in the original datetime order from earliest to latest</p>
<p>The actual dates and times themselves are not important in this application.</p>
<p>The datetimes are Tweet creation datetimes, and are in Twitter's native string format ("%a %b %d %H:%M:%S +0000 %Y"), as seen in the data snippet below.</p>
<p>(It's no problem to convert them to Unix time or whatever's most convenient, if it helps)</p>
<p>data snippet:</p>
<pre><code>['Wed Jul 07 07:39:41 +0000 2021',
'Wed Jul 07 09:25:06 +0000 2021',
'Wed Jul 07 10:12:24 +0000 2021',
'Wed Jul 07 12:03:36 +0000 2021',
'Wed Jul 07 12:51:56 +0000 2021',
'Thu Jul 08 18:01:02 +0000 2021',
'Thu Jul 08 18:02:01 +0000 2021',
'Thu Jul 08 18:02:40 +0000 2021',
'Thu Jul 08 18:03:45 +0000 2021',
'Thu Jul 08 18:04:10 +0000 2021',
'Thu Jul 08 18:16:05 +0000 2021',
'Thu Jul 08 18:17:40 +0000 2021',
'Thu Jul 08 18:22:04 +0000 2021',
'Thu Jul 08 18:23:02 +0000 2021',
'Thu Jul 08 18:24:34 +0000 2021',
'Thu Jul 08 21:07:36 +0000 2021',
'Fri Jul 09 07:31:41 +0000 2021',
'Fri Jul 09 07:45:14 +0000 2021',
'Fri Jul 09 08:37:09 +0000 2021',
'Fri Jul 09 09:32:22 +0000 2021',
'Fri Jul 09 10:49:53 +0000 2021',
'Fri Jul 09 11:33:48 +0000 2021',
'Fri Jul 09 11:35:02 +0000 2021',
'Fri Jul 09 11:35:43 +0000 2021',
'Fri Jul 09 12:41:08 +0000 2021',
'Fri Jul 09 12:41:37 +0000 2021',
'Fri Jul 09 12:42:38 +0000 2021',
'Fri Jul 09 13:26:51 +0000 2021',
'Fri Jul 09 13:41:18 +0000 2021',
'Fri Jul 09 13:45:51 +0000 2021',
'Fri Jul 09 14:03:37 +0000 2021',
'Fri Jul 09 17:59:09 +0000 2021',
'Fri Jul 09 19:36:01 +0000 2021',
'Fri Jul 09 19:40:46 +0000 2021',
'Sat Jul 10 08:34:06 +0000 2021',
...
]
</code></pre>
<p>I suppose I could convert all the datetimes to unix time & write a loop to chunk it into 900 second buckets, but it seems clunky when pandas seems to have builtins for this sort of thing.</p>
<p>(I've seen e.g. <a href="https://stackoverflow.com/questions/51705583/pandas-resample-timeseries-data-to-15-mins-and-45-mins-using-multi-index-or-co">Pandas resample timeseries data to 15 mins and 45 mins - using multi-index or column</a> and the pandas docs themselves e.g. <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.resample.html?highlight=resample#pandas.Series.resample" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.Series.resample.html?highlight=resample#pandas.Series.resample</a>)</p>
<p>So I've had a go and so far I've got what's below, but now I'm stuck and need some help.</p>
<p>(I'm not a professional programmer & this isn't coursework or homework, though I have written quite a lot of simple Python in the past couple of years; For completeness sake, the purpose here is to create data that can be used to drive synths (soft or hard) to create sonic representations of Twitter user timelines, and I'm just tinkering with the most basic thing I can think of to start with)</p>
<pre><code>
# where "x" is a list of datetimes as above
df = pd.DataFrame(x, columns=["created_at"])
df["cti"] = pd.to_datetime(df["created_at"])
dfrs = df.set_index("cti")
qbert = dfrs["created_at"].resample("15T").sum()
print(qbert)
</code></pre>
<p>I thought from my reading of the pandas docs etc that this would give me an output with summary counts for each bucket (but I'm likely to have misunderstood or misinterpreted: I'm not a "natural" coder)</p>
<p>But the output I get is this:</p>
<pre><code> cti
2021-07-07 07:30:00+00:00 Wed Jul 07 07:39:41 +0000 2021
2021-07-07 07:45:00+00:00 0
2021-07-07 08:00:00+00:00 0
2021-07-07 08:15:00+00:00 0
2021-07-07 08:30:00+00:00 0
...
2021-08-05 13:45:00+00:00 Thu Aug 05 13:58:07 +0000 2021
2021-08-05 14:00:00+00:00 Thu Aug 05 14:02:32 +0000 2021Thu Aug 05 14:05...
2021-08-05 14:15:00+00:00 Thu Aug 05 14:20:49 +0000 2021Thu Aug 05 14:23...
2021-08-05 14:30:00+00:00 Thu Aug 05 14:30:59 +0000 2021Thu Aug 05 14:31...
2021-08-05 14:45:00+00:00 Thu Aug 05 14:45:56 +0000 2021Thu Aug 05 14:52...
Freq: 15T, Name: created_at, Length: 2814, dtype: object
</code></pre>
<p>So this isn't what I was expecting but I'm unsure where I've gone wrong or whether I've even selected an appropriate approach for what I want to do.</p>
|
<p>You almost had it, but <code>sum</code> will concatenate the strings. You need <code>count</code> instead:</p>
<pre><code>qbert = dfrs["created_at"].resample("15T").count()
</code></pre>
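<p>If you then want the plain list of integers in time order (the "ideal output" described in the question), just convert the resulting Series:</p>
<pre><code>bucket_counts = qbert.tolist()
</code></pre>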
|
python|pandas|datetime
| 3
|
3,428
| 68,523,007
|
How to use mobilenet as feature-extractor for high resolution images?
|
<p>How can I use a MobileNet model as a feature-extractor for images with a much higher resolution than 224x224?
I guess I need to change a certain layer after it's loaded to increase the input size? My current code is this:</p>
<pre><code>const featureExtractor = await tf.loadGraphModel('http://localhost:3000/mobilenet_v3_large_100_224/model.json');
</code></pre>
<p>I know that I could resample my images down to 224x224, but I fear important information will be lost.</p>
|
<p>MobileNet v2 technically starts with a fully convolutional layer with 32 filters. So, yes you can train the model with larger images, however you'd be starting from scratch. The feature-extracted models that seem to be available are mostly trained on datasets of 224x224.</p>
<p>If you believe this will remove important information, you might be right! However, I'd definitely give it a go before I'd call it quits. I'm amazed at how proficient 28x28 datasets are, and this is substantially more data.</p>
<p>You can adjust the depth multiplier to <a href="https://tfhub.dev/google/imagenet/mobilenet_v2_140_224/feature_vector/5" rel="nofollow noreferrer">1.4</a> and get a substantially larger set of features from your images. If you're concerned about quality, do that. Maybe you can even use a larger model like Inception? Those images are 299x299.</p>
<p>Regardless, it depends on how much time and energy you have to retrain.</p>
|
machine-learning|tensorflow.js|mobilenet|tfjs-node
| 2
|
3,429
| 52,968,814
|
Fancy Indexing vs View in Numpy part II
|
<p><a href="https://stackoverflow.com/questions/52967957/fancy-indexing-vs-views-in-numpy">Fancy Indexing vs Views in Numpy</a></p>
<p>In an answer to this question it is explained that different idioms will produce different results. </p>
<p>Using the idiom where fancy indexing is used to choose the values, and said values are set to a new value in the same line, means that the values in the original object will be changed in place.</p>
<p>However the final example below:</p>
<p><a href="https://scipy-cookbook.readthedocs.io/items/ViewsVsCopies.html" rel="nofollow noreferrer">https://scipy-cookbook.readthedocs.io/items/ViewsVsCopies.html</a></p>
<p>"A final exercise"</p>
<p>The example appears to use the same idiom:</p>
<p>a[x, :][:, y] = 100</p>
<p>but it still produces a different result depending on whether x is a slice or a fancy index (see below):</p>
<pre><code>a = np.arange(12).reshape(3,4)
ifancy = [0,2]
islice = slice(0,3,2)
a[islice, :][:, ifancy] = 100
a
#array([[100, 1, 100, 3],
# [ 4, 5, 6, 7],
# [100, 9, 100, 11]])
a = np.arange(12).reshape(3,4)
ifancy = [0,2]
islice = slice(0,3,2)
a[ifancy, :][:, islice] = 100 # note that ifancy and islice are interchanged here
>>> a
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
</code></pre>
<p>My intuition is that if the first set of indexes is a slice it treats the object like a view and therefore the values in the original object are changed.</p>
<p>Whereas in the second case the first set of fancy indexes is itself a fancy index so it treats the object as a fancy index creating a copy of the original object. This then means that the original object is not changed when the values of the copy object are changed.</p>
<p>Is my intuition correct?</p>
<p>The example hints that one should think of the sequence of <strong>getitem</strong> and <strong>setitem</strong>; can someone explain it to me properly in this way? </p>
|
<p>Python evaluates each set of [] separately. <code>a[x, :][:, y] = 100</code> is 2 operations.</p>
<pre><code>temp = a[x,:] # getitem step
temp[:,y] = 100 # setitem step
</code></pre>
<p>Whether the 2nd line ends up modifying <code>a</code> depends on whether <code>temp</code> is a view or copy.</p>
<p>Remember, <code>numpy</code> is an addon to Python. It does not modify basic Python syntax or interpretation.</p>
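<p>A small sketch that makes the view/copy distinction visible (the <code>.base</code> attribute of a view points back at the original array, while a fancy-indexed copy does not):</p>
<pre><code>import numpy as np

a = np.arange(12).reshape(3, 4)
temp_slice = a[slice(0, 3, 2), :]   # basic slicing  -> view
temp_fancy = a[[0, 2], :]           # fancy indexing -> copy

print(temp_slice.base is a)   # True: writing into it modifies a
print(temp_fancy.base is a)   # False: writing into it leaves a untouched
</code></pre>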
|
python|numpy
| 1
|
3,430
| 53,245,243
|
Count occurrences in pandas columns and bar plot with matplotlib
|
<p>So I have a pandas DataFrame with 4 columns.</p>
<p>Date A B C.</p>
<p>The dates contain all daily dates from year 2018, 2019 and 2020.</p>
<p>These columns A, B, C contain numbers from 1 to 7. No decimals.
I want to count the occurrences of each of these numbers and stack them into a bar plot.
Count all 1's, 2's, 3's etc.</p>
<p>Anyone got a good solution for this?</p>
|
<p>Using <code>melt</code> with <code>value_counts</code></p>
<pre><code>df.melt('Date').value.value_counts().plot(kind='bar')
</code></pre>
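<p>If you want the counts kept separate per column so they can be stacked (as described in the question), one possible sketch is:</p>
<pre><code>df[['A', 'B', 'C']].apply(pd.Series.value_counts).plot(kind='bar', stacked=True)
</code></pre>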
|
pandas|matplotlib
| 0
|
3,431
| 53,136,542
|
What is the numpy way to conditionally merge arrays?
|
<p>I have two numpy arrays <code>(1000,)</code> filled with predictions from two models:</p>
<pre><code>pred_1 = model_1.predict(x_test)
pred_2 = model_2.predict(x_test)
</code></pre>
<p><code>model_1</code> is attractive due to extremely low <code>FP</code>, but consequently high <code>FN</code>.</p>
<p><code>model_2</code> is attractive due to overall accuracy and recall.</p>
<p>How can I <em>conditionally</em> apply predictions to take advantage of these strengths and weaknesses?</p>
<p>I'd like to take all positive (<code>1</code>) predictions from the first model, and let the second model deal with the rest.</p>
<p>Essentially I'm looking for something like this:</p>
<pre><code>final_pred = model_1.predict() if model_1.predict() > 0.5 else model_2.predict()
</code></pre>
<p>This fails: The truth value of an array with more than one element is ambiguous.</p>
<p>What is the numpy way to combine these arrays as above?</p>
|
<p>You're looking for <a href="https://www.numpy.org/devdocs/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a>:</p>
<pre><code>a = model_1.predict(x_test)
b = model_2.predict(x_test)
out = np.where(a > 0.5, a, b)
</code></pre>
|
python|numpy|text-classification
| 3
|
3,432
| 65,608,713
|
Tensorflow GPU Could not load dynamic library 'cusolver64_10.dll'; dlerror: cusolver64_10.dll not found
|
<p>When I run</p>
<pre><code>import tensorflow as tf
tf.test.is_gpu_available(
cuda_only=False, min_cuda_compute_capability=None
)
</code></pre>
<p>I get the following error</p>
<p><a href="https://i.stack.imgur.com/Mv2p5.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/Mv2p5.jpg" alt="enter image description here" /></a></p>
|
<strong>Step 1</strong>
<pre><code> Move to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin
</code></pre>
<strong>Step 2</strong>
<pre><code>Rename file cusolver64_11.dll To cusolver64_10.dll
</code></pre>
<p><a href="https://i.stack.imgur.com/3xHxB.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/3xHxB.jpg" alt="enter image description here" /></a></p>
<pre><code> cusolver64_10.dll
</code></pre>
<p><a href="https://i.stack.imgur.com/7D0XF.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/7D0XF.jpg" alt="enter image description here" /></a></p>
|
python|tensorflow|gpu
| 98
|
3,433
| 65,845,222
|
When using the Deepbrain library, error message "module 'tensorflow' has no attribute 'Session'"
|
<p>I am trying to use the Deepbrain library to extract brains from whole MRI scans. I am using this code:</p>
<pre><code>def Reduce_Brain(img):
img_data = img.get_fdata()
prob = ext.run(img)
print(prob)
img = nib.load('ADNI_002_S_0295.nii')
Reduce_Brain(img)
</code></pre>
<p>However, when I tried this I got the error "module 'tensorflow' has no attribute 'Session'", which I found was due to using the wrong version of tensorflow, so I then changed the library code as described in the other question (see below). But this produced more errors, such as "module 'tensorflow' has no attribute 'gfile'".</p>
<p><a href="https://stackoverflow.com/questions/55142951/tensorflow-2-0-attributeerror-module-tensorflow-has-no-attribute-session">Tensorflow 2.0 - AttributeError: module 'tensorflow' has no attribute 'Session'</a></p>
<p><a href="https://github.com/iitzco/deepbrain" rel="nofollow noreferrer">https://github.com/iitzco/deepbrain</a></p>
|
<p>In TF 2.x you should use <code>tf.compat.v1.Session()</code> instead of <code>tf.Session()</code>.
Take a look at <a href="https://www.tensorflow.org/guide/migrate/migrate_tf2" rel="nofollow noreferrer">Migrate_tf2 guide</a> for more information</p>
<p>To get TF 1.x like behaviour in TF 2.0 add below code</p>
<pre><code>import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
</code></pre>
|
python|tensorflow
| 0
|
3,434
| 63,327,569
|
Only select year from date
|
<p>My dataframe looks like this:</p>
<pre><code> mth account_type interest_rate
1057 1977-01-01 Special 6.5
1061 1977-02-01 Special 6.5
1065 1977-03-01 Special 6.5
1069 1977-04-01 Special 6.5
1073 1977-05-01 Special 6.5
... ... ... ...
3077 2019-02-01 Special 5
3081 2019-03-01 Special 5
3085 2019-04-01 Special 5
3089 2019-05-01 Special 5
3093 2019-06-01 Special 5
</code></pre>
<p>I like to collapse "mth" column to just year</p>
<pre><code> mth account_type interest_rate
1057 1977 Special 6.5
... ... ... ...
3093 2019 Special 5
</code></pre>
<p>Any help would be very much appreciated. Many Thanks!</p>
|
<p>If your column <code>mth</code> is already <code>datetime</code>:</p>
<pre><code>df['mth'] = df['mth'].dt.year
</code></pre>
<p>If it is a <code>string</code> you have to first convert to <code>datetime</code>:</p>
<pre><code>df['mth'] = pd.to_datetime(df['mth'])
</code></pre>
|
pandas|datetime|time-series
| 1
|
3,435
| 63,331,532
|
tensor elements assignment in tensorflow
|
<p>I'm trying to create a for-loop function, as part of convolutional neural network code, that modifies a value if it is in a certain location of a 60 by 60 image.
How can I do this in TensorFlow/Keras?</p>
<p>I always get this error:
TypeError: 'Tensor' object does not support item assignment</p>
<pre><code>def newvar(x):
    p = K.expand_dims(x[:, :, :, 1], -1)
    q = p * 0.0
    for i in range(60):
        for j in range(60):
            if [i, j] in y.tolist():
                q[:, i, j, :] = p[:, i, j, :] - (10 * p[:, i, j, :])
    return q
</code></pre>
|
<p>TensorFlow does not support item assignment for now...
Try numpy first, then convert it to tf.float32.</p>
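<p>A minimal sketch of that suggestion: build the masked result in numpy first, then convert it to a float32 tensor (the coordinate list here is a made-up stand-in for <code>y</code>):</p>
<pre><code>import numpy as np
import tensorflow as tf

p = np.random.rand(1, 60, 60, 1).astype(np.float32)
coords = [[5, 7], [10, 12]]               # hypothetical positions to modify

q = np.zeros_like(p)
for i, j in coords:
    q[:, i, j, :] = p[:, i, j, :] - 10 * p[:, i, j, :]

q_tensor = tf.convert_to_tensor(q, dtype=tf.float32)
</code></pre>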
|
tensorflow|keras|conv-neural-network
| 0
|
3,436
| 71,850,728
|
Realtime JSON data to panda dataframe
|
<p>I have been trying to normalize my JSON file, which I retrieved from a Firebase Realtime Database, and turn it into a Python pandas DataFrame, but I keep getting everything as a row.</p>
<p>My JSON file is structured as follows:</p>
<pre><code>{
  "Device 1": {
    "ID-1": {
      "key": value
      "time": xxx
    }
    "ID-2": {
      "key": value
      "time": xxx
    }
    "ID-3": {
      "key": value
      "time": xxx
    }
    "ID-4": {
      "key": value
      "time": xxx
    }
  }
  "Device 2": {
    "ID-1": {
      "key": value
      "key": value
      "time": xxx
    }
    "ID-2": {
      "key": value
      "key": value
      "time": xxx
    }
    "ID-3": {
      "key": value
      "key": value
      "time": xxx
    }
    "ID-4": {
      "key": value
      "key": value
      "time": xxx
    }
  }
  "Device 3": {
    "ID-1": {
      "key": value
      "key": value
      "time": xxx
    }
    "ID-2": {
      "key": value
      "key": value
      "time": xxx
    }
    "ID-3": {
      "key": value
      "key": value
      "time": xxx
    }
    "ID-4": {
      "key": value
      "key": value
      "time": xxx
    }
  }
}
</code></pre>
<p>What I am trying to do is have each device in a separate table, with the ID as a column alongside the values listed below it, like this:</p>
<p>Device 1 table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">ID</th>
<th style="text-align: center;">key 1</th>
<th style="text-align: right;">key 2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">id value</td>
<td style="text-align: center;">value</td>
<td style="text-align: right;">value</td>
</tr>
</tbody>
</table>
</div>
<p>Device 2 table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">ID</th>
<th style="text-align: center;">key 1</th>
<th style="text-align: center;">key 2</th>
<th style="text-align: right;">key 3</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">id value</td>
<td style="text-align: center;">value</td>
<td style="text-align: center;">value</td>
<td style="text-align: right;">value</td>
</tr>
</tbody>
</table>
</div>
<p>Device 3 table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">ID</th>
<th style="text-align: center;">key 1</th>
<th style="text-align: center;">key 2</th>
<th style="text-align: right;">key 3</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">id value</td>
<td style="text-align: center;">value</td>
<td style="text-align: center;">value</td>
<td style="text-align: right;">value</td>
</tr>
</tbody>
</table>
</div>
|
<p>Here's how I would do it. Since each Device is a table, I'll split them into individual dataframes. We can always unite them later if need be.</p>
<p>I'll suppose you have your JSON data in a dict called <code>devices</code></p>
<pre><code>import pandas as pd
dataframes = []
for device, id_data in devices.items():
rows = []
for row_id, values in id_data.items():
values["id"] = row_id
rows.append(values)
dataframes.append(pd.DataFrame(rows))
</code></pre>
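<p>A small follow-up in case you want to keep each table associated with its device name (this just pairs the dict keys with the frames built above):</p>
<pre><code>device_tables = dict(zip(devices.keys(), dataframes))
print(device_tables["Device 1"])
</code></pre>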
|
python|json|pandas
| 0
|
3,437
| 72,075,933
|
finding a row in pandas through input
|
<p>I made a small script that searches a certain column for a given name and prints all of its rows.</p>
<p>I would like to make it search through the rows from user input without having to give it the full name; the last 3 letters should be sufficient.</p>
<p><a href="https://i.stack.imgur.com/Hb5Kp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hb5Kp.png" alt="enter image description here" /></a></p>
<p>If I give it the full name, for example H516G067U, it will find that memory.</p>
<p>If something like 67U is given it will not find it, and handling that is exactly what I'm trying to do here.</p>
<p>What I've tried so far:</p>
<pre><code>import pandas as pd
file = "path"
df = pd.read_excel(f"{file}", "DDR5 UDIMM")
pd.set_option('display.max_columns', None)
pd.set_option('display.width', None)
pd.set_option('display.max_colwidth', None)
sn = [x for x in df["IDC S/N"]]
memory = input("enter a number : ")
if memory in sn:
print("true")
else:
print("false")
</code></pre>
|
<p>Use <code>str</code> accessor:</p>
<pre><code>file = "path"
df = pd.read_excel(f"{file}", "DDR5 UDIMM")
... # set_option
# memory <- 67U
memory = input("enter a number : ")
if (df['IDC S/N'].str[-3:] == memory).any():
    print('true')
else:
    print('false')
</code></pre>
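<p>Equivalently, and without slicing, <code>str.endswith</code> performs the same suffix check directly:</p>
<pre><code>if df['IDC S/N'].str.endswith(memory).any():
    print('true')
</code></pre>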
|
python|pandas
| 0
|
3,438
| 56,630,896
|
How to benchmark USB Accelerator coral beta by google?
|
<p>I want to benchmark the Google Coral USB Accelerator (beta) using Python's time.time() function.</p>
<p>I started by installing the Edge TPU runtime library. I found the procedure on <a href="https://coral.withgoogle.com/docs/accelerator/get-started/" rel="nofollow noreferrer">Google</a>.</p>
<p>Then I followed the method to run inference with a classification neural network.
I executed these command lines:</p>
<pre><code> cd /usr/local/lib/python3.5/dist-packages/edgetpu/demo
python3 classify_image.py \
--model ~/Downloads /mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
--label ~/Downloads/inat_bird_labels.txt \
--image ~/Downloads/parrot.jpg
</code></pre>
<p>Now I want to benchmark this example, so I went into classify_image.py and used Python's time.time() function to measure the execution time of the network.</p>
<p>Here are the changes I have made:</p>
<pre><code> def main():
parser = argparse.ArgumentParser()
parser.add_argument(
'--model', help='File path of Tflite model.', required=True)
parser.add_argument(
'--label', help='File path of label file.', required=True)
parser.add_argument(
'--image', help='File path of the image to be recognized.', required=True)
args = parser.parse_args()
print ("[ INFO ] Loading network files:")
print(args.model)
print ("[ INFO ] Loading image file:")
print(args.image)
print ("[ INFO ] Starting inference (5 iterations)")
print ("[ INFO ] Loading label file:")
print (args.label)
# Prepare labels.
labels = ReadLabelFile(args.label)
temps=0.0
print("[ INFO ] stard profiling")
print(".................................................")
for i in range(4):
# Initialize engine.
engine = ClassificationEngine(args.model)
# Run inference.
print("[ INFO ] Loading image in the model")
t1=time.time()
img = Image.open(args.image)
result=engine.ClassifyWithImage(img, threshold=0.1, top_k=5, resample=0)
t2=time.time()
temps=temps+(t2-t1)
print("[ INFO ] end profiling")
print(".................................................")
print("total inference time {} s:".format(temps))
print("Average running time of one iteration {} s:".format(temps/5.0))
print("Throughput: {} FPS".format(5.0/temps*1.0))
</code></pre>
<p>The result is "Average running time of one iteration 0.41750078201293944 s". </p>
<pre><code>[ INFO ] Loading network files:
inception_v1_224_quant.tflite
[ INFO ] Loading image file:
/cat_W_3000_H_2000.jpg
[ INFO ] Starting inference (5 iterations)
[ INFO ] Loading label file:
/imagenet_labels.txt
[ INFO ] stard profiling
.................................................
[ INFO ] end profiling
.................................................
total inference time 2.0875039100646973 s:
Average running time of one iteration 0.41750078201293944 s:
Throughput: 2.3952050944158647 FPS
</code></pre>
<p>When I wanted to verify whether my results are correct, I went to this link <a href="https://coral.withgoogle.com/docs/edgetpu/benchmarks/" rel="nofollow noreferrer">Google</a> (the official website for the Coral USB Accelerator beta by Google), and I found that for the inception_v1 (224*224) network they measure 3.6 ms, while I measure 417 ms. </p>
<p>So, my question is: how can I correctly benchmark the Coral USB Accelerator beta by Google?</p>
|
<p>There are two problems. First, to get "correct" benchmark numbers, you have to run several times (rather than just once). Why? It usually takes some extra time to prepare the running environment, and there can be variance between runs. Second, <code>engine.ClassifyWithImage</code> includes image processing time (scaling and cropping the input image). So, a "right way" to do it is to (1) add some warm-up runs and run multiple times, and (2) use <code>ClassificationEngine</code> instead of <code>ClassifyWithImage</code>. Actually, I did that about two months ago and put my code on github; see my <a href="https://github.com/freedomtan/edge_tpu_python_scripts" rel="nofollow noreferrer">scripts here</a>.</p>
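<p>A minimal sketch of the warm-up / repeated-run part of that advice, reusing only the calls already shown in the question (the engine construction and image loading are moved out of the timed loop):</p>
<pre><code>engine = ClassificationEngine(args.model)   # build the engine once
img = Image.open(args.image)                # load the image once

for _ in range(5):                          # warm-up runs, not timed
    engine.ClassifyWithImage(img, threshold=0.1, top_k=5, resample=0)

n_runs = 50
t1 = time.time()
for _ in range(n_runs):
    engine.ClassifyWithImage(img, threshold=0.1, top_k=5, resample=0)
t2 = time.time()
print("Average per run: {:.4f} s".format((t2 - t1) / n_runs))
</code></pre>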
|
tensorflow-lite|google-coral
| 1
|
3,439
| 56,809,095
|
Implementing 2D max subarray function as custom loss function in Keras
|
<p>I'm trying to implement a custom loss function in Keras (Tensorflow backend). </p>
<p>My aim is to create a loss function that takes y_pred of size (150, 200, 1) (i.e. an image of 150x200 with 1 channel), takes the difference between it and a corresponding tensor y_true, then scans the resulting "difference" array for the subarray, over all possible dimensions, that produces the sum with the maximum absolute value (a 2D max subarray problem). Then, the function should output the absolute value of the sum of that subarray as the loss (a float). (I'm trying to model this function on the "MESA" algorithm from this paper: <a href="https://www.robots.ox.ac.uk/~vgg/publications/2010/Lempitsky10b/lempitsky10b.pdf" rel="nofollow noreferrer">https://www.robots.ox.ac.uk/~vgg/publications/2010/Lempitsky10b/lempitsky10b.pdf</a>)</p>
<p>I have been trying to read around custom loss functions in Keras, and I understand that one has to write a loss function within the Keras function space. While I currently have a Cython-optimised version of my loss function, I don't quite know how to translate it into a Keras-friendly version. The code for the main basis of my loss function is shown below.</p>
<pre class="lang-py prettyprint-override"><code>#The loss function as defined in my code
def MESA(y_true, y_pred):
diff = y_true - y_pred
diff = K.eval(diff)
result = CythonMESA.MaxSubArray2D(diff)
result = np.array([result])
result = K.variable(result)
return result
model.compile(
loss=MESA,
optimizer='adam',
metrics=['accuracy']
)
</code></pre>
<p>The "CythonMESA" module contains some Cython-optimised functions, which I have attached below. Specifically, the "CythonMESA.MaxSubArray2D" function takes a 2D array as input (such as a 2D np.ndarray object) and outputs a double.</p>
<pre class="lang-py prettyprint-override"><code>#Contents of CythonMESA.pyx
import numpy as np
cimport cython
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
#a helper function that is called within the main function below
#this function computes the maximum sum subarray in a 1D array using Kadane's algorithm
cdef double KadaneAbsoluteValue(double [:] array):
cdef int length = int(array.shape[0])
cdef double[:] maxSums = np.zeros(length, np.float64)
cdef double kadaneMax
cdef int i
for i in range(length):
if i == 0:
maxSums[0] = array[0]
kadaneMax = abs(maxSums[0])
else:
if abs(array[i]) >= abs(array[i] + maxSums[i-1]):
maxSums[i] = array[i]
else:
maxSums[i] = array[i] + maxSums[i-1]
if abs(maxSums[i]) > kadaneMax:
kadaneMax = abs(maxSums[i])
return kadaneMax
#The main basis for the loss function
#Loops through a 2D array and uses the function above to compute maximum subarray
cpdef double MaxSubArray2D(double [:,:] array):
cdef double maxSum = 0.
cdef double currentSum
cdef int height = int(array.shape[0])
cdef int width = int(array.shape[1])
cdef int i, j
cdef double [:] tempArray
if height >= width:
for i in range(width):
for j in range(i,width):
tempArray = np.sum(array[:,i:j+1], axis=1)
currentSum = KadaneAbsoluteValue(tempArray)
if currentSum > maxSum:
maxSum = currentSum
else:
for i in range(height):
for j in range(i, height):
tempArray = np.sum(array[i:j+1,:], axis=0)
currentSum = KadaneAbsoluteValue(tempArray)
if currentSum > maxSum:
maxSum = currentSum
return maxSum
</code></pre>
<p>I've actually tried compiling a network in Keras directly using the above function, but as expected, it throws an error.</p>
<p>If anyone could point me in the right direction as to where I can find out how to translate this into a Keras-friendly function, etc. I would greatly appreciate it!</p>
|
<p>A simple convolution of 1 filter with all-ones followed by a maxpooling would do it. </p>
<pre><code>subArrayX = 3
subArrayY = 3
inputChannels = 1
outputChannels = 1
convFilter = K.ones((subArrayX, subArrayY, inputChannels, outputChannels))
def local_loss(true, pred):
diff = K.abs(true-pred) #you might also try K.square instead of abs
localSums = K.conv2d(diff, convFilter)
localSums = K.batch_flatten(localSums)
#if using more than 1 channel, you might want a different thing here
return K.max(localSums, axis=-1)
model.compile(loss = local_loss, ....)
</code></pre>
<h2>For all possible shapes:</h2>
<pre><code>convWeights = []
for i in range(1, maxWidth+1):
for j in range(1, maxHeight+1):
convWeights.append(K.ones((i,j,1,1)))
def custom_loss(true,pred):
diff = true - pred
#sums for each array size
sums = [K.conv2d(diff, w) for w in convWeights]
# I didn't understand if you want the max abs sum or abs of max sum
# add this line depending on the answer:
sums = [K.abs(s) for s in sums]
#get the max sum for each array size
sums = [K.batch_flatten(s) for s in sums]
sums = [K.max(s, axis=-1) for s in sums]
#global sums for all sizes
sums = K.stack(sums, axis=-1)
sums = K.max(sums, axis=-1)
return K.abs(sums)
</code></pre>
<h3>Trying something similar to Kadane's (separate the dimensions)</h3>
<p>Let's just do this in separate dimensions:</p>
<pre><code>if height >= width:
convFilters1 = [K.ones((1, i, 1, 1)) for i in range(1,width+1)]
convFilters2 = [K.ones((i, 1, 1, 1) for i in range(1,height+1)]
concatDim1 = 2
concatDim2 = 1
else:
convFilters1 = [K.ones((i, 1, 1, 1)) for i in range(1,height+1)]
    convFilters2 = [K.ones((1, i, 1, 1)) for i in range(1,width+1)]
concatDim1 = 1
concatDim2 = 2
def custom_loss_2_step(true,pred):
diff = true-pred #shape (samp, h, w, 1)
sums = [K.conv2d(diff, f) for f in convFilters1] #(samp, h, var, 1)
#(samp, var, w, 1)
sums = K.concatenate(sums, axis=concatDim1) #(samp, h, superW, 1)
#(samp, superH, w, 1)
sums = [K.conv2d(sums, f) for f in convFilters2] #(samp, var, superW, 1)
#(samp, superH, var, 1)
sums = K.concatenate(sums, axis=concatDim2) #(samp, superH, superW, 1)
sums = K.batch_flatten(sums) #(samp, allSums)
#??? sums = K.abs(sums)
    maxSum = K.max(sums, axis=-1) #(samp,)
#??? maxSum = K.abs(maxSum)
return maxSum
</code></pre>
|
python|arrays|tensorflow|keras|cython
| 0
|
3,440
| 56,659,143
|
How to use pd.DataFrame method to manually create a dataframe from info scraped using beautifulsoup4
|
<p>I made it to the point where all <code>tr</code> data has been scraped and I am able to get a nice printout. But when I go to implement the <code>pd.DataFrame</code> as in <code>df= pd.DataFrame({"A": a})</code> etc, I get a syntax error.</p>
<p>Here is a list of my imported libraries in the Jupyter Notebook:</p>
<pre><code>import pandas as pd
import numpy as np
import bs4 as bs
import requests
import urllib.request
import csv
import html5lib
from pandas.io.html import read_html
import re
</code></pre>
<p>Here is my code:</p>
<pre><code>source = urllib.request.urlopen('https://www.zipcodestogo.com/Texas/').read()
soup = bs.BeautifulSoup(source,'html.parser')
table_rows = soup.find_all('tr')
table_rows
for tr in table_rows:
td = tr.find_all('td')
row = [i.text for i in td]
print(row)
texas_info = pd.DataFrame({
"title": Texas
"Zip Code" : [Zip Code],
"City" :[City],
})
texas_info.head()
</code></pre>
<p>I expect to get a dataframe with two columns, one being the 'Zip Code' and the other the 'Cities'</p>
|
<p>If you want to create it manually, with bs4 4.7.1 you can use <code>:not</code>, <code>:contains</code> and <code>:nth-of-type</code> pseudo classes to isolate the two columns of interest, then construct a dict and convert it to a df.</p>
<pre><code>import pandas as pd
import urllib
from bs4 import BeautifulSoup as bs
source = urllib.request.urlopen('https://www.zipcodestogo.com/Texas/').read()
soup = bs(source,'lxml')
zips = [item.text for item in soup.select('.inner_table:contains(Texas) td:nth-of-type(1):not([colspan])')]
cities = [item.text for item in soup.select('.inner_table:contains(Texas) td:nth-of-type(2):not([colspan])')]
d = {'Zips': zips,'Cities': cities}
df = pd.DataFrame(d)
df = df[1:].reset_index(drop = True)
</code></pre>
<p>You could combine selectors into one line:</p>
<pre><code>import pandas as pd
import urllib
from bs4 import BeautifulSoup as bs
source = urllib.request.urlopen('https://www.zipcodestogo.com/Texas/').read()
soup = bs(source,'lxml')
items = [item.text for item in soup.select('.inner_table:contains(Texas) td:nth-of-type(1):not([colspan]), .inner_table:contains(Texas) td:nth-of-type(2):not([colspan])')]
d = {'Zips': items[0::2],'Cities': items[1::2]}
df = pd.DataFrame(d)
df = df[1:].reset_index(drop = True)
print(df)
</code></pre>
<hr>
<p>I note you want to create it manually, but it's worth knowing for future readers that you could just use pandas read_html:</p>
<pre><code>import pandas as pd
table = pd.read_html('https://www.zipcodestogo.com/Texas/')[1]
table.columns = table.iloc[1]
table = table[2:]
table = table.drop(['Zip Code Map', 'County'], axis=1).reset_index(drop=True)
print(table)
</code></pre>
|
pandas|web-scraping|beautifulsoup
| 0
|
3,441
| 56,631,871
|
How to compare list items to a pandas DataFrame value
|
<p>I want a method to match a whole value of the loaded_list DataFrame with an item from the domain_list. If an email in loaded_list contains a domain in domain_list then it should be populated in match_list.</p>
<p>I have tried many methods, such as contains(domain_list), loaded_list == domain_list (with [row] and with the DataFrame column header name), and the isin method from pandas. All with no luck.</p>
<pre><code>loaded_list = []
match_list = []
domain_list = ['@hotmail.co.uk', '@gmail.com']
#This line below is from List to DataFrame
domain_list = pd.DataFrame(domain_list, columns=['Email Address'])
with open(self.breach_file, 'r', encoding='utf-8-sig') as breach_file:
found_reader = pd.read_csv(breach_file, sep=':', names=['Email Address'], engine='c')
loaded_list = found_reader
print("List Parsed... Enumerating Content Types")
breach_file.close()
match_list = ???
print(f"Match:\n {match_list}")
</code></pre>
<p>The expected outcome I would like is the variable match_list holding the emails in loaded_list whose domain appears in domain_list.</p>
<p>Many errors have popped up from the methods tried (isin, contains()). I don't want to use for loops as I'll be processing large data.</p>
<p>List Examples</p>
<pre><code>loaded_list:
abc@gmail.com
def@blaa.com
ghi@hotmail.co.uk
jkl@hotmail.com
mnop@yahoo.com
domain_list:
@gmail.com
@hotmail.co.uk
</code></pre>
|
<p>Did you try generating a regex from your domain_list by concatenating the values separated by "|", and then filtering loaded_list using this generated pattern?</p>
<p>Example:</p>
<pre><code>In[1]: loaded_list=pd.Series([
"abc@gmail.com",
"def@blaa.com",
"ghi@hotmail.co.uk",
"jkl@hotmail.com",
"mnop@yahoo.com"
])
In[2]: domain_list=pd.Series([
"@gmail.com",
"@hotmail.co.uk"
])
In[3]: import re
In[4]: match_list = loaded_list[loaded_list.str.contains(domain_list.apply(re.escape).str.cat(sep="|"))]
In[5]: match_list
Out[5]:
0 abc@gmail.com
2 ghi@hotmail.co.uk
dtype: object
</code></pre>
<p>I escaped all special characters in domain_list (to avoid any problem with regex special characters) and then used cat to join all domain_list patterns in one pattern with multiple alternatives using the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.cat.html" rel="nofollow noreferrer">str.cat</a> method.</p>
|
python-3.x|pandas
| 1
|
3,442
| 56,737,541
|
Get row data from a pandas dataframe as a list
|
<p>I want to get row data as a list from a pandas dataframe. I can get the data in the correct order (column order) in a Jupyter notebook, but when I run the code as a Python file in an Ubuntu terminal the list is not in order. In fact, it seems the list is in ascending order.</p>
<p>this is the data frame</p>
<pre><code>Src_mac Dest_mac Src_Port Dest_port Byte_Count duration Packet_Count
0 02 01 2 1 238 1.000000 3
1 01 02 1 2 140 0.893617 2
2 03 01 2 1 238 0.489362 3
3 01 03 1 2 140 0.446809 2
4 04 01 2 1 238 0.021277 3
5 01 04 1 2 140 0.000000 2
</code></pre>
<p>from this code</p>
<pre><code>l=list(range(0,len(df.index)))
for i in l:
d = df.iloc[i]
ml_data = d.tolist()
print(ml_data)
</code></pre>
<p>The output in Jupyter Notebook is as follows</p>
<pre><code>[2.0, 1.0, 2.0, 1.0, 238.0, 1.0, 3.0]
[1.0, 2.0, 1.0, 2.0, 140.0, 0.8936170212765937, 2.0]
[3.0, 1.0, 2.0, 1.0, 238.0, 0.4893617021276597, 3.0]
[1.0, 3.0, 1.0, 2.0, 140.0, 0.4468085106382951, 2.0]
[4.0, 1.0, 2.0, 1.0, 238.0, 0.02127659574468055, 3.0]
[1.0, 4.0, 1.0, 2.0, 140.0, 0.0, 2.0]
</code></pre>
<p>But if i run the same code as a indipendent python file in ubuntu terminal i get this (not in order)</p>
<pre><code>[238.0, 1.0, 1.0, 3.0, 2.0, 2.0, 1.0]
[140.0, 2.0, 2.0, 2.0, 1.0, 1.0, 0.8936170212765937]
[238.0, 1.0, 1.0, 3.0, 2.0, 3.0, 0.4893617021276597]
[140.0, 3.0, 2.0, 2.0, 1.0, 1.0, 0.4468085106382951]
[238.0, 1.0, 1.0, 3.0, 2.0, 4.0, 0.02127659574468055]
[140.0, 4.0, 2.0, 2.0, 1.0, 1.0, 0.0]
</code></pre>
<p>What did I do wrong?</p>
|
<p>If you are using <code>pandas</code> you can consider using</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
df = pd.DataFrame(np.arange(9).reshape(3,3))
df.values.tolist()
</code></pre>
<p>This returns</p>
<pre class="lang-py prettyprint-override"><code>[[0, 1, 2], [3, 4, 5], [6, 7, 8]]
</code></pre>
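<p>Applied to the loop in the question, that replaces the per-row <code>iloc</code>/<code>tolist</code> calls with a single conversion:</p>
<pre class="lang-py prettyprint-override"><code>ml_rows = df.values.tolist()
for ml_data in ml_rows:
    print(ml_data)
</code></pre>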
|
python|pandas|list|dataframe
| 2
|
3,443
| 66,857,825
|
Counting rows in df against string
|
<p>I have a df with a feature that contains some pattern of +/- for the prior N days. For each row in the df, I'm trying to count the number of times that given pattern appears in a string (and add this count as a new column in df)</p>
<p>For example</p>
<pre><code>d = {'day': [1, 2, 3], 'pattern': ['++-', '+++', '-+-']}
df = pd.DataFrame(data=d)
s = ('++-+++----++-+-')
</code></pre>
<p>For day 1 (row 1) I want it to search s for '++-' and return 3. For Day 2 it should return 1, etc.</p>
<p>In Excel this would be an easy countifs so I've been trying to use groupby().count(), or .str.contains but all of the examples I find have hardcoded strings to search for rather than iterating through the df to search for the pattern in that row.</p>
<p>Any help is appreciated!</p>
|
<p>You can use <a href="https://docs.python.org/3/library/stdtypes.html#str.count" rel="nofollow noreferrer"><code>str.count</code></a> along with pandas' <code>apply</code></p>
<pre><code>>>df['pattern'].apply(s.count)
0 3
1 1
2 1
Name: pattern, dtype: int64
</code></pre>
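<p>To add the counts as a new column (as mentioned in the question), assign the same expression back to the frame:</p>
<pre><code>df['count'] = df['pattern'].apply(s.count)
</code></pre>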
|
python|pandas|count
| 3
|
3,444
| 66,993,186
|
Split a string on "," in a dataframe column, even when no "," is present
|
<p>I have a dataframe column and I need to split it on ",", even when no "," is present in the value.</p>
<pre><code>Value
=====
59.5
59.5, 5
60
60,5
</code></pre>
<p>desired output</p>
<pre><code>value1 value2
====== ======
59.5
59.5 5
60
60 5
</code></pre>
<p>I tried the code below but am getting the following error:</p>
<p>df['value1'], df_merge['value2'] = df['value'].str.split(',', 1).str</p>
<p>ValueError: not enough values to unpack (expected 2, got 1)</p>
|
<p>You could search for "," first and only do the split if str contains ",".</p>
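<p>A minimal sketch of that idea using vectorised pandas string methods, with the column names taken from the question:</p>
<pre><code>has_comma = df['Value'].astype(str).str.contains(',')
parts = df['Value'].astype(str).str.split(',', n=1)

df['value1'] = parts.str[0].str.strip()
df['value2'] = ''
df.loc[has_comma, 'value2'] = parts.str[1].str.strip()
</code></pre>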
|
python|pandas
| 0
|
3,445
| 68,159,503
|
using f1 score sklearn in pytorch ignite custom metric
|
<p>I would like to use the f1_score of sklearn in a custom metric of PyTorch-ignite.<br />
I couldn't find a good solution, although on the official website of PyTorch-Ignite there is a solution:</p>
<pre><code> precision = Precision(average=False)
recall = Recall(average=False)
F1 = Fbeta(beta=1.0, average=False, precision=precision, recall=recall)
</code></pre>
<p>However, if you need a micro/macro/weighted F1 score, you cannot use this example.</p>
<p>How can I use a custom metric based on the sklearn library?</p>
|
<p>The solution is to first create a custom metric:</p>
<pre class="lang-py prettyprint-override"><code>import torch
from ignite.metrics import Metric
from sklearn.metrics import f1_score
class F1Score(Metric):
def __init__(self, *args, **kwargs):
self.f1 = 0
self.count = 0
super().__init__(*args, **kwargs)
def update(self, output):
y_pred, y = output[0].detach(), output[1].detach()
_, predicted = torch.max(y_pred, 1)
f = f1_score(y.cpu(), predicted.cpu(), average='micro')
self.f1 += f
self.count += 1
def reset(self):
self.f1 = 0
self.count = 0
super(F1Score, self).reset()
def compute(self):
return self.f1 / self.count
</code></pre>
<p>Then you can use it in <code>create_supervised_evaluator</code> or <code>create_supervised_trainer</code> as:</p>
<pre class="lang-py prettyprint-override"><code>import logging
import torch
from ignite.engine import Events
from ignite.engine import create_supervised_evaluator
from ignite.metrics import Accuracy, Fbeta
from ignite.metrics.precision import Precision
from ignite.metrics.recall import Recall
from metrics.f1score import F1Score
def inference(
cfg,
model,
val_loader
):
device = cfg.MODEL.DEVICE
logger = logging.getLogger("template_model.inference")
logger.info("Start inferencing")
precision = Precision(average=False)
recall = Recall(average=False)
F1 = Fbeta(beta=1.0, average=False, precision=precision, recall=recall)
metrics = {'accuracy': Accuracy(),
'precision': precision,
'recall': recall,
'custom': F1Score(),
'f1': F1}
evaluator = create_supervised_evaluator(model,
metrics=metrics,
device=device)
# adding handlers using `evaluator.on` decorator API
@evaluator.on(Events.EPOCH_COMPLETED)
def print_validation_results(engine):
        metrics = evaluator.state.metrics
_avg_accuracy = metrics['accuracy']
_precision = metrics['precision']
_precision = torch.mean(_precision)
_recall = metrics['recall']
_recall = torch.mean(_recall)
_f1 = metrics['f1']
_f1 = torch.mean(_f1)
_custom = metrics['custom']
logger.info(
"Test Results - Epoch: {} Avg accuracy: {:.3f}, precision: {:.3f}, recall: {:.3f}, f1 score: {:.3f}, custom: {:.2f}".format(
engine.state.epoch, _avg_accuracy, _precision, _recall, _f1, _custom))
evaluator.run(val_loader)
</code></pre>
<p>the result is:</p>
<pre><code>Test Results - Epoch: 1 Avg accuracy: 0.758, precision: 0.776, recall: 0.766, f1 score: 0.759, custom: 0.76
</code></pre>
|
python|scikit-learn|deep-learning|pytorch|pytorch-ignite
| 0
|
3,446
| 68,106,870
|
From a pandas DataFrame, how can I create copy-and-pasteable text that will re-create the same DataFrame, including the index?
|
<h3>Motivation:</h3>
<p>I have a CSV with some data, which is then loaded into a pandas DataFrame <code>raw_data</code>. While unit testing I want to do some aggregation or other process on <code>raw_data</code>, to create a new DataFrame, <code>df</code>, and then confirm that the results (i.e. <code>df</code>) are what I expect.</p>
<p>This means I need to specify, in the unit test, what the results should look like, including the index. Since I have to do this a lot, it's tedious to manually create the exemplars to test against. What I want to do is, in an interactive session, construct the correct result dataframe and then convert it into something I can paste into my test module.</p>
<p>In other words, I want some function <code>f</code> with inverse function <code>F</code>, such that <code>f(df)</code> is something I can copy and paste, and <code>F(f(df)) == df</code></p>
<h3>Example</h3>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({
'foo': [1, 2, 3],
'bar': ['a', 'b', 'c'],
'baz': ['d', 'e', 'f'],
'qux': ['z', 'y', 'x']
}).set_index(['baz', 'qux'])
</code></pre>
<p>What I want to do is to take <code>df</code>, get something I can copy and paste into my test module, and create an exact copy of <code>df</code>.</p>
<p>I have tried <code>to_dict</code> and <code>to_json</code>, but for all values of <code>orient</code> either there's an exception or the index is not recreated correctly (often, the <em>names</em> of the index are left out).</p>
<p>Here's code to confirm this:</p>
<h4><code>to_dict</code></h4>
<pre class="lang-py prettyprint-override"><code>orientations = ['dict', 'split', 'series', 'records', 'list', 'index']
print('\t'.join(['orient', 'eqval', 'valexc', 'eqidx', 'idxexc']))
for orient in orientations:
try:
# Convert df to a dict, which I can __repr__ and then copy and paste
serialized_df = df.to_dict(orient=orient)
# Then, in theory I should be able to recreate the original df from
# the dictionary object
ser_de_df = pd.DataFrame.from_dict(serialized_df, orient=orient)
equal_values = df.equals(ser_de_df)
values_exception = False
except:
equal_values = False
values_exception = True
try:
equal_index = df.index.equals(ser_de_df.index) and df.index.names == ser_de_df.index.names
index_exception = False
except:
equal_index = False
index_exception = True
print('\t'.join(map(str, [orient, equal_values, values_exception, equal_index, index_exception])))
</code></pre>
<p>This produces:</p>
<pre><code>orient eqval valexc eqidx idxexc
dict False True False False
split False True False False
series False True False False
records False True False False
list False True False False
index True False False False
</code></pre>
<p>So, <code>orient='index'</code> creates an equal-value DataFrame, but the index loses the names.</p>
<h4>JSON</h4>
<p>Here's code to confirm no value of <code>orient</code> works for JSON:</p>
<pre class="lang-py prettyprint-override"><code>orientations = {'split', 'records', 'index', 'columns', 'values', 'table'}
print('\t'.join(['orient', 'eqval', 'valexc', 'eqidx', 'idxexc']))
for orient in orientations:
try:
# Convert to JSON, which I could just copy and paste as a string
serialized_df = df.to_json(orient=orient)
# Load back into a DataFrame
ser_de_df = pd.read_json(serialized_df, orient=orient)
equal_values = df.equals(ser_de_df)
values_exception = False
except:
equal_values = False
values_exception = True
try:
equal_index = df.index.equals(ser_de_df.index) and df.index.names == ser_de_df.index.names
index_exception = False
except:
equal_index = False
index_exception = True
print('\t'.join(map(str, [orient, equal_values, values_exception, equal_index, index_exception])))
</code></pre>
<p>Produces:</p>
<pre><code>orient eqval valexc eqidx idxexc
columns False False False False
table True False True False
index False False False False
split False True False False
records False False False False
values False False False False
</code></pre>
<p>So <code>table</code> creates equal values, but not equal index.</p>
<p>I could, of course, manually add code to set the index names correctly again, but it seems odd that there would not be a way to consistently serialize a DataFrame including the complete index (other than to a file like with <code>to_pickle</code>, etc.).</p>
<p>Is there a better way?</p>
<p><strong>Note: Edited to fix a typo in the code, which prevented me from discovering the answer (which I've added below). Keeping this question here in case others might have same question</strong></p>
|
<p>I had a typo in my original code, which obscured the fact that <code>to_json(orient='table')</code> works:</p>
<pre class="lang-py prettyprint-override"><code>ser_de_df = pd.read_json(df.to_json(orient='table'), orient='table')
</code></pre>
<p>This preserves both the index labels and the names.</p>
<p>For a concise Python object, you can do this:</p>
<pre class="lang-py prettyprint-override"><code>import json
obj = json.loads(df.to_json(orient='table'))
</code></pre>
<p>If you print or repr <code>obj</code>, you'll get:</p>
<pre><code>{'schema': {'fields': [{'name': 'baz', 'type': 'string'},
{'name': 'qux', 'type': 'string'},
{'name': 'foo', 'type': 'integer'},
{'name': 'bar', 'type': 'string'}],
'primaryKey': ['baz', 'qux'],
'pandas_version': '0.20.0'},
'data': [{'baz': 'd', 'qux': 'z', 'foo': 1, 'bar': 'a'},
{'baz': 'e', 'qux': 'y', 'foo': 2, 'bar': 'b'},
{'baz': 'f', 'qux': 'x', 'foo': 3, 'bar': 'c'}]}
</code></pre>
<p>This can then be copied and pasted, and then made back into a dataframe using:</p>
<pre class="lang-py prettyprint-override"><code>pd.read_json(json.dumps({'schema': {'fields': [{'name': 'baz', 'type': 'string'},
{'name': 'qux', 'type': 'string'},
{'name': 'foo', 'type': 'integer'},
{'name': 'bar', 'type': 'string'}],
'primaryKey': ['baz', 'qux'],
'pandas_version': '0.20.0'},
'data': [{'baz': 'd', 'qux': 'z', 'foo': 1, 'bar': 'a'},
{'baz': 'e', 'qux': 'y', 'foo': 2, 'bar': 'b'},
{'baz': 'f', 'qux': 'x', 'foo': 3, 'bar': 'c'}]}
), orient='table')
</code></pre>
|
python|pandas|dataframe
| 1
|
3,447
| 68,384,132
|
BERT: AttributeError: 'RobertaForMaskedLM' object has no attribute 'bert'
|
<p>I am trying to freeze some layers of my masked language model using the following code:</p>
<pre><code>for param in model.bert.parameters():
param.requires_grad = False
</code></pre>
<p>However, when I execute the code above, I get this error:</p>
<pre><code>AttributeError: 'RobertaForMaskedLM' object has no attribute 'bert'
</code></pre>
<p>In my code, I have the following imports for my masked language model, but I am unsure what is causing the error above:</p>
<pre><code>from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained(model_checkpoint)
</code></pre>
<p>So far, I have tried to replace <code>bert</code> with <code>model</code> in my code, but that did not work.</p>
<p>Any help would be good.</p>
<p>Thanks.</p>
|
<p>If you look at the source code of <code>RobertaForMaskedLM</code> <a href="https://huggingface.co/transformers/_modules/transformers/models/roberta/modeling_roberta.html#RobertaForMaskedLM" rel="nofollow noreferrer">here</a>, you can observe that there is no attribute named <code>bert</code>. Instead, there is an attribute <code>roberta</code>, which is an object of type <code>RobertaModel</code>.</p>
<p>Hence, to freeze the Roberta Model and train only the LM head, you should modify your code as:</p>
<pre><code>for param in model.roberta.parameters():
param.requires_grad = False
</code></pre>
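<p>As a side note, if you would rather not hard-code the attribute name per architecture, recent versions of <code>transformers</code> expose the backbone through the <code>base_model</code> property (worth double-checking for your version); a small sketch:</p>
<pre><code>for param in model.base_model.parameters():
    param.requires_grad = False
</code></pre>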
|
python|bert-language-model|huggingface-transformers|huggingface-tokenizers
| 0
|
3,448
| 45,836,553
|
Pandas MultiIndex: Selecting a column knowing only the second index?
|
<p>I'm working with the following DataFrame:</p>
<pre><code> age height weight shoe_size
0 8.0 6.0 2.0 1.0
1 8.0 NaN 2.0 1.0
2 6.0 1.0 4.0 NaN
3 5.0 1.0 NaN 0.0
4 5.0 NaN 1.0 NaN
5 3.0 0.0 1.0 0.0
</code></pre>
<p>I added another header to the df in this way:</p>
<pre><code>zipped = list(zip(df.columns, ["RHS", "height", "weight", "shoe_size"]))
df.columns = pd.MultiIndex.from_tuples(zipped)
</code></pre>
<p>So this is the new DataFrame:</p>
<pre><code> age height weight shoe_size
RHS height weight shoe_size
0 8.0 6.0 2.0 1.0
1 8.0 NaN 2.0 1.0
2 6.0 1.0 4.0 NaN
3 5.0 1.0 NaN 0.0
4 5.0 NaN 1.0 NaN
5 3.0 0.0 1.0 0.0
</code></pre>
<p>Now I know how to select the first column, by using the corresponding tuple <code>("age", "RHS")</code>:</p>
<pre><code>df[("age", "RHS")]
</code></pre>
<p>but I was wondering how to do this using only the second index "RHS".
Ideally something like:</p>
<pre><code>df[(any, "RHS")]
</code></pre>
|
<p>You could use <code>get_level_values</code></p>
<pre><code>In [700]: df.loc[:, df.columns.get_level_values(1) == 'RHS']
Out[700]:
age
RHS
0 8.0
1 8.0
2 6.0
3 5.0
4 5.0
5 3.0
</code></pre>
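<p>As a side note, <code>DataFrame.xs</code> can also select on the second level of the columns; a small sketch on the same <code>df</code>:</p>
<pre><code>df.xs('RHS', axis=1, level=1, drop_level=False)
</code></pre>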
|
python|pandas|dataframe|multi-index
| 2
|
3,449
| 50,906,831
|
GroupBy aggregate count based on specific column
|
<p>I've been looking for a few hours and can't seem to find a topic related to that exact matter.</p>
<p>So basically, I want to apply something other than the mean on a groupby. My groupby returns two columns 'feature_name' and 'target_name', and I want to replace the value in 'target_name' with something else: the number of occurrences of 1, of 0, the difference between both, etc.</p>
<pre><code>print(df[[feature_name, target_name]])
</code></pre>
<p>When I print my dataframe with the column I use, I get the following : <a href="https://i.stack.imgur.com/kZqqm.png" rel="nofollow noreferrer">screenshot</a></p>
<p>I already have the following code to compute the mean of 'target_name' for each value of 'feature_name':</p>
<pre><code>df[[feature_name, target_name]].groupby([feature_name],as_index=False).mean()
</code></pre>
<p>Which returns : <a href="https://i.stack.imgur.com/E8SbW.png" rel="nofollow noreferrer">this</a>.</p>
<p>And I want to compute different things than the mean. Here are the values I want to compute in the end : <a href="https://i.stack.imgur.com/fZjPZ.png" rel="nofollow noreferrer">what I want</a></p>
<p>In my case, the feature 'target_name' will always be equal to either 1 or 0 (with 1 being 'good' and 0 'bad').</p>
<p>I have seen this example from <a href="https://stackoverflow.com/a/31649782/9924048">an answer.</a>:</p>
<pre><code>df.groupby(['catA', 'catB'])['scores'].apply(lambda x: x[x.str.contains('RET')].count())
</code></pre>
<p>But I don't know how to apply this to my case as x would be simply an int.
And after solving this issue, I still need to compute more than just the count!</p>
<p>Thanks for reading ☺</p>
|
<pre><code>import pandas as pd
import numpy as np
def my_func(x):
# Create your 3 metrics here
calc1 = x.min()
calc2 = x.max()
calc3 = x.sum()
# return a pandas series
return pd.Series(dict(metric1=calc1, metric2=calc2, metric3=calc3))
# Apply the function you created
df.groupby(...)['columns needed to calculate formulas'].apply(my_func).unstack()
</code></pre>
<p>Optionally, using <code>.unstack()</code> at the end allows you to see all your 3 metrics as column headers</p>
<p>As an example:</p>
<pre><code>df
Out[]:
Names A B
0 In 0.820747 0.370199
1 Out 0.162521 0.921443
2 In 0.534743 0.240836
3 Out 0.910891 0.096016
4 In 0.825876 0.833074
5 Out 0.546043 0.551751
6 In 0.305500 0.091768
7 Out 0.131028 0.043438
8 In 0.656116 0.562967
9 Out 0.351492 0.688008
10 In 0.410132 0.443524
11 Out 0.216372 0.057402
12 In 0.406622 0.754607
13 Out 0.272031 0.721558
14 In 0.162517 0.408080
15 Out 0.006613 0.616339
16 In 0.313313 0.808897
17 Out 0.545608 0.445589
18 In 0.353636 0.465455
19 Out 0.737072 0.306329
df.groupby('Names')['A'].apply(my_func).unstack()
Out[]:
metric1 metric2 metric3
Names
In 0.162517 0.825876 4.789202
Out 0.006613 0.910891 3.879669
</code></pre>
|
python|pandas
| 1
|
3,450
| 66,434,570
|
Numpy's "shape" function returns a 1D value for a 2D array
|
<p>so I have created this array as an example:</p>
<pre><code>a = np.array([[1, 1, 1, 1, 2], [2, 2, 2, 3], [3, 3, 3, 4], [13, 49, 13, 49], [10, 10, 2, 2],
[11, 1, 1, 1, 2], [22, 2, 2, 3], [33, 3, 3, 4], [133, 49, 13, 49], [100, 10, 2, 2],
[5, 1, 1, 1, 2], [32, 2, 2, 3], [322, 3, 3, 4], [13222, 49, 13, 49], [130, 10, 2, 2]])
</code></pre>
<p>I wanted to create a 2D array, in this case a 15x5 array.</p>
<p>However, when I use the <code>a.shape</code>, it returns <code>(15,)</code></p>
<p>What is wrong with my array definition?</p>
|
<h3>Tl;dr. Your individual lists are of variable length thus forcing your NumPy array to be a 1D array of list objects rather than a 2D array of integers/floats</h3>
<p>Numpy arrays can only be defined when each axis has the same number of elements. Otherwise, you are left with a 1D array of objects.</p>
<p><a href="https://i.stack.imgur.com/NWTQH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NWTQH.png" alt="enter image description here" /></a></p>
<p>This is what is happening with your array. You have a list of lists that contain a variable number of elements (some 4 and some 5). During conversion this turns into a <code>(15,)</code> numpy array that holds 15 separate list objects.</p>
<pre><code>a = np.array([[1, 1, 1, 1, 2], [2, 2, 2, 3], [3, 3, 3, 4], [13, 49, 13, 49]
## |______________| |__________|
## | |
## 5 length 4 length
</code></pre>
<h3>A quick demonstration -</h3>
<pre><code>#Variable length sublists
print(np.array([[1,2,3], [4,5]]))
#Fixed length sublists
print(np.array([[1,2,3], [4,5,6]]))
</code></pre>
<pre><code>array([list([1, 2, 3]), list([4, 5])], dtype=object) #This is (2,)
array([[1, 2, 3], #This is (2,3)
[4, 5, 6]])
</code></pre>
<ol>
<li>What you might want to do is either fix the number of elements in each sublist.</li>
<li>Or you may want to do some <em>padding</em> on your array (see the sketch after this list).</li>
</ol>
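<p>A minimal padding sketch (assuming you want to right-pad the shorter rows with zeros up to the longest row):</p>
<pre><code>import numpy as np

rows = [[1, 1, 1, 1, 2], [2, 2, 2, 3], [3, 3, 3, 4]]
width = max(len(r) for r in rows)
# pad each row on the right with zeros so every row has the same length
padded = np.array([r + [0] * (width - len(r)) for r in rows])
print(padded.shape)  # (3, 5)
</code></pre>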
|
python|arrays|python-3.x|numpy|multidimensional-array
| 5
|
3,451
| 66,712,176
|
Round off list values in Pandas column
|
<pre><code> column1 column2
0 name1 [(0, 0.12561743), (1, 0.12500079), (2, 0.1250000)]
1 name2 [(0, 0.1251732), (1, 0.12597172), (2, 0.623854998)]
</code></pre>
<p>How can I round off the values in column2 in 3 decimal places like this:</p>
<pre><code> column1 column2
0 name1 [(0, 0.125), (1, 0.125), (2, 0.125)]
1 name2 [(0, 0.125), (1, 0.125), (2, 0.623)]
</code></pre>
<p>The following is not working as the values are in list:</p>
<pre><code>df.column2 = df.column2.apply(lambda x: round(x, 2))
</code></pre>
|
<p>You need to loop twice:</p>
<pre><code># don't use .column2 to assign
df['column2'] = df.column2.apply(lambda x: [tuple(round(z, 3) for z in y) for y in x])
</code></pre>
<p>Output:</p>
<pre><code> column1 column2
0 name1 [(0, 0.126), (1, 0.125), (2, 0.125)]
1 name2 [(0, 0.125), (1, 0.126), (2, 0.624)]
</code></pre>
|
python|pandas
| 2
|
3,452
| 57,716,363
|
Explicit broadcasting of variable batch-size tensor
|
<p>I'm trying to implement a custom Keras <code>Layer</code> in Tensorflow 2.0RC and need to concatenate a <code>[None, Q]</code> shaped tensor onto a <code>[None, H, W, D]</code> shaped tensor to produce a <code>[None, H, W, D + Q]</code> shaped tensor. It is assumed that the two input tensors have the same batch size even though it is not known beforehand. Also, none of H, W, D, and Q are known at write-time but are evaluated in the layer's <code>build</code> method when the layer is first called. The issue that I'm experiencing is when broadcasting the <code>[None, Q]</code> shaped tensor up to a <code>[None, H, W, Q]</code> shaped tensor in order to concatenate.</p>
<p>Here is an example of trying to create a Keras <code>Model</code> using the Functional API that performs variable-batch broadcasting from shape <code>[None, 3]</code> to shape <code>[None, 5, 5, 3]</code>:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import tensorflow.keras.layers as kl
import numpy as np
x = tf.keras.Input([3]) # Shape [None, 3]
y = kl.Reshape([1, 1, 3])(x) # Need to add empty dims before broadcasting
y = tf.broadcast_to(y, [-1, 5, 5, 3]) # Broadcast to shape [None, 5, 5, 3]
model = tf.keras.Model(inputs=x, outputs=y)
print(model(np.random.random(size=(8, 3))).shape)
</code></pre>
<p>Tensorflow produces the error:</p>
<pre><code>InvalidArgumentError: Dimension -1 must be >= 0
</code></pre>
<p>And then when I change <code>-1</code> to <code>None</code> it gives me:</p>
<pre><code>TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [None, 5, 5, 3]. Consider casting elements to a supported type.
</code></pre>
<p>How can I perform the specified broadcasting?</p>
|
<p>You need to use the dynamic shape of <code>y</code> to determine the batch size. The dynamic shape of a tensor <code>y</code> is given by <code>tf.shape(y)</code> and is a tensor op representing the shape of <code>y</code> evaluated at runtime. The modified example demonstrates this by selecting between the old shape, <code>[None, 1, 1, 3]</code>, and the new shape using <a href="https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/where" rel="nofollow noreferrer"><code>tf.where</code></a>.</p>
<pre><code>import tensorflow as tf
import tensorflow.keras.layers as kl
import numpy as np
x = tf.keras.Input([3]) # Shape [None, 3]
y = kl.Reshape([1, 1, 3])(x) # Need to add empty dims before broadcasting
# Retain the batch and depth dimensions, but broadcast along H and W
broadcast_shape = tf.where([True, False, False, True],
tf.shape(y), [0, 5, 5, 0])
y = tf.broadcast_to(y, broadcast_shape) # Broadcast to shape [None, 5, 5, 3]
model = tf.keras.Model(inputs=x, outputs=y)
print(model(np.random.random(size=(8, 3))).shape)
# prints: "(8, 5, 5, 3)"
</code></pre>
<p>References:</p>
<p><a href="https://blog.metaflow.fr/shapes-and-dynamic-dimensions-in-tensorflow-7b1fe79be363" rel="nofollow noreferrer">"TensorFlow: Shapes and dynamic dimensions"</a></p>
|
python|keras|tensorflow2.0|tf.keras
| 4
|
3,453
| 73,136,069
|
Check if values in the first column of each df start with ! using pandas
|
<pre><code>types={}
for col in df.columns[0]:
if df[col].dtype == object:
print('Check values inside column')
if '!' in df[col].values :
print("\nThis value exists in Dataframe")
</code></pre>
<p>I have a couple of data frames. I need to check two things:</p>
<ul>
<li>if the first column in each df is string, if so, then</li>
<li>if any value inside that column starts with !. I am trying to play around with this code, but I'm getting a key error.</li>
</ul>
|
<p>This should do the trick:</p>
<pre class="lang-py prettyprint-override"><code>df.iloc[:, 0].astype(str).str.startswith('!').any()
</code></pre>
<p>This assumes that the other possible dtypes do not result in a string representation that starts with a <code>!</code>, which should work in the vast majority of applications.</p>
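<p>If you also want to keep your first requirement explicit (only run the check when the first column actually holds strings), a small sketch along the same lines:</p>
<pre class="lang-py prettyprint-override"><code>first_col = df.iloc[:, 0]
# only test for '!' when the column is an object (string-like) column
if first_col.dtype == object and first_col.astype(str).str.startswith('!').any():
    print("This value exists in Dataframe")
</code></pre>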
|
python|pandas
| 1
|
3,454
| 70,432,235
|
Pandas None series has 0
|
<p>I want to generate a pandas series with null values, with int type. I use this:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
pd.Series(None, index=[1, 2, 3], dtype=int)
</code></pre>
<p>And it returns</p>
<pre class="lang-py prettyprint-override"><code>1 0
2 0
3 0
dtype: int64
</code></pre>
<p>What can I do for it to return a series whose values are null?</p>
|
<p>Use <code>Int64</code> for <a href="https://pandas.pydata.org/docs/user_guide/integer_na.html" rel="nofollow noreferrer"><code>nullable integer type</code></a>:</p>
<pre><code>s = pd.Series(None, index=[1, 2, 3], dtype='Int64')
print (s)
1 <NA>
2 <NA>
3 <NA>
dtype: Int64
</code></pre>
<hr />
<pre><code>s = pd.Series(np.nan, index=[1, 2, 3], dtype=int)
print (s)
1 NaN
2 NaN
3 NaN
dtype: float64 <- casting to float
</code></pre>
|
python|pandas
| 3
|
3,455
| 51,256,360
|
How to take remote data access (from stock market) and combine them into a single dataframe?
|
<p>I tried grabbing data from morningstar and combining different stocks, but I can't figure out how to combine the data properly. I want to organize it by Date, but it just stacks the data on top of each other.</p>
<pre><code>print('test')
print('testing')
#this program will read data from morningstar and interpret them using pandas
import pandas as pd
import datetime
import numpy as np
import matplotlib.pylab as plt
pd.core.common.is_list_like = pd.api.types.is_list_like
import pandas_datareader.data as web
start = datetime.datetime(2010,1,1) #datetime is (year, month, day)
end = datetime.date.today()
#Getting data from morningstar
microsoft = pd.DataFrame(web.DataReader("MSFT", "morningstar", start, end))
apple = pd.DataFrame(web.DataReader("AAPL","morningstar", start, end))
google = pd.DataFrame(web.DataReader("GOOG", "morningstar", start, end))
stocks = pd.DataFrame({"MSFT": microsoft["Volume"],
"AAPL": apple["Volume"],
"GOOG": google["Volume"]})
print(stocks)
</code></pre>
<p>Basically I want the data to look like this: </p>
<pre><code> stock1 stock2 stock3
date1 123 345 234
date2 657 294 553
date3 786 321 933
</code></pre>
<p>But instead it turns out like this: </p>
<pre><code> stock1 stock2 stock3
date1 123 NaN NaN
date2 657 NaN NaN
date3 786 NaN NaN
date1 NaN 345 NaN
date2 NaN 294 NaN
date3 NaN 321 NaN
date1 NaN NaN 234
date2 NaN NaN 553
date3 NaN NaN 933
</code></pre>
|
<p>You can add <code>reset_index</code> at the end when you create your new dataframe</p>
<pre><code>stocks = pd.DataFrame({"MSFT": microsoft["Volume"].reset_index(level=0,drop=True),
"AAPL": apple["Volume"].reset_index(level=0,drop=True),
"GOOG": google["Volume"].reset_index(level=0,drop=True)})
</code></pre>
|
python|pandas|dataframe|matplotlib|join
| 0
|
3,456
| 71,037,820
|
Can't install scikit-learn on mac
|
<p>I am trying to install the scikit-learn and scipy packages in Python. I've already installed NumPy and Pandas successfully. Then I used the following command:</p>
<pre><code>pip install scikit-learn
</code></pre>
<p>and I received following error:</p>
<pre><code>File "/private/tmp/pip-build-env-w7wmv28v/overlay/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 278, in get_tag
assert tag in supported_tags, "would build wheel with unsupported tag {}".format(tag)
AssertionError: would build wheel with unsupported tag ('cp39', 'cp39', 'macosx_11_0_universal2')
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for numpy
Failed to build numpy
ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>I am using python 3.9.10, and a mac computer. What should I do?</p>
|
<p>Have you tried this?:</p>
<pre><code>pip install sklearn
</code></pre>
|
python|numpy|scikit-learn|pip
| 0
|
3,457
| 70,990,504
|
Replace values in column based on same or closer values from another columns pandas
|
<p>I have two dataframes that look like this:</p>
<p>DF1:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Score 1</th>
<th>Avg_life</th>
</tr>
</thead>
<tbody>
<tr>
<td>4.033986</td>
<td>3482.0</td>
</tr>
<tr>
<td>9.103820</td>
<td>758.0</td>
</tr>
<tr>
<td>-1.34432</td>
<td>68000.0</td>
</tr>
<tr>
<td>218670040.0</td>
<td>33708.0</td>
</tr>
<tr>
<td>2.291000</td>
<td>432.0</td>
</tr>
</tbody>
</table>
</div>
<p>DF2:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Score 1</th>
<th>life</th>
</tr>
</thead>
<tbody>
<tr>
<td>3.033986</td>
<td>0</td>
</tr>
<tr>
<td>9.103820</td>
<td>0</td>
</tr>
<tr>
<td>9.103820</td>
<td>0</td>
</tr>
<tr>
<td>7.350981</td>
<td>0</td>
</tr>
<tr>
<td>1.443400</td>
<td>0</td>
</tr>
<tr>
<td>9.103820</td>
<td>0</td>
</tr>
<tr>
<td>-1.134486</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>The 0 values in "life" in the second dataframe should be replaced by the values from "Avg_life" in the first dataframe if the "Score 1" values from both dataframes are the same. If there is no identical value, we take the value of "Score 1" in DF1 that is closest to the value of "Score 1" in DF2.</p>
<p>The problem is in the word "the closest".</p>
<p>For example:
I don't have the value "3.033986" in DF1's "Score 1", but I want to take the value closest to it, "4.033986", and change the 0 in the "life" column to "3482.0" from "Avg_life".</p>
<p>The result should be like this:</p>
<p>DF_result:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Score 1</th>
<th>life</th>
</tr>
</thead>
<tbody>
<tr>
<td>3.033986</td>
<td>3482.0</td>
</tr>
<tr>
<td>9.103820</td>
<td>758.0</td>
</tr>
<tr>
<td>9.103820</td>
<td>758.0</td>
</tr>
<tr>
<td>7.350981</td>
<td>758.0</td>
</tr>
<tr>
<td>1.443400</td>
<td>432.0</td>
</tr>
<tr>
<td>9.103820</td>
<td>758.0</td>
</tr>
<tr>
<td>-1.134486</td>
<td>68000.0</td>
</tr>
</tbody>
</table>
</div>
<p>I hope I made it clear....</p>
<p>Thanks for all help!</p>
|
<p>First we find the value of <code>df1['Score1']</code> that is closest to each value in <code>df2['Score1']</code>, and put it into <code>df2['match']</code>:</p>
<pre><code>df2['match'] = df2['Score1'].apply(lambda s : min(df1['Score1'].values, key = lambda x: abs(x-s)))
</code></pre>
<p><code>df2</code> now looks like this</p>
<pre><code>
Score1 life match
0 3.033986 0 2.29100
1 9.103820 0 9.10382
2 9.103820 0 9.10382
3 7.350981 0 9.10382
4 1.443400 0 2.29100
5 9.103820 0 9.10382
6 -1.134486 0 -1.34432
</code></pre>
<p>Now we just merge on <code>match</code>, drop unneeded columns and rename others</p>
<pre><code>(df2[['match', 'Score1']].merge(df1, how = 'left', left_on = 'match', right_on = 'Score1', suffixes = ['','_2'])
.rename(columns = {'Avg_life':'life'})
.drop(columns = ['match', 'Score1_2'])
)
</code></pre>
<p>output</p>
<pre><code>
Score1 life
0 3.033986 432.0
1 9.103820 758.0
2 9.103820 758.0
3 7.350981 758.0
4 1.443400 432.0
5 9.103820 758.0
6 -1.134486 68000.0
</code></pre>
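<p>For large dataframes, a possibly faster alternative is <code>pd.merge_asof</code> with <code>direction='nearest'</code>, which also matches each row to the closest key. Just a sketch, assuming both frames can be sorted by <code>Score1</code> (the result follows the sorted order of <code>df2</code>):</p>
<pre><code>res = pd.merge_asof(df2.sort_values('Score1'), df1.sort_values('Score1'),
                    on='Score1', direction='nearest')
# the matched 'Avg_life' column now holds the nearest-score life value for each row of df2
</code></pre>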
|
python|python-3.x|pandas|dataframe|numpy
| 4
|
3,458
| 35,962,462
|
plotting 3D surface using array
|
<pre><code>Z=np.array([[10.,12.,12.,5.],
[10.,0.,0.,5.],
[10.,0.,0.,5.],
[10.,20.,20.,20.]])
X = np.arange(0, 4, 1)
Y = np.arange(0, 4, 1)
</code></pre>
<p>I have a 2D 4x4 array. I want to make a 3D plot with x and y axes having discrete integer values from 0 to 4. Can someone help me with that?</p>
|
<p>you first need to make 2D arrays of your X,Y vectors:</p>
<pre><code>import numpy as np
X2D,Y2D = np.meshgrid(X,Y)
</code></pre>
<p>then you can use a surface plot (or wireframe):</p>
<pre><code>from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = Axes3D(fig)
ax.plot_surface(X2D,Y2D, Z)
</code></pre>
<p>the axes can only go from 0 to 3 if you only have 4 points (you need 5 points to go from 0 to 4)</p>
|
python|numpy|matplotlib|mplot3d
| 0
|
3,459
| 41,968,714
|
Eliminate one of the columns from pivot_table for changing the grouping logic
|
<p>I have this <code>dataframe</code>:</p>
<pre><code>df =
GROUP HOUR TOTAL_SERVICE_TIME TOTAL_WAIT_TIME IS_EVALUATED IS_NEGATIVE_GRADE
AAA 7 24 32 0 0
AAA 7 23 30 1 0
AAA 8 25 31 1 1
BBB 7 26 33 1 0
BBB 8 27 31 1 0
</code></pre>
<p>I want to adapt the below-given code to grouping the data only by <code>GROUP</code>. I don't want to use the column <code>HOUR</code>. I wonder if I can use <code>pivot_table</code> without <code>HOUR</code>, so that the data is grouped only by <code>GROUP</code>, while ignoring <code>HOUR</code>?</p>
<pre><code>piv_df = df.pivot_table(index='GROUP', columns='HOUR', fill_value=0).stack()
avg_tot = piv_df[['TOTAL_SERVICE_TIME', 'TOTAL_WAIT_TIME']].add_prefix("AVG_")
avg_pct1 = piv_df['IS_EVALUATED'].mul(100).astype(int)
avg_pct2 = piv_df['IS_NEGATIVE_GRADE'].mul(100).astype(int)
fresult = avg_tot.join(avg_pct1.to_frame("AVG_PERCENT_EVAL_1")).join(avg_pct2.to_frame("AVG_PERCENT_NEGATIVE")).reset_index()
</code></pre>
|
<p>Without <code>columns='Hour'</code>, you no longer need to <code>stack</code></p>
<pre><code>piv_df = df.pivot_table(index='GROUP', fill_value=0)
avg_tot = piv_df[['TOTAL_SERVICE_TIME', 'TOTAL_WAIT_TIME']].add_prefix("AVG_")
avg_pct1 = piv_df['IS_EVALUATED'].mul(100).astype(int)
avg_pct2 = piv_df['IS_NEGATIVE_GRADE'].mul(100).astype(int)
fresult = avg_tot.join(avg_pct1.to_frame("AVG_PERCENT_EVAL_1")).join(avg_pct2.to_frame("AVG_PERCENT_NEGATIVE")).reset_index()
fresult
</code></pre>
<p><a href="https://i.stack.imgur.com/uX3rp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uX3rp.png" alt="enter image description here"></a></p>
|
python|python-2.7|pandas
| 3
|
3,460
| 41,736,954
|
convolution of .mat file and 1D array
|
<p>my code is:</p>
<pre><code>import numpy as np
import scipy.io as spio
x=np.zeros((22113,1),float)
x= spio.loadmat('C:\\Users\\dell\\Desktop\\Rabia Ahmad spring 2016\\'
'FYP\\1. Matlab Work\\record work\\kk.mat')
print(x)
x = np.reshape(len(x),1);
h = np.array([0.9,0.3,0.1],float)
print(h)
h = h.reshape(len(h),1);
dd = np.convolve(h,x)
</code></pre>
<p>and the error I encounter is "<strong>ValueError: object too deep for desired array</strong>"
kindly help me in this regard.</p>
|
<pre><code>{'__globals__': [], '__version__': '1.0', 'ans': array([[ 0.13580322,
0.13580322], [ 0.13638306, 0.13638306], [ 0.13345337, 0.13345337],
..., [ 0.13638306, 0.13638306], [ 0.13345337, 0.13345337], ..., [
0.13638306, 0.13638306], [ 0.13345337, 0.13345337], ..., [-0.09136963,
-0.09136963], [-0.12442017, -0.12442017], [-0.15542603, -0.15542603]])}
</code></pre>
<p>See {}? That means <code>x</code> from the <code>loadmat</code> is a dictionary. </p>
<p><code>x['ans']</code> will be an array </p>
<pre><code>array([[ 0.13580322,
0.13580322], [ 0.13638306, 0.13638306], [ 0.13345337, 0.13345337],...]])
</code></pre>
<p>which, if I count the [] correctly, is an (n, 2) array of floats.</p>
<p>The following line does not make sense:</p>
<pre><code>x = np.reshape(len(x),1);
</code></pre>
<p>I suspect you mean <code>x = x.reshape(...)</code> as you do with <code>h</code>. But that would give an error with the dictionary <code>x</code>.</p>
<p>When you say <code>the shape of x is (9,) and its dtype is uint16</code> - where in your code are you verifying that?</p>
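<p>Putting it together, a rough sketch of what the code probably intends (assuming you want to filter one channel of the signal stored under the <code>'ans'</code> key, and that the path points at your actual file) would be:</p>
<pre><code>import numpy as np
import scipy.io as spio

mat = spio.loadmat('kk.mat')   # loadmat returns a dict of variables
x = mat['ans'][:, 0]           # take the first channel; shape (n,), i.e. 1-D
h = np.array([0.9, 0.3, 0.1])  # keep h 1-D as well; do not reshape to (3, 1)
dd = np.convolve(h, x)         # np.convolve expects 1-D inputs
</code></pre>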
|
python|python-3.x|numpy
| 1
|
3,461
| 41,723,419
|
Why can itertools.groupby group the NaNs in lists but not in numpy arrays
|
<p>I'm having a difficult time debugging a problem in which the float <code>nan</code> in a <code>list</code> and <code>nan</code> in a <code>numpy.array</code> are handled differently when these are used in <code>itertools.groupby</code>:</p>
<p>Given the following list and array:</p>
<pre><code>from itertools import groupby
import numpy as np
lst = [np.nan, np.nan, np.nan, 0.16, 1, 0.16, 0.9999, 0.0001, 0.16, 0.101, np.nan, 0.16]
arr = np.array(lst)
</code></pre>
<p>When I iterate over the list the contiguous <code>nan</code>s are grouped:</p>
<pre><code>>>> for key, group in groupby(lst):
... if np.isnan(key):
... print(key, list(group), type(key))
nan [nan, nan, nan] <class 'float'>
nan [nan] <class 'float'>
</code></pre>
<p>However if I use the array it puts successive <code>nan</code>s in different groups:</p>
<pre><code>>>> for key, group in groupby(arr):
... if np.isnan(key):
... print(key, list(group), type(key))
nan [nan] <class 'numpy.float64'>
nan [nan] <class 'numpy.float64'>
nan [nan] <class 'numpy.float64'>
nan [nan] <class 'numpy.float64'>
</code></pre>
<p>Even if I convert the array back to a list:</p>
<pre><code>>>> for key, group in groupby(arr.tolist()):
... if np.isnan(key):
... print(key, list(group), type(key))
nan [nan] <class 'float'>
nan [nan] <class 'float'>
nan [nan] <class 'float'>
nan [nan] <class 'float'>
</code></pre>
<p>I'm using:</p>
<pre><code>numpy 1.11.3
python 3.5
</code></pre>
<p>I know that generally <code>nan != nan</code> so why do these operations give different results? And how is it possible that <code>groupby</code> can group <code>nan</code>s at all?</p>
|
<p>Python lists are just arrays of pointers to objects in memory. In particular <code>lst</code> holds pointers to the object <code>np.nan</code>:</p>
<pre><code>>>> [id(x) for x in lst]
[139832272211880, # nan
139832272211880, # nan
139832272211880, # nan
139832133974296,
139832270325408,
139832133974296,
139832133974464,
139832133974320,
139832133974296,
139832133974440,
139832272211880, # nan
139832133974296]
</code></pre>
<p>(<code>np.nan</code> is at 139832272211880 on my computer.)</p>
<p>On the other hand, NumPy arrays are just contiguous regions of memory; they are regions of bits and bytes that are interpreted as a sequence of values (floats, ints, etc.) by NumPy.</p>
<p>The trouble is that when you ask Python to iterate over a NumPy array holding floating values (at a <code>for</code>-loop or <code>groupby</code> level), Python needs to box these bytes into a proper Python object. It creates a brand new Python object in memory for each single value in the array as it iterates.</p>
<p>For example, you can see that distinct objects for each <code>nan</code> value are created when <code>.tolist()</code> is called:</p>
<pre><code>>>> [id(x) for x in arr.tolist()]
[4355054616, # nan
4355054640, # nan
4355054664, # nan
4355054688,
4355054712,
4355054736,
4355054760,
4355054784,
4355054808,
4355054832,
4355054856, # nan
4355054880]
</code></pre>
<p><code>itertools.groupby</code> is able to group on <code>np.nan</code> for the Python list because it checks for <em>identity</em> first when it compares Python objects. Because these pointers to <code>nan</code> all point at the same <code>np.nan</code> object, grouping is possible.</p>
<p>However, iteration over the NumPy array does not allow this initial identity check to succeed, so Python falls back to checking for equality and <code>nan != nan</code> as you say.</p>
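<p>A tiny demonstration of the identity-versus-equality point (just a sketch):</p>
<pre><code>>>> a = float('nan')
>>> a == a          # plain == has no identity shortcut, and nan != nan
False
>>> [a].count(a)    # C-level comparisons check identity first, like groupby's key comparison
1
>>> b = float('nan')
>>> [a].count(b)    # a distinct nan object is not identical, so it does not match
0
</code></pre>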
|
python|arrays|list|numpy|nan
| 9
|
3,462
| 37,887,796
|
pandas DataFrame add fill_value NotImplementedError
|
<p>I have:</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed([3,1415])
df = pd.DataFrame(np.random.choice((1, 2, np.nan), (5, 5)))
s = pd.Series(range(5))
</code></pre>
<p>I want to add <code>s</code> to <code>df</code> and broadcast across rows. Usually I'd:</p>
<pre><code>df.add(s)
</code></pre>
<p>And be done with it. However, I want to fill missing values with <code>0</code>. So I thought I'd:</p>
<pre><code>df.add(s, fill_value=0)
</code></pre>
<p>But I got:</p>
<blockquote>
<p>NotImplementedError: fill_value 0 not supported</p>
</blockquote>
<h3>Question: How do I get the result I was anticipating?</h3>
<pre><code> 0 1 2 3 4
0 1.0 1.0 2.0 3.0 4.0
1 2.0 3.0 2.0 4.0 4.0
2 1.0 1.0 3.0 4.0 4.0
3 1.0 1.0 2.0 4.0 6.0
4 1.0 3.0 4.0 3.0 5.0
</code></pre>
|
<p>I ran into this issue also. In my case it's because I was adding a series to a dataframe. </p>
<p>The <code>fill_value=0</code> instruction works for me when adding a series to a series or adding a dataframe to a dataframe. </p>
<p>I just made a new dataframe with the series as its only column and now I can add them with <code>fill_value=0</code>.</p>
<pre><code>df1.add(df2, fill_value=0) # This works
series1.add(series2, fill_value=0) # This works
df.add(series, fill_value=0) # Throws error
df.add(pd.DataFrame(series), fill_value=0) # Works again
</code></pre>
|
python|pandas
| 3
|
3,463
| 64,239,738
|
TensorFlow Lite does not recognize op VarHandleOp
|
<p>I am attempting to convert a TF model to TFLite. The model was saved in <code>.pb</code> format and I have converted it with the following code:</p>
<pre><code>import os
import tensorflow as tf
from tensorflow.core.protobuf import meta_graph_pb2
export_dir = os.path.join('export_dir', '0')
if not os.path.exists('export_dir'):
os.mkdir('export_dir')
tf.compat.v1.enable_control_flow_v2()
tf.compat.v1.enable_v2_tensorshape()
# I took this function from a tutorial on the TF website
def wrap_frozen_graph(graph_def, inputs, outputs):
def _imports_graph_def():
tf.compat.v1.import_graph_def(graph_def, name="")
wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
import_graph = wrapped_import.graph
return wrapped_import.prune(
inputs, outputs)
graph_def = tf.compat.v1.GraphDef()
loaded = graph_def.ParseFromString(open(os.path.join(export_dir, 'saved_model.pb'),'rb').read())
concrete_func = wrap_frozen_graph(
graph_def, inputs=['extern_data/placeholders/data/data:0', 'extern_data/placeholders/data/data_dim0_size:0'],
outputs=['output/output_batch_major:0'])
concrete_func.inputs[0].set_shape([None, 50])
concrete_func.inputs[1].set_shape([None])
concrete_func.outputs[0].set_shape([None, 100])
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converter.experimental_new_converter = True
converter.post_training_quantize=True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
tf.lite.OpsSet.SELECT_TF_OPS]
converter.allow_custom_ops=True
tflite_model = converter.convert()
# Save the model.
if not os.path.exists('tflite'):
os.mkdir('tflite')
output_model = os.path.join('tflite', 'model.tflite')
with open(output_model, 'wb') as f:
f.write(tflite_model)
</code></pre>
<p>However, when I try to use the interpreter with this model I get the following error:</p>
<pre><code>INFO: TfLiteFlexDelegate delegate: 8 nodes delegated out of 970 nodes with 3 partitions.
INFO: TfLiteFlexDelegate delegate: 0 nodes delegated out of 4 nodes with 0 partitions.
INFO: TfLiteFlexDelegate delegate: 3 nodes delegated out of 946 nodes with 1 partitions.
INFO: TfLiteFlexDelegate delegate: 0 nodes delegated out of 1 nodes with 0 partitions.
INFO: TfLiteFlexDelegate delegate: 3 nodes delegated out of 16 nodes with 2 partitions.
Traceback (most recent call last):
File "/path/to/tflite_interpreter.py", line 9, in <module>
interpreter.allocate_tensors()
File "/path/to/lib/python3.6/site-packages/tensorflow/lite/python/interpreter.py", line 243, in allocate_tensors
return self._interpreter.AllocateTensors()
RuntimeError: Encountered unresolved custom op: VarHandleOp.Node number 0 (VarHandleOp) failed to prepare.
</code></pre>
<p>Now, I don't find any <code>VarHandleOp</code> in the code and I found out that it is actually in tensorflow (<a href="https://www.tensorflow.org/api_docs/python/tf/raw_ops/VarHandleOp" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/raw_ops/VarHandleOp</a>).
So, why isn't TFLite able to recognize it?</p>
|
<p>It's certainly hard to provide a minimal reproducible example in the case of model conversion, as the SO guidelines recommend, but the questions would benefit from better pointers. For example, instead of saying “I took this function from a tutorial on the TF website”, it is a much better idea to provide a link to the tutorial. The TF website is vastly huge.</p>
<p>The tutorial that you are referring to is probably from the <a href="https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt" rel="nofollow noreferrer">section on migrating from TF1 to TF2</a>, specifically the part of handling the raw graph files. The crucially important note is</p>
<blockquote>
<p>if you have a "Frozen graph" (a <code>tf.Graph</code> <strong>where the variables have been turned into constants</strong>)</p>
</blockquote>
<p>(the bold highlight is mine). Apparently, your graph contains <code>VarHandleOp</code> (the same applies to the <code>Variable</code> and <code>VariableV2</code> nodes), and is not “frozen” by this definition. Your general approach makes sense, but you need a graph that contains actual trained values for the variables in the form of the <a href="https://www.tensorflow.org/api_docs/python/tf/raw_ops/Const" rel="nofollow noreferrer"><code>Const</code></a> node. You need variables at training time, but at inference time their values are fixed and should be baked into the graph as constants. TFLite, as an inference-time framework, does not support variables.</p>
<p>The rest of your idea seems fine. <code>TFLiteConverter.from_concrete_functions</code> currently takes exactly one <code>concrete_function</code>, but this is what you get from wrapping the graph. With enough luck it may work.</p>
<p>There is a utility <a href="https://github.com/tensorflow/tensorflow/blob/v2.3.1/tensorflow/python/tools/freeze_graph.py" rel="nofollow noreferrer"><code>tensorflow/python/tools/freeze_graph.py</code></a> that attempts its best to replace variables in a Graph.pb with constants taken from the latest checkpoint file. If you look at its code, either using the saved metagraph (<em>checkpoint_name</em>.meta) file or pointing the tool to the training directory eliminates a lot of guesswork; also, I think that providing the model directory is the only way to get a single frozen graph out of a sharded model.</p>
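<p>For illustration only (the flag names should be checked against your TF version, and <code>training_dir/model.ckpt</code> is a placeholder for your actual checkpoint), invoking that utility looks roughly like:</p>
<pre><code>python -m tensorflow.python.tools.freeze_graph \
    --input_graph=export_dir/0/saved_model.pb \
    --input_binary=true \
    --input_checkpoint=training_dir/model.ckpt \
    --output_node_names=output/output_batch_major \
    --output_graph=frozen_graph.pb
</code></pre>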
<hr />
<p>I noticed that you use just <code>input</code> in place of <code>tf.nest.map_structure(import_graph.as_graph_element, inputs)</code> in the example. You may have other reasons for that, but if you do it because <code>as_graph_element</code> complains about datatype/shape, this is likely to be resolved by freezing the graph properly. The concrete_function that you obtain from the frozen graph will have a good idea about its input shapes and datatypes. Generally, it's unexpected to need to manually set them, and the fact that you do seems odd to me (but I do not claim a broad experience with this dark corner of TF).</p>
<p><code>map_structure</code> has a keyword argument to skip the check.</p>
|
python|tensorflow|tensorflow2.0|tensorflow-lite
| 1
|
3,464
| 64,223,955
|
Selecting top results from the Sparse csr matrix in Python
|
<p>I am working on a scipy.sparse.csr_matrix of size 4860x89462 (of type <class 'numpy.float64'>, with 9111761 stored elements), using Python 3.7.4 in a Jupyter notebook.</p>
<p>My requirement is to extract, for each row, the top 2 results based on the value of the elements in the sparse matrix.</p>
<p>I am sharing one example of my sample sparse csr_matrix</p>
<p>Current Sparse matrix</p>
<pre><code> (1, 1) 0.5
(1, 5) 0.66
(1, 6) 1.0
(2, 2) 1.0
(2, 3) 0.5
(2, 7) 0.33
</code></pre>
<p>Desired Sparse matrix</p>
<pre><code> (1, 6) 1.0
(1, 5) 0.66
(2, 2) 1.0
(2, 3) 0.5
</code></pre>
<p>I am looking for a solution which can work on a huge matrix without taking much time.</p>
<p>Thanks in advance.</p>
|
<pre><code>import numpy as np
from scipy import sparse

top_n = 2
out = []
for r in arr:  # iterate over the rows of the CSR matrix (each r is a 1-row sparse matrix)
    if r.data.size <= top_n:
        # the row already has at most top_n stored values, keep it as-is
        out.append(r)
    else:
        # positions (within the stored data) of the top_n largest values in this row
        top_hits = np.argsort(r.data)[-1 * top_n:]
        out.append(sparse.csr_matrix((r.data[top_hits], r.indices[top_hits], np.array([0, len(top_hits)])),
                                     shape=(1, arr.shape[1])))
out = sparse.vstack(out)  # stack the filtered rows back into a single sparse matrix
</code></pre>
<p>This is just not gonna be fast. I don't know of any better way to do it though.</p>
|
python|numpy|scipy|sparse-matrix
| 0
|
3,465
| 64,365,215
|
Finding means and stds of a bunch of torch.Tensors (that are converted from ndarray images)
|
<pre><code>to_tensor = transforms.ToTensor()
img = to_tensor(train_dataset[0]['image'])
img
</code></pre>
<p>This converts my image values to between 0 and 1, which is expected. It also converts <code>img</code>, which is an <code>ndarray</code>, to a <code>torch.Tensor</code>.</p>
<p>Previously, without using <code>to_tensor</code> (which I need now), the following code snippet worked (not sure if this is the best way to find the means and stds of the train set); however, it now doesn't work. How can I make it work?</p>
<pre><code>image_arr = []
for i in range(len(train_dataset)):
image_arr.append(to_tensor(train_dataset[i]['image']))
print(np.mean(image_arr, axis=(0, 1, 2)))
print(np.std(image_arr, axis=(0, 1, 2)))
</code></pre>
<p>The error is:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-147-0e007c030629> in <module>
4 image_arr.append(to_tensor(train_dataset[i]['image']))
5
----> 6 print(np.mean(image_arr, axis=(0, 1, 2)))
7 print(np.std(image_arr, axis=(0, 1, 2)))
<__array_function__ internals> in mean(*args, **kwargs)
~/anaconda3/lib/python3.7/site-packages/numpy/core/fromnumeric.py in mean(a, axis, dtype, out, keepdims)
3333
3334 return _methods._mean(a, axis=axis, dtype=dtype,
-> 3335 out=out, **kwargs)
3336
3337
~/anaconda3/lib/python3.7/site-packages/numpy/core/_methods.py in _mean(a, axis, dtype, out, keepdims)
133
134 def _mean(a, axis=None, dtype=None, out=None, keepdims=False):
--> 135 arr = asanyarray(a)
136
137 is_float16_result = False
~/anaconda3/lib/python3.7/site-packages/numpy/core/_asarray.py in asanyarray(a, dtype, order)
136
137 """
--> 138 return array(a, dtype, copy=False, order=order, subok=True)
139
140
ValueError: only one element tensors can be converted to Python scalars
</code></pre>
|
<p>Here is a working example:</p>
<pre><code>import torch
from torchvision import transforms
train_dataset = torch.rand(100, 32, 32, 3)
image_arr = []
to_tensor = transforms.ToTensor()
for i in range(len(train_dataset)):
# to tensor will give you a tensor which is emulated here by reading the tensor at i
image_arr.append(train_dataset[i])
print(torch.mean(torch.stack(image_arr, dim=0), dim=(0, 1, 2)))
print(torch.std(torch.stack(image_arr, dim=0), dim=(0, 1, 2)))
</code></pre>
<p>What did I do?</p>
<p>I used <code>torch.stack</code> to concatenate image array into a single torch tensor and use <code>torch.mean</code> and <code>torch.std</code> to compute stats. I would not recommend converting back to numpy for the purpose of evaluating stats as it can lead to unnecessary conversion from GPU to CPU.</p>
<p><strong>More information on which dimension is the channel:</strong>
The above example assumes the last dimension is the channel and the image is 32x32x3 with 100 batch size. This is usually the case when the image is loaded using PIL (pillow) or numpy. Images are loaded as HWC (height width channel) in that case. This also seems to be the dimension in the question asked looking at the code example.</p>
<p>If the image tensor is CHW format, then you should use</p>
<pre><code>print(torch.mean(torch.stack(image_arr, dim=0), dim=(0, 2, 3)))
print(torch.std(torch.stack(image_arr, dim=0), dim=(0, 2, 3)))
</code></pre>
<p>Torch tensors are usually CHW format as Conv layers expect CHW format. This is done automatically when the <code>toTensor</code> transform is applied to an image (PIL image). For complete rules see documentation of <code>toTensor</code> <a href="https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.ToTensor" rel="nofollow noreferrer">here</a>.</p>
|
python|numpy|pytorch|mean|numpy-ndarray
| 2
|
3,466
| 64,321,976
|
Why do exactly identical keras models predict different results for the same input data in the same env
|
<p>I have two models that are proven to be identical, as follows:</p>
<pre><code>if len(m_s.layers) != len(m_m.layers):
print("number of layers are different")
for i in range(len(m_s.layers)):
weight_s = m_s.layers[i].get_weights()
weight_m = m_m.layers[i].get_weights()
if len(weight_s) > 0:
for j in range(len(weight_s)):
if (weight_s[j] == weight_m[j]).all:
print("layer %d identical" % i)
else:
print("!!!!! layer %d not the same" % i)
else:
if len(weight_m) == 0:
print("layer %d identical" % i)
else:
print("!!!!! layer %d not the same" % i)
</code></pre>
<p>and the output shows they are identical. They are slices from the imagenet model.</p>
<pre><code>layer 0 identical
layer 1 identical
layer 2 identical
layer 2 identical
layer 2 identical
layer 2 identical
layer 3 identical
layer 4 identical
layer 5 identical
layer 5 identical
layer 5 identical
layer 5 identical
layer 6 identical
layer 7 identical
layer 8 identical
layer 8 identical
layer 8 identical
layer 8 identical
layer 9 identical
layer 10 identical
layer 10 identical
layer 10 identical
layer 10 identical
layer 11 identical
layer 12 identical
layer 13 identical
layer 13 identical
layer 13 identical
layer 13 identical
layer 14 identical
layer 15 identical
layer 16 identical
layer 16 identical
layer 16 identical
layer 16 identical
layer 17 identical
layer 18 identical
layer 18 identical
layer 18 identical
layer 18 identical
layer 19 identical
layer 20 identical
layer 21 identical
layer 21 identical
layer 21 identical
layer 21 identical
layer 22 identical
layer 23 identical
layer 24 identical
layer 24 identical
layer 24 identical
layer 24 identical
layer 25 identical
layer 26 identical
layer 27 identical
layer 27 identical
layer 27 identical
layer 27 identical
layer 28 identical
layer 29 identical
layer 30 identical
layer 30 identical
layer 30 identical
layer 30 identical
layer 31 identical
layer 32 identical
layer 33 identical
layer 33 identical
layer 33 identical
layer 33 identical
layer 34 identical
layer 35 identical
layer 35 identical
layer 35 identical
layer 35 identical
layer 36 identical
layer 37 identical
layer 38 identical
layer 38 identical
layer 38 identical
layer 38 identical
layer 39 identical
layer 40 identical
layer 41 identical
layer 41 identical
layer 41 identical
layer 41 identical
layer 42 identical
layer 43 identical
layer 44 identical
layer 44 identical
layer 44 identical
layer 44 identical
layer 45 identical
layer 46 identical
layer 47 identical
layer 47 identical
layer 47 identical
layer 47 identical
layer 48 identical
layer 49 identical
layer 50 identical
layer 50 identical
layer 50 identical
layer 50 identical
layer 51 identical
</code></pre>
<p>However, when I used these two models on the same machine and in the same env to predict the same input data, the outputs were completely different.</p>
<pre><code>m_s.predict(data)
</code></pre>
<p>output</p>
<pre><code>array([[[[-2.2014694e+00, -7.4636793e+00, -3.7543521e+00, ...,
4.2393379e+00, 7.2923303e+00, -7.9203067e+00],
[-6.8980045e+00, -6.7517347e+00, 5.9752476e-01, ...,
2.2391853e+00, -2.0161586e+00, -7.5054851e+00],
[-4.4470978e+00, -4.2420959e+00, -3.9374633e+00, ...,
5.9843721e+00, 5.4481273e+00, -2.7136576e+00],
...,
[-8.2077494e+00, -5.5874801e+00, 2.2708473e+00, ...,
-2.5585687e-01, 4.0198727e+00, -4.5880938e+00],
[-7.5793233e+00, -6.3811040e+00, 3.7389126e+00, ...,
1.7169635e+00, -3.4249902e-01, -7.1873198e+00],
[-8.2512989e+00, -4.2883468e+00, -2.7908459e+00, ...,
3.9796615e+00, 4.7512245e-01, -4.5338011e+00]],
[[-5.2522459e+00, -5.2272692e+00, -3.7313356e+00, ...,
1.0820831e+00, -1.9317195e+00, -8.3177958e+00],
[-5.8229809e+00, -6.8049965e+00, -1.4538713e+00, ...,
4.0576010e+00, -1.9025326e-02, -8.2517090e+00],
[-6.1541910e+00, -2.6757658e-01, -5.4412403e+00, ...,
1.7984511e+00, 2.9016986e+00, 7.6427579e-01],
...,
[-1.1129386e+00, 7.9319181e+00, 7.7404571e-01, ...,
-1.7145084e+01, 1.5210888e+01, 1.3812095e+01],
[ 3.5752565e-01, 1.4212518e+00, -6.1826277e-01, ...,
-3.4348285e+00, 5.1942883e+00, 2.1960042e+00],
[-6.3907943e+00, -5.3237562e+00, -3.1632636e+00, ...,
2.1118989e+00, -3.8516359e+00, -6.2463970e+00]],
[[-7.2064867e+00, -3.6420932e+00, -1.6844990e+00, ...,
6.4910537e-01, -4.4807429e+00, -7.8619242e+00],
[-6.4934230e+00, -4.5477719e+00, 9.2149705e-01, ...,
4.2846882e-01, -7.4903011e-01, -9.8737726e+00],
[-7.2704558e+00, 9.5214283e-01, -2.0818310e+00, ...,
-1.6958854e-01, 1.6371614e+00, -2.7756066e+00],
...,
[-7.1980424e+00, -7.2074276e-01, 2.3514495e+00, ...,
-9.7255888e+00, 2.1547556e-01, 4.3379207e+00],
[-6.7656651e+00, 6.3100419e+00, -7.8286257e+00, ...,
-5.1035576e+00, -1.3960669e+00, 2.3991609e+00],
[-7.0669832e+00, -1.2582588e-01, -5.3176193e+00, ...,
3.4836166e+00, -2.4024684e+00, -6.0632706e+00]],
...,
[[-7.3400059e+00, -3.1168675e+00, -1.9545169e+00, ...,
1.0936095e+00, -1.5736668e+00, -9.5641651e+00],
[-2.9115820e+00, -4.7334772e-01, 2.6805878e-01, ...,
8.3148491e-01, -1.2751791e+00, -5.5142212e+00],
[ 1.2365078e+00, 1.0945862e+01, -4.9259267e+00, ...,
1.9169430e+00, 5.1151342e+00, 4.9710069e+00],
...,
[-2.2321188e+00, 8.8735223e-02, -7.6890874e+00, ...,
-3.1269640e-01, 7.3404179e+00, -7.2507386e+00],
[-2.2741010e+00, -6.5992510e-01, 4.0761769e-01, ...,
1.8645943e+00, 4.0359187e+00, -7.7996893e+00],
[ 5.5672646e-02, -1.4715804e+00, -1.9753509e+00, ...,
2.5039923e+00, -1.0506821e-01, -6.5183282e+00]],
[[-8.3111782e+00, -4.6992331e+00, -3.1351955e+00, ...,
1.8569698e+00, -1.1717710e+00, -8.5070782e+00],
[-4.7671299e+00, -2.5072317e+00, 2.9760203e+00, ...,
2.9142296e+00, 3.2271760e+00, -4.7557964e+00],
[ 5.5070686e-01, 5.3218126e-02, -2.1629403e+00, ...,
8.8359457e-01, 3.1481497e+00, -2.1769693e+00],
...,
[-3.7305963e+00, -1.2512873e+00, 2.0231385e+00, ...,
4.4094267e+00, 3.0268743e+00, -9.6763916e+00],
[-5.4271636e+00, -4.6796727e+00, 5.7922940e+00, ...,
3.6725988e+00, 5.2563481e+00, -8.1707211e+00],
[-1.2138665e-02, -3.6983132e+00, -6.4367266e+00, ...,
6.8217549e+00, 5.7782011e+00, -5.4132147e+00]],
[[-5.0323372e+00, -3.3903065e+00, -2.7963824e+00, ...,
3.9016938e+00, 1.4906535e+00, -2.1907964e+00],
[-7.7795396e+00, -5.7441168e+00, 3.4615259e+00, ...,
1.4764800e+00, -2.9045539e+00, -4.4136987e+00],
[-7.2599754e+00, -3.4636111e+00, 4.3936129e+00, ...,
1.9856967e+00, -1.0856767e+00, -5.7980385e+00],
...,
[-6.1726952e+00, -3.9608026e+00, 5.5742388e+00, ...,
4.9396091e+00, -2.8744078e+00, -8.3122082e+00],
[-1.3442982e+00, -5.5807371e+00, 4.7524319e+00, ...,
5.0170369e+00, 2.9530718e+00, -7.1846304e+00],
[-1.7616816e+00, -6.7234058e+00, -8.3512306e+00, ...,
4.1365266e+00, -2.8818092e+00, -2.9208889e+00]]]],
dtype=float32)
</code></pre>
<p>while</p>
<pre><code>m_m.predict(data)
</code></pre>
<p>output</p>
<pre><code>array([[[[ -7.836284 , -2.3029385 , -3.6463926 , ..., -1.104739 ,
12.992413 , -6.7326055 ],
[-11.714638 , -2.161682 , -2.0715065 , ..., -0.0467519 ,
6.557784 , -2.7576606 ],
[ -8.029486 , -4.068902 , -4.6803293 , ..., 7.022674 ,
7.741771 , -1.874607 ],
...,
[-11.229774 , -5.3050747 , 2.807798 , ..., 1.1340691 ,
4.3236184 , -5.2162905 ],
[-11.458603 , -6.2387724 , 0.25091058, ..., 1.0305461 ,
5.9631624 , -6.284294 ],
[ -8.663513 , -1.8256164 , -3.0079443 , ..., 5.9437366 ,
7.0928698 , -1.0781381 ]],
[[ -4.362539 , -2.8450599 , -3.1030283 , ..., -1.5129573 ,
2.2504683 , -8.414198 ],
[ -6.308961 , -4.99597 , -3.8596241 , ..., 4.2793174 ,
2.7787375 , -5.9963284 ],
[ -4.8252788 , -1.5710263 , -6.083002 , ..., 4.856139 ,
2.9387665 , 0.29977918],
...,
[ -0.8481703 , 5.348722 , 2.3885899 , ..., -19.35567 ,
13.1428795 , 12.364189 ],
[ -1.8864173 , -3.7014763 , -2.5292692 , ..., -3.6618025 ,
4.3906307 , 0.03934002],
[ -6.0526505 , -5.504422 , -3.8778243 , ..., 4.3741727 ,
1.0135782 , -5.1025114 ]],
[[ -6.7328253 , -1.5671132 , 0.16782492, ..., -2.5069456 ,
1.4343324 , -8.59162 ],
[ -7.5468965 , -5.6893063 , 0.13871288, ..., 0.22174302,
1.1608338 , -8.77916 ],
[ -5.940791 , 1.1769392 , -4.5080614 , ..., 3.5371704 ,
2.4181929 , -2.7893126 ],
...,
[ -9.490874 , -2.3575358 , 2.5908213 , ..., -18.813345 ,
-3.4546187 , 4.8375816 ],
[ -5.1123285 , 3.3766522 , -10.71935 , ..., -5.8476105 ,
-3.5569503 , 0.6331433 ],
[ -6.2075157 , 0.4942119 , -7.044799 , ..., 5.191918 ,
2.7723277 , -4.5243273 ]],
...,
[[ -7.06453 , -1.3950944 , -0.37429178, ..., -0.11883163,
0.22527158, -9.231563 ],
[ -4.0204725 , -3.6592636 , 0.15709507, ..., 1.7647433 ,
4.6479545 , -3.8798246 ],
[ 0.75817275, 9.890637 , -7.069035 , ..., 2.995041 ,
6.8453026 , 6.028713 ],
...,
[ -1.5892754 , 2.119719 , -10.078391 , ..., -2.546938 ,
6.5255003 , -6.749384 ],
[ -3.2769198 , -0.46709523, -2.1529863 , ..., 1.8028917 ,
7.2509494 , -7.5441256 ],
[ -1.2531447 , 0.96327865, -1.0863694 , ..., 2.423694 ,
-1.1047542 , -6.4944725 ]],
[[-10.218704 , -2.5448627 , -0.6002845 , ..., 0.80485874,
2.7691112 , -7.374723 ],
[ -8.354421 , -5.461962 , 5.2284613 , ..., 0.5315646 ,
5.701563 , -4.0477304 ],
[ -2.7866952 , -5.8492465 , -1.5627437 , ..., 1.9490132 ,
4.0491743 , -2.7550128 ],
...,
[ -4.5389686 , -3.2624135 , 0.7429285 , ..., 2.5953412 ,
3.8780956 , -8.652936 ],
[ -5.704813 , -3.730238 , 4.87866 , ..., 2.6826556 ,
4.8833456 , -6.8225956 ],
[ -0.16680491, -0.4325713 , -4.7689047 , ..., 8.588567 ,
6.786765 , -4.7118473 ]],
[[ -1.4958351 , 2.151188 , -4.1733856 , ..., -1.891511 ,
12.969635 , -2.5913832 ],
[ -7.6865544 , 0.5423928 , 6.2699823 , ..., -2.4558625 ,
6.1929445 , -2.7875526 ],
[ -6.995783 , 2.609788 , 5.6196365 , ..., -0.6639404 ,
5.7171726 , -3.7962272 ],
...,
[ -3.6628227 , -1.3322173 , 4.7582774 , ..., 2.122392 ,
3.1294663 , -8.338194 ],
[ -3.0116327 , -1.322252 , 4.802135 , ..., 1.9731755 ,
8.750839 , -6.989321 ],
[ 2.3386476 , -2.4584374 , -5.9336634 , ..., 0.48920852,
3.540884 , -2.9136944 ]]]], dtype=float32)
</code></pre>
<p>It is obviously not because of floating-point rounding, since the outputs are quite different. I don't understand why. Please help.</p>
|
<p>I found out the reason by extracting the outputs layer by layer. There are BatchNormalization layers in the model, and their weights changed although I set them as not trainable.</p>
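<p>For anyone who wants to reproduce the layer-by-layer check, a rough sketch (variable names follow the question) is to build intermediate models and compare their outputs:</p>
<pre><code>from tensorflow import keras
import numpy as np

for i in range(len(m_s.layers)):
    sub_s = keras.Model(m_s.input, m_s.layers[i].output)
    sub_m = keras.Model(m_m.input, m_m.layers[i].output)
    if not np.allclose(sub_s.predict(data), sub_m.predict(data)):
        print("outputs diverge at layer %d (%s)" % (i, m_s.layers[i].name))
        break
</code></pre>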
|
tensorflow|keras|imagenet
| 0
|
3,467
| 64,581,993
|
I have an error in PyTorch, in particular with NLLLoss
|
<p>I want to apply the criterion, where
<code>criterion = nn.NLLLoss()</code>
I apply it to the output and labels</p>
<pre><code>loss = criterion(output.view(-1,1), labels.long())
</code></pre>
<p>where:</p>
<p>*the shape of the labels</p>
<pre><code>labels
tensor([ 1, 4, 1, 1, 4, 1, 2, 3, 2, 4, 2, 3, 3, 4,
0, 4])
output
tensor([ 0.1829, 0.1959, 0.1909, 0.1895, 0.1914, 0.1883, 0.1895,
0.1884, 0.1865, 0.1931, 0.1883, 0.1917, 0.1942, 0.1937,
0.1897, 0.1934])
</code></pre>
<p>the shape of the output
<code>torch.Size([16])</code></p>
<p>On the following line:</p>
<p><code>loss = criterion(output.view(-1,1), labels.long())</code></p>
<p>I get this error:</p>
<blockquote>
<p>RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /opt/conda/conda-bld/pytorch_1524584710464/work/aten/src/THNN/generic/ClassNLLCriterion.c:97</p>
</blockquote>
<p>Any ideas?</p>
|
<p>Your label and output shapes must be <code>[batch_size]</code> and <code>[batch_size, n_classes]</code> respectively.</p>
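<p>A minimal sketch (assuming 5 classes, which the label values 0-4 suggest): the network should output log-probabilities of shape <code>[batch_size, n_classes]</code>, e.g. via <code>log_softmax</code>, while the labels stay a 1-D tensor of class indices.</p>
<pre><code>import torch
import torch.nn.functional as F

criterion = torch.nn.NLLLoss()

batch_size, n_classes = 16, 5
logits = torch.randn(batch_size, n_classes)           # raw scores from the last Linear layer
output = F.log_softmax(logits, dim=1)                 # shape [16, 5]
labels = torch.randint(0, n_classes, (batch_size,))   # shape [16], values in 0..4

loss = criterion(output, labels)                      # no .view(-1, 1) needed
</code></pre>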
|
pytorch
| 1
|
3,468
| 47,952,930
|
How can I use LSTM in pytorch for classification?
|
<p>My code is as below:</p>
<pre><code>class Mymodel(nn.Module):
def __init__(self, input_size, hidden_size, output_size, num_layers, batch_size):
        super(Mymodel, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.num_layers = num_layers
self.batch_size = batch_size
self.lstm = nn.LSTM(input_size, hidden_size)
self.proj = nn.Linear(hidden_size, output_size)
self.hidden = self.init_hidden()
def init_hidden(self):
return (Variable(torch.zeros(self.num_layers, self.batch_size, self.hidden_size)),
Variable(torch.zeros(self.num_layers, self.batch_size, self.hidden_size)))
def forward(self, x):
lstm_out, self.hidden = self.lstm(x, self.hidden)
output = self.proj(lstm_out)
result = F.sigmoid(output)
return result
</code></pre>
<p>I want to use an LSTM to classify a sentence as good (1) or bad (0). With this code, I get a result of shape time_step * batch_size * 1 rather than a 0 or 1. How should I edit the code to get the classification result?</p>
|
<h2>Theory:</h2>
<p>Recall that an LSTM outputs a vector for every input in the series. You are using sentences, which are a series of words (probably converted to indices and then embedded as vectors). This code from the LSTM <a href="http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html#lstm-s-in-pytorch" rel="noreferrer">PyTorch tutorial</a> makes clear exactly what I mean (***emphasis mine):</p>
<pre><code>lstm = nn.LSTM(3, 3) # Input dim is 3, output dim is 3
inputs = [autograd.Variable(torch.randn((1, 3)))
for _ in range(5)] # make a sequence of length 5
# initialize the hidden state.
hidden = (autograd.Variable(torch.randn(1, 1, 3)),
autograd.Variable(torch.randn((1, 1, 3))))
for i in inputs:
# Step through the sequence one element at a time.
# after each step, hidden contains the hidden state.
out, hidden = lstm(i.view(1, 1, -1), hidden)
# alternatively, we can do the entire sequence all at once.
# the first value returned by LSTM is all of the hidden states throughout
# the sequence. the second is just the most recent hidden state
# *** (compare the last slice of "out" with "hidden" below, they are the same)
# The reason for this is that:
# "out" will give you access to all hidden states in the sequence
# "hidden" will allow you to continue the sequence and backpropagate,
# by passing it as an argument to the lstm at a later time
# Add the extra 2nd dimension
inputs = torch.cat(inputs).view(len(inputs), 1, -1)
hidden = (autograd.Variable(torch.randn(1, 1, 3)), autograd.Variable(
torch.randn((1, 1, 3)))) # clean out hidden state
out, hidden = lstm(inputs, hidden)
print(out)
print(hidden)
</code></pre>
<p>One more time: <strong>compare the last slice of "out" with "hidden" below, they are the same</strong>. <em>Why? Well...</em></p>
<p>If you're familiar with LSTM's, I'd recommend the PyTorch LSTM <a href="http://pytorch.org/docs/master/nn.html#torch.nn.LSTM" rel="noreferrer">docs</a> at this point. Under the output section, notice <em>h_t</em> is output at every <em>t</em>.</p>
<p>Now if you aren't used to LSTM-style equations, take a look at Chris Olah's <a href="http://colah.github.io/posts/2015-08-Understanding-LSTMs/" rel="noreferrer">LSTM blog post</a>. Scroll down to the diagram of the unrolled network:</p>
<p><a href="https://i.stack.imgur.com/tFk1Z.png" rel="noreferrer"><img src="https://i.stack.imgur.com/tFk1Z.png" alt="Credit C Olah, "Understanding LSTM Networks""></a></p>
<p>As you feed your sentence in word-by-word (<code>x_i</code>-by-<code>x_i+1</code>), you get an output from each timestep. You want to interpret the entire sentence to classify it. So you must wait until the LSTM has seen all the words. That is, you need to take <code>h_t</code> where <code>t</code> is the number of words in your sentence.</p>
<h2>Code:</h2>
<p>Here's a coding <a href="https://github.com/yuchenlin/lstm_sentence_classifier" rel="noreferrer">reference</a>. I'm not going to copy-paste the entire thing, just the relevant parts. The magic happens at <code>self.hidden2label(lstm_out[-1])</code></p>
<pre><code>class LSTMClassifier(nn.Module):
def __init__(self, embedding_dim, hidden_dim, vocab_size, label_size, batch_size):
...
self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim)
self.hidden2label = nn.Linear(hidden_dim, label_size)
self.hidden = self.init_hidden()
def init_hidden(self):
return (autograd.Variable(torch.zeros(1, self.batch_size, self.hidden_dim)),
autograd.Variable(torch.zeros(1, self.batch_size, self.hidden_dim)))
def forward(self, sentence):
embeds = self.word_embeddings(sentence)
x = embeds.view(len(sentence), self.batch_size , -1)
lstm_out, self.hidden = self.lstm(x, self.hidden)
y = self.hidden2label(lstm_out[-1])
log_probs = F.log_softmax(y)
return log_probs
</code></pre>
|
pytorch
| 8
|
3,469
| 49,161,202
|
Pandas: aggregate column based on values in a different column
|
<p>Lets say I start with a dataframe that looks like this:</p>
<pre><code> Group Val date
0 home first 2017-12-01
1 home second 2017-12-02
2 away first 2018-03-07
3 away second 2018-03-01
</code></pre>
<p>Data types are [string, string, datetime]. I would like to get a dataframe that for each group, shows me the value that was entered most recently:</p>
<pre><code> Group Most recent Val Most recent date
0 home second 12-02-2017
1 away first 03-07-2018
</code></pre>
<p>(Data types are [string, string, datetime])</p>
<p>My initial thought is that I should be able to do something like this by grouping by 'group' and then aggregating the dates and vals. I know I can get the most recent datetime using the 'max' agg function, but I'm stuck on what function to use to get the corresponding val:</p>
<pre><code>df.groupby('Group').agg({'val':lambda x: ____????____
'date':'max'})
</code></pre>
<p>Thanks,</p>
|
<p>First select the indices of the rows whose date is the maximum within each group:</p>
<pre><code>max_indices = df.groupby(['Group'])['date'].idxmax()
</code></pre>
<p>and then select the corresponding rows in the original dataframe, optionally keeping only the value you are interested in (<code>idxmax</code> returns index labels, so use <code>loc</code>):</p>
<pre><code>df.loc[max_indices, 'Val']
</code></pre>
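<p>A small sketch with the sample data from the question (column names as shown there):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    'Group': ['home', 'home', 'away', 'away'],
    'Val':   ['first', 'second', 'first', 'second'],
    'date':  pd.to_datetime(['2017-12-01', '2017-12-02', '2018-03-07', '2018-03-01']),
})

max_indices = df.groupby('Group')['date'].idxmax()
df.loc[max_indices, ['Group', 'Val', 'date']]
#   Group     Val       date
#    away   first 2018-03-07
#    home  second 2017-12-02
</code></pre>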
|
python|python-3.x|pandas|pandas-groupby
| 1
|
3,470
| 58,979,090
|
What could be driving a SyntaxError when trying to combine two columns to create dataframe names?
|
<p>I am aiming to create multiple dataframes, named using two columns from my source dataframe, one for each col1/col2 combination.</p>
<p>For instance, if <code>period</code> and <code>dps</code> are columns in the source dataframe I want to create dataframes for each <code>period-dps</code> combination like so:</p>
<pre><code>period = ['a','b','c']
dps = ['x','y','z']
for d in dps:
for p in period:
exec('{}{} = pd.DataFrame()'.format(p,d))
</code></pre>
<p>This code works fine as tested, but when I incorporate my actual data I get a <em>SyntaxError: invalid syntax</em> error. </p>
<p>My question is what could be driving this error? Is there a possible issue with my original data I should review and clean first? </p>
<p>Thank you</p>
|
<p>Don't use <code>exec</code>. Create a <code>dict</code> to store your dataframes.</p>
<pre><code>period = ['a','b','c']
dps = ['x','y','z']
frames = {}
for d in dps:
for p in period:
frames[f'{p}{d}'] = pd.DataFrame()
</code></pre>
<p>You might also consider nested dicts.</p>
<pre><code>from collections import defaultdict
frames = defaultdict(dict)
for d in dps:
for p in period:
frames[p][d] = pd.DataFrame()
</code></pre>
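<p>Accessing the frames then looks like this (the keys are just the example period/dps values):</p>
<pre><code>frames['a']['x']   # nested-dict version
# or, with the flat dict: frames['ax']
</code></pre>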
|
python|pandas
| 1
|
3,471
| 58,953,816
|
Pandas merging with condition on columns
|
<p>Greetings all,</p>
<p>Does anybody know how to join two dataframes with a condition, using <strong>pandas</strong> only (no other libraries)?</p>
<p>Something like <code>df1 inner join df2 where df1.t < df2.t</code></p>
|
<p>Do it in SQL; sqlite3 comes with Python's standard library.</p>
<pre><code>import sqlite3
import pandas as pd
# define your dataframes here
df1 = ...
df2 = ...
# load the dataframes to memory
sql_ptr = sqlite3.connect(':memory:')
df1.to_sql('df1', sql_ptr)
df2.to_sql('df2', sql_ptr)
# execute the query
df3 = pd.read_sql_query("select * from df1 inner join df2 on <insert columns to join on> where df1.ts < df2.ts", sql_ptr)
</code></pre>
<p><strong>And please keep in mind</strong> that this query will do two different steps: </p>
<ol>
<li>An inner join ==> an intersection on the specified columns</li>
<li>filter the result table on the given condition (! given on other columns !)</li>
</ol>
<p>An inner join in relational algebra is an intersection between two sets ==> There is no such thing as inner join on condition between columns (other than the implied equality condition)</p>
<p>e.g. this query <code>"select * from df1 inner join df2 on df1.ts = df2.ts where df1.ts < df2.ts"</code> will yield an empty view, because the inner join will find an empty intersection between the tables df1 and df2.</p>
|
python|pandas|merge
| 1
|
3,472
| 58,948,939
|
How can i create a new column that inserts the cell value of grouped column 'ID' (in time) when 'interaction' is 1
|
<p>I have three relevant columns: time, id, and interaction.
How can I create a new column with the id values that have a '1' in column 'interaction' in the given time window?</p>
<p>Should look something like this:</p>
<pre><code>time id vec_len quadrant interaction Paired with
1 3271 0.9 7 0
1 3229 0.1 0 0
1 4228 0.5 0 0
1 2778 -0.3 5 0
2 4228 0.2 0 0
2 3271 0.1 6 0
2 3229 -0.7 5 1 [2778, 4228]
2 3229 -0.3 2 0
2 4228 -0.8 5 1 [2778, 3229]
2 2778 -0.6 5 1 [4228, 3229]
3 4228 0.2 0 0
3 3271 0.1 6 0
3 4228 -0.7 5 1 [3271]
3 3229 -0.3 2 0
3 3271 -0.8 5 1 [4228]
</code></pre>
<p>Thank you for helping!!</p>
|
<pre class="lang-py prettyprint-override"><code>import numpy as np
# initialize dict for all time blocks
dict_time_ids = dict.fromkeys(df.time.unique(), set())
# populate dictionary with ids for each time block where interaction == 1
dict_time_ids.update(df.query('interaction == 1').groupby('time').id.apply(set).to_dict())
# make new column with set of corresponding ids where interaction == 1
df['paired'] = np.where(df.interaction == 1, df.time.apply(lambda x: dict_time_ids[x]), set())
# remove the id from the set and convert to list
df.paired = df.apply(lambda x: list(x.paired - {x.id}), axis=1)
# Out:
time id interaction paired
0 1 3271 0 []
1 1 3229 0 []
2 1 4228 0 []
3 1 2778 0 []
4 2 4228 0 []
5 2 3271 0 []
6 2 3229 1 [2778, 4228]
7 2 3229 0 []
8 2 4228 1 [2778, 3229]
9 2 2778 1 [4228, 3229]
10 3 4228 0 []
11 3 3271 0 []
12 3 4228 1 [3271]
13 3 3229 0 []
14 3 3271 1 [4228]
</code></pre>
|
python|pandas|dataframe|if-statement|iteration
| 1
|
3,473
| 70,137,677
|
Plotting Stackbar chart in Seaborn for showing clustering
|
<p>Here is my data where I have Wines%, Fruits%, etc which sums up to 1 and is based on the Total_Spent column. There's also a cluster columns that you can see:</p>
<p><a href="https://i.stack.imgur.com/E6Atz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E6Atz.png" alt="enter image description here" /></a></p>
<p>Now, I want to show a stacked bar chart where on the x axis I'll have the clusters and the vertical stacked bar will be all the wines%, meat%, etc for every cluster. Using this chart, I'll be able to observe which cluster is spending what percent of their money on which product. I'm trying to use <strong>seaborn</strong> for this. Can anyone help me in figuring out a way to plot this stacked bar plot?</p>
<p><strong>Update</strong></p>
<p>So I have written this code to get the data in the correct format:</p>
<pre><code>df_test = df[['Wines%', 'Fruits%', 'Meat%', 'Fish%', 'Sweets%','Gold%', 'Clusters']]
df_unpivoted = df_test.melt(id_vars=['Clusters'], var_name='Category', value_name='Spend%')
df_unpivoted.head()
df_new = pd.pivot_table(df_unpivoted, index=['Clusters','Category'])
</code></pre>
<p>And the dataframe looks like this:</p>
<p><a href="https://i.stack.imgur.com/a62AJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/a62AJ.png" alt="enter image description here" /></a></p>
<p>How can I achieve the same result with this dataframe now?</p>
|
<p>Ok, I achieved the solution with this:</p>
<pre><code>df_test = df[['Wines%', 'Fruits%', 'Meat%', 'Fish%', 'Sweets%','Gold%', 'Clusters']]
df_unpivoted = df_test.melt(id_vars=['Clusters'], var_name='Category', value_name='Spend%')
df_unpivoted.head()
df_new = pd.pivot_table(df_unpivoted, index=['Clusters','Category'])
df_new = df_new.reset_index(level=[0,1])
sns.barplot(x='Clusters',y='Spend%', hue='Category', data=df_new)
</code></pre>
<p>I had to change the multi-index column to a single index column with the reset_index code and then just plot it using barplot.</p>
|
python|pandas|matplotlib|seaborn
| -1
|
3,474
| 55,699,801
|
Can I do data augmentation with gaussian blur on my tf.Dataset using my GPU?
|
<p>I would like to change my old queue-based pipeline to the new dataset API in TensorFlow for performance reasons. However, after the change, my code runs in 8 hours instead of 2.</p>
<p>My GPU utilization used to be about 30-40% and is now between 0 and 6%.</p>
<p>I found the line that is making it so slow; it's where I apply a Gaussian blur to my dataset:</p>
<pre><code>def gaussian_blur(imgs,lbls):
imgs = tf.nn.conv2d(imgs,k_conv,
strides=[1, 1, 1, 1],
padding='SAME',
data_format='NHWC'
)
return imgs, lbls
ds = ds.map(gaussian_blur)
</code></pre>
<p>With my old queue-based pipeline, this line barely slowed my program down.</p>
<p>I think it's because this line used to run on the GPU but the new dataset API forces it to run on the CPU which is way slower and already 100% used.</p>
<p>Do you have any idea how I can apply the Gaussian blur without degrading performance so much? Should I keep my old queue-based pipeline?</p>
|
<p>Although I have not tried this on a tf dataset, this should be applicable. I found this combination to be highly performant and simplistic:</p>
<pre><code>import tensorflow as tf
import tensorflow_addons as tfa
dummy_dataset = tf.ones((1000, 224, 224, 3))
blurred_dummy_dataset = tf.map_fn(tfa.image.gaussian_filter2d, dummy_dataset)
# ~ 3 second runtime in Google Colab on a Tesla K80
</code></pre>
<p><a href="https://www.tensorflow.org/addons/api_docs/python/tfa/image/gaussian_filter2d" rel="nofollow noreferrer">https://www.tensorflow.org/addons/api_docs/python/tfa/image/gaussian_filter2d</a>
<a href="https://www.tensorflow.org/api_docs/python/tf/map_fn" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/map_fn</a></p>
|
tensorflow|tensorflow-datasets|gaussianblur
| 0
|
3,475
| 55,975,063
|
Is there a quick way to strip out a specific character from all the rows in one column in a pandas DataFrame?
|
<p>I am trying to strip out the date from a column and make it a new column. I wrote a function to do it, but I'm not sure how to apply it to the pandas framework. </p>
<p>Here's the original df:</p>
<pre><code>ID var1 var2
abc_20190503_xyz 100 10
fds_20190503_fnk 234 32
ree_20190503_fds 555 23
</code></pre>
<p>I wrote the following function: </p>
<pre><code>def strip_date(pid,file_date):
pid=list(pid)
pid.remove(file_date)
return ''.join(pid)
file_date='20190503'
org_df['NewID']=strip_date(org_df['ID'],file_date)
org_df
</code></pre>
<p>Issues:</p>
<ol>
<li>This is giving me the error message: list.remove(x): x not in list</li>
<li>It seems that my current def only removes one file_date in the string; if there are multiple, I have to restrip. e.g. if the id is 'abc_20190503_xyz_20190503', it only strips out the first one. Is there a better solution?</li>
</ol>
<p>The desired output:</p>
<pre><code>New ID ID var1 var2
abc__xyz abc_20190503_xyz 100 10
fds__fnk fds_20190503_fnk 234 32
ree__fds ree_20190503_fds 555 23
</code></pre>
<p>Also, I'd like to use New ID as the index.</p>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.replace.html" rel="nofollow noreferrer"><code>Series.str.replace</code></a> with <code>regex</code> for this to extract all numbers from your ID column.</p>
<pre><code>df['New_ID'] = df['ID'].str.replace('([0-9]+)', '')
</code></pre>
<hr>
<pre><code>print(df)
ID var1 var2 New_ID
0 abc_20190503_xyz 100 10 abc__xyz
1 fds_20190503_fnk 234 32 fds__fnk
2 ree_20190503_fds 555 23 ree__fds
</code></pre>
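<p>To use the new column as the index, as requested, you can then add:</p>
<pre><code>df = df.set_index('New_ID')
</code></pre>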
|
python|pandas
| 3
|
3,476
| 55,938,627
|
Traversing groups of group by object pandas
|
<p>I need help with some big pandas issue.</p>
<p>As a lot of people asked for the real input and the real desired output in order to answer the question, here it goes. I have the following dataframe:</p>
<pre><code>Date        user  cumulative_num_exercises  total_exercises  %_exercises  %_exercises_accum
2017-01-01     1                          2                7        28,57              28,57
2017-01-01     2                          1                7        14.28              42,85
2017-01-01     4                          3                7        42,85               85,7
2017-01-01    10                          1                7        14,28                100
2017-02-02     1                          2               14        14,28              14,28
2017-02-02     2                          3               14        21,42               35,7
2017-02-02     4                          4               14        28,57              64,27
2017-02-02    10                          5               14        35,71                100
2017-03-03     1                          3               17        17,64              17,64
2017-03-03     2                          3               17        17,64              35,28
2017-03-03     4                          5               17        29,41              64,69
2017-03-03    10                          6               17        35,29                100
</code></pre>
<p>- The column %_exercises is (cumulative_num_exercises / total_exercises) * 100.
- The column %_exercises_accum is the cumulative sum of %_exercises <strong>for each month</strong>. (Note that at the end of each month it reaches the value 100.)</p>
<p>I need to calculate, with this data, the % of users that contributed to 50%, 80% and 90% of the total exercises during each month.</p>
<p>In order to do so, I thought of creating a new column, called category, which will later be used to count how many users contributed to each of the 3 percentages (50%, 80% and 90%). The category column takes the following values:</p>
<ul>
<li><p>0 if the user did a %_exercises_accum = 0.</p></li>
<li><p>1 if the user did a %_exercises_accum < 50 and > 0.</p></li>
<li><p>50 if the user did a %_exercises_accum = 50.</p></li>
<li><p>80 if the user did a %_exercises_accum = 80.</p></li>
<li><p>90 if the user did a %_exercises_accum = 90.</p></li>
</ul>
<p>And so on, because there are many cases in order to determine who contributes to which percentage of the total number of exercises on each month.</p>
<p>I have already determined all the cases and all the values that must be taken.</p>
<p>Basically, I traverse the dataframe using a <strong>for loop</strong>, and with <strong>two main ifs</strong>:</p>
<p>if (df.iloc[i][date] == df.iloc[i][date].shift()):</p>
<p>calculations to determine the percentage or percentages to which the user from the second to the last row of the same month group contributes
(because the same user can contribute to all the percentages, or to more than one)</p>
<p>else:</p>
<p>calculations to determine to which percentage of exercises the first
member of each
month group contributes.</p>
<p>The calculations involve:</p>
<ol>
<li><p>Looking at the value of the category column in the previous row using shift().</p></li>
<li><p>Doing while loops inside the for, because when a user suddenly reaches a big percentage, we need to go back for the users in the same month, and change their category_column value to 50, as they have contributed to the 50%, but didn't reach it. for instance, in this situation:</p>
<p>Date %_exercises_accum
2017-01-01 1,24
2017-01-01 3,53
2017-01-01 20,25
2017-01-01 55,5</p></li>
</ol>
<p>The desired output for the given dataframe at the beginning of the question would include the same columns as before (date, user, cumulative_num_exercises, total_exercises, %_exercises and %_exercises_accum) plus the category column, which is the following:</p>
<pre><code>category
50
50
508090
90
50
50
5080
8090
50
50
5080
8090
</code></pre>
<p>Note that the rows with the values 508090 or 8090 mean that that user is contributing to:</p>
<ol>
<li><p>508090: both 50%, 80% and 90% of total exercises in a month.</p></li>
<li><p>8090: both 80% and 90% of exercises in a month.</p></li>
</ol>
<p>Does anyone know how can I simplify this for loop by traversing the groups of a group by object?</p>
<p>Thank you very much!</p>
|
<p>Given no sense of what calculations you wish to accomplish, this is my best guess at what you're looking for. However, I'd re-iterate <a href="https://stackoverflow.com/questions/55938627/traversing-groups-of-group-by-object-pandas#comment98530627_55938627">Datanovice's point</a> that the best way to get answers is to provide a sample output.</p>
<p>You can slice to each unique date using the following code:</p>
<pre><code>dates = ['2017-01-01', '2017-01-01','2017-01-01','2017-01-01','2017-02-02','2017-02-02','2017-02-02','2017-02-02','2017-03-03','2017-03-03','2017-03-03','2017-03-03']
df = pd.DataFrame(
{'date':pd.to_datetime(dates),
'user': [1,2,4,10,1,2,4,10,1,2,4,10],
'cumulative_num_exercises':[2,1,3,1,2,3,4,5,3,3,5,6],
'total_exercises':[7,7,7,7,14,14,14,14,17,17,17,17]}
)
df = df.set_index('date')
for idx in df.index.unique():
hold = df.loc[idx]
### YOUR CODE GOES HERE ###
</code></pre>
|
python|pandas|pandas-groupby
| 0
|
3,477
| 55,716,916
|
Efficient enumeration of non-negative integer composition
|
<p>I would like to write a function <code>my_func(n,l)</code> that, for some positive integer <code>n</code>, efficiently enumerates the ordered non-negative integer composition* of length <code>l</code> (where <code>l</code> is greater than <code>n</code>). For example, I want <code>my_func(2,3)</code> to return <code>[[0,0,2],[0,2,0],[2,0,0],[1,1,0],[1,0,1],[0,1,1]]</code>.</p>
<p>My initial idea was to use existing code for positive integer partitions (e.g. <code>accel_asc()</code> from <a href="https://stackoverflow.com/a/44209393">this post</a>), extend the positive integer partitions by a couple zeros and return all permutations.</p>
<pre><code>def my_func(n, l):
for ip in accel_asc(n):
nic = numpy.zeros(l, dtype=int)
nic[:len(ip)] = ip
for p in itertools.permutations(nic):
yield p
</code></pre>
<p>The output of this function is wrong, because every non-negative integer composition in which a number appears twice (or multiple times) appears several times in the output of <code>my_func</code>. For example, <code>list(my_func(2,3))</code> returns <code>[(1, 1, 0), (1, 0, 1), (1, 1, 0), (1, 0, 1), (0, 1, 1), (0, 1, 1), (2, 0, 0), (2, 0, 0), (0, 2, 0), (0, 0, 2), (0, 2, 0), (0, 0, 2)]</code>.</p>
<p>I could correct this by generating a list of all non-negative integer compositions, removing repeated entries, and then returning a remaining list (instead of a generator). But this seems incredibly inefficient and will likely run into memory issues. What is a better way to fix this?</p>
<p><strong>EDIT</strong></p>
<p>I did a quick comparison of the solutions offered in answers to this post and to <a href="https://stackoverflow.com/questions/37711817/generate-all-possible-outcomes-of-k-balls-in-n-bins-sum-of-multinomial-catego">another post</a> that cglacet has pointed out in the comments. <a href="https://i.stack.imgur.com/MpNJx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MpNJx.png" alt="enter image description here" /></a></p>
<p>On the left, we have the <code>l=2*n</code> and on the right we have <code>l=n+1</code>. In these two cases, user2357112's second solutions is faster than the others, when <code>n<=5</code>. For <code>n>5</code>, solutions proposed by user2357112, Nathan Verzemnieks, and AndyP are more or less tied. But the conclusions could be different when considering other relationships between <code>l</code> and <code>n</code>.</p>
<h2>..........</h2>
<p>*I originally asked for non-negative integer <em>partitions</em>. Joseph Wood correctly pointed out that I am in fact looking for integer <em>compositions</em>, because the order of numbers in a sequence matters to me.</p>
|
<p>Use the <a href="https://en.wikipedia.org/wiki/Stars_and_bars_(combinatorics)" rel="nofollow noreferrer">stars and bars</a> concept: pick positions to place <code>l-1</code> bars between <code>n</code> stars, and count how many stars end up in each section:</p>
<pre><code>import itertools
def diff(seq):
return [seq[i+1] - seq[i] for i in range(len(seq)-1)]
def generator(n, l):
for combination in itertools.combinations_with_replacement(range(n+1), l-1):
yield [combination[0]] + diff(combination) + [n-combination[-1]]
</code></pre>
<p>I've used <code>combinations_with_replacement</code> instead of <code>combinations</code> here, so the index handling is a bit different from what you'd need with <code>combinations</code>. The code with <code>combinations</code> would more closely match a standard treatment of stars and bars.</p>
<hr>
<p>Alternatively, a different way to use <code>combinations_with_replacement</code>: start with a list of <code>l</code> zeros, pick <code>n</code> positions with replacement from <code>l</code> possible positions, and add 1 to each of the chosen positions to produce an output:</p>
<pre><code>def generator2(n, l):
for combination in itertools.combinations_with_replacement(range(l), n):
output = [0]*l
for i in combination:
output[i] += 1
yield output
</code></pre>
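<p>A quick usage check for the first <code>generator</code> above with the example from the question (n=2, l=3); each composition appears exactly once:</p>
<pre><code>print(list(generator(2, 3)))
# [[0, 0, 2], [0, 1, 1], [0, 2, 0], [1, 0, 1], [1, 1, 0], [2, 0, 0]]
</code></pre>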
|
python|python-3.x|numpy|permutation|combinatorics
| 4
|
3,478
| 65,010,583
|
If/Then Pandas Condition
|
<p>I am looking to put in a Pandas condition that assigns a new value to an existing column if a row meets the criteria. The following pseudo code explains what I mean:</p>
<p>if postal code is 33707 AND number of bedrooms equals 2
then rent = SquareFeet * 1200</p>
<p>So far the closest I have come to matching this is:</p>
<pre><code>df.loc[(df['PostalCode'] == 33707) & (df['BedroomsTotal'] == 2), 'Rent'] = df['LivingArea'] * 2
print(df['Rent'])
</code></pre>
|
<p>It's better to use <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a> for such cases:</p>
<pre><code>import numpy as np
df['Rent'] = np.where(df['PostalCode'].eq(33707) & df['BedroomsTotal'].eq(2), df['LivingArea'] * 2, df.Rent)
</code></pre>
|
python|pandas|dataframe
| 1
|
3,479
| 64,830,587
|
Regex syntax for replacing multiple strings: where have I gone wrong?
|
<p>I have a dataframe with the column 'purpose' that has a lot of string values that I want to standardize by finding a string and replacing it.</p>
<p>For instance, some very similar values are <em>car purchase, buying a second-hand car, buying my own car, cars, second-hand car purchase, car, to own a car, purchase of a car, to buy a car</em></p>
<p>I used the following to make this change:</p>
<p><em>#replace anything to do with buying a car with "Vehicle"</em></p>
<pre><code>credit_data['purpose'] = credit_data.purpose.str.replace(r'(^.*car.*$)','Vehicle')
</code></pre>
<p>and it worked great, all of those values were replaced with 'Vehicle'</p>
<p>I have a number of other similar strings in this column for other types, like education - <em>supplementary education, education, getting an education, to get a supplementary education, university education, etc.</em></p>
<p>so, I looked up regex syntax and came up with the following:</p>
<p><em>#replace anything to do with education with "Education"</em></p>
<pre><code>credit_data['purpose'] = credit_data.purpose.str.replace(r'(^.*education|university|educated.*$)','Education')
</code></pre>
<p>the results for this are similar to above - everything says education now - yay!</p>
<p>which brings me to my question - I've gone wrong somewhere in applying this to some of my other strings - for instance, I used a similar method for real estate:</p>
<p><em>#replace anything to do with real estate with real estate</em></p>
<pre><code>credit_data['purpose'] = credit_data.purpose.str.replace(r'(^.*real estate|housing|house|property.*$)','Real Estate')
</code></pre>
<p>and my results here are different - I started with values like <em>purchase my own house, building a house, purchase of a property, etc.</em> and all the method seems to have done was replace just the string i identified, instead of the entire string with just the replacement string.</p>
<p>so instead of having a bunch of entries that say "Real Estate" I have a bunch of entries that say <em>purchase my own Real Estate, building a Real Estate, purchase of a Real Estate, etc.</em></p>
<p>I'm not sure where I've gone wrong?</p>
<p>Thanks in advance.</p>
<p>edited to add requested series from the dataframe:</p>
<p><code>Df = [purchase of the house, car purchase, supplementary education, to have a wedding, housing, transactions, education, having a wedding, purchase of the house for my family, buy real estate, buy commercial real estate, buy residential real estate, construction of own property, property, building a property, buying a second-hand car, buying my own car, transactions with commercial real estate, building a real estate, housing, transactions with my real estate, cars, to become educated, second-hand car purchase, getting an education, car, wedding ceremony, to get a supplementary education, purchase of my own house, real estate transactions, getting higher education, to own a car, purchase of a car, profile education, university education, buying property for renting out, to buy a car, housing renovation, going to university]</code></p>
|
<p>You are making the regular expression too restrictive, and the alternation <code>|</code> only attaches the <code>.*</code> parts to the first and last alternatives, so for the middle ones only the matched word itself gets replaced. You can use <code>\b</code> to match a word boundary, <code>|</code> to separate the patterns, and IGNORECASE to cover case issues. So for example</p>
<pre><code>import re

credit_data.purpose.str.replace(r'\b(real estate|housing|house|property)\b',
                                'Real Estate', regex=True, flags=re.IGNORECASE)
</code></pre>
<p>If you want to replace the entire string, wrap the alternation with <code>.*</code> on both sides:</p>
<pre><code>credit_data.purpose.str.replace(r'.*(real estate|housing|house|property).*',
'Real Estate', regex=True, flags=re.IGNORECASE)
</code></pre>
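<p>A rough sketch of how the same idea could cover the other categories mentioned in the question (the keyword lists here are only examples and would need extending):</p>
<pre><code>import re

patterns = {
    'Vehicle':     r'.*\b(car|cars)\b.*',
    'Education':   r'.*\b(education|university|educated)\b.*',
    'Real Estate': r'.*\b(real estate|housing|house|property)\b.*',
}
for label, pattern in patterns.items():
    credit_data['purpose'] = credit_data['purpose'].str.replace(
        pattern, label, regex=True, flags=re.IGNORECASE)
</code></pre>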
|
python|regex|pandas
| 1
|
3,480
| 64,951,910
|
What loss function and metric should I use for multi-label classification in keras?
|
<p>My model's final activation is softmax (the output means importance in my case).
I want to pick the top 3, so I used categorical crossentropy as the loss function and accuracy as the metric.</p>
<p>For example:
prediction: [0.44, 0.03, 0.01, 0.02, 0.30, 0.20]
true: [1, 0, 0, 0, 1, 1]</p>
<p>Is it right to use them?</p>
|
<h1>What loss function and metric to use for multi-label classification?</h1>
<p>For a multi-label classification problem, use <code>sigmoid</code> (<em>not softmax</em>).</p>
<p>For a loss function use <code>tf.keras.losses.binary_crossentropy</code></p>
<p>For example, lets say you have pictures as <em>X</em> and <em>Y</em> is 5 boolean values if the picture has one of the following items: a house, a person, a balloon, a bicycle, a dog. If a picture can have a house and a dog, then this is indeed a multi-label classification and the appropriate output would be <em>sigmoid</em>.</p>
<p>For your accuracy, simply use 'accuracy' like so</p>
<pre><code>model.compile(loss=tf.keras.losses.binary_crossentropy,
optimizer='sgd', # any optimizer you like
metrics=['accuracy'] # <-- like so
)
</code></pre>
<p>I'm not sure what you are trying to solve with top 3, but that will probably not help with a loss function or accuracy. If you are trying to show the top three predicted labels, you can do that post prediction with something like <code>argmax</code> from numpy.</p>
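<p>For instance, a small post-prediction sketch (assuming one sigmoid output per label, as recommended above):</p>
<pre><code>import numpy as np

pred = np.array([0.44, 0.03, 0.01, 0.02, 0.30, 0.20])  # per-label probabilities
top3 = np.argsort(pred)[-3:][::-1]                     # indices of the 3 highest scores
# top3 -> array([0, 4, 5])
</code></pre>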
|
python|tensorflow|keras
| 1
|
3,481
| 64,725,275
|
How to configure dataset pipelines with Tensorflow make_csv_dataset for Keras Model
|
<p>I have a structured dataset(csv features files) of around 200 GB. I'm using <a href="https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset" rel="nofollow noreferrer">make_csv_dataset</a> to make the input pipelines. Here is my code</p>
<pre><code>def pack_features_vector(features, labels):
"""Pack the features into a single array."""
features = tf.stack(list(features.values()), axis=1)
return features, labels
def main():
defaults=[float()]*len(selected_columns)
data_set=tf.data.experimental.make_csv_dataset(
file_pattern = "./../path-to-dataset/Train_DS/*/*.csv",
column_names=all_columns, # all_columns=["col1,col2,..."]
select_columns=selected_columns, # selected_columns= a subset of all_columns
column_defaults=defaults,
label_name="Target",
batch_size=1000,
num_epochs=20,
num_parallel_reads=50,
# shuffle_buffer_size=10000,
ignore_errors=True)
data_set = data_set.map(pack_features_vector)
N_VALIDATION = int(1e3)
N_TRAIN= int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 1000
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
validate_ds = data_set.take(N_VALIDATION).cache().repeat()
train_ds = data_set.skip(N_VALIDATION).take(N_TRAIN).cache().repeat()
# validate_ds = validate_ds.batch(BATCH_SIZE)
# train_ds = train_ds.batch(BATCH_SIZE)
model = tf.keras.Sequential([
layers.Flatten(),
layers.Dense(256, activation='elu'),
layers.Dense(256, activation='elu'),
layers.Dense(128, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(32, activation='elu'),
layers.Dense(1,activation='sigmoid')
])
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(train_ds,
validation_data=validate_ds,
validation_steps=1,
steps_per_epoch= 1,
epochs=20,
verbose=1
)
if __name__ == "__main__":
main()
print('Training completed!')
</code></pre>
<p>Now, when I execute this code, it completes within a few minutes (I think without going through the whole training data) with the following warning:</p>
<blockquote>
<p>W tensorflow/core/kernels/data/cache_dataset_ops.cc:798] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to <code>dataset.cache().take(k).repeat()</code>. You should use <code>dataset.take(k).cache().repeat()</code> instead.</p>
</blockquote>
<p>As per this warning, and since training completes in a few minutes, it seems the input pipeline is not configured correctly. Can anyone please guide me on how to correct this problem?</p>
<p>GPU of my system is NVIDIA Quadro RTX 6000 (compute capability 7.5).</p>
<p>A solution based on some other function like <code>experimental.CsvDataset</code> would work as well.</p>
<p><strong>Edit</strong></p>
<p>That warning gone by changing the code to avoid any cache as</p>
<pre><code> validate_ds = data_set.take(N_VALIDATION).repeat()
train_ds = data_set.skip(N_VALIDATION).take(N_TRAIN).repeat()
</code></pre>
<p>But now the problem is I'm getting zero accuracy, even on the training data. Which I think is a problem of input pipelines. Here is the output.</p>
<p><a href="https://i.stack.imgur.com/Ozhbr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ozhbr.png" alt="enter image description here" /></a></p>
<p><strong>Edit2</strong></p>
<p>After some efforts, I managed to resolve the known issues by using a bit lower level but similar API, <a href="https://www.tensorflow.org/api_docs/python/tf/data/experimental/CsvDataset" rel="nofollow noreferrer">CsvDataset</a>. But now, I'm getting the accuracy=1.00 which I think is not OK. At first epoch, it's .95 and then for next 19 epochs, it's 1.00. Here is my final code.</p>
<pre><code>def preprocess(*fields):
features=tf.stack(fields[:-1])
# convert Target column values to int to make it work for binary classification
labels=tf.stack([int(x) for x in fields[-1:]])
return features,labels # x, y
def main():
# selected_columns=["col1,col2,..."]
selected_indices=[]
for selected_column in selected_columns:
index=all_columns.index(selected_column)
selected_indices.append(index)
print("All_columns length"+str(len(all_columns)))
print("selected_columns length"+str(len(selected_columns)))
print("selected_indices length"+str(len(selected_indices)))
print(selected_indices)
defaults=[float()]*(len(selected_columns))
#defaults.append(int())
print("defaults"+str(defaults))
print("defaults length"+str(len(defaults)))
FEATURES = len(selected_columns) - 1
training_csvs = sorted(str(p) for p in pathlib.Path('.').glob("path-to-data/Train_DS/*/*.csv"))
testing_csvs = sorted(str(p) for p in pathlib.Path('.').glob("path-to-data/Test_DS/*/*.csv"))
training_csvs
testing_csvs
training_dataset=tf.data.experimental.CsvDataset(
training_csvs,
record_defaults=defaults,
compression_type=None,
buffer_size=None,
header=True,
field_delim=',',
# use_quote_delim=True,
# na_value="",
select_cols=selected_indices
)
print(type(training_dataset))
for features in training_dataset.take(1):
print("Training samples before mapping")
print(features)
validate_ds = training_dataset.map(preprocess).take(10).batch(100).repeat()
train_ds = training_dataset.map(preprocess).skip(10).take(90).batch(100).repeat()
validate_ds
train_ds
for features,labels in train_ds.take(1):
print("Training samples")
print(features)
print(labels)
testing_dataset=tf.data.experimental.CsvDataset(
testing_csvs,
record_defaults=defaults,
compression_type=None,
buffer_size=None,
header=True,
field_delim=',',
use_quote_delim=True,
na_value="",
select_cols=selected_indices
)
print(type(testing_dataset))
test_ds = testing_dataset.map(preprocess).batch(100).repeat()
test_ds
for features,labels in test_ds.take(1):
print("Testing samples")
print(features)
print(labels)
model = tf.keras.Sequential([
layers.Dense(256,activation='elu'),
layers.Dense(128,activation='elu'),
layers.Dense(64,activation='elu'),
layers.Dense(1,activation='sigmoid')
])
history = model.compile(optimizer='adam', loss=tf.keras.losses.BinaryCrossentropy(from_logits=False),
metrics=['accuracy'])
model.fit(train_ds,
validation_data=validate_ds,
validation_steps=20,
steps_per_epoch= 20,
epochs=20,
verbose=1
)
loss, accuracy = model.evaluate(test_ds)
print("Test Accuracy", accuracy)
if __name__ == "__main__":
main()
print('Training completed!')
</code></pre>
<p><strong>I tried feeding just a few useless features to the model, but it still gives accuracy=1.00 (100%). What is going wrong now? Overfitting, etc.?</strong></p>
|
<p>In the snippet, you wrote</p>
<pre><code>model.fit(train_ds,
validation_data=validate_ds,
validation_steps=1,
steps_per_epoch= 1,
epochs=20,
verbose=1)
</code></pre>
<p>Is <code>steps_per_epoch=1</code> a typo? If not, it means you only use one batch per epoch, which explains the fast training and the low accuracy. <code>validation_steps=1</code> is also an issue.</p>
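<p>A sketch of what the call could look like instead, reusing the constants already defined in the question (N_TRAIN, N_VALIDATION, BATCH_SIZE):</p>
<pre><code>STEPS_PER_EPOCH = N_TRAIN // BATCH_SIZE               # 10000 // 1000 = 10
VALIDATION_STEPS = max(N_VALIDATION // BATCH_SIZE, 1)

model.fit(train_ds,
          validation_data=validate_ds,
          validation_steps=VALIDATION_STEPS,
          steps_per_epoch=STEPS_PER_EPOCH,
          epochs=20,
          verbose=1)
</code></pre>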
|
python|tensorflow|machine-learning|tensorflow2.0|tensorflow-datasets
| 1
|
3,482
| 64,962,033
|
python dataframe income column cleanup
|
<p>This may have a simple solution, but I am finding it hard to make this function work for my dataset.</p>
<p>I have a salary column with variety of data in it. Example dataframe below:</p>
<pre><code>ID Income desired Output
1 26000 26000
2 45K 45000
3 - NaN
4 0 NaN
5 N/A NaN
6 2000 2000
7 30000 - 45000 37500 (30000+45000/2)
8 21000 per Annum 21000
9 50000 per annum 50000
10 21000 to 30000 25500 (21000+30000/2)
11 NaN
12 21000 To 50000 35500 (21000+50000/2)
13 43000/year 43000
14 NaN
15 80000/Year 80000
16 12.40 p/h 12896 (12.40 x 20 x 52)
17 12.40 per hour 12896 (12.40 x 20 x 52)
18 45000.0 (this is a float value) 45000
</code></pre>
<p>@user34974 has been very helpful in providing a workable solution (below). However, the solution gives me an error because the dataframe column also contains float values. Can anyone help adapt the function so that float values in the column are handled as well? In the end, the values in the updated column should be floats.</p>
<pre><code>Normrep = ['N/A','per Annum','per annum','/year','/Year','p/h','per hour',35000.0]
def clean_income(value):
for i in Normrep:
value = value.replace(i,"")
if len(value) == 0 or value.isspace() or value == '-': #- cannot be clubbed to array as used else where in data
return np.nan
elif value == '0':
return np.nan
# now there should not be any extra letters with K hence can be done below step
if value.endswith('K'):
value = value.replace('K','000')
# for to and -
vals = value.split(' to ')
if len(vals) != 2:
vals = value.split(' To ')
if len(vals) != 2:
vals = value.split(' - ')
if len(vals) == 2:
return (float(vals[0]) + float(vals[1]))/2
try:
a = float(value)
return a
except:
return np.nan # Either not proper data or need to still handle some fromat of inputs.
testData = ['26000','45K','-','0','N/A','2000','30000 - 45000','21000 per Annum','','21000 to 30000','21000 To 50000','43000/year', 35000.0]
df = pd.DataFrame(testData)
print(df)
df[0] = df[0].apply(lambda x: clean_income(x))
print(df)
</code></pre>
|
<p>I would like to reiterate: if these are the only possible combinations in the data, then the code below handles them.</p>
<p>If any new variant appears, you will need to edit the code to cater for it. Let me explain what I have done: all the strings you want to replace with "" are collected in the array Normrep, so if there are more strings to remove you can add them there. 'K', 'p/h' and 'per hour' need to be handled specifically, since a numeric conversion has to be done for them. So if those strings in your data change, you need to handle that here.</p>
<pre><code>import pandas as pd
import numpy as np
Normrep = ['N/A', 'per Annum', 'per annum', '/year', '/Year']
def clean_income(value):
if isinstance(value,float):
return value
else:
isHourConversionNeeded = False;
for i in Normrep:
value = value.replace(i, "")
if len(value) == 0 or value.isspace() or value == '-': # - cannot be clubbed to array as used else where in data
return np.nan
elif value == '0':
return np.nan
# now there should not be any extra letters with K hence can be done below step
if value.endswith('K'):
value = value.replace('K', '000')
elif value.endswith('p/h') or value.endswith('per hour'):
isHourConversionNeeded = True
value = value.replace('p/h',"")
value = value.replace('per hour',"")
# for to and -
vals = value.split(' to ')
if len(vals) != 2:
vals = value.split(' To ')
if len(vals) != 2:
vals = value.split(' - ')
if len(vals) == 2:
return (float(vals[0]) + float(vals[1])) / 2
try:
a = float(value)
if isHourConversionNeeded:
a = a * 20 * 52
return a
except:
return np.nan # Either not proper data or need to still handle some fromat of inputs.
testData = ['26000', '45K', '-', '0', 'N/A', '2000', '30000 - 45000', '21000 per Annum', '', '21000 to 30000',
'21000 To 50000', '43000/year', 35000.0,'12.40 p/h','12.40 per hour']
df = pd.DataFrame(testData)
print(df)
df[0] = df[0].apply(lambda x: clean_income(x))
print(df)
</code></pre>
|
python|pandas|data-cleaning
| 1
|
3,483
| 40,051,478
|
Simple Logistic Regression Error in Python
|
<p>Here is the line of code. I know the issue is that I only have a 1-d array but I cannot figure the code for casting it to a 2-d array inline.</p>
<pre><code>def classification_model(model, data, predictors, outcome):
model.fit(data[predictors],data[outcome])
</code></pre>
<p>where data is a 1-d array that has been read from a .csv file.</p>
<p>The <code>classification_model()</code> is invoked like this:
<code>classification_model(LogisticRegression(), data, 'HvA', 'FTR')</code>
Where FTR and HvA are column names in the .csv and therefore array positions in my data array (Pandas)</p>
<p>Trace is:
Traceback (most recent call last):</p>
<pre><code>File "Predict.py", line 112, in <module>
classification_model(LogisticRegression(), reader, 'HvA', 'FTR')
File "Predict.py", line 15, in classification_model
model.fit(data[predictors],data[outcome])
File "/usr/local/lib/python2.7/dist-packages/sklearn/linear_model/logistic.py", line 1174, in fit
order="C")
File "/usr/local/lib/python2.7/dist-packages/sklearn/utils/validation.py", line 531, in check_X_y
check_consistent_length(X, y)
File "/usr/local/lib/python2.7/dist-packages/sklearn/utils/validation.py", line 181, in check_consistent_length
" samples: %r" % [int(l) for l in lengths])
ValueError: Found input variables with inconsistent numbers of samples: [1, 370]
</code></pre>
<p>The heading line and first line of data from .csv file</p>
<pre><code>FTHG FTAG FTR HTHG HTAG HTR HS AS HST AST HF AF HC AC HY AY HR AR VCH VCD VCA Bb1X2 BbMxH BbAvH BbMxD BbAvD BbMxA BbAvA BbOU BbMx>2.5 BbAv>2.5 BbMx<2.5 BbAv<2.5 BbAH BbAHh BbMxAHH BbAvAHH BbMxAHA BbAvAHA PSCH PSCD PSCA HvA
0 0 1 0 0 1 25 10 5 2 19 11 7 2 3 3 0 1 3.4 3.5 2.25 39 3.5 3.26 3.6 3.42 2.3 2.2 37 1.95 1.86 2.02 1.92 24 0.25 2.02 1.95 1.94 1.9 3.22 3.5 2.36 0
</code></pre>
<p>Thanks</p>
|
<pre><code>data[col_name].values.reshape(len(data), 1)
</code></pre>
<p>As given by Michael K above</p>
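<p>Applied to the function from the question, a minimal sketch (assuming <code>predictors</code> is a single column name such as 'HvA'):</p>
<pre><code>def classification_model(model, data, predictors, outcome):
    X = data[predictors].values.reshape(len(data), 1)  # 2-D array of shape (n_samples, 1)
    y = data[outcome]
    model.fit(X, y)
</code></pre>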
|
python|arrays|pandas|scikit-learn
| 0
|
3,484
| 39,809,650
|
selecting rows in a tuplelist in pandas
|
<p>I have a list of tuples as below in python:</p>
<pre><code> Index Value
0 (1,2,3)
1 (2,5,4)
2 (3,3,3)
</code></pre>
<p>How can I select rows from this in which the second value is less than or equal 2?</p>
<p><strong><em>EDIT:</em></strong></p>
<p>Basically, the data is in the form of <code>[(1,2,3), (2,5,4), (3,3,3)....]</code></p>
|
<p>You could slice the <code>tuple</code> by using <code>apply</code>:</p>
<pre><code>df[df['Value'].apply(lambda x: x[1] <= 2)]
</code></pre>
<p><a href="https://i.stack.imgur.com/MZBwY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MZBwY.png" alt="Image"></a></p>
<hr>
<p>Seems, it was a list of tuples and not a <code>DF</code>:</p>
<p>To return a <code>list</code> back:</p>
<pre><code>data = [item for item in [(1,2,3), (2,5,4), (3,3,3)] if item[1] <= 2]
# [(1, 2, 3)]
</code></pre>
<p>To return a <code>series</code> instead:</p>
<pre><code>pd.Series(data)
#0 (1, 2, 3)
#dtype: object
</code></pre>
|
python|pandas|tuples
| 2
|
3,485
| 39,578,466
|
pandas date to string
|
<p>I have a datetime <code>pandas.Series</code>, a column called "dates". In a loop, I want to get the i-th element as a string.</p>
<p><code>s.apply(lambda x: x.strftime('%Y.%m.%d'))</code> or
<code>astype(str).tail(1).reset_index()['date']</code> or many other solutions don't work.</p>
<p>I just want a string like <code>'2016-09-16'</code> (first datetime element in series) and not what is currently returned, which is:</p>
<pre><code> ss = series_of_dates.astype(str).tail(1).reset_index()['date']
"lol = %s" % ss
</code></pre>
<blockquote>
<p><code>lol = 0 2016-09-16\nName: date, dtype: object</code></p>
</blockquote>
<p>I need just:</p>
<blockquote>
<p><code>lol = 2016-09-16</code></p>
</blockquote>
<p>because I need </p>
<blockquote>
<p><strong><code>some string</code></strong> % a , b , s ,d</p>
</blockquote>
<p>..... without even a '\n' in a, b, s ...</p>
|
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.strftime.html" rel="nofollow"><code>strftime</code></a> for convert <code>datetime</code> column to <code>string</code> column:</p>
<pre><code>import pandas as pd
start = pd.to_datetime('2015-02-24 10:00')
rng = pd.date_range(start, periods=10)
df = pd.DataFrame({'dates': rng, 'a': range(10)})
print (df)
a dates
0 0 2015-02-24 10:00:00
1 1 2015-02-25 10:00:00
2 2 2015-02-26 10:00:00
3 3 2015-02-27 10:00:00
4 4 2015-02-28 10:00:00
5 5 2015-03-01 10:00:00
6 6 2015-03-02 10:00:00
7 7 2015-03-03 10:00:00
8 8 2015-03-04 10:00:00
9 9 2015-03-05 10:00:00
s = df.dates
print (s.dt.strftime('%Y.%m.%d'))
0 2015.02.24
1 2015.02.25
2 2015.02.26
3 2015.02.27
4 2015.02.28
5 2015.03.01
6 2015.03.02
7 2015.03.03
8 2015.03.04
9 2015.03.05
Name: dates, dtype: object
</code></pre>
<p>Loop with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.iteritems.html" rel="nofollow"><code>Series.iteritems</code></a>:</p>
<pre><code>for idx, val in s.dt.strftime('%Y.%m.%d').iteritems():
print (val)
2015.02.24
2015.02.25
2015.02.26
2015.02.27
2015.02.28
2015.03.01
2015.03.02
2015.03.03
2015.03.04
2015.03.05
</code></pre>
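<p>To get just one element as a plain string (e.g. the last date), a short sketch:</p>
<pre><code>last_date = s.dt.strftime('%Y-%m-%d').iloc[-1]
print("lol = %s" % last_date)
# lol = 2015-03-05
</code></pre>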
|
python|datetime|pandas|time-series
| 4
|
3,486
| 69,306,507
|
Creating multiple additional calculated columns to a dataframe in pandas using a for statement
|
<p>I am trying to add columns to my dataframe using a loop, where each additional column is a multiple of an existing column, and the multiplier changes. I understand that this can be solved relatively easily by just creating a new column for each, but this is part of a larger project, so I can't do that.</p>
<p>Starting with this dataframe:</p>
<pre><code> Time Amount
0 20 10
1 10 5
2 15 25
</code></pre>
<p>Hoping for the following outcome:</p>
<pre><code> Time Amount Amount i=2 Amount i=3 Amount i=4
0 20 10 20 30 40
1 10 5 10 15 20
2      15       25         50          75         100
</code></pre>
<p>I think there should be an easy answer, but I can't find anything online. So far I have this:</p>
<pre><code>data = {'Time': [20,10,15],
'Amount': [10,5,25]}
df = pd.DataFrame(data)
for i in range(2,5):
df = df.append(df['Amount']*i)
print(df)
</code></pre>
<p>Thanks</p>
|
<p>Do you want something like this ?</p>
<pre><code>for i in range(2,5):
df["Amout i={}".format(i)] = df['Amount']*i
</code></pre>
<p>Output :</p>
<pre><code>   Time  Amount  Amount i=2  Amount i=3  Amount i=4
0 20 10 20 30 40
1 10 5 10 15 20
2 15 25 50 75 100
</code></pre>
|
python|pandas
| 2
|
3,487
| 69,619,509
|
How can I separate a column in new columns
|
<p>I don't know how to ask this question, but i'll try do explain my case.</p>
<p>I have a dataset with the data as following:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Product</th>
<th>Value</th>
<th>Value type</th>
<th>year</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>21,5</td>
<td>Price</td>
<td>21</td>
</tr>
<tr>
<td>A</td>
<td>5</td>
<td>Volume</td>
<td>21</td>
</tr>
<tr>
<td>B</td>
<td>55,3</td>
<td>Price</td>
<td>21</td>
</tr>
<tr>
<td>B</td>
<td>10</td>
<td>Volume</td>
<td>21</td>
</tr>
<tr>
<td>C</td>
<td>70,0</td>
<td>Price</td>
<td>21</td>
</tr>
<tr>
<td>D</td>
<td>37,5</td>
<td>Price</td>
<td>21</td>
</tr>
<tr>
<td>D</td>
<td>7,7</td>
<td>Volume</td>
<td>21</td>
</tr>
</tbody>
</table>
</div>
<p>And I want to reach something like that:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Product</th>
<th>Price</th>
<th>Volume</th>
<th>Year</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>21,5</td>
<td>5</td>
<td>21</td>
</tr>
<tr>
<td>B</td>
<td>55,3</td>
<td>10</td>
<td>21</td>
</tr>
<tr>
<td>c</td>
<td>70,0</td>
<td>-</td>
<td>21</td>
</tr>
<tr>
<td>D</td>
<td>37,0</td>
<td>7,7</td>
<td>21</td>
</tr>
</tbody>
</table>
</div>
<p>I mind that the unstack function can solve the problem, but i don't know how, cause i'm not getting all the columns back.</p>
<p>I found a complex solution but it's not working.</p>
<pre><code>container = []
for label, _df in df.groupby(['Year','Product']):
_df.set_index('Value type', inplace = True)
container.append(pd.DataFrame({
"Product": [label[1]],
"Price":[_df.loc['Price', 'Value']],
"Volume": [_df.loc['Volume', 'Value']],
"Year":[label[0]]}))
df_new = pd.concat(container)
</code></pre>
<p>This solution doesn't work, because the missing line for Volume for product C.</p>
<p>How can I reach the expected dataframe?
Is there any fast way to calculate this?</p>
|
<p>Use <code>pivot</code>:</p>
<pre><code>out = df.pivot(index=['Product', 'year'], columns='Value type', values=['Value']) \
.droplevel(0, axis=1).reset_index().rename_axis(None, axis=1) \
[['Product', 'Price', 'Volume', 'year']]
</code></pre>
<pre><code>>>> out
Product Price Volume year
0 A 21.5 5.0 21
1 B 55.3 10.0 21
2 C 70.0 NaN 21
3 D 37.5 7.7 21
</code></pre>
|
python|pandas|dataframe
| 1
|
3,488
| 69,466,183
|
Python: I need to find the average over x amount of rows in a specific column of a large csv file
|
<p>I have a large CSV file with two columns in it as shown below:</p>
<p><a href="https://i.stack.imgur.com/FZc1D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FZc1D.png" alt="enter image description here" /></a></p>
<p>I have already filtered the data. I need to calculate the average pressure every x amount of rows.</p>
<p>I've looked for a while on here but was unable to find how to calculate the average every x amount of rows for a specific column. Thanks for any help you can provide.</p>
|
<p>numpy - <a href="https://numpy.org/doc/stable/reference/generated/numpy.average.html" rel="nofollow noreferrer">average</a> & <a href="https://numpy.org/doc/stable/reference/generated/numpy.reshape.html" rel="nofollow noreferrer">reshape</a></p>
<pre><code>n = 3
x = df['Pressure'].to_numpy()
# calculates the average of every n consecutive rows
avgResult = np.average(x.reshape(-1, n), axis=1)
</code></pre>
<p>The result is an array with one average per group of n rows (the length of the column must be divisible by n):</p>
<p>eg:</p>
<pre><code>array([3.33333333, 4.66666667])
</code></pre>
<p>in:</p>
<pre><code>n=3
x = np.array([1, 4, 5,2,8,4])
</code></pre>
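<p>A pandas-only alternative sketch, which also copes with a final partial block (assuming a default integer index; 'Pressure' is the column name from the screenshot):</p>
<pre><code>n = 3
avg_per_block = df['Pressure'].groupby(df.index // n).mean()
</code></pre>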
|
pandas
| 0
|
3,489
| 54,222,063
|
How move some cell in pandas dataframe?
|
<p>I am trying to move some data in a pandas data frame.</p>
<p>I have this data now:</p>
<p><img src="https://i.stack.imgur.com/KBkpR.jpg" alt="enter image description here"></p>
<p>My expected behavior is:</p>
<p><img src="https://i.stack.imgur.com/8rhOW.jpg" alt="enter image description here"></p>
<p>So when <code>col B = date/time</code>, columns B-E are shifted by one.</p>
|
<p>You can try this:</p>
<pre><code>df.loc[1:,'B':] = df.loc[1:,'B':].shift(1, axis=1).fillna(0)
</code></pre>
<p>Output:</p>
<pre><code> A B C D E
0 1 8 2011-06-01 ABC ABC
1 2 0 2011-06-01 ABC ABC
</code></pre>
|
python|pandas|dataframe
| 0
|
3,490
| 53,803,058
|
value range in Pandas
|
<p>I have a simple code for titanic data:</p>
<pre><code>import pandas as pd
def pClassSurvivorDetails(df,pClass):
print('\nResults for Pclass =', pClass, '\n -------------------- ')
print("The following did not survive")
notSurvive = df['Sex'][df['Survived']==0][df['Pclass']==pClass]
print(notSurvive.value_counts())
print("The following did survive")
survive = df['Sex'][df['Survived']==1][df['Pclass']==pClass]
print(survive.value_counts())
def main():
df = pd.read_csv("titanic.csv")
for value in [1, 2, 3]:
pClassSurvivorDetails(df,value, )
main()
</code></pre>
<p>Now I need to produce the same result, but instead of <code>for value in [1, 2, 3]</code> I need a first number x and a last number y, with every value in between included ... something like [1:3] (but it doesn't work this way). Any ideas please?</p>
|
<p>To cycle through all values between two variables in Python, you can use:</p>
<pre><code>for i in range(x, y):
</code></pre>
<p>Or, since it is up to and not including y, you could include y with:</p>
<pre><code>for i in range(x, y + 1):
</code></pre>
<p>To get all values in this range, and then access only one, the simplest way is to store it as a list.</p>
<pre><code>my_values = list(range(x, y))
</code></pre>
<p>And then you can access with indexing, e.g.:</p>
<pre><code>my_values[2]
</code></pre>
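<p>Applied to the loop in the question, this could look like (x and y being the bounds you choose):</p>
<pre><code>def main(x, y):
    df = pd.read_csv("titanic.csv")
    for value in range(x, y + 1):   # includes y
        pClassSurvivorDetails(df, value)

main(1, 3)
</code></pre>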
|
python|pandas
| 0
|
3,491
| 53,874,275
|
Error when filtering Dataframe (TypeError: invalid type comparison)
|
<p>I am trying to filter out a Dataframe based on a column but I get an error <code>TypeError: invalid type comparison</code></p>
<p>Given below is view of my Dataframe:</p>
<pre><code>id,name,start_date,new_customer
101,customer_1,2018-12-01,True
102,customer_2,2018-11-21,False
103,customer_3,2018-12-11,True
104,customer_4,2018-11-30,False
</code></pre>
<p>I get the error when I try to do</p>
<pre><code>df = df['new_customer']=='True'
</code></pre>
<p>Update</p>
<pre><code>df.dtypes
id - object
name - object
start_date - datetime64[ns]
new_customer - bool
</code></pre>
|
<p>Use <code>True</code> without the quotes. And since the column is already boolean, you can use it directly to filter the rows:</p>
<pre><code>df = df[df['new_customer'] == True]   # or simply: df = df[df['new_customer']]
</code></pre>
|
pandas
| 1
|
3,492
| 38,231,168
|
Problems with Tensorboard on Ubuntu 16.04
|
<p>I'm running code from <a href="https://github.com/MorvanZhou/tutorials/blob/master/tensorflowTUT/tf15_tensorboard/full_code.py" rel="nofollow">https://github.com/MorvanZhou/tutorials/blob/master/tensorflowTUT/tf15_tensorboard/full_code.py</a> on my own computer, which is a sample of how to use Tensorboard, however, I see nothing from the Tensor board from my computer: every tab of the Tensorflow is empty, saying no XX data is found.</p>
<p>I tried the '--inspect option':</p>
<blockquote>
<p>zhao@zhao-ubuntu:~/Desktop/samples$ tensorboard --logdir = 'logs'
--inspect
======================================================================
Processing event files... (this can take a few minutes) ======================================================================</p>
<p>No event files found within logdir =</p>
</blockquote>
<p>and the '--debug' option:</p>
<blockquote>
<p>zhao@zhao-ubuntu:~/Desktop/samples$ tensorboard --logdir = 'logs'
--debug INFO:tensorflow:TensorBoard is in debug mode. INFO:tensorflow:Starting TensorBoard in directory
/home/zhao/Desktop/samples </p>
<p>INFO:tensorflow:TensorBoard path_to_run is
{'/home/zhao/Desktop/samples/=': None} </p>
<p>INFO:tensorflow:Multiplexer done loading. Load took 0.0 secs </p>
<p>INFO:tensorflow:TensorBoard is tag: b'22' </p>
<p>Starting TensorBoard b'22' on port 6006 (You can navigate to
<a href="http://0.0.0.0:6006" rel="nofollow">http://0.0.0.0:6006</a>)</p>
</blockquote>
<p>BTW, I'm using python3.5.1 on my ubuntu machine.</p>
|
<p>I think I have solved the problem: I had typed a redundant space in the command line:</p>
<pre><code>tensorboard --logdir = 'logs' --inspect
</code></pre>
<p>which should be:</p>
<pre><code>tensorboard --logdir ='logs' --inspect
</code></pre>
<p>or:</p>
<pre><code>tensorboard --logdir 'logs' --inspect
</code></pre>
<p>where the space takes the place of the equals sign.</p>
<p>I got this answer from:
<a href="https://github.com/tensorflow/tensorflow/issues/3209" rel="nofollow">https://github.com/tensorflow/tensorflow/issues/3209</a></p>
|
google-chrome|ubuntu|tensorflow|tensorboard
| 0
|
3,493
| 38,264,881
|
Write numpy.ndarray with Russian characters to file
|
<p>I am trying to write a <code>numpy.ndarray</code> to a file. I use</p>
<pre><code>unique1 = np.unique(df['search_term'])
unique1 = unique1.tolist()
</code></pre>
<p>and next I try 1)</p>
<pre><code>edf = pd.DataFrame()
edf['term'] = unique1
writer = pd.ExcelWriter(r'term.xlsx', engine='xlsxwriter')
edf.to_excel(writer)
writer.close()
</code></pre>
<p>and 2)</p>
<pre><code>thefile = codecs.open('domain.txt', 'w', encoding='utf-8')
for item in unique:
thefile.write("%s\n" % item)
</code></pre>
<p>But all return <code>UnicodeDecodeError: 'utf8' codec can't decode byte 0xd7 in position 9: invalid continuation byte</code></p>
|
<p>The second example should work if you encode the strings as utf8. </p>
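<p>For the text-file variant from the question, a minimal sketch (assuming the items are byte strings in a known source encoding such as cp1251; adjust the encoding to whatever your data actually uses):</p>
<pre><code>import codecs

thefile = codecs.open('domain.txt', 'w', encoding='utf-8')
for item in unique1:
    if isinstance(item, bytes):
        # decode from the source encoding before writing as utf-8
        item = item.decode('cp1251')
    thefile.write(u"%s\n" % item)
thefile.close()
</code></pre>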
<p>The following works in Python2 with a utf8 encoded file:</p>
<pre><code># _*_ coding: utf-8
import pandas as pd
edf = pd.DataFrame()
edf['term'] = ['foo', 'bar', u'русском']
writer = pd.ExcelWriter(r'term.xlsx', engine='xlsxwriter')
edf.to_excel(writer)
writer.save()
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/Cq0lV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cq0lV.png" alt="enter image description here"></a></p>
|
python|excel|numpy|pandas|utf-8
| 0
|
3,494
| 65,917,546
|
How to change the color channel of the OpenCV input frame to fit the model?
|
<p>I am still new to deep learning. So, I am trying to run OpenCV to capture frames and pass those frames to my trained model. The input required for the model has dimensions of (48,48,1).</p>
<p>The first layer in the model:</p>
<pre><code>model.add(Conv2D(input_shape=(48,48,1),filters=64,kernel_size=(3,3),padding="same", activation="relu"))
</code></pre>
<p>I am trying to convert the OpenCV frame input to fit the dimensions of the model. However, I tried to use <code>cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)</code> and resize but the output dimension is (48,48) only</p>
<p>I have tried another method, shown below, but the output was (48,48,3); after adding the axis so it could be passed to the model, the dimension was (1,48,48,3).</p>
<pre><code>coverted_image= cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces_detected = face_haar_cascade.detectMultiScale(coverted_image)
#Draw Triangles around the faces detected
for (x,y,w,h) in faces_detected:
cv2.rectangle(frame,(x,y), (x+w,y+h), (255,0,0))
roi_gray=frame[y:y+w,x:x+h]
roi_gray=cv2.resize(roi_gray,(48,48))
image_pixels = tf.keras.preprocessing.image.img_to_array(roi_gray)
print(image_pixels.shape)
image_pixels = np.expand_dims(image_pixels, axis = 0)
print(image_pixels.shape)
image_pixels /= 255
print(image_pixels.shape)
</code></pre>
<p>How can I adjust the shape of the input to (48,48,1) to be able to get the prediction from the model?</p>
|
<p>OpenCV images are just numpy arrays, so they can be manipulated easily using numpy commands.</p>
<p>E.g.:</p>
<pre><code>import numpy as np
x = np.array(range(48*48)).reshape(48,48)
x.shape
</code></pre>
<blockquote>
<p>(48, 48)</p>
</blockquote>
<pre><code>x = x.reshape(48,48,1)
x.shape
</code></pre>
<blockquote>
<p>(48, 48, 1)</p>
</blockquote>
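<p>Applied to the pipeline from the question, a minimal sketch (note that the question's code slices the colour <code>frame</code> rather than the grayscale <code>coverted_image</code>, which is why three channels remain; variable names follow the question):</p>
<pre><code>for (x, y, w, h) in faces_detected:
    roi_gray = coverted_image[y:y+h, x:x+w]    # slice the grayscale image, shape (h, w)
    roi_gray = cv2.resize(roi_gray, (48, 48))  # (48, 48)
    image_pixels = roi_gray.reshape(48, 48, 1) # (48, 48, 1)
    image_pixels = np.expand_dims(image_pixels, axis=0).astype("float32") / 255  # (1, 48, 48, 1)
</code></pre>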
|
python|numpy|opencv|image-processing|image-resizing
| 1
|
3,495
| 66,287,204
|
np.meshgrid using up too much ram Google Colab
|
<p>So I'm working on this assignment for my class and I'm having an issue where Google Colab says that I've used up all the RAM when the np.meshgrid line is executed. I understand the meshgrid is using like 50GB of space but I can't figure out how to reduce that. Can someone help me understand what I'm doing wrong please? This is the question:</p>
<p><a href="https://i.stack.imgur.com/sP9fG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sP9fG.png" alt="Programming Question" /></a></p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from matplotlib import lines
from mpl_toolkits.mplot3d import Axes3D
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
# you need to define the min and max values from the data
step_size = 0.05
data, labels = load_digits(return_X_y=True)
(n_samples, n_features), n_digits = data.shape, np.unique(labels).size
reduced_data = PCA(n_components=3).fit_transform(data)
kmeans = KMeans(init="k-means++", n_clusters=n_digits, n_init=4)
kmeans.fit(reduced_data)
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
z_min, z_max = reduced_data[:, 2].min() - 1, reduced_data[:, 2].max() + 1
# x_min = -32.16990591295523
# x_max = 32.700123434559885
# y_min = -28.49444674444793
# y_max = 31.092205057161326
# z_min = -30.301750160946828
# z_max = 33.70884919881785
xx, yy, zz = np.meshgrid(np.arange(x_min, x_max, step_size),
np.arange(y_min, y_max, step_size),
np.arange(z_min, z_max, step_size))
# mesh grid size = 3894
</code></pre>
|
<p>So your 3 <code>arange</code> calls have about the same size:</p>
<pre><code>In [38]: np.arange(-32,32,.05).shape
Out[38]: (1280,)
</code></pre>
<p><code>meshgrid</code> makes 3 "cubes" with 1280 points on each dimension</p>
<pre><code>In [42]: (1280**3)/1e9
Out[42]: 2.097152
</code></pre>
<p>That's about 2 billion points, or roughly 16 GB of memory per array (8 bytes per float64 value); times 3 arrays, that's about 50 GB of memory use.</p>
<p>No wonder it's complaining!</p>
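<p>A minimal sketch of how you might sanity-check the grid size before allocating it (increasing <code>step_size</code> is usually the practical fix):</p>
<pre><code>nx = int(np.ceil((x_max - x_min) / step_size))
ny = int(np.ceil((y_max - y_min) / step_size))
nz = int(np.ceil((z_max - z_min) / step_size))
n_points = nx * ny * nz
print("grid points: %d, approx memory: %.1f GB" % (n_points, 3 * n_points * 8 / 1e9))
# with step_size = 0.05 this is ~50 GB; step_size = 0.5 brings it down to ~0.05 GB
</code></pre>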
|
python|numpy|machine-learning|scikit-learn
| 1
|
3,496
| 52,750,090
|
Categorical dtype changes after using melt
|
<p>In answering <a href="https://stackoverflow.com/q/52749898/6671176">this question</a>, I found that after using <code>melt</code> on a pandas dataframe, a column that was previously an ordered Categorical dtype becomes an <code>object</code>. Is this intended behaviour?</p>
<p>Note: not looking for a solution, just wondering if there is any reason for this behaviour or if it's not intended behavior.</p>
<p><strong>Example:</strong></p>
<p>Using the following dataframe <code>df</code>:</p>
<pre><code> Cat L_1 L_2 L_3
0 A 1 2 3
1 B 4 5 6
2 C 7 8 9
df['Cat'] = pd.Categorical(df['Cat'], categories = ['C','A','B'], ordered=True)
# As you can see `Cat` is a category
>>> df.dtypes
Cat category
L_1 int64
L_2 int64
L_3 int64
dtype: object
melted = df.melt('Cat')
>>> melted
Cat variable value
0 A L_1 1
1 B L_1 4
2 C L_1 7
3 A L_2 2
4 B L_2 5
5 C L_2 8
6 A L_3 3
7 B L_3 6
8 C L_3 9
</code></pre>
<p>Now, if I look at <code>Cat</code>, it's become an object:</p>
<pre><code>>>> melted.dtypes
Cat object
variable object
value int64
dtype: object
</code></pre>
<p><strong>Is this intended?</strong></p>
|
<p>In the <a href="https://github.com/pandas-dev/pandas/commit/9dc9d805fe58cf32a8a3c8ab2277517eaf73d4c6#diff-9d24108841e0a76fe806a808d84f4561" rel="nofollow noreferrer">source</a> code for 0.22.0 (my old version):</p>
<pre><code> for col in id_vars:
mdata[col] = np.tile(frame.pop(col).values, K)
mcolumns = id_vars + var_name + [value_name]
</code></pre>
<p><code>np.tile</code> returns a plain numpy array, so the column loses its categorical dtype and comes back as <code>object</code>. </p>
<p>This has been fixed in 0.23.4 (after I updated my <code>pandas</code>):</p>
<pre><code>df.melt('Cat')
Out[6]:
Cat variable value
0 A L_1 1
1 B L_1 4
2 C L_1 7
3 A L_2 2
4 B L_2 5
5 C L_2 8
6 A L_3 3
7 B L_3 6
8 C L_3 9
df.melt('Cat').dtypes
Out[7]:
Cat category
variable object
value int64
dtype: object
</code></pre>
<p>More info on how it was fixed: </p>
<pre><code>for col in id_vars:
id_data = frame.pop(col)
if is_extension_type(id_data): # here will return True , then become concat not np.tile
id_data = concat([id_data] * K, ignore_index=True)
else:
id_data = np.tile(id_data.values, K)
mdata[col] = id_data
</code></pre>
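<p>If you are stuck on an older pandas, a possible workaround (a sketch, not part of the upstream fix) is to re-apply the categorical dtype after melting:</p>
<pre><code>melted = df.melt('Cat')
melted['Cat'] = pd.Categorical(melted['Cat'], categories=['C', 'A', 'B'], ordered=True)
</code></pre>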
|
python|pandas
| 2
|
3,497
| 52,510,275
|
I am getting tensorflow installation error on pycharm
|
<p><a href="https://i.stack.imgur.com/iIDcz.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iIDcz.jpg" alt="Error on pycharm"></a></p>
<p>The pip version I am using is 18.0, and the tensorflow version I was trying to install is 1.11.0rc2. I tried other pip versions too, but it didn't work.</p>
|
<p>That's because there is no release for Python 3.7: <a href="https://pypi.org/project/tensorflow/1.11.0rc2/#files" rel="nofollow noreferrer">https://pypi.org/project/tensorflow/1.11.0rc2/#files</a></p>
|
python|python-3.x|tensorflow|pip|pycharm
| 1
|
3,498
| 46,238,307
|
How to filter out rows from multiple data frames that are inside a dictionary in python
|
<p>I have a <code>dictionary</code> that contains many <code>dataframes</code>.</p>
<p>Sample data:</p>
<pre><code>dataframe1 = pd.DataFrame({"variable1":["a","a","b"]})
dataframe2 = pd.DataFrame({"variable1":["b","a","b"]})
dictionary = dict(zip(["dataframe1","dataframe2"],[dataframe1,dataframe2]))
</code></pre>
<p>What I would like to do is create a new <code>dictionary</code> containing the <code>dataframe</code>s, but excluding the rows from each dataframe for which <code>variable1=="a"</code>.</p>
<p>The equivalent <code>R</code> command with <code>lists</code> would be </p>
<pre><code>dictionary_new <- lapply(dictionary ,function(x){x[!variable1=="a",]})
</code></pre>
<p>How can i translate that to <code>Python</code> ?</p>
|
<p>Use dict comprehension with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html" rel="nofollow noreferrer"><code>query</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>. To exclude <code>a</code>, filter for all values that are not <code>a</code>.</p>
<pre><code>dictionary = {k:v.query('variable1!="a"') for k, v in dictionary.items()}
</code></pre>
<p>Or:</p>
<pre><code>dictionary = {k:v[v.variable1!="a"] for k, v in dictionary.items()}
print (dictionary)
{'dataframe1': variable1
2 b, 'dataframe2': variable1
0 b
2 b}
</code></pre>
|
python|python-3.x|pandas|dictionary
| 4
|
3,499
| 46,354,509
|
Transfer unmasked elements from maskedarray into regular array
|
<p>I have a <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/maskedarray.html" rel="nofollow noreferrer"><code>MaskedArray</code></a> <code>a</code> of shape (L,M,N), and I want to transfer the unmasked elements to a normal array <code>b</code> (with the same shape), such that along the last dimension, the first elements receive the non-masked values, and the remaining elements are zero. For example, in 2D:</p>
<pre><code>a = [[--, 1, 2, --, 7, --, 5],
[3 , --, --, 2, --, --, --]]
# Transfer to:
b = [[1, 2, 7, 5, 0, 0, 0],
[3, 2, 0, 0, 0, 0, 0]]
</code></pre>
<p>The simplest way to do this would be over a for loop, e.g.,</p>
<pre><code>for idx in np.ndindex(np.shape(a)[:-1]):
num = a[idx].count()
b[idx][:num] = a[idx].compressed()
# or perhaps,
# b[idx][:num] = a[idx][~a[idx].mask]
</code></pre>
<p>But this will be very slow for large arrays (and in fact, I have many different arrays with the same mask values, all of which I'd like to convert in the same way). <strong>Is there a fancy slicing way to do this?</strong></p>
<hr>
<p>Edit: Here is one way to construct the appropriate indexing tuple to assign value, but it seems ugly. Perhaps there's something better?</p>
<pre><code>b = np.zeros(a.shape)
# Construct a list with a list for each dimension.
left = [[] for ii in range(a.ndim)]
# In each sub-list, construct the indices into `b` at which to store each value from `a`
for idx in np.ndindex(a.shape[:-1]):
    num = a[idx].count()
    # here `ii` is the dimension number, and `jj` the index in that dimension
    for ii, jj in enumerate(idx):
        left[ii] = left[ii] + num*[jj]
    # The last dimension is just consecutive numbers for as many values
    left[-1] = left[-1] + list(range(num))
b[tuple(left)] = a.compressed()
</code></pre>
|
<p>Adapting @divakar's answers from the linked 'pad with 0s' questions,</p>
<p><a href="https://stackoverflow.com/questions/38619143/convert-python-sequence-to-numpy-array-filling-missing-values">Convert Python sequence to NumPy array, filling missing values</a></p>
<pre><code>In [464]: a=np.array([[0,1,2,0,7,0,5],[3,0,0,2,0,0,0]])
In [465]: Ma = np.ma.masked_equal(a, 0)
In [466]: Ma
Out[466]:
masked_array(data =
[[-- 1 2 -- 7 -- 5]
[3 -- -- 2 -- -- --]],
mask =
[[ True False False True False True False]
[False True True False True True True]],
fill_value = 0)
</code></pre>
<p>Getting the number of 0s we need to pad with is easy here - just sum the mask Trues</p>
<pre><code>In [467]: cnt=Ma.mask.sum(axis=1) # also np.ma.count_masked(Ma,1)
In [468]: cnt
Out[468]: array([3, 5])
In [469]:
In [469]: mask=(7-cnt[:,None])>np.arange(7) # key non intuitive step
In [470]: mask
Out[470]:
array([[ True, True, True, True, False, False, False],
[ True, True, False, False, False, False, False]], dtype=bool)
</code></pre>
<p>The <code>mask</code> is constructed such that, along each row, the first <code>7 - cnt</code> elements (one for each unmasked value) are True and the rest are False.</p>
<p>Now just use this mask to copy the <code>compressed</code> values to a blank array:</p>
<pre><code>In [471]: M=np.zeros((2,7),int)
In [472]: M[mask]=Ma.compressed()
In [473]: M
Out[473]:
array([[1, 2, 7, 5, 0, 0, 0],
[3, 2, 0, 0, 0, 0, 0]])
</code></pre>
<p>I had to fiddle around with the <code>cnt</code> and <code>np.arange(7)</code> to get the desired mix of True/False values (left justified Trues).</p>
<p>Count unmasked values per row:</p>
<pre><code>In [486]: np.ma.count(Ma,1)
Out[486]: array([4, 2])
</code></pre>
<hr>
<p>Generalizing this to N-dimensions:</p>
<pre><code>def compress_masked_array(vals, axis=-1, fill=0.0):
cnt = vals.mask.sum(axis=axis)
shp = vals.shape
num = shp[axis]
mask = (num - cnt[..., np.newaxis]) > np.arange(num)
n = fill * np.ones(shp)
n[mask] = vals.compressed()
return n
</code></pre>
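<p>A quick usage sketch, applied to the 2-D example above:</p>
<pre><code>b = compress_masked_array(Ma)   # fill defaults to 0.0
# array([[1., 2., 7., 5., 0., 0., 0.],
#        [3., 2., 0., 0., 0., 0., 0.]])
</code></pre>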
|
python|arrays|numpy|slice
| 2
|