Dataset schema (column - type, min to max value or string length):
Unnamed: 0 - int64, 0 to 378k
id - int64, 49.9k to 73.8M
title - string, lengths 15 to 150
question - string, lengths 37 to 64.2k
answer - string, lengths 37 to 44.1k
tags - string, lengths 5 to 106
score - int64, -10 to 5.87k
9,900
58,465,937
How to use the first layers of a pretrained model to extract features inside a Keras model (Functional API)
<p>I would like to use the first layers of a pre-trained model -- say, in Xception, up to and including the add_5 layer -- to extract features from an input, then pass the output of the add_5 layer to a dense layer that will be trainable.</p> <p>How can I implement this idea?</p>
<p>Generally you need to reuse the layers from one model, pass their output as the input to the remaining layers, and create a Model object with the input and output of the combined model specified. For example, alexnet.py from <a href="https://github.com/FHainzl/Visualizing_Understanding_CNN_Implementation.git" rel="nofollow noreferrer">https://github.com/FHainzl/Visualizing_Understanding_CNN_Implementation.git</a>.</p> <p>They have</p> <pre><code>from keras.models import Model from keras.layers.convolutional import Conv2D, MaxPooling2D, ZeroPadding2D def alexnet_model(): inputs = Input(shape=(3, 227, 227)) conv_1 = Conv2D(96, 11, strides=4, activation='relu', name='conv_1')(inputs) … prediction = Activation("softmax", name="softmax")(dense_3) m = Model(input=inputs, output=prediction) return m </code></pre> <p>and then they take this returned model and the desired intermediate layer, and make a model that returns this layer’s outputs:</p> <pre><code>def _sub_model(self): highest_layer_name = 'conv_{}'.format(self.highest_layer_num) highest_layer = self.base_model.get_layer(highest_layer_name) return Model(inputs=self.base_model.input, outputs=highest_layer.output) </code></pre> <p>You will need a similar thing,</p> <pre><code>highest_layer = self.base_model.get_layer('add_5') </code></pre> <p>then continue it like</p> <pre><code>my_dense = Dense(..., name='my_dense')(highest_layer.output) … </code></pre> <p>and finish with</p> <pre><code>return Model(inputs=self.base_model.input, outputs=my_prediction) </code></pre> <p>Since highest_layer is a layer (a graph node), not a connection returning a result (a graph arc), you’ll need to add <code>.output</code> to <code>highest_layer</code>.</p> <p><em>Not sure how exactly to combine models if the upper one is also ready. Maybe something like</em></p> <pre><code>model_2_lowest_layer = model_2.get_layer(lowest_layer_name) upper_part_model = Model(inputs=model_2_lowest_layer.input, outputs=model_2.output) upper_part = upper_part_model(highest_layer.output) return Model(inputs=self.base_model.input, outputs=upper_part) </code></pre>
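<p>Putting the pieces together for the Xception case, a minimal sketch could look like the following (assuming TF2/tf.keras, that the intermediate layer really is named 'add_5' in your Xception build, and with the pooling layer and the 10-class Dense head chosen purely for illustration):</p> <pre><code>import tensorflow as tf

# Load Xception without its classification head and freeze its weights.
base_model = tf.keras.applications.Xception(weights='imagenet', include_top=False)
base_model.trainable = False

# Cut the graph at the intermediate layer (layer name assumed here).
features = base_model.get_layer('add_5').output

# Trainable head on top of the frozen features.
x = tf.keras.layers.GlobalAveragePooling2D()(features)
outputs = tf.keras.layers.Dense(10, activation='softmax')(x)  # 10 classes, illustrative only

model = tf.keras.Model(inputs=base_model.input, outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')
</code></pre>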
python|tensorflow2.0|transfer-learning|keras-2|pre-trained-model
1
9,901
69,118,850
Convert a date with string format to timestamp in a loop
<p>I have a dict with dates; some are timestamps, some are dates as strings. I would like to iterate over the dict. If the val is a string, convert it to a timestamp. If not, only print it.</p> <pre><code>import pandas as pd ts = pd.Timestamp('2017-01-01T12') date = '2017-01-01' timedict = { &quot;timestamp&quot;: ts, &quot;date&quot;: date } for key, val in timedict.items(): if val == #string: val.strftime('%d.%m.%Y') print(val) else: print(val) </code></pre>
<p>Something like this?</p> <p>Using <a href="https://docs.python.org/3/library/datetime.html" rel="nofollow noreferrer"><code>datetime</code></a> to parse and format the string</p> <pre class="lang-py prettyprint-override"><code>from datetime import datetime import pandas as pd ts = pd.Timestamp(&quot;2017-01-01T12&quot;) date = &quot;2017-01-01&quot; timedict = { &quot;timestamp&quot;: ts, &quot;date&quot;: date, } for key, val in timedict.items(): if isinstance(val, str): val = datetime.strptime(val, &quot;%Y-%m-%d&quot;) val = val.strftime(&quot;%d.%m.%Y&quot;) print(val) else: print(val) </code></pre> <p>Which outputs:</p> <pre class="lang-none prettyprint-override"><code>2017-01-01 12:00:00 01.01.2017 </code></pre> <hr /> <p>Or, as you're already using pandas you can create a <code>pd.Timestamp</code> from the parsed <code>datetime</code> object:</p> <pre class="lang-py prettyprint-override"><code>for key, val in timedict.items(): if isinstance(val, str): val = pd.Timestamp(datetime.strptime(val, &quot;%Y-%m-%d&quot;)) print(val) else: print(val) </code></pre>
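<p>A variant of the same loop, left as a sketch, that lets pandas do the parsing instead of <code>datetime.strptime</code> (this assumes the strings are in an ISO-like format that <code>pd.to_datetime</code> can infer):</p> <pre><code>import pandas as pd

ts = pd.Timestamp('2017-01-01T12')
timedict = {'timestamp': ts, 'date': '2017-01-01'}

for key, val in timedict.items():
    if isinstance(val, str):
        # pd.to_datetime infers the ISO-like format, no explicit pattern needed
        val = pd.to_datetime(val).strftime('%d.%m.%Y')
    print(val)
</code></pre>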
python|pandas|date|timestamp
2
9,902
69,259,069
Transpose rows of a Multi-index df into columns
<p>I have a df that looks like this:</p> <pre><code> pid time id vid id1 vis_id1 pid1 t_0 vis_id1 pid2 t_1 id2 vis_id2 pid1 t_3 vis_id2 pid2 t_4 vis_id2 pid3 t_5 vis_id2 pid4 t_6 </code></pre> <p>I am looking to transpose the rows of the df for the <code>pid</code> and <code>time</code> for some <code>n</code> number of rows for each <code>i</code></p> <p>Before:</p> <pre><code> pid time id vid id1 vis_id1 pid1 t_0 vis_id1 pid2 t_1 id2 vis_id2 pid2 t_3 vis_id2 pid2 t_4 vis_id2 pid3 t_5 vis_id2 pid4 t_6 </code></pre> <p>After:</p> <pre><code> step1 step2 step3 step4 id vid id1 vis_id1 pid1 pid2 NA NA id2 vis_id2 pid1 pid2 pid3 pid4 </code></pre> <p>So the original <code>pid</code> becomes step 1 (I can just rename the column before I transpose) and then the previous <code>pids</code> are transposed such that they maintain their order (up-&gt;down) to (left-&gt;right). It would be helpful to remove the columns with time as well.</p>
<p>We can enumerate groups using <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>groupby cumcount</code></a> based on level=0, add as an additional level of the index (<a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> with <code>append=True</code> to add to existing MultiIndex), then <code>unstack</code> into columns:</p> <pre><code>new_df = df.set_index( df.groupby(level=0).cumcount() + 1, append=True ).unstack() </code></pre> <p><code>new_df</code>:</p> <pre><code> pid time 1 2 3 4 1 2 3 4 id vid id1 vis_id1 pid1 pid2 NaN NaN t_0 t_1 NaN NaN id2 vis_id2 pid1 pid2 pid3 pid4 t_3 t_4 t_5 t_6 </code></pre> <hr /> <p>To match shown output select only the columns desired, and flatten MultiIndex:</p> <pre><code>new_df = df[['pid']].set_index( df.groupby(level=0).cumcount() + 1, append=True ).unstack() new_df.columns = [f'step{i}' for i in new_df.columns.get_level_values(1)] </code></pre> <p><code>new_df</code>:</p> <pre><code> step1 step2 step3 step4 id vid id1 vis_id1 pid1 pid2 NaN NaN id2 vis_id2 pid1 pid2 pid3 pid4 </code></pre> <hr /> <p>Setup Used:</p> <pre><code>import pandas as pd df = pd.DataFrame({ 'id': ['id1', 'id1', 'id2', 'id2', 'id2', 'id2'], 'vid': ['vis_id1', 'vis_id1', 'vis_id2', 'vis_id2', 'vis_id2', 'vis_id2'], 'pid': ['pid1', 'pid2', 'pid1', 'pid2', 'pid3', 'pid4'], 'time': ['t_0', 't_1', 't_3', 't_4', 't_5', 't_6'] }).set_index(['id', 'vid']) </code></pre> <p>Related reading <a href="https://stackoverflow.com/questions/69089182/pandas-groupby-list-to-multiple-rows">Pandas Groupby / List to Multiple Rows</a></p>
python|pandas|dataframe|pivot|multi-index
0
9,903
69,188,132
How to convert all float64 columns to float32 in Pandas?
<p>Is there a generic way to convert all float64 values in a pandas dataframe to float32 values, without changing uint16 to float32? I don't know the signal names in advance but just want to have no float64.</p> <p>Something like:</p> <pre><code>if float64, then convert to float32, else nothing? </code></pre> <p>The structure of the data is:</p> <pre><code>DF.dtypes Counter uint16 p_007 float64 p_006 float64 p_005 float64 p_004 float64 </code></pre>
<p>Try this:</p> <pre><code>import numpy as np df[df.select_dtypes(np.float64).columns] = df.select_dtypes(np.float64).astype(np.float32) </code></pre>
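<p>As a quick end-to-end check of the idea on made-up data shaped like the question's frame (the column names and values are illustrative only):</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'Counter': np.arange(3, dtype=np.uint16),
                   'p_007': np.random.rand(3),
                   'p_006': np.random.rand(3)})
print(df.dtypes)   # Counter is uint16, p_007/p_006 are float64

float_cols = df.select_dtypes(np.float64).columns
df[float_cols] = df[float_cols].astype(np.float32)

print(df.dtypes)   # Counter stays uint16, the float columns become float32
</code></pre>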
python|pandas|type-conversion|dtype
3
9,904
68,876,300
How to insert values of one tensor into another?
<p>I am trying to insert tensor y into tensor x's final dimension (y_dim). The final tensor should be of size (100, 16, 16, 1), where the values of y are placed in each of the 100 entries along x's 0th dimension.</p> <pre><code>import torch y_dim = 1 x = torch.randn(100, 16, 16, y_dim) #OR x = torch.randn(100, 16, 16) y = torch.randn(100) Xy = torch.cat((x, y), dim=3) </code></pre>
<p>I think you are missing something in your understanding of tensors and dimensions. The easiest thing is to consider your tensor <code>x</code> as a batch containing <code>100</code> maps of width and height <code>16</code>, <em>i.e.</em> <code>100</code> <code>16x16</code>-maps. So you are manipulating a tensor containing <code>100*16*16</code> elements. Your <code>y</code>, on the other hand, contains <code>100</code> scalar values, it has <code>100</code> elements.</p> <p>I'm turning the question back to you:</p> <blockquote> <p>How would you concatenate <code>100</code> <code>16x16</code>-maps with <code>100</code> scalar values?</p> </blockquote> <hr /> <p>The above question has no answer. There are certain things that can be done though, assumptions that can be made on <code>y</code> in order to perform a concatenation:</p> <ul> <li><p>If you had a tensor <code>y</code> containing <code>16x16</code> maps as well, then yes this operation would be achievable:</p> <pre><code>&gt;&gt;&gt; x = torch.rand(100, 16, 16) &gt;&gt;&gt; y = torch.rand(100, 16, 16) &gt;&gt;&gt; torch.cat((x, y)).shape torch.Size([200, 16, 16]) </code></pre> </li> <li><p>If you consider the <code>y</code> in your question, you could expand the <code>100</code> scalar values to <code>16x16</code> maps. And, then concatenate with <code>x</code>:</p> <pre><code>&gt;&gt;&gt; x = torch.rand(100, 16, 16) &gt;&gt;&gt; y = torch.rand(100) &gt;&gt;&gt; y_r = y[:, None, None].repeat(1, 16, 16) &gt;&gt;&gt; torch.cat((x, y_r)) torch.Size([200, 16, 16]) </code></pre> </li> </ul>
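<p>If the intent is instead to attach each scalar of <code>y</code> to its own sample as an extra channel in the last dimension (an assumption about what the question is after, giving a (100, 16, 16, 2) result rather than the stated (100, 16, 16, 1)), a minimal sketch would be:</p> <pre><code>import torch

x = torch.randn(100, 16, 16, 1)
y = torch.randn(100)

# Broadcast each scalar of y to a 16x16 map with a trailing channel dim.
y_map = y[:, None, None, None].expand(-1, 16, 16, 1)

xy = torch.cat((x, y_map), dim=3)
print(xy.shape)  # torch.Size([100, 16, 16, 2])
</code></pre>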
python|pytorch
1
9,905
69,106,204
Problem with neural network: mat1 and mat2 shapes cannot be multiplied
<p>I implemented a simple neuron network like this:</p> <pre class="lang-py prettyprint-override"><code>import torch from torch import nn class Simple_NN(nn.Module): ''' Multilayer Perceptron. ''' def __init__(self, input_dim): super().__init__() self.input = input_dim #self.out = out_dim self.layer = nn.Linear(self.input, 1, bias=False) def getweights(self): return self.layer.weight def normalize(self): self.layer.weight.data /= self.layer.weight.data.sum() return self.layer.weight def forward(self, x, dim = 0): sort = torch.sort(x, dim, descending = True)[0] #top = torch.topk(x, 4, dim) sort = self.layer(sort) return sort </code></pre> <p>when I run this piece of code:</p> <pre><code>outputs = torch.tensor([[1.9, 0.4, 1.3, 0.8, 0.2, 0.0],[1.7, 1.4, 0.3, 1.8, 1.2, 1.1]]) model = Simple_NN(input_dim = outputs.shape[0]) model.getweights() model.normalize() </code></pre> <p>I get the following result:</p> <pre><code>Parameter containing: tensor([[0.9772, 0.0228]], requires_grad=True) </code></pre> <p>but, when I run this line:</p> <pre><code>model(outputs, dim=0) </code></pre> <p>I get this error:</p> <pre><code> &lt;ipython-input-1-dd06de9bb6ad&gt; in forward(self, x, dim) 20 sort = torch.sort(x, dim, descending = True)[0] 21 #top = torch.topk(x, 4, dim) ---&gt; 22 sort = self.layer(sort) 23 return sort RuntimeError: mat1 and mat2 shapes cannot be multiplied (2x6 and 2x1) </code></pre> <p>How can I solve this problem?</p>
<p>As you didn't provide more details, here's 2 possible ways to solve this:</p> <ol> <li><p>If the <code>batch_size=2</code>, the <code>input_dim</code> should be 6, not 2:</p> <pre class="lang-py prettyprint-override"><code>model = Simple_NN(input_dim = outputs.shape[1]) # change [0] to [1] </code></pre> </li> <li><p>If the <code>batch_size=6</code>, then <code>outputs</code> needs to be transposed:</p> <pre class="lang-py prettyprint-override"><code>model(outputs.t(), dim=0) # add .t() </code></pre> </li> </ol> <p>I think the correct solution to your case is the first one, but both of them work. It depends on what you actually want.</p>
python|neural-network|pytorch|runtime-error
0
9,906
60,819,177
Calling a multiple argument function generated by 'lambdify' on numpy array
<p>I wrote an expression f in SymPy as shown in the code and then converted it into a function using <code>lambdify</code>. Then, I vectorized it using <code>np.vectorize(f)</code> to be able to apply it on a numpy array.</p> <pre><code>import numpy as np from math import exp a = Symbol('a') x = Symbol('x') b = Symbol('b') c = Symbol('c') from sympy import * f = exp(-(a+b+c)*x)*(4+exp(-(a+b)*x) -2*exp(-a*x) - 2*exp(-c*x)) f = lambdify([x,(a,b,c)], f) vff = np.vectorize(f) t = np.arange(0, 5, 0.01, dtype=np.float64) y = vff(t, (1,1,1)) # (1,1,1) stands for (a,b,c) </code></pre> <p>But while doing so, the last line throws the following error.</p> <blockquote> <p><code>TypeError: _lambdifygenerated() missing 1 required positional argument: '_Dummy_191'</code>.</p> </blockquote> <p>I think the syntax might be wrong. I searched on the internet, but could not find the right syntax. Can anyone tell me the correct syntax?</p>
<pre><code>In [8]: print(f.__doc__) Created with lambdify. Signature: func(x, arg_1) Expression: (exp(x*(-a - b)) + 4 - 2*exp(-c*x) - 2*exp(-a*x))*exp(x*(-a - b - c)) Source code: def _lambdifygenerated(x, _Dummy_166): [a, b, c] = _Dummy_166 return ((exp(x*(-a - b)) + 4 - 2*exp(-c*x) - 2*exp(-a*x))*exp(x*(-a - b - c))) </code></pre> <p>Imported modules:</p> <p><code>f</code> can be run with:</p> <pre><code>In [9]: f(np.arange(3),(1,1,1)) Out[9]: array([1. , 0.13262366, 0.00861856]) </code></pre> <p>Since <code>f</code> works with an array input, I don't think you need the <code>vectorized</code> form.</p> <p>I get your error if I call <code>vff</code> with one argument:</p> <pre><code>In [14]: vff(np.arange(3)) --------------------------------------------------------------------------- ... TypeError: _lambdifygenerated() missing 1 required positional argument: '_Dummy_166' </code></pre> <p>With 2 arguments I get a different error:</p> <pre><code>In [15]: vff(np.arange(3),(1,1,1)) --------------------------------------------------------------------------- ... &lt;lambdifygenerated-1&gt; in _lambdifygenerated(x, _Dummy_166) 1 def _lambdifygenerated(x, _Dummy_166): ----&gt; 2 [a, b, c] = _Dummy_166 3 return ((exp(x*(-a - b)) + 4 - 2*exp(-c*x) - 2*exp(-a*x))*exp(x*(-a - b - c))) TypeError: 'numpy.int64' object is not iterable </code></pre> <p>That's because <code>vectorize</code> is passing scalar <code>1</code> as the second argument. <code>vectorize</code> passes scalar tuples to the function, not arrays. It's designed from functions that take scalar arguments, not arrays or even tuples. There are some ways around that - but keep in mind that <code>vectorize</code> is <strong>not</strong> a speed tool.</p> <p>Since you don't want <code>vectorize</code> to iterate on the (a,b,c) argument, we can 'exclude' it:</p> <pre><code>In [16]: vff = np.vectorize(f, excluded=[1]) In [17]: vff(np.arange(3),(1,1,1)) Out[17]: array([1. , 0.13262366, 0.00861856]) </code></pre> <p>That works the same as <code>f</code> (above). But it is much slower:</p> <pre><code>In [18]: timeit vff(np.arange(300),(1,1,1)) 5.29 ms ± 11.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [19]: timeit f(np.arange(300),(1,1,1)) 103 µs ± 41 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each) </code></pre> <p><code>vectorize</code> is even slower than a list comprehension:</p> <pre><code>In [20]: timeit np.array([f(i,(1,1,1)) for i in range(300)]) 3.91 ms ± 6.17 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) </code></pre>
python|numpy|vectorization|sympy
2
9,907
71,452,274
explode the pandas nested array in python
<p>I am reading data from MongoDB and dropping in s3. Reading the data using Athena.</p> <p>This is my collection which contains Items columns which is an array. How to explode that into separate columns when saving that to s3.</p> <pre><code>{&quot;_id&quot;:{&quot;$oid&quot;:&quot;11111111&quot;}, &quot;receiptId&quot;:&quot;rtrtrtrttrtrtrtr&quot;, &quot;paymentSystem&quot;:&quot;CARD&quot;, &quot;lastFourDigit&quot;:&quot;1111&quot;, &quot;cardType&quot;:&quot;ghsl&quot;, &quot;paidOn&quot;:{&quot;$numberLong&quot;:&quot;1623078706000&quot;}, &quot;currency&quot;:&quot;USD&quot;, &quot;totalAmountInCents&quot;:{&quot;$numberInt&quot;:&quot;0000&quot;}, &quot;items&quot;:[{&quot;title&quot;:&quot;Jun 21 - Jun 21,2022&quot;, &quot;description&quot;:&quot;Starter&quot;, &quot;currency&quot;:&quot;USD&quot;, &quot;amountInCents&quot;:{&quot;$numberInt&quot;:&quot;0000&quot;}, &quot;itemType&quot;:&quot;SUBSCRIPTION_PLAN&quot;, &quot;id&quot;:{&quot;$numberInt&quot;:&quot;1&quot;}, &quot;frequency&quot;:&quot;YEAR&quot;, &quot;periodStart&quot;:{&quot;$numberLong&quot;:&quot;1624288306000&quot;}, &quot;periodEnd&quot;:{&quot;$numberLong&quot;:&quot;1655824306000&quot;}}], &quot;subscriptionPlanTitle&quot;:&quot;Starter&quot;, &quot;subscriptionPlanFrequency&quot;:&quot;YEAR&quot;, &quot;uuid&quot;:&quot;1111111111&quot;, &quot;createTimestamp&quot;:{&quot;$numberLong&quot;:&quot;1624292188650&quot;}, &quot;updateTimestamp&quot;:{&quot;$numberLong&quot;:&quot;1624292188650&quot;}} </code></pre> <p>Python Code I tried,</p> <pre class="lang-py prettyprint-override"><code>mylist = [] myresult = collection.find(query) mylist = [] for x in myresult: mylist.append(x) df = json_normalize(mylist) df1 = df.applymap(str) </code></pre> <p>I am able to save that into parquet. But items all are in a single column. Is there a way to explode dynamically?</p> <p>output schema might be</p> <pre><code> _id object id object createTimestamp object updateTimestamp object deleteTimestamp object receiptId object paymentSystem object lastFourDigit object cardType object paidOn object currency object totalAmountInCents object items.title object items.description object items.currency object items.amountInCents object items.itemType object items.id object items.frequency object items.periodstart object items.periodend object subscriptionPlanTitle object subscriptionPlanFrequency object uuid object consumerEmail object taxAmountInCents object gifted object </code></pre>
<p>You could use <code>json_normalize</code>:</p> <pre><code>out = pd.json_normalize(data, ['items'], list(data.keys() - {'items'}), record_prefix = 'items.') </code></pre> <p>Another option is to create a DataFrame with <code>data</code>; then <code>explode</code> and build a DataFrame separately with &quot;items&quot; column; then <code>join</code>:</p> <pre><code>df = pd.json_normalize(data) out1 = df.join(df['items'].explode().pipe(lambda x: pd.DataFrame(x.tolist())).add_prefix('items.')).drop(columns='items') </code></pre> <p>Output:</p> <pre><code> items.title items.description items.currency items.itemType \ 0 Jun 21 - Jun 21,2022 Starter USD SUBSCRIPTION_PLAN items.frequency items.amountInCents.$numberInt items.id.$numberInt \ 0 YEAR 0000 1 items.periodStart.$numberLong items.periodEnd.$numberLong cardType ... \ 0 1624288306000 1655824306000 ghsl ... uuid lastFourDigit _id currency \ 0 1111111111 1111 {'$oid': '11111111'} USD totalAmountInCents createTimestamp \ 0 {'$numberInt': '0000'} {'$numberLong': '1624292188650'} paidOn updateTimestamp \ 0 {'$numberLong': '1623078706000'} {'$numberLong': '1624292188650'} subscriptionPlanTitle paymentSystem 0 Starter CARD [1 rows x 22 columns] </code></pre> <p>Note that some of the keys in the metadata (e.g. &quot;taxAmountInCents&quot;) don't exist in the sample.</p>
python-3.x|pandas|dataframe|json-normalize|pandas-explode
1
9,908
71,569,951
Iterate dataframe and assign value to each row- I get the same value while I want different ones
<p>I'm using the library isbntools to assign book titles to isbns. From a dataframe that has isbns, I want to create a column named title and assign the title to the corresponding isbn. The problem is I get the same title.</p> <p>Example dataframe:</p> <p><code>isbn</code></p> <p><code>01234567</code></p> <p>Desired output</p> <p><code>isbn, title</code> <code>01234567, Curious George</code></p> <p>Code:</p> <pre><code>from isbntools.app import * for i in range(len(df_all['isbn'])): for isbnz in df_all['isbn']: meta_dict = meta(isbnz, service='goob') title = meta_dict['Title'] df_all.iloc[i, df['Title'][i]] </code></pre> <p>I tried with iloc but it didn't seem to work.</p>
<p>Use:</p> <pre><code># isbnlib is already installed as a dependency of isbntools import isbnlib def get_title(isbn): try: return isbnlib.meta(isbn)['Title'] except isbnlib.NotValidISBNError: return None df['Title'] = df['isbn'].astype(str).map(get_title) </code></pre> <pre><code>&gt;&gt;&gt; df isbn Title 0 01234567 None 1 9780007525546 The Lord Of The Rings </code></pre>
python|python-3.x|pandas|dataframe
0
9,909
71,661,634
How to get only the column values of a dataframe (without the reference formula)
<p>I want to get only the column values of a csv file rather than the reference formula.</p> <pre><code> df_csv = pd.read_csv(file_name) print(df_csv[&quot;column_head&quot;]) </code></pre> <p>The output is not the value. It is a csv reference formula.</p> <pre><code>0 =ROUND(IF(J2,I2/J2,0),4) 1 =ROUND(IF(J3,I3/J3,0),4) </code></pre> <p>But I want only the cell values, not the formula. How can I do this in python?</p>
<p>Something seems to be wrong with the format of the file you read from. I checked on a spreadsheet containing formulas for some cells. here as an example:</p> <p><a href="https://i.stack.imgur.com/ApniF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ApniF.png" alt="XLTest" /></a></p> <p>The 'D' column contains moving averages from the 'B' column. I saved the spreadsheet as a 'xlsx' file and as a 'csv' file.</p> <p>For the first, importing with:</p> <pre><code>import pandas as pd df_ex = pd.read_excel(&quot;xltest.xlsx&quot;) df_ex = df_ex[df_ex['Global 10 years MA'].notnull()] df_ex.head() </code></pre> <p>and the second, importing with:</p> <pre><code>df_ex2 = pd.read_csv('xltest.csv') df_ex2[df_ex2['Global 10 years MA'].notnull()].head() </code></pre> <p>both gave this output (note: the 'notnull()' mask is just extracting the lines with a value in this example):</p> <p><a href="https://i.stack.imgur.com/b8ZZA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b8ZZA.png" alt="enter image description here" /></a></p> <p>Extracting the &quot;Column with a Formula&quot; from the first dataframe:</p> <pre><code>ser = df_ex['Global 10 years MA'] ser.head() </code></pre> <p>gives this output:</p> <p><a href="https://i.stack.imgur.com/JECXo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JECXo.png" alt="enter image description here" /></a></p> <p>As you see, values, and no excel formulas. Check you save your file as xlsx or csv format, in your spreadsheet program.</p>
python|pandas|dataframe|csv
1
9,910
71,608,715
Having trouble expanding/normalizing a dataframe column of dictionary values into a dataframe/ other columns
<p><img src="https://i.stack.imgur.com/qciwZ.png" alt="enter image description here" /></p> <p>I'm trying to expand a dataframe column of dictionaries into it's own dataframe/other columns. I have already tried using json_normalize, iteration, and list comprehension but for some reason it just returns a blank dataframe. I've attached a link to the CSV I'm working with.</p> <p><a href="https://drive.google.com/file/d/1tHjy9S1XVosYjKTlwp6eMU0UYPpnuA70/view?usp=sharing" rel="nofollow noreferrer">its a csv file with yelp data and the this issue is occurring with all the columns of dictionaries</a></p> <pre><code>import matplotlib.pyplot as plt import pandas as pd import requests from pandas.io.json import json_normalize import seaborn as sns import json from google.colab import files import io uploaded = files.upload() yelpdf = pd.read_csv(io.BytesIO(uploaded['yelp_reviews.csv'])) print(yelpdf['Ambience']) df2 = pd.json_normalize(yelpdf['Ambience']) print(df2.info()) print(df2.shape) print(df2.head()) </code></pre>
<p>The issue is that the elements in the 'Ambience' column are strings, not dictionaries. You just need to convert them to dictionaries before using <code>json_normalize</code>. You can do this using the <code>literal_eval</code> function from the ast Python package.</p> <pre><code>import ast yelpdf['Ambience'] = yelpdf['Ambience'].apply(lambda x: ast.literal_eval(x) if pd.notnull(x) else x) </code></pre> <p>There are some NaNs in the column, so the check converts only the strings and leaves the NaNs as they are.</p> <p>Then you can just run your code as normal -</p> <pre><code>df2 = pd.json_normalize(yelpdf['Ambience']) </code></pre> <p>Which should yield the dataframe you want.</p>
python|json|pandas|dataframe
1
9,911
42,562,199
How to find common names from the index (first) column using Python?
<p>Find the common name from the index (first) column using Python and sum its following columns from the same row.</p> <h2>For example, I have the two csv files below.</h2> <pre><code>df1 Name sub1 sub2 sub3 X 1 2 5 Y 4 5 6 df2 Name sub1 sub2 sub3 A 3 5 3 Y 3 1 4 </code></pre> <p>The output should display only Y in the first column as common, and display the column contents as in df2, but the sub3 column should be the average from df1 and df2.</p> <pre><code>output Name sub1 sub2 sub3 Y 3(df2) 1(df2) 5=(df1+df2)/2 </code></pre>
<p>Pandas merge with on = 'Name' will give you only the rows with common name. You can then drop unnecessary columns and find mean of sub3 like this.</p> <pre><code>df_result = pd.merge(df2, df1, on = 'Name') df_result['sub3'] = df_result[['sub3_x', 'sub3_y']].mean(axis = 1) df_result = df_result.drop(['sub3_x','sub1_y','sub2_y','sub3_y'], axis = 1) df_result.columns = ['Name', 'sub1', 'sub2', 'sub3'] </code></pre> <p>Resulting dataframe</p> <pre><code> Name sub1 sub2 sub3 0 Y 3 1 5 </code></pre>
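<p>A variant of the same merge, left as a sketch, that avoids listing the suffixed columns by hand (the '_df1' suffix is an arbitrary label):</p> <pre><code>import pandas as pd

df1 = pd.DataFrame({'Name': ['X', 'Y'], 'sub1': [1, 4], 'sub2': [2, 5], 'sub3': [5, 6]})
df2 = pd.DataFrame({'Name': ['A', 'Y'], 'sub1': [3, 3], 'sub2': [5, 1], 'sub3': [3, 4]})

# Keep df2's values and average only sub3 with df1's value.
m = pd.merge(df2, df1, on='Name', suffixes=('', '_df1'))
m['sub3'] = m[['sub3', 'sub3_df1']].mean(axis=1)
result = m[['Name', 'sub1', 'sub2', 'sub3']]
print(result)   #   Name  sub1  sub2  sub3
                # 0    Y     3     1   5.0
</code></pre>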
python|pandas
1
9,912
43,383,151
Convert boolean DataFrame to binary number array
<p>I have a boolean pandas DataFrame, as follows</p> <pre><code>aaa = pd.DataFrame([[False,False,False], [True,True,True]]) </code></pre> <p>I want to convert it to a binary number array; for this DataFrame "aaa", the result is [000,111].</p> <p>How can I implement this conversion?</p> <p>Any help will be greatly appreciated. Thanks</p>
<p>You can do:</p> <pre><code>aaa = pd.DataFrame([[False,False,False], [True,True,True]]) aaa=aaa.astype(int) </code></pre> <p>Then <code>aaa</code> is</p> <pre><code> 0 1 2 0 0 0 0 1 1 1 1 </code></pre> <p>If you want to get the array <code>['000','111']</code> you can do:</p> <pre><code>aaa = pd.DataFrame([[False,False,False], [True,True,True]]) aaa=aaa.astype(int).astype(str) [''.join(i) for i in aaa.values.tolist()] </code></pre>
python|pandas
5
9,913
43,220,729
access elements from matrix in numpy
<p>I have a matrix A of size mXn and an array of size m. The array indicates the index of the column which has to be indexed from A. So, an example would be</p> <pre><code>A = [[ 1,2,3],[1,4,6],[2,9,0]] indices = [0,2,1] </code></pre> <p>The output I want is</p> <p><code>C = [1,6,9]</code> (corresponding values from each row of matrix A)</p> <p>What's a vectorized way to do this? Thanks</p>
<p>Use <a href="https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing" rel="nofollow noreferrer">advanced indexing</a>:</p> <pre><code>A = np.array([[ 1,2,3],[1,4,6],[2,9,0]]) indices = np.array([0,2,1]) # here use an array [1,2,3] to represent the row positions, and combined with indices as # column positions, it gives an array at corresponding positions (1, 0), (2, 2), (3, 1) A[np.arange(A.shape[0]), indices] # array([1, 6, 9]) </code></pre>
python|numpy
3
9,914
43,401,393
What is a _Head object in Tensorflow?
<p>Looking at the docs for <a href="https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/contrib/learn/DNNLinearCombinedEstimator" rel="nofollow noreferrer">DNNLinearCombinedEstimator</a>, I see the first param is a _Head object:</p> <blockquote> <p>Args:</p> <p>head: A _Head object.</p> </blockquote> <p>I can't find docs about this at all.</p> <p>What is it?</p>
<p>I found this: <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/learn/python/learn/estimators/head.py#L53" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/learn/python/learn/estimators/head.py#L53</a></p> <blockquote> <p>Interface for the head/top of a model.</p> <p>Given logits (or output of a hidden layer), a Head knows how to compute predictions, loss, default metric and export signature.</p> </blockquote> <p>Reading on, it looks like it's just some object from which you can get the predictions, loss, and more of a model, created to simplify model_fn(). And you typically have one head object per model objective.</p>
tensorflow
4
9,915
43,227,040
Why does transposing a numpy array rotate it 90 degrees?
<p>I am trying to read images from an <code>lmdb</code> <code>dataset</code>, augment each one and then save them into another <code>dataset</code> for being used in my trainings.<br> These images axis were initially changed to <code>(3,32,32)</code> when they were being saved into the <code>lmdb dataset</code>, So in order to augment them I had to transpose them back into their actual shape.<br> The problem is whenever I try to display them using either <code>matplotlib</code>'s <code>show()</code> method or <code>scipy</code>'s <code>toimage()</code>, they show a rotated version of the image. So we have : </p> <pre><code>img_set = np.transpose(data_train,(0,3,2,1)) #trying to display an image using pyplot, makes it look like this: plt.subplot(1,2,1) plt.imshow(img_set[0]) </code></pre> <p><a href="https://i.stack.imgur.com/ux3Qr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ux3Qr.png" alt="enter image description here"></a></p> <p>showing the same image using <code>toimage</code> : </p> <p><a href="https://i.stack.imgur.com/Qqi5p.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Qqi5p.png" alt="enter image description here"></a></p> <p>Now if I dont transpose <code>data_train</code>, <code>pyplot</code>'s <code>show()</code> generates an error while <code>toimage()</code> displays the image well:<br> <a href="https://i.stack.imgur.com/gmpoM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gmpoM.png" alt="enter image description here"></a></p> <p>What is happening here?<br> When I feed the transposed data_train to my augmenter, I also get the result rotated just like previous examples.<br> Now I'm not sure whether this is a displaying issue, or the actual images are indeed rotated!<br> What should I do ? </p>
<p>First, look closely. The transoposed array is not rotated but mirrored on the diagonal (i.e. X and Y axes are swapped).</p> <p>The original shape is <code>(3,32,32)</code>, which I interpret as <code>(RGB, X, Y)</code>. However, <code>imshow</code> expects an array of shape <code>MxNx3</code> - the color information must be in the last dimension.</p> <p>By transposing the array you invert the order of dimensions: <code>(RGB, X, Y)</code> becomes <code>(Y, X, RGB)</code>. This is fine for matplotlib because the color information is now in the last dimension but X and Y are swapped, too. If you want to preserve the order of X, Y you can tell <code>transpose to do so</code>:</p> <pre><code>import numpy as np img = np.zeros((3, 32, 64)) # non-square image for illustration print(img.shape) # (3, 32, 64) print(np.transpose(img).shape) # (64, 32, 3) print(np.transpose(img, [1, 2, 0]).shape) # (32, 64, 3) </code></pre> <hr> <p>When using <code>imshow</code> to display an image be aware of the following pitfalls:</p> <ol> <li><p>It treats the image as a matrix, so the dimensions of the array are interpreted as (ROW, COLUMN, RGB), which is equivalent to (VERTICAL, HORIZONTAL, COLOR) or (Y, X, RGB).</p></li> <li><p>It changes direction of the y axis so the upper left corner is img[0, 0]. This is different from matplotlib's normal coordinate system where (0, 0) is the bottom left.</p></li> </ol> <p>Example:</p> <pre><code>import matplotlib.pyplot as plt img = np.zeros((32, 64, 3)) img[1, 1] = [1, 1, 1] # marking the upper right corner white plt.imshow(img) </code></pre> <p><a href="https://i.stack.imgur.com/rdV0P.png" rel="noreferrer"><img src="https://i.stack.imgur.com/rdV0P.png" alt="enter image description here"></a></p> <p>Note that the smaller first dimension corresponds to the vertical direction of the image.</p>
python|numpy|matplotlib|scipy|caffe
10
9,916
72,186,590
Cartesian product for values in a single row pandas df
<p>I have a df with 14 columns and 20,000 rows. I would like to create a two column dataframe that represents each unique pairing for the data entries within each single row. Example:</p> <pre><code>#sample df: ​data = {'first': ['red', 'blue', 'yellow'], 'second': ['blue', 'pink', 'orange'], 'third': ['green', 'grey', None]} df = pd.DataFrame(data) df first second third 0 red blue green 1 blue pink grey 2 yellow orange None </code></pre> <p>for this df input my desired output would be:</p> <pre><code> pairA pairB 0 red blue 1 red green 2 blue green 3 blue pink 4 blue grey 5 pink grey 6 yellow orange 7 yellow None 8 orange None </code></pre> <p>I have tried to use itertools product, but have only made that work column-wise for two columns. I believe a for loop would take way too long with the size of the data. Is there a pandas way to do this?</p>
<p>Try this:</p> <pre class="lang-py prettyprint-override"><code>from itertools import combinations combo = df.apply(lambda row: list(combinations(row, 2)), axis=1).explode().to_list() pd.DataFrame(combo, columns=[&quot;pairA&quot;, &quot;pairB&quot;]) </code></pre>
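<p>Applied to the sample frame from the question, the same approach gives the desired pairs; shown end to end here for clarity:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd
from itertools import combinations

data = {'first': ['red', 'blue', 'yellow'],
        'second': ['blue', 'pink', 'orange'],
        'third': ['green', 'grey', None]}
df = pd.DataFrame(data)

# One list of 2-element combinations per row, then flatten and rebuild as two columns.
combo = df.apply(lambda row: list(combinations(row, 2)), axis=1).explode().to_list()
out = pd.DataFrame(combo, columns=['pairA', 'pairB'])
print(out)
#   pairA  pairB
# 0   red   blue
# 1   red  green
# 2  blue  green
# ...
</code></pre>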
python|pandas|dataframe
1
9,917
72,147,167
Add a comma after two words in pandas
<p>I have the following texts in a df column:</p> <pre><code>La Palma La Palma Nueva La Palma, Nueva Concepcion El Estor El Estor Nuevo Nuevo Leon San Jose La Paz Colombia Mexico Distrito Federal El Estor, Nuevo Lugar </code></pre> <p>What I need is to add a comma at the end of each row, but with the condition that the row contains only two words. I found a partial solution:</p> <pre><code>df['Column3'] = df['Column3'].apply(lambda x: str(x)+',') </code></pre> <p>(solution found in stackoverflow)</p>
<p>Given:</p> <pre><code> words 0 La Palma 1 La Palma Nueva 2 La Palma, Nueva Concepcion 3 El Estor 4 El Estor Nuevo 5 Nuevo Leon 6 San Jose 7 La Paz Colombia 8 Mexico Distrito Federal 9 El Estor, Nuevo Lugar </code></pre> <p>Doing:</p> <pre><code>df.words = df.words.apply(lambda x: x+',' if len(x.split(' ')) == 2 else x) print(df) </code></pre> <p>Outputs:</p> <pre><code> words 0 La Palma, 1 La Palma Nueva 2 La Palma, Nueva Concepcion 3 El Estor, 4 El Estor Nuevo 5 Nuevo Leon, 6 San Jose, 7 La Paz Colombia 8 Mexico Distrito Federal 9 El Estor, Nuevo Lugar </code></pre>
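<p>The same condition can also be applied without <code>apply</code>, as a vectorized sketch (assuming the column is named <code>words</code> as above):</p> <pre><code># Append a comma only to rows that split into exactly two words.
mask = df['words'].str.split().str.len() == 2
df.loc[mask, 'words'] += ','
</code></pre>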
python|pandas|conditional-statements
0
9,918
72,185,667
intersection of two geopandas GeoSeries gives warning "The indices of the two GeoSeries are different." and few matches
<p>I am using geopandas for finding intersections between points and polygons. When I use the following:</p> <pre><code>intersection_mb = buffers_df.intersection(rest_VIC) </code></pre> <p>I get this output with a warning basically saying there are no intersections:</p> <pre><code>0 None 112780 None 112781 None 112782 None 112784 None ... 201314 None 201323 None 201403 None 201404 None 201444 None Length: 3960, dtype: geometry </code></pre> <p>Warning message:</p> <pre><code> C:\Users\Name\Anaconda3\lib\site-packages\geopandas\base.py:31: UserWarning: The indices of the two GeoSeries are different. warn(&quot;The indices of the two GeoSeries are different.&quot;) </code></pre> <p>I looked for any suggestions and found that I could solve by setting crs for both geoseries for which the intersection is to be performed on, but it did not work.</p> <pre><code>rest_VIC = rest_VIC.set_crs(epsg=4326, allow_override=True) buffers_df = buffers_df.set_crs(epsg=4326, allow_override=True) </code></pre> <p>Any suggestions will be helpful. Thanks.</p>
<p><a href="https://geopandas.org/en/stable/docs/reference/api/geopandas.GeoSeries.intersection.html" rel="nofollow noreferrer"><code>geopandas.GeoSeries.intersection</code></a> is an <em>element-wise</em> operation. From the intersection docs:</p> <blockquote> <p>The operation works on a 1-to-1 row-wise manner</p> </blockquote> <p>There is an optional <code>align</code> argument which determines whether the series should be first compared based after aligning based on the index (if True, the default) or if the intersection operation should be performed row-wise based on position.</p> <p>So the warning you're getting, and the resulting NaNs, are because you're performing an elementwise comparison on data with unmatched indices. The same issue would occur in pandas when trying to merge columns for DataFrames with un-aligned indices.</p> <p>If you're trying to find the mapping from points to polygons across all combinations of rows across the two dataframes, you're looking for a spatial join, which can be done with <a href="https://geopandas.org/en/stable/docs/reference/api/geopandas.sjoin.html" rel="nofollow noreferrer"><code>geopandas.sjoin</code></a>:</p> <pre class="lang-py prettyprint-override"><code>intersection_mb = geopandas.sjoin( buffers_df, rest_VIC, how='outer', predicate='intersects', ) </code></pre> <p>See the geopandas guide to <a href="https://geopandas.org/en/stable/docs/user_guide/mergingdata.html" rel="nofollow noreferrer">merging data</a> for more info.</p>
geospatial|intersection|geopandas
2
9,919
50,245,076
How can I rename a column that contains special (Greek) characters
<p>I have a dataframe and early in my script I name my columns using:</p> <pre><code>beta = 1.17 names =np.arange((beta-0.05),(beta+0.05),.01) dfs.columns = [r'$\beta$'+str(i) for i in names] </code></pre> <p>Later in the script I want to replace <code>r'$\beta$'</code> with <code>ats</code>.</p> <p>I have tried the following:</p> <pre><code>dfs.columns = dfs.columns.str.replace("[(r'$\beta$')]", "ats") </code></pre> <p>But it isn't working as expected. Any suggestions are appreciated.</p> <p>Thanks.</p>
<p>You need to escape the special regex character <code>$</code>:</p> <pre><code>beta = 1.17 names =np.arange((beta-0.05),(beta+0.05),.01) dfs = pd.DataFrame(0, columns=names, index=[0]) dfs.columns = [r'$\beta$'+str(i) for i in names] dfs.columns = dfs.columns.str.replace(r'\$\\beta\$', "ats") print (dfs) ats1.1199999999999999 ats1.13 ats1.14 ats1.15 ats1.16 ats1.17 \ 0 0 0 0 0 0 0 ats1.18 ats1.19 ats1.2 ats1.21 ats1.22 0 0 0 0 0 0 </code></pre>
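<p>Alternatively, a sketch that sidesteps the escaping by asking for a literal (non-regex) replacement; the <code>regex</code> keyword assumes a reasonably recent pandas version:</p> <pre><code># Literal substring replacement, so no regex metacharacters need escaping.
dfs.columns = dfs.columns.str.replace(r'$\beta$', 'ats', regex=False)
</code></pre>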
pandas|dataframe|replace
2
9,920
50,302,790
How to feed tensorflow image value of shape (3,3) into targets which has shape (?, 2)
<p>I am trying to feed a value of shape 3,3 into a tensor of shape (?, 2). My question is how do I reshape my (3,3) value so it is compatible with the latter.</p> <p>Here is my main training loop:</p> <pre><code>for epoch in range(epochs): batches = dg.get_mini_batches(batchSize,(128,128), allchannel=False) for imgs ,labels in batches: imgs=np.divide(imgs, 255) error, sumOut, acu, steps,_ = sess.run([cost, summaryMerged, accuracy,global_step,optimizer], feed_dict={input_img: imgs, target_labels: labels}) writer.add_summary(sumOut, steps) print("epoch=", epoch, "Total Samples Trained=", steps*batchSize, "err=", error, "accuracy=", acu) if steps % 100 == 0: print("Saving the mdl") saver.save(sess, mdl_save_path+mdl_name, global_step=steps) Traceback (most recent call last): File "C:/Users/name/PycharmProjects/tf-foodar/tf-foodar-beta.py", line 87, in &lt;module&gt; feed_dict={input_img: imgs, target_labels: labels}) File "C:\Users\name\anaconda\envs\lib\site-packages\tensorflow\python\client\session.py", line 895, in run run_metadata_ptr) File "C:\Users\name\anaconda\envs\lib\site-packages\tensorflow\python\client\session.py", line 1104, in _run % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape()))) ValueError: Cannot feed value of shape (3, 3) for Tensor 'Target/Targets:0', which has shape '(?, 2)' </code></pre>
<p>You can't.</p> <p>A tensor with shape <code>(3,3)</code> can't be reshaped (using <code>tf.reshape</code>) into a tensor with shape <code>(?,2)</code> where <code>?</code> is an unknown dimension and <code>2</code> is fixed.</p> <p>This is because you have <code>3 x 3 = 9</code> elements and <code>9/2 = 4.5</code> is not an integer (4.5 is the computed value of the unknown dimension). Hence you can't create a new tensor with shape <code>(4.5, 2)</code>.</p> <p>However, there's something wrong with your reasoning. Ask yourself: why do I want to feed the network with data that the network is not able to accept?</p>
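<p>A quick numeric illustration of that divisibility argument (a standalone sketch, not a fix for the training loop itself; the labels fed to the placeholder need to already have shape (batch, 2)):</p> <pre><code>import numpy as np

labels = np.zeros((3, 3))
print(labels.size)                     # 9 elements
# labels.reshape(-1, 2)                # raises ValueError - 9 elements cannot be split into rows of 2

labels_ok = np.zeros((4, 2))
print(labels_ok.reshape(-1, 2).shape)  # (4, 2), works because 8 is divisible by 2
</code></pre>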
python|tensorflow
2
9,921
50,550,126
TF DATA API: How to produce tensorflow input to object set recognition
<p>Consider this problem: select a random number of samples from a random subject in an image dataset (like ImageNet) as an input element for a Tensorflow graph which functions as an object set recognizer. For each batch, each class has the same number of samples to facilitate computation. But a different batch would have a different number of images for one class, i.e. batch_0:<code>num_imgs_per_cls</code>=2; batch_1000:<code>num_imgs_per_cls</code>=3.</p> <p>If there is existing functionality in Tensorflow, an explanation of the whole process from scratch (like from directories of images) would be really appreciated.</p>
<p><em>There is a very similar answer by @mrry <a href="https://stackoverflow.com/questions/50356677/how-to-create-tf-data-dataset-from-directories-of-tfrecords">here</a>.</em></p> <h1>Sampling balanced batches</h1> <p>In face recognition we often use triplet loss (or similar losses) to train the model. The usual way to sample triplets to compute the loss is to create a <strong>balanced batch of images</strong> where we have for instance 10 different classes (i.e. 10 different people) with 5 images each. This gives a total batch size of 50 in this example.</p> <p>More generally the problem is to sample <code>num_classes_per_batch</code> (10 in the example) classes, and then sample <code>num_images_per_class</code> (5 in the example) images for each class. The total batch size is:</p> <pre class="lang-py prettyprint-override"><code>batch_size = num_classes_per_batch * num_images_per_class </code></pre> <hr> <h2>Have one dataset for each class</h2> <p>The easiest way to deal with a lot of different classes (100,000 in MS-Celeb) is to create one dataset for each class.<br> For instance you can have one tfrecord for each class and create the datasets like this:</p> <pre class="lang-py prettyprint-override"><code># Build one dataset per class. filenames = ["class_0.tfrecords", "class_1.tfrecords"...] per_class_datasets = [tf.data.TFRecordDataset(f).repeat(None) for f in filenames] </code></pre> <hr> <h2>Sample from the datasets</h2> <p>Now we would like to be able to sample from these datasets. For instance we want the following labels in our batch:</p> <pre><code>1 1 1 3 3 3 9 9 9 4 4 4 </code></pre> <p>This corresponds to <code>num_classes_per_batch=4</code> and <code>num_images_per_class=3</code>.</p> <p>To do this we will need to use features that will be released in <code>r1.9</code>. 
The function should be called <code>tf.contrib.data.choose_from_datasets</code> (see <a href="https://github.com/tensorflow/tensorflow/commit/c2643d12c552799532b933238711d5c433e4df17" rel="nofollow noreferrer">here</a> for a discussion on this).<br> It should look like:</p> <pre class="lang-py prettyprint-override"><code>def choose_from_datasets(datasets, selector): """Chooses elements with indices from selector among the datasets in `datasets`.""" </code></pre> <p>So we create this <code>selector</code> which will output <code>1 1 1 3 3 3 9 9 9 4 4 4</code> and combine it with <code>datasets</code> to obtain our final dataset that will output balanced batches:</p> <pre class="lang-py prettyprint-override"><code>def generator(_): # Sample `num_classes_per_batch` classes for the batch sampled = tf.random_shuffle(tf.range(num_classes))[:num_classes_per_batch] # Repeat each element `num_images_per_class` times batch_labels = tf.tile(tf.expand_dims(sampled, -1), [1, num_images_per_class]) return tf.to_int64(tf.reshape(batch_labels, [-1])) selector = tf.contrib.data.Counter().map(generator) selector = selector.apply(tf.contrib.data.unbatch()) dataset = tf.contrib.data.choose_from_datasets(datasets, selector) # Batch batch_size = num_classes_per_batch * num_images_per_class dataset = dataset.batch(batch_size) </code></pre> <hr> <p>You can test this with the nightly TensorFlow build and by using <code>DirectedInterleaveDataset</code> as a workaround:</p> <pre class="lang-py prettyprint-override"><code># The working option right now is from tensorflow.contrib.data.python.ops.interleave_ops import DirectedInterleaveDataset dataset = DirectedInterleaveDataset(selector, datasets) </code></pre> <p>I also wrote about this workaround <a href="https://github.com/omoindrot/tensorflow-triplet-loss/issues/7" rel="nofollow noreferrer">here</a>.</p>
tensorflow|tensorflow-datasets
5
9,922
45,684,199
Error when importing tensorflow in Spyder
<p>I just installed tensorflow on the new laptop.</p> <p>(Anaconda 4.3.24, Python 3.6.1, TensorFlow: 1.2.1, GPU: NVIDIA 1060 6GB)</p> <p>Four problems currently.</p> <p><strong>{1} "Failed to load the native TensorFlow runtime" error in Spyder</strong></p> <pre><code>File "D:/Programs/Codes-Python/OpenCVtest.py", line 13, in &lt;module&gt; import tensorflow as tf File "D:\Programs\Anaconda\lib\site-packages\tensorflow\__init__.py", line 24, in &lt;module&gt; from tensorflow.python import * File "D:\Programs\Anaconda\lib\site-packages\tensorflow\python\__init__.py", line 49, in &lt;module&gt; from tensorflow.python import pywrap_tensorflow File "D:\Programs\Anaconda\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 52, in &lt;module&gt; raise ImportError(msg) ImportError: Traceback (most recent call last): File "D:\Programs\Anaconda\lib\site- packages\tensorflow\python\pywrap_tensorflow_internal.py", line 18, in swig_import_helper return importlib.import_module(mname) File "D:\Programs\Anaconda\lib\importlib\__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "&lt;frozen importlib._bootstrap&gt;", line 978, in _gcd_import File "&lt;frozen importlib._bootstrap&gt;", line 961, in _find_and_load File "&lt;frozen importlib._bootstrap&gt;", line 950, in _find_and_load_unlocked File "&lt;frozen importlib._bootstrap&gt;", line 648, in _load_unlocked File "&lt;frozen importlib._bootstrap&gt;", line 560, in module_from_spec File "&lt;frozen importlib._bootstrap_external&gt;", line 922, in create_module File "&lt;frozen importlib._bootstrap&gt;", line 205, in _call_with_frames_removed ImportError: DLL load failed: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\Programs\Anaconda\lib\site- packages\tensorflow\python\pywrap_tensorflow.py", line 41, in &lt;module&gt; from tensorflow.python.pywrap_tensorflow_internal import * File "D:\Programs\Anaconda\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 21, in &lt;module&gt; _pywrap_tensorflow_internal = swig_import_helper() File "D:\Programs\Anaconda\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 20, in swig_import_helper return importlib.import_module('_pywrap_tensorflow_internal') File "D:\Programs\Anaconda\lib\importlib\__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ModuleNotFoundError: No module named '_pywrap_tensorflow_internal' Failed to load the native TensorFlow runtime. </code></pre> <p><strong>{2} ...but tensorflow loads without (much) problem from command prompt</strong></p> <p>The confusing thing is that when I load the tensorflow via anaconda prompt -> activate tensorflow -> python -> import tensorflow: Then there is no error while importing tensorflow. </p> <p>How come? 
If the tensorflow library is installed only for certain environment the error message in Spyder should be "No Module Named TensorFlow"....</p> <p><strong>{3} Some discrepancies when running an example</strong> </p> <p>Now when I run the test code in anaconda prompt:</p> <pre><code>&gt;&gt;&gt; import tensorflow as tf &gt;&gt;&gt; hello = tf.constant('Hello, TensorFlow!') &gt;&gt;&gt; sess = tf.Session() </code></pre> <p>I get the following 'errors'?</p> <pre><code>2017-08-14 23:39:37.137745: W c:\tf_jenkins\home\workspace\release-win\m\windows\py\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE instructions, but these are available on your machine and could speed up CPU computations. 2017-08-14 23:39:37.137929: W c:\tf_jenkins\home\workspace\release-win\m\windows\py\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE2 instructions, but these are available on your machine and could speed up CPU computations. 2017-08-14 23:39:37.139157: W c:\tf_jenkins\home\workspace\release-win\m\windows\py\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations. 2017-08-14 23:39:37.139677: W c:\tf_jenkins\home\workspace\release-win\m\windows\py\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations. 2017-08-14 23:39:37.140599: W c:\tf_jenkins\home\workspace\release-win\m\windows\py\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations. 2017-08-14 23:39:37.141239: W c:\tf_jenkins\home\workspace\release-win\m\windows\py\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. 2017-08-14 23:39:37.141915: W c:\tf_jenkins\home\workspace\release-win\m\windows\py\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations. 2017-08-14 23:39:37.142529: W c:\tf_jenkins\home\workspace\release-win\m\windows\py\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations. </code></pre> <p>But I can still run the last part of the test "print(sess.run(hello))" and get the following result.</p> <pre><code>&gt;&gt;&gt; print(sess.run(hello)) b'Hello, TensorFlow!' </code></pre> <p>The 'b' in front of Hello is not supposed to be there, but always present when I run the code. Why?</p> <p><strong>{4} No known Device error - GPU unrecognised?</strong> Lastly, when I check the device being used, tensorflow does not seem to recognise the GPU. Why? I've tried uninstalling, re-installing tensorflow and tensorflow-GPU separately to no avail.</p> <pre><code>&gt;&gt;&gt; sess = tf.Session(config = tf.ConfigProto(log_device_placement=True)) Device mapping: no known devices. 
2017-08-14 23:50:42.624086: I c:\tf_jenkins\home\workspace\release-win\m\windows\py\36\tensorflow\core\common_runtime\direct_session.cc:265] Device mapping: </code></pre> <p>Any help would be much appreciated. Thanks, CN</p>
<p>For your {1} question: because you activate Tensorflow in {2}, I guess your Spyder is installed in a different environment. Maybe you could try to change the Python interpreter of Spyder from Preferences->Console->Advanced settings. For {4}, did you install the NVIDIA CUDA toolkit? Hope this helps.</p> <p>Best, Robin</p>
python|tensorflow|anaconda|gpu|spyder
0
9,923
45,421,820
How can I read in from a file, then write out to another file only if certain values are in a range?
<p>This is a sample from my peaks_ef.xpk file, which I am reading in. </p> <pre><code>label dataset sw sf 1H 1H_2 NOESY_F1eF2f.nv 4807.69238281 4803.07373047 600.402832031 600.402832031 1H.L 1H.P 1H.W 1H.B 1H.E 1H.J 1H.U 1H_2.L 1H_2.P 1H_2.W 1H_2.B 1H_2.E 1H_2.J 1H_2.U vol int stat comment flag0 flag8 flag9 0 {1.H2'} 4.93607 0.05000 0.10000 ++ {0.0} {} {1.H1'} 5.82020 0.05000 0.10000 ++ {0.0} {} 0.0 100.0000 0 {} 0 0 0 1 {1.H2'} 4.93607 0.05000 0.10000 ++ {0.0} {} {1.H1'} 5.82020 0.05000 0.10000 ++ {0.0} {} 0.0 100.0000 0 {} 0 0 0 2 {1.H3'} 4.70891 0.05000 0.10000 ++ {0.0} {} {1.H8} 8.13712 0.05000 0.10000 ++ {0.0} {} 0.0 100.0000 0 {} 0 0 0 3 {1.H2'} 4.93607 0.05000 0.10000 ++ {0.0} {} {1.H8} 8.13712 0.05000 0.10000 ++ {0.0} {} 0.0 100.0000 0 {} 0 0 0 4 {2.H2'} 4.55388 0.05000 0.10000 ++ {0.0} {} {2.H1'} 5.90291 0.05000 0.10000 ++ {0.0} {} 0.0 100.0000 0 {} 0 0 0 5 {2.H2'} 4.55388 0.05000 0.10000 ++ {0.0} {} {2.H1'} 5.90291 0.05000 0.10000 ++ {0.0} {} 0.0 100.0000 0 {} 0 0 0 6 {2.H3'} 4.60420 0.05000 0.10000 ++ {0.0} {} {2.H8} 7.61004 0.05000 0.10000 ++ {0.0} {} 0.0 100.0000 0 {} 0 0 0 7 {2.H2'} 4.55388 0.05000 0.10000 ++ {0.0} {} {2.H8} 7.61004 0.05000 0.10000 ++ {0.0} {} 0.0 100.0000 0 {} 0 0 0 8 {1.H3'} 4.70891 0.05000 0.10000 ++ {0.0} {} {2.H8} 7.61004 0.05000 0.10000 ++ {0.0} {} 0.0 100.0000 0 {} 0 0 0 9 {1.H2'} 4.93607 0.05000 0.10000 ++ {0.0} {} {2.H8} 7.61004 0.05000 0.10000 ++ {0.0} {} 0.0 100.0000 0 {} 0 0 0 10 {3.H5} 5.20481 0.05000 0.10000 ++ {0.0} {} {2.H8} 7.61004 0.05000 0.10000 ++ {0.0} {} 0.0 100.0000 0 {} 0 0 0 </code></pre> <p>I want to take the values in the columns 1H.P and 1H_2.P and write them out to another file, but I only want to include values that are within a certain range. I thought I was doing that for my code. The <code>mask</code> variable should "filter" the values right? 
</p> <p>This is my code:</p> <pre><code>import pandas as pd import os import sys import re i=0; contents_peak=[] peak_lines=[] with open ("ee_pinkH1.xpk","r") as peakPPM: for PPM in peakPPM.readlines(): float_num = re.findall("[\s][1-9]{1}\.[0-9]+",PPM) if (len(float_num)&gt;1): i=i+1 value = ('Peak '+ str(i) + ' ' + str(float_num[0]) + ' 0.05 ' + str(float_num[1]) + ' 0.05' + '\n') peak_lines.append(value) tclust_peak = open("tclust.txt","w+") tclust_peak.write("rbclust \n") for value in peak_lines: tclust_peak.write(value) tclust_peak.close() result={} text = 'ee' filename= 'ee_pinkH1.xpk' if text == 'ee': df=pd.read_csv("peaks_ee.xpk",sep=" ", skiprows=5) shift1 = df["1H.P"] shift2 = df["1H_2.P"] if filename=='ee_pinkH1.xpk': mask = ((shift1&gt;5.1) &amp; (shift1&lt;6)) &amp; ((shift2&gt;7) &amp; (shift2&lt;8.25)) elif filename == 'ee_pinkH2.xpk': mask = ((shift1&gt;3.25)&amp;(shift1&lt;5))&amp;((shift2&gt;7)&amp;(shift2&lt;8.5)) if text == 'ef': df = pd.read_csv('peaks_ef.xpk',sep = " ",skiprows=5) shift1=df["1H.P"] shift2=df["1H_2.P"] if filename == 'ef_blue.xpk': mask = ((shift1&gt;5) &amp; (shift1&lt;6)) &amp; ((shift2&gt;7.25) &amp; (shift2&lt;8.25)) elif filename == 'ef_green.xpk': mask = ((shift1&gt;7) &amp; (shift1&lt;9)) &amp; ((shift2&gt;5.25) &amp; (shift2&lt;6.2)) elif filename == 'ef_orange.xpk': mask = ((shift1&gt;3) &amp; (shift1&lt;5)) &amp; ((shift2&gt;5.2) &amp; (shift2&lt;6.25)) if text == 'fe': df = pd.read_csv('peaks_fe.xpk', sep=" ",skiprows=5) shift1= df["1H.P"] shift2= df["1H_2.P"] if filename == 'fe_yellow.xpk': mask = ((shift1&gt;3) &amp; (shift1&lt;5)) &amp; ((shift2&gt;5) &amp; (shift2&lt;6)) elif filename == 'fe_green.xpk': mask = ((shift1&gt;5.1) &amp; (shift1&lt;6)) &amp; ((shift2&gt;7) &amp; (shift2&lt;8.25)) result = df[mask] result = result[["1H.L","1H_2.L"]] for col in result.columns: result[col] = result[col].str.strip("{} ") result.drop_duplicates(keep='first', inplace=True) result = result.set_index([['Atom '+str(i) for i in range(1,len(result)+1)]]) tclust_atom=open("tclust.txt","a") result.to_string(tclust_atom, header = False) df1 = df.copy()[['1H.L','1H.P']] df2 = df.copy()[['1H_2.L','1H_2.P']] df2.rename(columns={'1H_2.L': '1H.L', '1H_2.P': '1H.P'}, inplace=True) df = pd.concat([df1,df2]) df['1H.L']=df['1H.L'].apply(lambda row: row.strip('{}')) df['new']=0.3 df.drop_duplicates(keep='first',inplace=True) tclust_atom=open("tclust_ppm.txt","w+") df.to_csv("tclust_ppm.txt",sep=" ", index=False, header=False) </code></pre> <p>A sample of my output is:</p> <pre><code>5.H3' 4.43488 0.3 6.H2' 4.49744 0.3 7.H1' 5.95115 0.3 6.H3' 4.51612 0.3 8.H5 5.39709 0.3 7.H3' 4.62099 0.3 7.H2 7.67414 0.3 8.H2' 4.31783 0.3 9.H1' 5.91813 0.3 8.H3' 4.45577 0.3 10.H5 5.17157 0.3 9.H3' 4.66179 0.3 </code></pre> <p>Based on my code, the filter or "mask" variable is in the if statement: </p> <pre><code>if text == 'ef': df = pd.read_csv('peaks_ef.xpk',sep = " ",skiprows=5) shift1=df["1H.P"] shift2=df["1H_2.P"] if filename == 'ef_blue.xpk': mask = ((shift1&gt;5) &amp; (shift1&lt;6)) &amp; ((shift2&gt;7.25) &amp; (shift2&lt;8.25)) elif filename == 'ef_green.xpk': mask = ((shift1&gt;7) &amp; (shift1&lt;9)) &amp; ((shift2&gt;5.25) &amp; (shift2&lt;6.2)) elif filename == 'ef_orange': mask = ((shift1&gt;3) &amp; (shift1&lt;5)) &amp; ((shift2&gt;5.2) &amp; (shift2&lt;6.25)) </code></pre> <p>and it should come from the <code>elif filename =='ef_orange':</code> and both shift1 and shift2 should not be greater than 6.25, but in my output I am getting an answer that is 7.67414. 
Why is my filtering not working and how can I fix it? </p>
<p>by using </p> <pre><code>shift1=df["1H.P"] shift2=df["1H_2.P"] </code></pre> <p>you are condensining your filter to only one serires, that being your column, when instead you want to filiter on the entire dataframe, for your sake, it will be easier to see as its own function.</p> <pre><code>def fil(df,oneLow,oneHigh,twoLow,twoHigh): df = df[((df['1H.P'] &gt; oneLow) &amp; (df['1H.P'] &lt; oneHigh)) &amp; ((df['1H_2.P'] &gt; twoLow) &amp; (df['1H_2.P'] &lt; twoHigh))] return df if text == 'ef': df = pd.read_csv('peaks_ef.xpk',sep = " ",skiprows=5) #shift1=df["1H.P"] remove #shift2=df["1H_2.P"] remove if filename == 'ef_blue.xpk': #mask = ((shift1&gt;5) &amp; (shift1&lt;6)) &amp; ((shift2&gt;7.25) &amp; (shift2&lt;8.25)) df = fil(df,5,6,7.25,8.25) elif filename == 'ef_green.xpk': #mask = ((shift1&gt;7) &amp; (shift1&lt;9)) &amp; ((shift2&gt;5.25) &amp; (shift2&lt;6.2)) df = fil(df,7,9,5.25,6.2) elif filename == 'ef_orange': #mask = ((shift1&gt;3) &amp; (shift1&lt;5)) &amp; ((shift2&gt;5.2) &amp; (shift2&lt;6.25)) df = fil(df,3,5,5.2,6.25) </code></pre> <p><strong>Edit with full code</strong></p> <pre><code>import pandas as pd import os import sys import re def fil(df,oneLow,oneHigh,twoLow,twoHigh): df = df[((df['1H.P'] &gt; oneLow) &amp; (df['1H.P'] &lt; oneHigh)) &amp; ((df['1H_2.P'] &gt; twoLow) &amp; (df['1H_2.P'] &lt; twoHigh))] return df i=0; contents_peak=[] peak_lines=[] with open ("ee_pinkH1.xpk","r") as peakPPM: for PPM in peakPPM.readlines(): float_num = re.findall("[\s][1-9]{1}\.[0-9]+",PPM) if (len(float_num)&gt;1): i=i+1 value = ('Peak '+ str(i) + ' ' + str(float_num[0]) + ' 0.05 ' + str(float_num[1]) + ' 0.05' + '\n') peak_lines.append(value) tclust_peak = open("tclust.txt","w+") tclust_peak.write("rbclust \n") for value in peak_lines: tclust_peak.write(value) tclust_peak.close() result={} text = 'ee' filename= 'ee_pinkH1.xpk' if text == 'ee': df=pd.read_csv("peaks_ee.xpk",sep=" ", skiprows=5) if filename=='ee_pinkH1.xpk': result = fil(df,5.1,6,7,8.25) elif filename == 'ee_pinkH2.xpk': result = fil(df,3.25,5,7,8.5) if text == 'ef': df = pd.read_csv('peaks_ef.xpk',sep = " ",skiprows=5) if filename == 'ef_blue.xpk': result = fil(df,5,6,7.25,8.25) elif filename == 'ef_green.xpk': result = fil(df,7,9,5.25,6.2) elif filename == 'ef_orange.xpk': result = fil(df,3,5,5.2,6.25) if text == 'fe': df = pd.read_csv('peaks_fe.xpk', sep=" ",skiprows=5) if filename == 'fe_yellow.xpk': result= fil(df,3,5,5,6) elif filename == 'fe_green.xpk': result= fil(df,5.1,6,7,8.25) for col in result.columns: result[col] = result[col].str.strip("{} ") result.drop_duplicates(keep='first', inplace=True) result = result.set_index([['Atom '+str(i) for i in range(1,len(result)+1)]]) tclust_atom=open("tclust.txt","a") result.to_string(tclust_atom, header = False) df1 = df.copy()[['1H.L','1H.P']] df2 = df.copy()[['1H_2.L','1H_2.P']] df2.rename(columns={'1H_2.L': '1H.L', '1H_2.P': '1H.P'}, inplace=True) df = pd.concat([df1,df2]) df['1H.L']=df['1H.L'].apply(lambda row: row.strip('{}')) df['new']=0.3 df.drop_duplicates(keep='first',inplace=True) tclust_atom=open("tclust_ppm.txt","w+") df.to_csv("tclust_ppm.txt",sep=" ", index=False, header=False) </code></pre>
python|pandas|dictionary
0
9,924
45,473,434
Pandas: How can I find col, index where Nan value exists?
<pre><code>In [3]: import numpy as np

In [4]: b = pd.DataFrame(np.array([
   ...:     [1,np.nan,3,4],
   ...:     [np.nan, 4, np.nan, 4]
   ...:     ]))

In [13]: b
Out[13]: 
     0    1    2    3
0  1.0  NaN  3.0  4.0
1  NaN  4.0  NaN  4.0
</code></pre>

<p>I want to find the column name and index wherever a <code>NaN</code> value exists.</p>

<p>For example, "<code>b</code> has <code>NaN</code> values at <code>index 0, col1</code>, <code>index 1, col0</code>, <code>index 1, col2</code>".</p>

<p>What I've tried:</p>

<p>1</p>

<pre><code>In [14]: b[b.isnull()]
Out[14]: 
    0   1   2   3
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
</code></pre>

<p>=> I don't know why it shows a <code>DataFrame</code> filled with <code>NaN</code></p>

<p>2</p>

<pre><code>In [15]: b[b[0].isnull()]
Out[15]: 
    0    1   2    3
1 NaN  4.0 NaN  4.0
</code></pre>

<p>=> It only shows the part of the <code>DataFrame</code> where a <code>NaN</code> value exists in <code>column 0</code>.</p>

<p>How can I get all (index, column) pairs where a <code>NaN</code> value exists?</p>
<p>You could use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where</code></a> to find the indices where <code>pd.isnull(b)</code> is True:</p> <pre><code>import numpy as np import pandas as pd b = pd.DataFrame(np.array([ [1,np.nan,3,4], [np.nan, 4, np.nan, 4]])) idx, idy = np.where(pd.isnull(b)) result = np.column_stack([b.index[idx], b.columns[idy]]) print(result) # [[0 1] # [1 0] # [1 2]] </code></pre> <p>or use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>DataFrame.stack</code></a> to reshape the DataFrame by moving the column labels into the index. This creates a Series which is True where <code>b</code> is null:</p> <pre><code>mask = pd.isnull(b).stack() # 0 0 False # 1 True # 2 False # 3 False # 1 0 True # 1 False # 2 True # 3 False </code></pre> <p>and then read off the row and column labels from the MultiIndex:</p> <pre><code>print(mask.loc[mask]) # 0 1 True # 1 0 True # 2 True # dtype: bool print(mask.loc[mask].index.tolist()) # [(0, 1), (1, 0), (1, 2)] </code></pre>
python|pandas
5
9,925
62,672,198
Error while creating comma separated list from data-frame to pass into SQL query
<p>I am trying to create a comma separated list to pass to a SQL query.</p>

<p><strong>my code</strong></p>

<pre><code>sql1 = '''select carrier_name, carrier_account, invoice_number, invoice_amount, currency, invoice_date
from invoice_summary where invoice_number in {}'''.format(tuple(data1['invoice_number'].values.tolist()))
</code></pre>

<p><strong>Current Output</strong></p>

<pre><code> &quot;select carrier_name, carrier_account, invoice_number, invoice_amount, currency, invoice_date\nfrom invoice_summary where invoice_number in ('BHX3327983',)&quot;
</code></pre>

<p><strong>Expected Output</strong></p>

<pre><code>&quot;select carrier_name, carrier_account, invoice_number, invoice_amount, currency, invoice_date\nfrom invoice_summary where invoice_number in ('BHX3327983')&quot;
</code></pre>

<p>I am looking for a solution that works whether a single input or multiple inputs are passed.</p>

<p>What's wrong with my code?</p>
<p>Try using <code>join</code> and put the parenthesis inside the string:</p> <pre><code>sql1 = '''select carrier_name, carrier_account, invoice_number, invoice_amount, currency, invoice_date from invoice_summary where invoice_number in ({})'''.format(','.join([&quot;'{}'&quot;.format(x) for x in data1['invoice_number']])) </code></pre> <hr /> <h3>Update</h3> <p>You could use the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.empty.html" rel="nofollow noreferrer"><code>DataFrame.empty</code></a> property to conditionally set the value of the sql statement. If <code>data1</code> is empty, then set your <code>WHERE</code> clause to something that is False, eg <code>1 = 0</code>:</p> <pre><code>if data1.empty: sql1 = '''select carrier_name, carrier_account, invoice_number, invoice_amount, currency, invoice_date from invoice_summary where 1 = 0''' else: sql1 = ('''select carrier_name, carrier_account, invoice_number, invoice_amount, currency, invoice_date from invoice_summary where invoice_number in ({})''' .format(','.join([&quot;'{}'&quot;.format(x) for x in data1['invoice_number']]))) </code></pre>
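<p>A side note, offered as a general practice rather than a requirement: if your database driver supports parameter substitution, letting it fill the <code>IN</code> list avoids quoting and escaping issues entirely. A minimal runnable sketch with sqlite3-style <code>?</code> placeholders (the table and data here are made up for illustration):</p>

<pre><code>import sqlite3
import pandas as pd

data1 = pd.DataFrame({'invoice_number': ['BHX3327983', 'BHX1111111']})

conn = sqlite3.connect(':memory:')
conn.execute("create table invoice_summary (invoice_number text, invoice_amount real)")
conn.executemany("insert into invoice_summary values (?, ?)",
                 [('BHX3327983', 10.0), ('ZZZ0000000', 99.0)])

# one ? per value, e.g. '?,?' for two invoice numbers
placeholders = ','.join('?' * len(data1))
sql = ("select invoice_number, invoice_amount from invoice_summary "
       "where invoice_number in ({})".format(placeholders))
print(conn.execute(sql, list(data1['invoice_number'])).fetchall())
# [('BHX3327983', 10.0)]
</code></pre>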
python|pandas|dataframe
1
9,926
62,529,574
Python df returning NaN instead of values
<p>I'm trying to import a csv file into python but the values won't show.</p>

<pre><code>data = pd.read_csv(&quot;test.csv&quot;, header=None)
df=pd.DataFrame(data, columns=['time','x','y'])
print(df)
</code></pre>

<p>Output shows as:</p>

<pre><code>     time    x    y
0     NaN  NaN  NaN
1     NaN  NaN  NaN
2     NaN  NaN  NaN
3     NaN  NaN  NaN
4     NaN  NaN  NaN
..    ...  ...  ...
875   NaN  NaN  NaN
876   NaN  NaN  NaN
877   NaN  NaN  NaN
878   NaN  NaN  NaN
879   NaN  NaN  NaN
</code></pre>

<p>CSV file looks like: <a href="https://i.stack.imgur.com/EtYKM.png" rel="nofollow noreferrer">csv</a></p>

<p>I want to be able to subtract time values to find the difference. So far I've tried time_1 - time_0 but then it also returns NaN.</p>

<p>Can someone offer some guidance?</p>
<p>pandas.read_csv() already returns a DataFrame, so there is no need to wrap it in <code>pd.DataFrame(...)</code>. More importantly, passing <code>columns=['time','x','y']</code> to the DataFrame constructor selects those columns from the existing frame, and since those labels don't exist in the parsed frame (its columns are the integers 0, 1, ... when <code>header=None</code>), you end up with all-NaN columns. Note also the <code>sep=';'</code>, since the file appears to be semicolon-separated. So you can do it like this:</p>

<pre><code>df = pd.read_csv('/test.csv', header=None, sep = ';')
df.columns = ['time','x','y']
</code></pre>
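<p>For the time-difference part of the question, assuming the <code>time</code> column parses as timestamps (the csv contents are only shown in the linked screenshot, so this is a guess), a short sketch:</p>

<pre><code>df['time'] = pd.to_datetime(df['time'])
df['delta'] = df['time'].diff()   # difference between consecutive rows
</code></pre>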
python|pandas|dataframe
0
9,927
62,751,139
Plotting several plots in matplotlib based on conditions from two columns in dataframe Python?
<p>I have the following dataframe. You can see that each island_id has 1 or more location_id. This dataframe is a very small sample of the real dataframe (13,000,000 rows and 4 columns).</p>

<pre><code>df = {'location_id': [1,1,1,2,2,2,3,3,3,4,4,4,5,5,5,6,6,6,7,7,7,8,8,8], 
      'timestamp':['2020-05-26 22:00:52','2020-05-26 22:01:52','2020-05-26 22:02:52',
                   '2020-05-26 22:00:52','2020-05-26 22:01:52','2020-05-26 22:02:52',
                   '2020-05-26 22:00:52','2020-05-26 22:01:52','2020-05-26 22:02:52',
                   '2020-05-26 22:00:52','2020-05-26 22:01:52','2020-05-26 22:02:52',
                   '2020-05-26 22:00:52','2020-05-26 22:01:52','2020-05-26 22:02:52',
                   '2020-05-26 22:00:52','2020-05-26 22:01:52','2020-05-26 22:02:52',
                   '2020-05-26 22:00:52','2020-05-26 22:01:52','2020-05-26 22:02:52',
                   '2020-05-26 22:00:52','2020-05-26 22:01:52','2020-05-26 22:02:52'],
      'temperature_value': [20,21,22,23,24,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44],
      'humidity_value':[60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83],
      'island_id':[10,10,10,20,20,20,20,20,20,30,30,30,30,30,30,30,30,30,40,40,40,40,40,40]}

dataframe = pd.DataFrame(df) 
</code></pre>

<p>What I'm trying to achieve here is to plot the temperature_value of all island_id that have at least 2 location_id. So for example island_id = 30 contains location_id = [4,5,6]. So in this case, I should plot all temperature values for locations 4, 5 and 6 in the same plot, with the timestamp on the x-axis. So in my case, I am expecting to get something like 20 or 30 subplots. Each plot will show the temperature_values of the locations that are in the same island as a function of timestamp. So for an island_id with 3 locations, the temperature values for these 3 locations should be shown in the plot (3 curves). (Note: The plots should be under each other, like subplots)</p>

<p>Is there a way to do it in Python? I would really appreciate it if someone can give me a solution :) !</p>
<p>Use <code>.groupby</code> and <code>filter</code> to keep, in a new dataframe, only the island_ids that have two or more distinct location_ids.</p>

<pre><code>df2=df.groupby('island_id').filter(lambda x:x.location_id.nunique()&gt;=2)
</code></pre>

<p>Plot</p>

<pre><code>g=df2.groupby(['location_id','island_id'])
for x, df in g:
    df.plot(kind='bar', x='timestamp',y=['temperature_value', 'humidity_value'])
    plt.title(x)
</code></pre>
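<p>If you want exactly what the question describes, one subplot per island_id with one temperature curve per location_id and the timestamp on the x-axis, a sketch along these lines may help. It starts from the <code>dataframe</code> built in the question, so the column names are taken from that sample:</p>

<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# keep only islands with at least 2 distinct locations
df2 = dataframe.groupby('island_id').filter(lambda x: x.location_id.nunique() &gt;= 2)
df2['timestamp'] = pd.to_datetime(df2['timestamp'])
islands = df2['island_id'].unique()

fig, axes = plt.subplots(len(islands), 1, figsize=(8, 3 * len(islands)), sharex=True)
for ax, island in zip(np.atleast_1d(axes), islands):
    sub = df2[df2['island_id'] == island]
    for loc, grp in sub.groupby('location_id'):
        ax.plot(grp['timestamp'], grp['temperature_value'], marker='o',
                label='location {}'.format(loc))
    ax.set_title('island {}'.format(island))
    ax.legend()
plt.tight_layout()
plt.show()
</code></pre>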
python|pandas|matplotlib
1
9,928
54,612,188
one_shot_iterator, placeholder, cannot capture placeholder
<p>I try to make a <code>one_shot_iterator</code> from a data set.</p> <p>I use placeholder to use less GPU memory and expect that I only have to initialize the iterator for only once.</p> <p>But I get error:</p> <pre><code>Traceback (most recent call last): File "test_placeholder.py", line 18, in &lt;module&gt; it = dset.make_one_shot_iterator() File "&lt;...&gt;/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 205, in make_one_shot_iterator six.reraise(ValueError, err) File "&lt;...&gt;/site-packages/six.py", line 692, in reraise raise value.with_traceback(tb) ValueError: Cannot capture a placeholder (name:Placeholder, type:Placeholder) by value. </code></pre> <p>Test:</p> <pre><code>import tensorflow as tf import numpy as np buf_size = 50 batch_size = 10 n_rows = 117 a = np.random.choice(7, size=n_rows) b = np.random.uniform(0, 1, size=(n_rows, 4)) a_ph = tf.placeholder(a.dtype, a.shape) b_ph = tf.placeholder(b.dtype, b.shape) with tf.Session() as sess: dset = tf.data.Dataset.from_tensor_slices((a_ph, b_ph)) dset = dset.shuffle(buf_size).batch(batch_size).repeat() feed_dict = {a_ph: a, b_ph: b} it = dset.make_one_shot_iterator() n_batches = len(a) // batch_size sess.run(it.initializer, feed_dict=feed_dict) for i in range(n_batches): a_chunk, b_chunk = it.get_next() print(a_chunk, b_chunk) </code></pre> <p>What went wrong?</p> <p>Thanks.</p>
<p>Check out the guide for <a href="https://www.tensorflow.org/guide/datasets#creating_an_iterator" rel="nofollow noreferrer">importing data</a></p> <p>"A one-shot iterator is the simplest form of iterator, which only supports iterating once through a dataset, with no need for explicit initialization. One-shot iterators handle almost all of the cases that the existing queue-based input pipelines support, but they do not support parameterization."</p> <p>That is the reason for your error, as any parameterization with a placeholder is not supported by this particular iterator. We can make use of make_initializable_iterator instead. </p> <p>Here is your code with that modification and the result you are looking for.</p> <pre><code>buf_size = 50 batch_size = 10 n_rows = 117 a = np.random.choice(7, size=n_rows) b = np.random.uniform(0, 1, size=(n_rows, 4)) a_ph = tf.placeholder(a.dtype, a.shape) b_ph = tf.placeholder(b.dtype, b.shape) with tf.Session() as sess: dset = tf.data.Dataset.from_tensor_slices((a_ph, b_ph)) dset = dset.shuffle(buf_size).batch(batch_size).repeat() feed_dict = {a_ph: a, b_ph: b} it = dset.make_initializable_iterator() n_batches = len(a) // batch_size sess.run(it.initializer, feed_dict=feed_dict) for i in range(n_batches): a_chunk, b_chunk = it.get_next() print(a_chunk, b_chunk) </code></pre> <p>Result:</p> <pre><code>Tensor("IteratorGetNext:0", shape=(?,), dtype=int32) Tensor("IteratorGetNext:1", shape=(?, 4), dtype=float64) Tensor("IteratorGetNext_1:0", shape=(?,), dtype=int32) Tensor("IteratorGetNext_1:1", shape=(?, 4), dtype=float64) Tensor("IteratorGetNext_2:0", shape=(?,), dtype=int32) Tensor("IteratorGetNext_2:1", shape=(?, 4), dtype=float64) Tensor("IteratorGetNext_3:0", shape=(?,), dtype=int32) Tensor("IteratorGetNext_3:1", shape=(?, 4), dtype=float64) Tensor("IteratorGetNext_4:0", shape=(?,), dtype=int32) Tensor("IteratorGetNext_4:1", shape=(?, 4), dtype=float64) Tensor("IteratorGetNext_5:0", shape=(?,), dtype=int32) Tensor("IteratorGetNext_5:1", shape=(?, 4), dtype=float64) Tensor("IteratorGetNext_6:0", shape=(?,), dtype=int32) Tensor("IteratorGetNext_6:1", shape=(?, 4), dtype=float64) Tensor("IteratorGetNext_7:0", shape=(?,), dtype=int32) Tensor("IteratorGetNext_7:1", shape=(?, 4), dtype=float64) Tensor("IteratorGetNext_8:0", shape=(?,), dtype=int32) Tensor("IteratorGetNext_8:1", shape=(?, 4), dtype=float64) Tensor("IteratorGetNext_9:0", shape=(?,), dtype=int32) Tensor("IteratorGetNext_9:1", shape=(?, 4), dtype=float64) Tensor("IteratorGetNext_10:0", shape=(?,), dtype=int32) Tensor("IteratorGetNext_10:1", shape=(?, 4), dtype=float64) </code></pre>
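<p>One small follow-up that may help: in graph mode it is usually better to create the <code>get_next()</code> op once, outside the loop, and <code>sess.run</code> it repeatedly; otherwise every loop iteration adds a new op to the graph, which is why the printed tensor names above keep incrementing (<code>IteratorGetNext_1</code>, <code>IteratorGetNext_2</code>, ...). A hedged sketch, replacing the final loop in the snippet above:</p>

<pre><code>next_a, next_b = it.get_next()          # build the op once
for i in range(n_batches):
    a_chunk, b_chunk = sess.run([next_a, next_b])   # fetch actual numpy batches
    print(a_chunk.shape, b_chunk.shape)
</code></pre>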
python|tensorflow
1
9,929
54,524,856
Get "RuntimeError: generator raised StopIteration" while trying to update a Pandas dataframe
<p>I have a Pandas dataframe and want to compute bigrams with the following code:</p> <pre><code>from nltk import bigrams df['tweet_bigrams'] = df['tweet_tokenized'].apply(lambda x: list(bigrams(x))) </code></pre> <p>It was working fine in Jupyter. However, when I tried to run it on Linux terminal, I keep receiving the following error:</p> <pre><code>Traceback (most recent call last): File "/usr/licensed/anaconda3/5.3.1/lib/python3.7/site-packages/nltk/util.py", line 468, in ngrams history.append(next(sequence)) StopIteration The above exception was the direct cause of the following exception: Traceback (most recent call last): File "url_tweet_feature_extraction.py", line 143, in &lt;module&gt; df['tweet_bigrams'] = df['tweet_tokenized'].apply(lambda x: list(bigrams(x))) File "/usr/licensed/anaconda3/5.3.1/lib/python3.7/site-packages/pandas/core/series.py", line 3194, in apply mapped = lib.map_infer(values, f, convert=convert_dtype) File "pandas/_libs/src/inference.pyx", line 1472, in pandas._libs.lib.map_infer File "url_tweet_feature_extraction.py", line 143, in &lt;lambda&gt; df['tweet_bigrams'] = df['tweet_tokenized'].apply(lambda x: list(bigrams(x))) File "/usr/licensed/anaconda3/5.3.1/lib/python3.7/site-packages/nltk/util.py", line 491, in bigrams for item in ngrams(sequence, 2, **kwargs): RuntimeError: generator raised StopIteration </code></pre> <p>Any idea on how to resolve this?</p>
<p>Update your NLTK. You need version 3.4 (or higher, for future readers). Old versions relied on <code>StopIteration</code> handling that changed in Python 3.7.</p>
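<p>For context, this is just an illustration of the underlying Python 3.7 change (PEP 479), not the nltk code itself: a <code>StopIteration</code> that escapes a generator body is now turned into a <code>RuntimeError</code>, which is exactly the error in the traceback.</p>

<pre><code>def gen():
    yield 1
    raise StopIteration   # roughly what old nltk.util.ngrams let happen internally

list(gen())   # Python &lt;= 3.6: [1] (with a DeprecationWarning); Python 3.7+: RuntimeError
</code></pre>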
python|python-3.x|pandas
2
9,930
54,570,428
Rescale matrix by summating over pixels
<p>Is there a quick way to rescale a matrix by simply adding adjacent pixels?</p>

<p>So for an <code>X = N*M</code> matrix you get a <code>Y = (N/n) * (M/m)</code> matrix, where <code>n * m</code> is the block of pixels I sum over.</p>

<p>I've been doing that manually (via script) but I think there has to be a built-in way to do it somewhere.</p>

<pre><code>for i in range(0, X.shape[0]/n):
    for j in range(0, X.shape[1]/m):
        Y[i, j] = np.sum(X[i*n:i*n+n, j*m:j*m+m])
</code></pre>

<p>E.G. </p>

<pre><code>X = [[0 1 2 3]
     [2 3 4 5]
     [3 4 6 8]
     [2 3 4 5]]

Y = [[ 6 14]
     [12 23]]
</code></pre>
<p>A pure numpy way would be to reshape the matrix into more axes and sum over the appropriate axes.</p>

<pre><code># integer division keeps the shape entries ints (needed on Python 3)
Y = X.reshape(X.shape[0] // n, n, X.shape[1] // m, m).sum((1, 3))
</code></pre>
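<p>As a quick check with the example from the question (here <code>n = m = 2</code>, which is an assumption since the question doesn't state the block size for the example):</p>

<pre><code>import numpy as np

X = np.array([[0, 1, 2, 3],
              [2, 3, 4, 5],
              [3, 4, 6, 8],
              [2, 3, 4, 5]])
n = m = 2

Y = X.reshape(X.shape[0] // n, n, X.shape[1] // m, m).sum((1, 3))
print(Y)
# [[ 6 14]
#  [12 23]]
</code></pre>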
python|numpy|matrix|resize
2
9,931
54,424,532
How can create calculated field in Python similar to excel?
<p>I want to migrate pivot tables from Excel to Python, for visualizations and other analysis. I use two calculated fields in Excel, so I want to know if it is possible to use a similar idea with Pandas? Thanks.</p>
<p>Not sure what your data looks like, but this definitely possible with pandas. </p> <p>Here's an example:</p> <pre><code># example dataframe df = pd.DataFrame({'age': [17, 23, 4, 27], 'name': ['John', 'Mark', 'Alice', 'Alice']}) </code></pre> <p><strong>Output1</strong></p> <pre><code> age name 0 17 John 1 23 Mark 2 4 Alice 3 27 Alice </code></pre> <p>Create calculated field with <code>np.where</code> method<br> Logic behind this method: <code>np.where(condition, true value, false value)</code><br> Find more <a href="https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.where.html" rel="nofollow noreferrer">here</a></p> <pre><code>df['adult_indicator'] = np.where(df.age &gt;= 18, 1, 0) </code></pre> <p><strong>Output2</strong></p> <pre><code> age name adult_indicator 0 17 John 0 1 23 Mark 1 2 4 Alice 0 3 27 Alice 1 </code></pre> <p>Apply <code>pivot</code> method from the <code>pandas</code> module</p> <pre><code>df.pivot(index='name', columns='age', values='adult_indicator') </code></pre> <p><strong>Output3</strong></p> <pre><code> age 4 17 23 27 name Alice 0.0 NaN NaN 1.0 John NaN 0.0 NaN NaN Mark NaN NaN 1.0 NaN </code></pre>
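<p>If your Excel calculated field also involves aggregation (as pivot-table calculated fields often do), <code>pd.pivot_table</code> with an <code>aggfunc</code> may be closer to what you are used to. A small sketch reusing the same example data:</p>

<pre><code># count the "adult" rows per name
pd.pivot_table(df, index='name', values='adult_indicator', aggfunc='sum')
</code></pre>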
python|pandas
2
9,932
73,764,681
numpy array replace value with a conditional loop
<p>I have a numpy array,</p>

<pre><code>myarray= np.array([49, 7, 44, 27, 13, 35, 171])
</code></pre>

<p>I wanted to replace the values that are greater than <code>45</code>, so I applied the code below</p>

<pre><code>myarray=np.where(myarray&gt; 45,myarray - 45, myarray)
</code></pre>

<p>but this is applied only once to the array; for example, the above array becomes</p>

<pre><code>myarray= np.array([4, 7, 44, 27, 13, 35, 126])
</code></pre>

<p>Expected array</p>

<pre><code>myarray= np.array([4, 7, 44, 27, 13, 35, 36])
</code></pre>

<p>How do I run the <code>np.where</code> until the condition is satisfied? Basically, I don't want any value in the above array to be greater than <code>45</code>. Is there a Pythonic way of doing it? Thanks in advance.</p>
<p>The subtraction happens only once because you applied the operation only once, hence 171 - 45 =&gt; 126 and the operation is complete. Try using the modulo operator if you want to do it this way.</p>

<pre><code>myarray = np.array([49, 7, 44, 27, 13, 35, 171])
myarray = np.where(myarray&gt; 45, myarray % 45, myarray)
print(myarray)
</code></pre>

<p>The output matches your expected array.</p>

<pre><code>[ 4 7 44 27 13 35 36]
</code></pre>
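<p>If you literally want to repeat the subtraction until nothing exceeds 45 (rather than use modulo), a simple loop sketch also works:</p>

<pre><code>myarray = np.array([49, 7, 44, 27, 13, 35, 171])
while (myarray &gt; 45).any():
    myarray = np.where(myarray &gt; 45, myarray - 45, myarray)
print(myarray)   # [ 4  7 44 27 13 35 36]
</code></pre>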
python|arrays|numpy
2
9,933
52,055,014
Fill NAs forwards or backwards if values in other columns are the same
<p>Given this example:</p> <pre><code>import pandas as pd df = pd.DataFrame({ "date": ["20180724", "20180725", "20180731", "20180723", "20180731"], "identity": [None, "A123456789", None, None, None], "hid": [12345, 12345, 12345, 54321, 54321], "hospital": ["A", "A", "A", "B", "B"], "result": [70, None, 100, 90, 78] }) </code></pre> <p>Because the first three rows have the same <code>hid</code> and <code>hospital</code>, the values in <code>identity</code> should also be identical. As for the other two rows, they have the same <code>hid</code> and <code>hospital</code> as well, but no known <code>identity</code> was provided, so the values in <code>identity</code> should remain missing. In other words, the desired output is:</p> <pre><code> date identity hid hospital result 0 20180724 A123456789 12345 A 70.0 1 20180725 A123456789 12345 A NaN 2 20180731 A123456789 12345 A 100.0 3 20180723 None 54321 B 90.0 4 20180731 None 54321 B 78.0 </code></pre> <p>I can loop through all combinations of <code>hid</code>s and <code>hospital</code>s like <code>for hid, hospital in df[["hid", "hospital"]].drop_duplicates().itertuples(index=False)</code>, but I don't know how to do next.</p>
<p>Use <code>groupby</code> and <code>apply</code> in combination with <code>ffill</code> and <code>bfill</code>:</p> <pre><code>df['identity'] = df.groupby(['hid', 'hospital'])['identity'].apply(lambda x: x.ffill().bfill()) </code></pre> <p>This will fill NaNs forward <em>and</em> backwards while separating the values for the specified groups.</p>
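<p>An equivalent sketch uses <code>transform</code>, which guarantees the result is aligned with the original index (useful on newer pandas versions, where <code>groupby.apply</code> can change the index shape):</p>

<pre><code>df['identity'] = (df.groupby(['hid', 'hospital'])['identity']
                    .transform(lambda x: x.ffill().bfill()))
</code></pre>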
python|pandas|missing-data|fillna
1
9,934
52,140,392
import tensorflow SyntaxError: invalid syntax
<p><a href="https://i.stack.imgur.com/3vX2a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3vX2a.png" alt="enter image description here"></a></p> <p>Using Virtualenv on Mac I have encountered the showing SyntaxError when I import tensorflow I tried many times uninstall but now working... please help me!</p>
<p>Tensorflow is not supported on Python 3.7. You'll need to use Python 3.6 or earlier.</p>

<p><code>async</code>, which was fine as a variable name in earlier versions of Python, is a keyword in Python 3.7. This is why the import is failing.</p>
macos|tensorflow
2
9,935
52,421,855
Group rows where columns have values within range in pandas df
<p>I have a pandas df: </p> <pre><code>number sample chrom1 start chrom2 end 1 s1 1 0 2 1500 2 s1 2 10 2 50 19 s2 3 3098318 3 3125700 19 s3 3 3098720 3 3125870 20 s4 3 3125694 3 3126976 20 s1 3 3125694 3 3126976 20 s1 3 3125695 3 3126976 20 s5 3 3125700 3 3126976 21 s3 3 3125870 3 3134920 22 s2 3 3126976 3 3135039 24 s5 3 17286051 3 17311472 25 s2 3 17286052 3 17294628 26 s4 3 17286052 3 17311472 26 s1 3 17286052 3 17311472 27 s3 3 17286405 3 17294550 28 s4 3 17293197 3 17294628 28 s1 3 17293197 3 17294628 28 s5 3 17293199 3 17294628 29 s2 3 17294628 3 17311472 </code></pre> <p>I am trying to group lines that have different numbers, but where the <code>start</code> is within <code>+/- 10</code> <strong>AND</strong> the end is also within <code>+/- 10</code> on the same chromosomes. </p> <p>In this example I want to find these two lines: </p> <pre><code>24 s5 3 17286051 3 17311472 26 s4 3 17286052 3 17311472 </code></pre> <p>Where both have the same <code>chrom1</code> <code>[3]</code> and <code>chrom2</code> <code>[3]</code> , and the <code>start</code> and end values are <code>+/- 10</code> from each other, and group them under the same number:</p> <pre><code>24 s5 3 17286051 3 17311472 24 s4 3 17286052 3 17311472 # Change the number to the first seen in this series </code></pre> <p>Here's what I'm trying: </p> <pre><code>import pandas as pd from collections import defaultdict def parse_vars(inFile): df = pd.read_csv(inFile, delimiter="\t") df = df[['number', 'chrom1', 'start', 'chrom2', 'end']] vars = {} seen_l = defaultdict(lambda: defaultdict(dict)) # To track the `starts` seen_r = defaultdict(lambda: defaultdict(dict)) # To track the `ends` for index in df.index: event = df.loc[index, 'number'] c1 = df.loc[index, 'chrom1'] b1 = int(df.loc[index, 'start']) c2 = df.loc[index, 'chrom2'] b2 = int(df.loc[index, 'end']) print [event, c1, b1, c2, b2] vars[event] = [c1, b1, c2, b2] # Iterate over windows +/- 10 for i, j in zip( range(b1-10, b1+10), range(b2-10, b2+10) ): # if : # i in seen_l[c1] AND # j in seen_r[c2] AND # the 'number' for these two instances is the same: if i in seen_l[c1] and j in seen_r[c2] and seen_l[c1][i] == seen_r[c2][j]: print seen_l[c1][i], seen_r[c2][j] if seen_l[c1][i] != event: print"Seen: %s %s in event %s %s" % (event, [c1, b1, c2, b2], seen_l[c1][i], vars[seen_l[c1][i]]) seen_l[c1][b1] = event seen_r[c2][b2] = event </code></pre> <p>The problem I'm having, is that <code>seen_l[3][17286052]</code> exists in both <code>numbers</code> <code>25</code> and <code>26</code>, and as their respective <code>seen_r</code> events (<code>seen_r[3][17294628] = 25</code>, <code>seen_r[3][17311472] = 26</code>) are not equal, I am unable to join these lines together. </p> <p>Is there a way that I can use a list of <code>start</code> values as the nested key for <code>seen_l</code> dict? </p>
<p>Interval overlaps are easy in <a href="https://github.com/biocore-ntnu/pyranges" rel="nofollow noreferrer">pyranges</a>. Most of the code below is to separate out the starts and ends into two different dfs. Then these are joined based on an interval overlap of +-10:</p> <pre><code>from io import StringIO import pandas as pd import pyranges as pr c = """number sample chrom1 start chrom2 end 1 s1 1 0 2 1500 2 s1 2 10 2 50 19 s2 3 3098318 3 3125700 19 s3 3 3098720 3 3125870 20 s4 3 3125694 3 3126976 20 s1 3 3125694 3 3126976 20 s1 3 3125695 3 3126976 20 s5 3 3125700 3 3126976 21 s3 3 3125870 3 3134920 22 s2 3 3126976 3 3135039 24 s5 3 17286051 3 17311472 25 s2 3 17286052 3 17294628 26 s4 3 17286052 3 17311472 26 s1 3 17286052 3 17311472 27 s3 3 17286405 3 17294550 28 s4 3 17293197 3 17294628 28 s1 3 17293197 3 17294628 28 s5 3 17293199 3 17294628 29 s2 3 17294628 3 17311472""" df = pd.read_table(StringIO(c), sep="\s+") df1 = df[["chrom1", "start", "number", "sample"]] df1.insert(2, "end", df.start + 1) df2 = df[["chrom2", "end", "number", "sample"]] df2.insert(2, "start", df.end - 1) names = ["Chromosome", "Start", "End", "number", "sample"] df1.columns = names df2.columns = names gr1, gr2 = pr.PyRanges(df1), pr.PyRanges(df2) j = gr1.join(gr2, slack=10) # +--------------+-----------+-----------+-----------+------------+-----------+-----------+------------+------------+ # | Chromosome | Start | End | number | sample | Start_b | End_b | number_b | sample_b | # | (category) | (int32) | (int32) | (int64) | (object) | (int32) | (int32) | (int64) | (object) | # |--------------+-----------+-----------+-----------+------------+-----------+-----------+------------+------------| # | 3 | 3125694 | 3125695 | 20 | s4 | 3125700 | 3125699 | 19 | s2 | # | 3 | 3125694 | 3125695 | 20 | s1 | 3125700 | 3125699 | 19 | s2 | # | 3 | 3125695 | 3125696 | 20 | s1 | 3125700 | 3125699 | 19 | s2 | # | 3 | 3125700 | 3125701 | 20 | s5 | 3125700 | 3125699 | 19 | s2 | # | ... | ... | ... | ... | ... | ... | ... | ... | ... | # | 3 | 17294628 | 17294629 | 29 | s2 | 17294628 | 17294627 | 25 | s2 | # | 3 | 17294628 | 17294629 | 29 | s2 | 17294628 | 17294627 | 28 | s5 | # | 3 | 17294628 | 17294629 | 29 | s2 | 17294628 | 17294627 | 28 | s1 | # | 3 | 17294628 | 17294629 | 29 | s2 | 17294628 | 17294627 | 28 | s4 | # +--------------+-----------+-----------+-----------+------------+-----------+-----------+------------+------------+ # Unstranded PyRanges object has 13 rows and 9 columns from 1 chromosomes. # For printing, the PyRanges was sorted on Chromosome. # to get the data as a pandas df: jdf = j.df </code></pre>
python|python-2.7|pandas
0
9,936
60,402,309
Pandas Groupby at least 1 of 2 columns match
<p>I have a pandas df with a column for Names and 2 columns for 2 possible birth years. I want to groupby the name and birthyears, if at least one of the birthyear columns match.</p> <pre><code>FullName BirthYr1 BirthYr2 Smith, Joe 1985 1986 Dolan, Tom 1991 1992 Smith, Alex 1984 1985 Smith, Joe 1984 1985 Dolan, Tom 1991 1992 Smith, Alex 1986 1987 </code></pre> <p>BirthYr2 is always 1 more than BirthYr1.</p> <p>The 2 'Smith, Joe' would be grouped since they both have a 1985 (1 match), the 2 'Dolan, Tom' would be grouped since the have both columns the same (2 matches), while the 2 'Smith, Alex' would <strong>not</strong> be grouped since they don't have any matches.</p> <p>Once I figure this out I plan on using ngroup() to assign a unique id to each group.</p>
<p>This feels overcomplicated, but I think it achieves what you're looking for. Assuming your starting DataFrame is named <code>df</code>:</p> <pre><code># "Melt" the birth year columns such that each value is given its own # row. Throw away the redundant column names BirthYr1 and BirthYr2, # since their values are equally important to us. melted = df.melt(id_vars='FullName', value_name='BirthYr').drop(columns='variable') melted FullName BirthYr 0 Smith, Joe 1985 1 Dolan, Tom 1991 2 Smith, Alex 1984 3 Smith, Joe 1984 4 Dolan, Tom 1991 5 Smith, Alex 1986 6 Smith, Joe 1986 7 Dolan, Tom 1992 8 Smith, Alex 1985 9 Smith, Joe 1985 10 Dolan, Tom 1992 11 Smith, Alex 1987 # Group by fullname, then birth year. grouped = melted.groupby(['FullName', 'BirthYr']).size() grouped FullName BirthYr Dolan, Tom 1991 2 1992 2 Smith, Alex 1984 1 1985 1 1986 1 1987 1 Smith, Joe 1984 1 1985 2 1986 1 dtype: int64 # Any group with more than one member represents a match. grouped[grouped &gt; 1].reset_index()['FullName'].unique() array(['Dolan, Tom', 'Smith, Joe'], dtype=object) </code></pre>
python|pandas
1
9,937
60,606,292
Dataframe groupby to new dataframe
<p>I have a table as below.</p>

<pre><code>Month,Count,Parameter
March 2015,1,40
March 2015,1,10
March 2015,1,1
March 2015,1,25
March 2015,1,50
April 2015,1,15
April 2015,1,1
April 2015,1,1
April 2015,1,15
April 2015,1,15
</code></pre>

<p>I need to create a new table from the above, as shown below.</p>

<pre><code>Unique Month,Total Count,&lt;=30
March 2015,5,3
April 2015,5,5
</code></pre>

<p>The logic for the new table is as follows. The "Unique Month" column holds the unique months from the original table and needs to be sorted. "Total Count" is the sum of the "Count" column from the original table for that particular month. The "&lt;=30" column is the count of rows with "Parameter &lt;= 30" for that particular month.</p>

<p>Is there an easy way to do this with dataframes?</p>

<p>Thanks in advance.</p>
<p>IIUC, just check for <code>Parameter &lt;= 30</code> and then groupby:</p>

<pre><code>(df.assign(le_30=df.Parameter.le(30))
   .groupby('Month', as_index=False)    # pass sort=False if needed
   [['Count','le_30']].sum()
)
</code></pre>

<p>Or</p>

<pre><code>(df.Parameter.le(30)
   .groupby(df['Month'])                # pass sort=False if needed
   .agg(['count','sum'])
)
</code></pre>

<p>Output:</p>

<pre><code>        Month  Count  le_30
0  April 2015      5    5.0
1  March 2015      5    3.0
</code></pre>

<hr>

<p><strong>Update</strong>: as commented above, adding <code>sort=False</code> to <code>groupby</code> will respect your original sorting of <code>Month</code>. For example:</p>

<pre><code>(df.Parameter.le(30)
   .groupby(df['Month'], sort=False)
   .agg(['count','sum'])
   .reset_index()
)
</code></pre>

<p>Output:</p>

<pre><code>        Month  count  sum
0  March 2015      5  3.0
1  April 2015      5  5.0
</code></pre>
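<p>If you also want the exact column names from the question ("Unique Month", "Total Count", "&lt;=30"), a sketch with named aggregation (pandas &gt;= 0.25) plus a rename:</p>

<pre><code>out = (df.assign(le_30=df.Parameter.le(30))
         .groupby('Month', sort=False, as_index=False)
         .agg(total_count=('Count', 'sum'), le_30=('le_30', 'sum'))
         .rename(columns={'Month': 'Unique Month',
                          'total_count': 'Total Count',
                          'le_30': '&lt;=30'}))
</code></pre>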
python|pandas|dataframe
0
9,938
60,378,262
How to keep the dimensions when using the basic arithmetic operations with Numpy
<p>Recently I encountered a dimension problem and had to reshape the array after each calculation. For example,</p>

<pre><code>a=np.random.rand(2,3,4)
t=2
b=a[:,1,:] + a[:,2,:]*t
</code></pre>

<p>The second axis of <code>a</code> is reduced automatically and <code>b</code> becomes a 2x4 array. How can I keep the shape of <code>b</code> as [2,1,4]? In <code>numpy.sum()</code>, we can set <code>keepdims=True</code>, but how can I do the same for the basic arithmetic operations? </p>
<p>Convert the integer indices into lists:</p>

<pre><code>&gt;&gt;&gt; b = a[:,[1],:] + a[:,[2],:]*t
&gt;&gt;&gt; b.shape
(2, 1, 4)
</code></pre>
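<p>Equivalently, indexing with slices keeps the axis too (slices never drop a dimension), so the same result can be written as:</p>

<pre><code>&gt;&gt;&gt; b = a[:, 1:2, :] + a[:, 2:3, :] * t
&gt;&gt;&gt; b.shape
(2, 1, 4)
</code></pre>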
python|numpy
1
9,939
60,341,348
Merge multiple dataframes without common columns
<p>I have 2 datasets.</p> <pre><code>dict1 =pd.DataFrame({'Name' : ['A','B','C','D'], 'Score' : [19,20,11,12]}) list1 =pd.DataFrame(['Math', 'English', 'History', 'Science']) concat_data = pd.concat([dict1,list1]) </code></pre> <p>output :</p> <pre><code>Name Score 0 0 A 19.0 NaN 1 B 20.0 NaN 2 C 11.0 NaN 3 D 12.0 NaN 0 NaN NaN Math 1 NaN NaN English 2 NaN NaN History 3 NaN NaN Science </code></pre> <p>The output I am looking for:</p> <pre><code>Name Score 0 0 A 19.0 Math 1 B 20.0 English 2 C 11.0 History 3 D 12.0 Science </code></pre> <p>Can anyone help me with this?</p>
<p>All you need to do is pass the correct <code>axis</code>. The <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer">default behavior for concat</a> is <code>axis=0</code> which means the operation takes place index-wise or row-wise, while you are needing the operation to be performed column-wise:</p> <blockquote> <p>axis: {0/’index’, 1/’columns’}, default 0 The axis to concatenate along.</p> </blockquote> <pre><code>concat_data = pd.concat([dict1,list1],axis=1) print(concat_data) </code></pre> <p>Outputs:</p> <pre><code> Name Score 0 0 A 19 Math 1 B 20 English 2 C 11 History 3 D 12 Science </code></pre>
python|pandas|dataframe
1
9,940
60,713,502
Reshaping dataframe with multiple IDs
<p>I need help creating the format transformation:</p>

<p><a href="https://i.stack.imgur.com/7fiW5.png" rel="nofollow noreferrer">See here</a></p>

<p>I have tried pd.wide_to_long but I cannot find an example identical to mine.</p>

<p>Can you help?</p>
<p>Use:</p> <pre><code>df.set_index(['City','Var']).rename_axis(columns='Year').stack().unstack('Var') </code></pre>
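<p>Since the question's table is only linked as an image, here is a hedged illustration with made-up data of the assumed shape (City/Var identifier columns plus one column per year). The chain above moves the identifier columns into the index, names the column axis <code>Year</code>, stacks the year columns into rows, and spreads <code>Var</code> back out into columns:</p>

<pre><code>import pandas as pd

df = pd.DataFrame({'City': ['A', 'A', 'B', 'B'],
                   'Var':  ['x', 'y', 'x', 'y'],
                   '2019': [1, 2, 3, 4],
                   '2020': [5, 6, 7, 8]})

out = df.set_index(['City','Var']).rename_axis(columns='Year').stack().unstack('Var')
print(out)
# Var        x  y
# City Year
# A    2019  1  2
#      2020  5  6
# B    2019  3  4
#      2020  7  8
</code></pre>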
python|pandas
1
9,941
59,733,941
Create a new col in pandas with element of other columns
<p>Hello, I need some help.</p>

<p>I have a dataframe such as:</p>

<p>table: </p>

<pre><code>Col1 Col2 Col3 Sign
Loc1  1    60    -
Loc2  10   90    +
Loc3  40   100   +
Loc4  20   40    -
</code></pre>

<p>and from this table I want to create a <code>Newcol</code> with elements from the other columns, such as: </p>

<pre><code>Col1 Col2 Col3 Sign  Newcol
Loc1  1    60    -   Loc1:1-60(-)
Loc2  10   90    +   Loc2:11-90(+)
Loc3  40   100   +   Loc3:41-100(+)
Loc4  20   40    -   Loc4:20-40(-)
</code></pre>

<p>I tried:</p>

<pre><code>table["Newcol"]=table['Col1']+":"+str(table['Col2'])+"-"+str(table['Col3'])+"("+table['Sign']+")"
</code></pre>

<p>But how can I take into account the fact that when I have a <code>+</code> sign, I have to add <code>+1</code> to the <code>Col2</code> value for the <code>Newcol</code> name (as in the desired output above)? </p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.astype.html" rel="nofollow noreferrer"><code>Series.astype</code></a> for convert to strings and for add <code>1</code> compare by <code>+</code> and convert value to integer with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.add.html" rel="nofollow noreferrer"><code>Series.add</code></a>:</p> <pre><code>table["Newcol"] = (table['Col1']+":"+ (table['Col2'].add(table['Sign'].eq('+').astype(int))).astype(str)+"-"+ (table['Col3']).astype(str)+"("+ table['Sign']+")") print (table) Col1 Col2 Col3 Sign Newcol 0 Loc1 1 60 - Loc1:1-60(-) 1 Loc2 10 90 + Loc2:11-90(+) 2 Loc3 40 100 + Loc3:41-100(+) 3 Loc4 20 40 - Loc4:20-40(-) </code></pre>
python-3.x|pandas
2
9,942
59,777,735
Generate equally-spaced values including the right end using NumPy.arange
<p>Suppose I want to generate an array between 0 and 1 with spacing 0.1. In R, we can do </p> <pre><code>&gt; seq(0, 1, 0.1) [1] 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 </code></pre> <p>In Python, since <code>numpy.arange</code> doesn't include the right end, I need to add a small amount to the <code>stop</code>.</p> <pre><code>np.arange(0, 1.01, 0.1) array([0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ]) </code></pre> <p>But that seems a little weird. Is it possible to force <code>numpy.arange</code> to include the right end? Or maybe some other functions can do it?</p>
<p>You should be very careful using <code>arange</code> for floating point steps. <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.arange.html" rel="nofollow noreferrer">From the docs</a>:</p> <blockquote> <p>When using a non-integer step, such as 0.1, the results will often not be consistent. It is better to use numpy.linspace for these cases.</p> </blockquote> <p>Instead, use <code>linspace</code>, which allows you to specify the exact number of values returned.</p> <pre><code>&gt;&gt;&gt; np.linspace(0, 1, 11) array([0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ]) </code></pre> <p><code>linspace</code> <em>also</em> does in fact let you specify whether or not to include an endpoint (<code>True</code> by default):</p> <pre><code>&gt;&gt;&gt; np.linspace(0, 1, 11, endpoint=True) array([0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ]) &gt;&gt;&gt; np.linspace(0, 1, 11, endpoint=False) array([0. , 0.09090909, 0.18181818, 0.27272727, 0.36363636, 0.45454545, 0.54545455, 0.63636364, 0.72727273, 0.81818182, 0.90909091]) </code></pre>
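<p>If you think in terms of a step (as with R's <code>seq</code> or <code>arange</code>) rather than a count, you can derive <code>num</code> from the step; a small sketch:</p>

<pre><code>start, stop, step = 0, 1, 0.1
num = int(round((stop - start) / step)) + 1   # 11
np.linspace(start, stop, num)
# array([0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ])
</code></pre>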
python|numpy
4
9,943
61,650,749
Sorting pandas column based on number in the suffix after underscore
<p>I have a Dataframe with the below set of columns:</p> <pre><code>bill_id, product_1, product_20, product_300, price_1, price_20, price_300, quantity_1, quantity_20, quantity_300 </code></pre> <p>I would like this to be sorted in the below sequence based on the number after the underscore at the end of each column label</p> <pre><code>bill_id, product_1, price_1, quantity_1, product_20, price_20, quantity_20, product_300, price_300, quantity_300 </code></pre>
<p>Use <code>sorted</code> with lambda function by number after <code>_</code> by all columns without first and then change order by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>DataFrame.reindex</code></a>:</p> <pre><code>c = 'bill_id, product_1, product_20, product_300, price_1, price_20, price_300, quantity_1, quantity_20, quantity_300' </code></pre> <hr> <pre><code>df = pd.DataFrame(columns=c.split(', ')) print (df) Empty DataFrame Columns: [bill_id, product_1, product_20, product_300, price_1, price_20, price_300, quantity_1, quantity_20, quantity_300] Index: [] </code></pre> <hr> <pre><code>c = sorted(df.columns[1:], key=lambda x: int(x.split('_')[-1])) print (c) ['product_1', 'price_1', 'quantity_1', 'product_20', 'price_20', 'quantity_20', 'product_300', 'price_300', 'quantity_300'] df = df.reindex(df.columns[:1].tolist() + c, axis=1) print (df) Columns: [bill_id, product_1, price_1, quantity_1, product_20, price_20, quantity_20, product_300, price_300, quantity_300] Index: [] </code></pre> <p>Another idea is create index by all non product columns and sorting by all columns:</p> <pre><code>df = df.set_index('bill_id') c = sorted(df.columns, key=lambda x: int(x.split('_')[-1])) df = df.reindex(c, axis=1) </code></pre>
pandas|sorting
2
9,944
57,750,706
simultaneously update theta0 and theta1 to calculate gradient descent in python
<p>I am taking the machine learning course from coursera. There is a topic called gradient descent to optimize the cost function. It says to simultaneously update theta0 and theta1 such that it will minimize the cost function and will reach to global minimum. </p> <p>The formula for gradient descent is </p> <p><a href="https://i.stack.imgur.com/3MhPr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3MhPr.png" alt="enter image description here"></a></p> <p>How do i do this programmatically using python? I am using numpy array and pandas to start from scratch to understand step by step its logic.</p> <p>For now i have only calculated cost function</p> <pre><code># step 1 - collect our data data = pd.read_csv("datasets.txt", header=None) def compute_cost_function(x, y, theta): ''' Taking in a numpy array x, y, theta and generate the cost function ''' m = len(y) # formula for prediction = theta0 + theta1.x predictions = x.dot(theta) # formula for square error = ((theta1.x + theta0) - y)**2 square_error = (predictions - y)**2 # sum of square error function return 1/(2*m) * np.sum(square_error) # converts into numpy represetation of the pandas dataframe. The axes labels will be excluded numpy_data = data.values m = data[0].size x = np.append(np.ones((m, 1)), numpy_data[:, 0].reshape(m, 1), axis=1) y = numpy_data[:, 1].reshape(m, 1) theta = np.zeros((2, 1)) compute_cost_function(x, y, theta) def gradient_descent(x, y, theta, alpha): ''' simultaneously update theta0 and theta1 where theta0 = theta0 - apha * 1/m * (sum of square error) ''' pass </code></pre> <p>I know i have to call that <code>compute_cost_function</code> from gradient descent but could not apply that formula. </p>
<p>What it means is that you use the previous values of the parameters and compute what you need on the right hand side. Once you're done, update the parameters. To do this the most clearly, create a temporary array inside your function that stores the results on the right hand side and return the computed result when you're finished.</p> <pre><code>def gradient_descent(x, y, theta, alpha): ''' simultaneously update theta0 and theta1 where theta0 = theta0 - apha * 1/m * (sum of square error) ''' theta_return = np.zeros((2, 1)) theta_return[0] = theta[0] - (alpha / m) * ((x.dot(theta) - y).sum()) theta_return[1] = theta[1] - (alpha / m) * (((x.dot(theta) - y)*x[:, 1][:, None]).sum()) return theta_return </code></pre> <p>We first declare the temporary array then compute each part of the parameters, namely the intercept and slope separately then return what we need. The nice thing about the above code is that we're doing it vectorized. For the intercept term, <code>x.dot(theta)</code> performs matrix vector multiplication where you have your data matrix <code>x</code> and parameter vector <code>theta</code>. By subtracting this result with the output values <code>y</code>, we are computing the sum over all errors between the predicted values and true values, then multiplying by the learning rate then dividing by the number of samples. We do something similar with the slope term only we additionally multiply by each input value without the bias term. We additionally need to ensure the input values are in columns as slicing along the second column of <code>x</code> results in a 1D NumPy array instead of a 2D with a singleton column. This allows the elementwise multiplication to play nicely together.</p> <p>One more thing to note is that you don't need to compute the cost at all when updating the parameters. Mind you, inside your optimization loop it'll be nice to call it as you're updating your parameters so you can see how well your parameters are learning from your data. </p> <hr> <p>To make this truly vectorized and thus exploiting the simultaneous update, you can formulate this as a matrix-vector multiplication on the training examples alone:</p> <pre><code>def gradient_descent(x, y, theta, alpha): ''' simultaneously update theta0 and theta1 where theta0 = theta0 - apha * 1/m * (sum of square error) ''' return theta - (alpha / m) * x.T.dot(x.dot(theta) - y) </code></pre> <p>What this does is that when we compute <code>x.dot(theta)</code>, this calculates the the predicted values, then we combine this by subtracting with the expected values. This produces the error vector. When we pre-multiply by the transpose of <code>x</code>, what ends up happening is that we take the error vector and perform the summation vectorized such that the first row of the transposed matrix <code>x</code> corresponds to values of 1 meaning that we are simply summing up all of the error terms which gives us the update for the bias or intercept term. Similarly the second row of the transposed matrix <code>x</code> additionally weights each error term by the corresponding sample value in <code>x</code> (without the bias term of 1) and computes the sum that way. The result is a 2 x 1 vector which gives us the final update when we subtract with the previous value of our parameters and weighted by the learning rate and number of samples.</p> <hr> <p>I didn't realize you were putting the code in an iterative framework. 
In that case you need to update the parameters at each iteration.</p>

<pre><code>def gradient_descent(x, y, theta, alpha, iterations):
    '''
    simultaneously update theta0 and theta1 where
    theta0 = theta0 - apha * 1/m * (sum of square error)
    '''
    theta_return = np.zeros((2, 1))
    for i in range(iterations):
        theta_return[0] = theta[0] - (alpha / m) * ((x.dot(theta) - y).sum())
        theta_return[1] = theta[1] - (alpha / m) * (((x.dot(theta) - y)*x[:, 1][:, None]).sum())
        theta = theta_return.copy()   # copy so theta and theta_return stay separate arrays

    return theta

theta = gradient_descent(x, y, theta, 0.01, 1000)
</code></pre>

<p>At each iteration, you compute the update from the previous parameter values and then assign it, so that the next time around, the current updates become the previous updates. The <code>.copy()</code> matters: without it, <code>theta</code> and <code>theta_return</code> would be the same array after the first iteration, and writing the first component would leak into the computation of the second, breaking the simultaneous update.</p>
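<p>For completeness, a compact sketch that combines the vectorized simultaneous update from earlier with the iteration loop (it relies on the same outer-scope <code>m</code> as the code above):</p>

<pre><code>def gradient_descent(x, y, theta, alpha, iterations):
    for _ in range(iterations):
        theta = theta - (alpha / m) * x.T.dot(x.dot(theta) - y)
    return theta

theta = gradient_descent(x, y, theta, 0.01, 1000)
</code></pre>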
python|numpy|machine-learning|linear-regression|gradient-descent
5
9,945
54,958,194
How do I convert a number to thousand separator using comma?
<p>I have a pandas dataframe where some of the values are numbers, and I want to format them with a thousands separator. I tried <code>thousands=','</code> but it is not working, and I tried other solutions, but how can I convert the whole dataframe to use thousands separators before <code>to_csv</code>?</p>

<p><strong>Example of what I want:</strong> convert <code>31752000</code> to <code>31,752,000</code></p>

<p><strong>Here is the code I'm trying:</strong></p>

<pre><code>url = 'https://finance.yahoo.com/quote/RDS-A/balance-sheet?p=RDS-A'
table = pandas.read_html(url, attrs={ 'class': 'Lh(1.7) W(100%) M(0)'}, header=0)
df = pandas.DataFrame(table[0])
print(df)

#Then save to csv
#df.to_csv('ydata.csv', index=False, header=True)
</code></pre>
<p>You could manually add the commas like this (only works with numbers smaller than 1,000,000,000):</p> <pre><code>thousandSeperatedStrings=[] for x in df: string="" if(x&gt;=1000000): x=x%1000000000 string=str(int(x/1000000))+"," x=x%1000000 string=string+str(int(x/100000)) x=x%100000 string=string+str(int(x/10000)) x=x%10000 string=string+str(int(x/1000))+"," x=x%1000 string=string+str(int(x/100)) x=x%100 string=string+str(int(x/10)) x=x%10 string=string+str(int(x)) elif(x&gt;=1000): x=x%1000000 string=string+str(int(x/1000))+"," x=x%1000 string=string+str(int(x/100)) x=x%100 string=string+str(int(x/10)) x=x%10 string=string+str(int(x)) else: string=str(int(x)) thousandSeperatedStrings.append(string) </code></pre>
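<p>Two notes on the snippet above. First, iterating a DataFrame directly (<code>for x in df:</code>) yields the column names, so the loop needs to run over the values of a numeric column instead (e.g. <code>for x in df['your_column']:</code>, with <code>'your_column'</code> standing in for whichever column holds the numbers). Second, for reference, a much shorter sketch, assuming the column you want to format is numeric, uses Python's built-in thousands separator in the format spec:</p>

<pre><code>import pandas as pd

df = pd.DataFrame({'val': [31752000, 1500, 7]})   # made-up example column
df['val_fmt'] = df['val'].map('{:,}'.format)
print(df)
#         val     val_fmt
# 0  31752000  31,752,000
# 1      1500       1,500
# 2         7           7
</code></pre>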
python|pandas
1
9,946
49,590,478
Are the libtensorflow_jni and libtensorflow_jni_gpu jar dependencies mutually exclusive?
<p>If we add both the libtensorflow_jni and the libtensorflow_jni_gpu to the maven pom or perhaps something like this:</p> <pre><code>&lt;dependency&gt; &lt;groupId&gt;org.tensorflow&lt;/groupId&gt; &lt;artifactId&gt;tensorflow&lt;/artifactId&gt; &lt;/dependency&gt; &lt;dependency&gt; &lt;groupId&gt;org.tensorflow&lt;/groupId&gt; &lt;artifactId&gt;libtensorflow_jni_gpu&lt;/artifactId&gt; &lt;/dependency&gt; </code></pre> <p>Can we expect that the built application will be able to run on and leverage both CPU and GPU platforms? Or are those libraries mutually exclusive? </p> <p>Documentation is not particularly clear on this: <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/java/maven/README.md#artifact-structure" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/java/maven/README.md#artifact-structure</a></p>
<p>I believe I have an answer to my question. </p>

<p>Based on the structure of the <code>libtensorflow_jni</code> and <code>libtensorflow_jni_gpu</code> jars and the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/java/src/main/java/org/tensorflow/NativeLibrary.java" rel="nofollow noreferrer">NativeLibrary.java</a> implementation, I can conclude that their classpath namespaces overlap and therefore those native jars are <strong>mutually exclusive</strong>!</p>

<p>At any given moment one can have either</p>

<pre><code> &lt;dependency&gt;
   &lt;groupId&gt;org.tensorflow&lt;/groupId&gt;
   &lt;artifactId&gt;libtensorflow_jni&lt;/artifactId&gt;
 &lt;/dependency&gt;
</code></pre>

<p>or </p>

<pre><code> &lt;dependency&gt;
   &lt;groupId&gt;org.tensorflow&lt;/groupId&gt;
   &lt;artifactId&gt;libtensorflow_jni_gpu&lt;/artifactId&gt;
 &lt;/dependency&gt;
</code></pre>

<p>on the classpath, but never both!</p>

<p>One practical implication of this is that you need to build two separate deployment jars for your application (one for the CPU environment and one for the GPU environment).</p>
java|tensorflow|native
0
9,947
49,635,436
Shapely point geometry in geopandas df to lat/lon columns
<p>I have a geopandas df with a column of shapely point objects. I want to extract the coordinate (lat/lon) from the shapely point objects to generate latitude and longitude columns. There must be an easy way to do this, but I cannot figure it out.</p> <p>I know you can extract the individual coordinates like this:</p> <pre><code>lon = df.point_object[0].x lat = df.point_object[0].y </code></pre> <p>And I could create a function that does this for the entire df, but I figured there was a more efficient/elegant way.</p>
<p>If you have the latest version of geopandas (0.3.0 as of writing), and the if <code>df</code> is a GeoDataFrame, you can use the <code>x</code> and <code>y</code> attributes on the geometry column:</p> <pre><code>df['lon'] = df.point_object.x df['lat'] = df.point_object.y </code></pre> <p>In general, if you have a column of shapely objects, you can also use <code>apply</code> to do what you can do on individual coordinates for the full column:</p> <pre><code>df['lon'] = df.point_object.apply(lambda p: p.x) df['lat'] = df.point_object.apply(lambda p: p.y) </code></pre>
python|gis|latitude-longitude|geopandas|shapely
29
9,948
49,511,742
how to find tfslim output node names
<p>After training some model with tensorflow and slim, I am trying to freeze the model and weights. But it's quite hard for me to find out the output nodes name, which is necessary for <code>freeze_graph.freeze_graph()</code>.</p> <p>my output layers looks like:</p> <pre><code> conv4_1 = slim.conv2d(net,num_outputs=2,kernel_size=[1,1],stride=1,scope='conv4_1',activation_fn=tf.nn.softmax) #conv4_1 = slim.conv2d(net,num_outputs=1,kernel_size=[1,1],stride=1,scope='conv4_1',activation_fn=tf.nn.sigmoid) print conv4_1.get_shape() #batch*H*W*4 bbox_pred = slim.conv2d(net,num_outputs=4,kernel_size=[1,1],stride=1,scope='conv4_2',activation_fn=None) </code></pre> <p>conv4_1 is the softmaxed class like, face or not. bbox_pred is the bounding box regression.</p> <p>when I save the graph with, <code>tf.train.write_graph(self.sess.graph_def, output_path, 'model.pb')</code> and open the model.pb as text, I found that the graph looks like:</p> <pre><code>node { name: "conv4_1/weights/Initializer/random_uniform/shape" ... node { name: "conv4_1/kernel/Regularizer/l2_regularizer" ... node { name: "conv4_1/Conv2D" op: "Conv2D" input: "conv3/add" input: "conv4_1/weights/read" ... node { name: "conv4_1/Softmax" op: "Softmax" input: "conv4_1/Reshape" ... node { name: "Squeeze" op: "Squeeze" input: "conv4_1/Reshape_1" attr { key: "T" value { type: DT_FLOAT } } attr { key: "squeeze_dims" value { list { i: 0 } } } } </code></pre> <p>so, here comes the problem, which is the output node names? </p> <p>tensorflow only ways of writing layers could set "names" like:</p> <pre><code> .conv(3, 3, 32, 1, 1, padding='VALID', relu=False, name='conv3') .prelu(name='PReLU3') .conv(1, 1, 2, 1, 1, relu=False, name='conv4-1') .softmax(3,name='prob1')) (self.feed('PReLU3') #pylint: disable=no-value-for-parameter .conv(1, 1, 4, 1, 1, relu=False, name='conv4-2')) </code></pre> <p>But I can't find setting output names method in tensorflow slim.</p> <p>Thanks!</p>
<p>Output Node names for the 3 of the inception models are given below:</p> <p>inception v3 : InceptionV3/Predictions/Reshape_1 <br/> inception v4 : InceptionV4/Logits/Predictions <br/> inception resnet v2 : InceptionResnetV2/Logits/Predictions</p>
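<p>If you are unsure which node is the output in your own graph, one thing that may help is simply listing the node names and looking at the last few; for the graph dump shown in the question, candidates like the <code>conv4_1/Softmax</code> and <code>Squeeze</code> nodes would show up there:</p>

<pre><code># run inside the session/graph that built the model (TF 1.x)
node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]
print(node_names[-20:])   # output nodes are usually near the end
</code></pre>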
tensorflow|tensorflow-slim
1
9,949
73,367,863
How to create 1 row dataframe from a dataset in pandas
<p>I have a .csv file with many rows and columns. For analysis purposes, I want to select a row number from the dataset and pass it as a dataframe in pandas.</p> <p>Instead of writing the column names and input values inside a dict, how can I make it faster? Right now I have:</p> <pre><code>df= pd.read_csv('filename.csv') df2= pd.DataFrame({'var1': 5, 'var2': 10, 'var3': 15}) </code></pre> <p>var1,var2,var3 are df columns. I want to make a seperate dataframe with df data.</p> <p>You can either select a random row, or a given row number. Thank you for your help.</p>
<pre class="lang-py prettyprint-override"><code>df2 = df.iloc[rownum:rownum + 1, :] </code></pre>
python|python-3.x|pandas|dataframe
0
9,950
73,325,035
Most efficient way to take max of classifier scores in Python and / or PySpark
<p>I have a dataframe with the scores of a two-class classification model...</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Observation</th> <th>Class</th> <th>Probability</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>0</td> <td>0.5013</td> </tr> <tr> <td>1</td> <td>1</td> <td>0.4987</td> </tr> <tr> <td>2</td> <td>0</td> <td>0.5010</td> </tr> <tr> <td>2</td> <td>1</td> <td>0.4990</td> </tr> <tr> <td>3</td> <td>0</td> <td>0.5128</td> </tr> <tr> <td>3</td> <td>1</td> <td>0.4872</td> </tr> </tbody> </table> </div> <p>I only care about the &quot;winning&quot; class (either 0 or 1) and its corresponding probability (the max. probability). What is the best way to group or modify this dataframe to only have 3 observations (in this case) with the &quot;winning&quot; class (0 or 1) and the &quot;winning&quot; probability?</p> <p>For example, my desired output...</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Observation</th> <th>Class</th> <th>Probability</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>0</td> <td>0.5013</td> </tr> <tr> <td>2</td> <td>0</td> <td>0.5010</td> </tr> <tr> <td>3</td> <td>0</td> <td>0.5128</td> </tr> </tbody> </table> </div>
<p>To select some rows in a df by some rule you can use .loc or .query:</p> <p>timing: 300 µs ± 6 µs per loop</p> <pre><code>df.loc[df['Class']==0] </code></pre> <p>timing: 1.17 ms ± 53 µs per loop</p> <pre><code>df.query('Class == 0') </code></pre>
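<p>If the winning class is not always 0, a more general sketch picks the max-probability row per observation (rebuilding the question's sample data for illustration):</p>

<pre><code>import pandas as pd

df = pd.DataFrame({'Observation': [1, 1, 2, 2, 3, 3],
                   'Class':       [0, 1, 0, 1, 0, 1],
                   'Probability': [0.5013, 0.4987, 0.5010, 0.4990, 0.5128, 0.4872]})

winners = df.loc[df.groupby('Observation')['Probability'].idxmax()]
print(winners)
#    Observation  Class  Probability
# 0            1      0       0.5013
# 2            2      0       0.5010
# 4            3      0       0.5128
</code></pre>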
pandas|dataframe|pyspark|data-science
0
9,951
67,503,053
ValueError in Custom Keras Layer
<p>I implemented a custom layer for Minibatch Standard Deviation:</p> <pre><code>class MinibatchStd(Layer): def __init__(self, group_size=4, epsilon=1e-8): super(MinibatchStd, self).__init__() self.epsilon = epsilon self.group_size = group_size def call(self, input_tensor): n, h, w, c = input_tensor.shape self.group_size = tf.keras.backend.minimum(self.group_size, tf.cast(input_tensor[0], dtype=tf.int32)) x = tf.reshape(input_tensor, [self.group_size, -1, h, w, c]) group_mean, group_var = tf.nn.moments(x, axes=(0), keepdims=False) group_std = tf.sqrt(group_var + self.epsilon) avg_std = tf.reduce_mean(group_std, axis=[1,2,3], keepdims=True) x = tf.tile(avg_std, [self.group_size, h, w, 1]) return tf.concat([input_tensor, x], axis=-1) </code></pre> <p>After executing it, I get the following error:</p> <pre><code>ValueError: in user code: &lt;ipython-input-30-9b80a1ea4799&gt;:20 call * x = tf.reshape(input_tensor, [self.group_size, -1, h, w, c]) C:\ProgramData\Anaconda3\envs\gputest\lib\site-packages\tensorflow\python\ops\array_ops.py:193 reshape ** result = gen_array_ops.reshape(tensor, shape, name) C:\ProgramData\Anaconda3\envs\gputest\lib\site-packages\tensorflow\python\ops\gen_array_ops.py:8087 reshape &quot;Reshape&quot;, tensor=tensor, shape=shape, name=name) C:\ProgramData\Anaconda3\envs\gputest\lib\site-packages\tensorflow\python\framework\op_def_library.py:488 _apply_op_helper (input_name, err)) ValueError: Tried to convert 'shape' to a tensor and failed. Error: Shapes must be equal rank, but are 3 and 0 From merging shape 0 with other shapes. for '{{node minibatch_std_4/Reshape/packed}} = Pack[N=5, T=DT_INT32, axis=0](minibatch_std_4/Minimum, minibatch_std_4/Reshape/packed/1, minibatch_std_4/Reshape/packed/2, minibatch_std_4/Reshape/packed/3, minibatch_std_4/Reshape/packed/4)' with input shapes: [4,4,256], [], [], [], []. </code></pre> <p>It only appears when I add the line:</p> <pre><code>self.group_size = tf.keras.backend.minimum(self.group_size, tf.cast(input_tensor[0], dtype=tf.int32)) </code></pre> <p>I also tried to use <code>tf.math.minimum</code> but also failed.</p> <p>I use <code>Keras = 2.4.3</code> and <code>TF = 2.2.0</code></p>
<p>There are two ways to get the shape of a tensor (say <code>x</code>): <code>x.shape</code> and <code>tf.shape(x)</code>. These two are fundamentally different: the former returns the static shape known at graph-construction time (which may contain <code>None</code> for dimensions such as the batch size), while the latter adds an op to the computation graph and returns the dynamic shape as a tensor evaluated at run time.</p> <p>In short, instead of</p> <pre><code>    n, h, w, c = input_tensor.shape
</code></pre> <p>use</p> <pre><code>    shape = tf.shape(input_tensor)
    n = shape[0]
    h = shape[1]
    w = shape[2]
    c = shape[3]
</code></pre>
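<p>For reference, a minimal sketch of the layer's <code>call</code> with every shape-dependent value taken from <code>tf.shape</code>. It keeps the group-statistics logic from the question unchanged and is only meant as an illustration of the fix, not the single correct implementation:</p> <pre><code>def call(self, input_tensor):
    # Dynamic shape: evaluated at run time, so an unknown batch size is fine
    shape = tf.shape(input_tensor)
    n, h, w, c = shape[0], shape[1], shape[2], shape[3]

    # Clamp the group size to the actual batch size
    group_size = tf.minimum(self.group_size, n)

    x = tf.reshape(input_tensor, [group_size, -1, h, w, c])
    group_mean, group_var = tf.nn.moments(x, axes=[0], keepdims=False)
    group_std = tf.sqrt(group_var + self.epsilon)
    avg_std = tf.reduce_mean(group_std, axis=[1, 2, 3], keepdims=True)
    x = tf.tile(avg_std, [group_size, h, w, 1])
    return tf.concat([input_tensor, x], axis=-1)
</code></pre>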
python|tensorflow|keras|tensorflow2.0|tf.keras
1
9,952
67,438,792
How to calculate roc auc score from positive unlabeled learning?
<p>I'm trying to adapt some code for positive unlabeled learning from <a href="https://github.com/phuijse/bagging_pu/blob/master/PU_Learning_simple_example.ipynb" rel="nofollow noreferrer">this example</a>, which runs with my data but I want to also calculate the ROC AUC score which I'm getting stuck on.</p> <p>My data is divided into positive samples (<code>data_P</code>) and unlabeled samples (<code>data_U</code>), each with only 2 features/columns of data such as:</p> <pre><code>#3 example rows: data_P [[-1.471, 5.766], [-1.672, 5.121], [-1.371, 4.619]] </code></pre> <pre><code>#3 example rows: data_U [[1.23, 6.26], [-5.72, 4.1213], [-3.1, 7.129]] </code></pre> <p>I run the positive-unlabeled learning as in the linked example:</p> <pre><code>known_labels_ratio = 0.5 NP = data_P.shape[0] NU = data_U.shape[0] T = 1000 K = NP train_label = np.zeros(shape=(NP+K,)) train_label[:NP] = 1.0 n_oob = np.zeros(shape=(NU,)) f_oob = np.zeros(shape=(NU, 2)) for i in range(T): # Bootstrap resample bootstrap_sample = np.random.choice(np.arange(NU), replace=True, size=K) # Positive set + bootstrapped unlabeled set data_bootstrap = np.concatenate((data_P, data_U[bootstrap_sample, :]), axis=0) # Train model model = DecisionTreeClassifier(max_depth=None, max_features=None, criterion='gini', class_weight='balanced') model.fit(data_bootstrap, train_label) # Index for the out of the bag (oob) samples idx_oob = sorted(set(range(NU)) - set(np.unique(bootstrap_sample))) # Transductive learning of oob samples f_oob[idx_oob] += model.predict_proba(data_U[idx_oob]) n_oob[idx_oob] += 1 predict_proba = f_oob[:, 1]/n_oob </code></pre> <p>This all runs fine but what I want is to run <code>roc_auc_score()</code> which I'm getting stuck on how to do without errors.</p> <p>Currently I am trying:</p> <pre><code>y_pred = model.predict_proba(data_bootstrap) roc_auc_score(train_label, y_pred) ValueError: bad input shape (3, 2) </code></pre> <p>The problem seems to be that <code>y_pred</code> gives an output with 2 columns, looking like:</p> <pre><code>y_pred array([[0.00554287, 0.9944571 ], [0.0732314 , 0.9267686 ], [0.16861796, 0.83138204]]) </code></pre> <p>I'm not sure why <code>y_pred</code> ends up like this, is it giving the probability based on if the sample falls into 2 groups? Positive or other essentially? Could I just filter these to select per row the probability with the highest score? Or is there a way for me to change this or another way for me to calculate the AUCROC score?</p>
<p><code>y_pred</code> must be a single number, giving the probability of the positive class <code>p1</code>; currently your <code>y_pred</code> consists of both probabilities <code>[p0, p1]</code> (with <code>p0+p1=1.0</code> by definition).</p> <p>Assuming that your positive class is class <code>1</code> (i.e. the second element of each array in <code>y_pred</code>), what you should do is:</p> <pre><code>y_pred_pos = [y_pred[i, 1] for i in range(len(y_pred))] y_pred_pos # inspect # [0.9944571, 0.9267686, 0.83138204] roc_auc_score(train_label, y_pred_pos) </code></pre> <p>In case your <code>y_pred</code> is a Numpy array (and not a Python list), you could replace the list comprehension in the first command above with:</p> <pre><code>y_pred_pos = y_pred[:,1] </code></pre>
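<p>Putting it together with the model call from the question in one line (still assuming class <code>1</code> is the positive class):</p> <pre><code>roc_auc_score(train_label, model.predict_proba(data_bootstrap)[:, 1])
</code></pre>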
python|numpy|machine-learning|scikit-learn|auc
2
9,953
60,095,584
pandas pivot_table: can I display sub-totals in the output?
<p>Suppose I have a very simple dataframe, like so:</p> <pre><code>data={"Label": (1,1,1,2,2,2,2,3,3), "Value": ("a","b","b","b","c","a","b","a","c")}

df = pd.DataFrame(data = data)
</code></pre> <p>I can generate a pivot table as follows, by writing <code>pd.pivot_table(df,index=["Label", "Value"],values=["Value"],aggfunc=len)</code>:</p> <pre><code>________________________
|Label | Value | Count |
|------+-------+-------|
|  1   |   a   |   1   |
|      |   b   |   2   |
|  2   |   a   |   1   |
|      |   b   |   2   |
|      |   c   |   1   |
|  3   |   a   |   1   |
|      |   c   |   1   |
|------+-------+-------|
</code></pre> <p>Is there any way to replicate Excel pivot table functionality, with the top-level aggregates included?</p> <p><a href="https://i.stack.imgur.com/mgDVE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mgDVE.png" alt="enter image description here"></a></p>
<p>You won't find an explicit equivalent in pandas, but you can always chain multiple functions together. I'll give a <code>groupby</code> example:</p> <pre><code>import pandas as pd data={"Label": (1,1,1,2,2,2,2,3,3), "Value": ("a","b","b","b","c","a","b","a","c")} df = pd.DataFrame(data = data) df["Top_Level_Counts"] = df.groupby("Label").transform("count") df["Counts"] = df.groupby(["Label","Value"]).transform("count") print(df) Label Value Top_Level_Counts Counts 0 1 a 3 1 1 1 b 3 2 2 1 b 3 2 3 2 b 4 2 4 2 c 4 1 5 2 a 4 1 6 2 b 4 2 7 3 a 2 1 8 3 c 2 1 </code></pre> <p>Or a one liner like this (my preference):</p> <pre><code>df = (pd.DataFrame(data = data) .assign(Top_Level_Counts = lambda x:x.groupby("Label").transform("count")) .assign(Counts = lambda x:x.groupby(["Label","Value"]).transform("count")) ).set_index(["Label","Value"]) print(df) Top_Level_Counts Counts Label Value 1 a 3 1 b 3 2 b 3 2 2 b 4 2 c 4 1 a 4 1 b 4 2 3 a 2 1 c 2 1 </code></pre>
pandas|pivot-table
1
9,954
60,157,722
Python - Defining a chunk function to encode genomic data
<p>I'm trying to encode genomes from strings stored in a dataframe to an array of corresponding numerical values. </p> <p>Here is some of my dataframe (for some reason it doesn't give me all 5 columns just 2):</p> <pre><code>Antibiotic ... Genome 0 isoniazid ... ccctgacacatcacggcgcctgaccgacgagcagaagatccagctc... 1 isoniazid ... gggggtgctggcggggccggcgccgataaccccaccggcatcggcg... 2 isoniazid ... aatcacaccccgcgcgattgctagcatcctcggacacactgcacgc... 3 isoniazid ... gttgttgttgccgagattcgcaatgcccaggttgttgttgccgaga... 4 isoniazid ... ttgaccgatgaccccggttcaggcttcaccacagtgtggaacgcgg... </code></pre> <p>So I need to split these strings character by character and assign them to floats. This is the lookup table I was using:</p> <pre><code>lookup = { 'a': 0.25, 'g': 0.50, 'c': 0.75, 't': 1.00 # z: 0.00 } </code></pre> <p>I tried to apply this directly using:</p> <pre><code>dataframe['Genome'].apply(lambda bps: pd.Series([lookup[bp] if bp in lookup else 0.0 for bp in bps.lower()])).values </code></pre> <p>But I have too much data to fit into memory so I'm trying to process using chunks and I'm having trouble defining a reprocessing function.</p> <p>Here's my code so far:</p> <pre><code>lookup = { 'a': 0.25, 'g': 0.50, 'c': 0.75, 't': 1.00 # z: 0.00 } dfpath = 'C:\\Users\\CAAVR\\Desktop\\Ison.csv' dataframe = pd.read_csv(dfpath, chunksize=10) chunk_list = [] def preprocess(chunk): chunk['Genome'].apply(lambda bps: pd.Series([lookup[bp] if bp in lookup else 0.0 for bp in bps.lower()])).values return; for chunk in dataframe: chunk_filter = preprocess(chunk) chunk_list.append(chunk_filter) dataframe1 = pd.concat(chunk_list) print(dataframe1) </code></pre> <p>Thanks in advance! </p>
<p>You have <code>chunk_filter = preprocess(chunk)</code>, but your <code>preprocess()</code> function returns nothing, so <code>chunk_filter</code> is always meaningless. Modify your preprocess function to store the result of the <code>apply()</code> call, then return that value. For example:</p> <pre><code>def preprocess(chunk): processed_chunk = chunk['Genome'].apply(lambda bps: pd.Series([lookup[bp] if bp in lookup else 0.0 for bp in bps.lower()])).values return processed_chunk; </code></pre> <p>By doing this, you actually return the data from the preprocess function so that it can be appended to the chunk list. As you have it currently, the preprocess function works correctly but essentially discards the results.</p>
python|pandas|preprocessor|chunks
1
9,955
60,120,828
Fetch value of a specific key from nested dictionary in pandas with python
<p>I am iterating through the rows in pandas dataframe printing out nested dictionaries from the specific column. My nested dictionary looks like this:</p> <pre><code>{'dek': "&lt;p&gt;Don't forget to buy a card&lt;/p&gt;", 'links': {'edit': {'dev': '//patty-menshealth.feature.net/en/content/edit/76517422-96ad-4b5c-a24a-c080c58bce0c', 'prod': '//patty-menshealth.prod.com/en/content/edit/76517422-96ad-4b5c-a24a-c080c58bce0c', 'stage': '//patty-menshealth.stage.net/en/content/edit/76517422-96ad-4b5c-a24a-c080c58bce0c'}, 'frontend': {'dev': '//menshealth.feature.net/trending-news/a19521193/fathers-day-weekend-plans/', 'prod': '//www.menshealth.com/trending-news/a19521193/fathers-day-weekend-plans/', 'stage': '//menshealth.stage.net/trending-news/a19521193/fathers-day-weekend-plans/'}}, 'header': {'title_color': 1, 'title_layout': 1}, 'sponsor': {'program_type': 1, 'tracking_urls': []}, 'social_dek': "&lt;p&gt;Don't forget to buy a card&lt;/p&gt;", 'auto_social': 0, 'index_title': "\u200bWeekend Guide: Treat Your Dad Right This Father's Day", 'short_title': "Treat Your Dad Right This Father's Day", 'social_title': "\u200bWeekend Guide: Treat Your Dad Right This Father's Day", 'editors_notes': '&lt;p&gt;nid: 2801076&lt;br&gt;created_date: 2017-06-16 13:00:01&lt;br&gt;compass_feed_date: 2017-06-21 14:01:58&lt;br&gt;contract_id: 40&lt;/p&gt;', 'seo_meta_title': "Treat Your Dad Right This Father's Day\u200b | Men’s Health", 'social_share_url': '/trending-news/a19521193/fathers-day-weekend-plans/', 'seo_related_links': {}, 'editor_attribution': 'by', 'hide_from_homepage': 1, 'syndication_rights': 3, 'seo_meta_description': "\u200bFrom gifts to food ideas, we've got your Father's Day covered. Just don't forget to buy him a card."} </code></pre> <p>I use the code below:</p> <pre><code>def recursive_items(dictionary): for key, value in dictionary.iteritems(): if type(value) is dict: yield from recursive_items(value) else: yield (key, value) for key, value in recursive_items(merged_df["metadata_y"]): print(key, value) </code></pre> <p>How do I grab the value of a specific key? I tried to include the index of the key I am looking to fetch with <code>print(key[5], value</code> it gave me an error: TypeError: 'int' object is not subscriptable.</p> <p>How can I grab the value?</p>
<p>Apologies about not directly addressing the original question, but maybe it's worth "flattening" the nested column using <code>json_normalize</code>.</p> <p>For example, if your example data is named <code>dictionary</code>:</p> <pre><code>from pandas.io.json import json_normalize # Flatten the nested dict, resulting in a DataFrame with 1 row and 23 columns this_df = json_normalize(dictionary) # Inspect the resulting columns. Is this structure useful? this_df.columns Index(['dek', 'social_dek', 'auto_social', 'index_title', 'short_title', 'social_title', 'editors_notes', 'seo_meta_title', 'social_share_url', 'editor_attribution', 'hide_from_homepage', 'syndication_rights', 'seo_meta_description', 'links.edit.dev', 'links.edit.prod', 'links.edit.stage', 'links.frontend.dev', 'links.frontend.prod', 'links.frontend.stage', 'header.title_color', 'header.title_layout', 'sponsor.program_type', 'sponsor.tracking_urls'], dtype='object') </code></pre>
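<p>Once the dict is flattened this way, grabbing the value of a specific key becomes plain column access. A small sketch using one of the column names produced above (the single row is selected with <code>iloc[0]</code>):</p> <pre><code>prod_edit_link = this_df['links.edit.prod'].iloc[0]
</code></pre>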
python|pandas|dictionary|nested
2
9,956
59,951,043
To extract distinct values for all categorical columns in dataframe
<p>I have a situation where I need to print all the distinct values that are there for all the categorical columns in my data frame The dataframe looks like this :</p> <pre><code>Gender Function Segment M IT LE F IT LM M HR LE F HR LM </code></pre> <p>The output should give me the following:</p> <pre><code>Variable_Name Distinct_Count Gender 2 Function 2 Segment 2 </code></pre> <p>How to achieve this?</p>
<p>Use <code>nunique</code>, then pass the resulting Series into a new dataframe and set the column names.</p> <pre><code>df_unique = df.nunique().to_frame().reset_index()
df_unique.columns = ['Variable','DistinctCount']
</code></pre> <hr> <pre><code>print(df_unique)

   Variable  DistinctCount
0    Gender              2
1  Function              2
2   Segment              2
</code></pre>
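<p>If you prefer to name the columns in the same chain, a stylistic alternative that produces the headers from the question:</p> <pre><code>df_unique = df.nunique().rename_axis('Variable_Name').reset_index(name='Distinct_Count')
</code></pre>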
python-3.x|pandas|pandas-groupby
5
9,957
65,311,782
Failed Qiskit installation with Anaconda on Windows
<p>I'm attempting to install Qiskit via pip and Anaconda on my machine. Here's my process</p> <p>1.) Install Anaconda 2.) Open Anaconda 3 prompt 3.) Create a virtual environment using <code>conda create -n &lt;environment-name&gt; python=3</code> command (I've created the environment on different occasions using -n and -m, it creates the environment just fine either way) 4.) Activate the environment 5.) Install Qiskit using <code>pip install qiskit</code></p> <p>When I run <code>pip install qiskit</code>, this mess populates the Anaconda prompt</p> <pre><code>(.venv) C:\Users\brenm&gt;pip install qiskit </code></pre> <p>...</p> <pre><code> Installing build dependencies ... error ERROR: Command errored out with exit status 1: command: 'C:\Users\brenm\anaconda3\envs\.venv\python.exe' 'C:\Users\brenm\anaconda3\envs\.venv\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\brenm\AppData\Local\Temp\pip-build-env-2psge951\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel 'Cython&gt;=0.28.5' 'numpy==1.13.3; python_version=='&quot;'&quot;'3.6'&quot;'&quot;' and platform_system!='&quot;'&quot;'AIX'&quot;'&quot;' and platform_python_implementation == '&quot;'&quot;'CPython'&quot;'&quot;'' 'numpy==1.14.0; python_version=='&quot;'&quot;'3.6'&quot;'&quot;' and platform_system!='&quot;'&quot;'AIX'&quot;'&quot;' and platform_python_implementation != '&quot;'&quot;'CPython'&quot;'&quot;'' 'numpy==1.14.5; python_version=='&quot;'&quot;'3.7'&quot;'&quot;' and platform_system!='&quot;'&quot;'AIX'&quot;'&quot;'' 'numpy==1.17.3; python_version&gt;='&quot;'&quot;'3.8'&quot;'&quot;' and platform_system!='&quot;'&quot;'AIX'&quot;'&quot;'' 'numpy==1.16.0; python_version=='&quot;'&quot;'3.6'&quot;'&quot;' and platform_system=='&quot;'&quot;'AIX'&quot;'&quot;'' 'numpy==1.16.0; python_version=='&quot;'&quot;'3.7'&quot;'&quot;' and platform_system=='&quot;'&quot;'AIX'&quot;'&quot;'' 'numpy==1.17.3; python_version&gt;='&quot;'&quot;'3.8'&quot;'&quot;' and platform_system=='&quot;'&quot;'AIX'&quot;'&quot;'' 'scipy&gt;=0.19.1' cwd: None Complete output (641 lines): Ignoring numpy: markers 'python_version == &quot;3.6&quot; and platform_system != &quot;AIX&quot; and platform_python_implementation == &quot;CPython&quot;' don't match your environment Ignoring numpy: markers 'python_version == &quot;3.6&quot; and platform_system != &quot;AIX&quot; and platform_python_implementation != &quot;CPython&quot;' don't match your environment Ignoring numpy: markers 'python_version == &quot;3.7&quot; and platform_system != &quot;AIX&quot;' don't match your environment Ignoring numpy: markers 'python_version == &quot;3.6&quot; and platform_system == &quot;AIX&quot;' don't match your environment Ignoring numpy: markers 'python_version == &quot;3.7&quot; and platform_system == &quot;AIX&quot;' don't match your environment Ignoring numpy: markers 'python_version &gt;= &quot;3.8&quot; and platform_system == &quot;AIX&quot;' don't match your environment Collecting Cython&gt;=0.28.5 Using cached Cython-0.29.21-py2.py3-none-any.whl (974 kB) Collecting numpy==1.17.3 Using cached numpy-1.17.3.zip (6.4 MB) Collecting scipy&gt;=0.19.1 Using cached scipy-1.5.4-cp39-cp39-win_amd64.whl (31.4 MB) Collecting setuptools Using cached setuptools-51.0.0-py3-none-any.whl (785 kB) Collecting wheel Using cached wheel-0.36.2-py2.py3-none-any.whl (35 kB) Building wheels for collected packages: numpy Building wheel for numpy (setup.py): started Building wheel for 
numpy (setup.py): finished with status 'error' ERROR: Command errored out with exit status 1: command: 'C:\Users\brenm\anaconda3\envs\.venv\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '&quot;'&quot;'C:\\Users\\brenm\\AppData\\Local\\Temp\\pip-install-8_a1i30j\\numpy_78428c83c4dd4130b43d0502153b50e8\\setup.py'&quot;'&quot;'; __file__='&quot;'&quot;'C:\\Users\\brenm\\AppData\\Local\\Temp\\pip-install-8_a1i30j\\numpy_78428c83c4dd4130b43d0502153b50e8\\setup.py'&quot;'&quot;';f=getattr(tokenize, '&quot;'&quot;'open'&quot;'&quot;', open)(__file__);code=f.read().replace('&quot;'&quot;'\r\n'&quot;'&quot;', '&quot;'&quot;'\n'&quot;'&quot;');f.close();exec(compile(code, __file__, '&quot;'&quot;'exec'&quot;'&quot;'))' bdist_wheel -d 'C:\Users\brenm\AppData\Local\Temp\pip-wheel-8jv9o836' cwd: C:\Users\brenm\AppData\Local\Temp\pip-install-8_a1i30j\numpy_78428c83c4dd4130b43d0502153b50e8\ Complete output (292 lines): Running from numpy source directory. blas_opt_info: blas_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['C:\\Users\\brenm\\anaconda3\\envs\\.venv\\lib', 'C:\\', 'C:\\Users\\brenm\\anaconda3\\envs\\.venv\\libs', 'C:\\Users\\brenm\\anaconda3\\Library\\lib'] NOT AVAILABLE </code></pre> <p>... ---------------------------------------- ERROR: Failed building wheel for numpy Running setup.py clean for numpy ERROR: Command errored out with exit status 1: command: 'C:\Users\brenm\anaconda3\envs.venv\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '&quot;'&quot;'C:\Users\brenm\AppData\Local\Temp\pip-install-8_a1i30j\numpy_78428c83c4dd4130b43d0502153b50e8\setup.py'&quot;'&quot;'; <strong>file</strong>='&quot;'&quot;'C:\Users\brenm\AppData\Local\Temp\pip-install-8_a1i30j\numpy_78428c83c4dd4130b43d0502153b50e8\setup.py'&quot;'&quot;';f=getattr(tokenize, '&quot;'&quot;'open'&quot;'&quot;', open)(<strong>file</strong>);code=f.read().replace('&quot;'&quot;'\r\n'&quot;'&quot;', '&quot;'&quot;'\n'&quot;'&quot;');f.close();exec(compile(code, <strong>file</strong>, '&quot;'&quot;'exec'&quot;'&quot;'))' clean --all cwd: C:\Users\brenm\AppData\Local\Temp\pip-install-8_a1i30j\numpy_78428c83c4dd4130b43d0502153b50e8 Complete output (10 lines): Running from numpy source directory.</p> <pre><code> `setup.py clean` is not supported, use one of the following instead: - `git clean -xdf` (cleans all files) - `git clean -Xdf` (cleans all versioned files, doesn't touch files that aren't checked into the git repo) Add `--force` to your command to use it anyway if you must (unsupported). 
---------------------------------------- ERROR: Failed cleaning build dir for numpy Failed to build numpy Installing collected packages: numpy, wheel, setuptools, scipy, Cython Running setup.py install for numpy: started Running setup.py install for numpy: finished with status 'error' ERROR: Command errored out with exit status 1: command: 'C:\Users\brenm\anaconda3\envs\.venv\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '&quot;'&quot;'C:\\Users\\brenm\\AppData\\Local\\Temp\\pip-install-8_a1i30j\\numpy_78428c83c4dd4130b43d0502153b50e8\\setup.py'&quot;'&quot;'; __file__='&quot;'&quot;'C:\\Users\\brenm\\AppData\\Local\\Temp\\pip-install-8_a1i30j\\numpy_78428c83c4dd4130b43d0502153b50e8\\setup.py'&quot;'&quot;';f=getattr(tokenize, '&quot;'&quot;'open'&quot;'&quot;', open)(__file__);code=f.read().replace('&quot;'&quot;'\r\n'&quot;'&quot;', '&quot;'&quot;'\n'&quot;'&quot;');f.close();exec(compile(code, __file__, '&quot;'&quot;'exec'&quot;'&quot;'))' install --record 'C:\Users\brenm\AppData\Local\Temp\pip-record-yymyimu0\install-record.txt' --single-version-externally-managed --prefix 'C:\Users\brenm\AppData\Local\Temp\pip-build-env-2psge951\overlay' --compile --install-headers 'C:\Users\brenm\AppData\Local\Temp\pip-build-env-2psge951\overlay\Include\numpy' cwd: C:\Users\brenm\AppData\Local\Temp\pip-install-8_a1i30j\numpy_78428c83c4dd4130b43d0502153b50e8\ Complete output (297 lines): Running from numpy source directory. Note: if you need reliable uninstall behavior, then install with pip instead of using `setup.py install`: - `pip install .` (from a git repo or downloaded source release) - `pip install numpy` (last NumPy release on PyPi) blas_opt_info: blas_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['C:\\Users\\brenm\\anaconda3\\envs\\.venv\\lib', 'C:\\', 'C:\\Users\\brenm\\anaconda3\\envs\\.venv\\libs', 'C:\\Users\\brenm\\anaconda3\\Library\\lib'] NOT AVAILABLE blis_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries blis not found in ['C:\\Users\\brenm\\anaconda3\\envs\\.venv\\lib', 'C:\\', 'C:\\Users\\brenm\\anaconda3\\envs\\.venv\\libs', 'C:\\Users\\brenm\\anaconda3\\Library\\lib'] NOT AVAILABLE openblas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries openblas not found in ['C:\\Users\\brenm\\anaconda3\\envs\\.venv\\lib', 'C:\\', 'C:\\Users\\brenm\\anaconda3\\envs\\.venv\\libs', 'C:\\Users\\brenm\\anaconda3\\Library\\lib'] get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']' customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelVisualFCompiler Could not locate executable ifort Could not locate executable ifl customize AbsoftFCompiler Could not locate executable f90 customize CompaqVisualFCompiler Could not locate executable DF customize IntelItaniumVisualFCompiler Could not locate executable efl customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize G95FCompiler Could not locate executable g95 customize IntelEM64VisualFCompiler customize IntelEM64TFCompiler Could not locate executable efort Could not locate executable 
efc customize PGroupFlangCompiler Could not locate executable flang don't know how to compile Fortran code on platform 'nt' NOT AVAILABLE atlas_3_10_blas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas not found in ['C:\\Users\\brenm\\anaconda3\\envs\\.venv\\lib', 'C:\\', 'C:\\Users\\brenm\\anaconda3\\envs\\.venv\\libs', 'C:\\Users\\brenm\\anaconda3\\Library\\lib'] NOT AVAILABLE atlas_3_10_blas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas not found in ['C:\\Users\\brenm\\anaconda3\\envs\\.venv\\lib', 'C:\\', 'C:\\Users\\brenm\\anaconda3\\envs\\.venv\\libs', 'C:\\Users\\brenm\\anaconda3\\Library\\lib'] NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in ['C:\\Users\\brenm\\anaconda3\\envs\\.venv\\lib', 'C:\\', 'C:\\Users\\brenm\\anaconda3\\envs\\.venv\\libs', 'C:\\Users\\brenm\\anaconda3\\Library\\lib'] NOT AVAILABLE atlas_blas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in ['C:\\Users\\brenm\\anaconda3\\envs\\.venv\\lib', 'C:\\', 'C:\\Users\\brenm\\anaconda3\\envs\\.venv\\libs', 'C:\\Users\\brenm\\anaconda3\\Library\\lib'] NOT AVAILABLE accelerate_info: NOT AVAILABLE C:\Users\brenm\AppData\Local\Temp\pip-install-8_a1i30j\numpy_78428c83c4dd4130b43d0502153b50e8\numpy\distutils\system_info.py:690: UserWarning: Optimized (vendor) Blas libraries are not found. Falls back to netlib Blas library which has worse performance. A better performance should be easily gained by switching Blas library. self.calc_info() blas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries blas not found in ['C:\\Users\\brenm\\anaconda3\\envs\\.venv\\lib', 'C:\\', 'C:\\Users\\brenm\\anaconda3\\envs\\.venv\\libs', 'C:\\Users\\brenm\\anaconda3\\Library\\lib'] NOT AVAILABLE C:\Users\brenm\AppData\Local\Temp\pip-install-8_a1i30j\numpy_78428c83c4dd4130b43d0502153b50e8\numpy\distutils\system_info.py:690: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. self.calc_info() blas_src_info: NOT AVAILABLE C:\Users\brenm\AppData\Local\Temp\pip-install-8_a1i30j\numpy_78428c83c4dd4130b43d0502153b50e8\numpy\distutils\system_info.py:690: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. self.calc_info() NOT AVAILABLE 'svnversion' is not recognized as an internal or external command, operable program or batch file. 
non-existing path in 'numpy\\distutils': 'site.cfg' lapack_opt_info: lapack_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['C:\\Users\\brenm\\anaconda3\\envs\\.venv\\lib', 'C:\\', 'C:\\Users\\brenm\\anaconda3\\envs\\.venv\\libs', 'C:\\Users\\brenm\\anaconda3\\Library\\lib'] NOT AVAILABLE openblas_lapack_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries openblas not found in ['C:\\Users\\brenm\\anaconda3\\envs\\.venv\\lib', 'C:\\', 'C:\\Users\\brenm\\anaconda3\\envs\\.venv\\libs', 'C:\\Users\\brenm\\anaconda3\\Library\\lib'] NOT AVAILABLE openblas_clapack_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries openblas,lapack not found in ['C:\\Users\\brenm\\anaconda3\\envs\\.venv\\lib', 'C:\\', 'C:\\Users\\brenm\\anaconda3\\envs\\.venv\\libs', 'C:\\Users\\brenm\\anaconda3\\Library\\lib'] NOT AVAILABLE flame_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries flame not found in ['C:\\Users\\brenm\\anaconda3\\envs\\.venv\\lib', 'C:\\', 'C:\\Users\\brenm\\anaconda3\\envs\\.venv\\libs', 'C:\\Users\\brenm\\anaconda3\\Library\\lib'] NOT AVAILABLE atlas_3_10_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\brenm\anaconda3\envs\.venv\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas,tatlas not found in C:\Users\brenm\anaconda3\envs\.venv\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas,tatlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\brenm\anaconda3\envs\.venv\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas,tatlas not found in C:\Users\brenm\anaconda3\envs\.venv\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\brenm\anaconda3\Library\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries tatlas,tatlas not found in C:\Users\brenm\anaconda3\Library\lib &lt;class 'numpy.distutils.system_info.atlas_3_10_threads_info'&gt; NOT AVAILABLE atlas_3_10_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\brenm\anaconda3\envs\.venv\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas,satlas not found in 
C:\Users\brenm\anaconda3\envs\.venv\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas,satlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\brenm\anaconda3\envs\.venv\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas,satlas not found in C:\Users\brenm\anaconda3\envs\.venv\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\brenm\anaconda3\Library\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries satlas,satlas not found in C:\Users\brenm\anaconda3\Library\lib &lt;class 'numpy.distutils.system_info.atlas_3_10_info'&gt; NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\brenm\anaconda3\envs\.venv\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in C:\Users\brenm\anaconda3\envs\.venv\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\brenm\anaconda3\envs\.venv\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in C:\Users\brenm\anaconda3\envs\.venv\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\brenm\anaconda3\Library\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries ptf77blas,ptcblas,atlas not found in C:\Users\brenm\anaconda3\Library\lib &lt;class 'numpy.distutils.system_info.atlas_threads_info'&gt; NOT AVAILABLE atlas_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\brenm\anaconda3\envs\.venv\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in C:\Users\brenm\anaconda3\envs\.venv\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in C:\ No module named 'numpy.distutils._msvccompiler' in numpy.distutils; 
trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\brenm\anaconda3\envs\.venv\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in C:\Users\brenm\anaconda3\envs\.venv\libs No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack_atlas not found in C:\Users\brenm\anaconda3\Library\lib No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries f77blas,cblas,atlas not found in C:\Users\brenm\anaconda3\Library\lib &lt;class 'numpy.distutils.system_info.atlas_info'&gt; NOT AVAILABLE lapack_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries lapack not found in ['C:\\Users\\brenm\\anaconda3\\envs\\.venv\\lib', 'C:\\', 'C:\\Users\\brenm\\anaconda3\\envs\\.venv\\libs', 'C:\\Users\\brenm\\anaconda3\\Library\\lib'] NOT AVAILABLE C:\Users\brenm\AppData\Local\Temp\pip-install-8_a1i30j\numpy_78428c83c4dd4130b43d0502153b50e8\numpy\distutils\system_info.py:1712: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. if getattr(self, '_calc_info_{}'.format(lapack))(): lapack_src_info: NOT AVAILABLE C:\Users\brenm\AppData\Local\Temp\pip-install-8_a1i30j\numpy_78428c83c4dd4130b43d0502153b50e8\numpy\distutils\system_info.py:1712: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. if getattr(self, '_calc_info_{}'.format(lapack))(): NOT AVAILABLE C:\Users\brenm\anaconda3\envs\.venv\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'define_macros' warnings.warn(msg) running install running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building py_modules sources building library &quot;npymath&quot; sources No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils error: Microsoft Visual C++ 14.0 or greater is required. 
Get it with &quot;Microsoft C++ Build Tools&quot;: https://visualstudio.microsoft.com/visual-cpp-build-tools/ ---------------------------------------- ERROR: Command errored out with exit status 1: 'C:\Users\brenm\anaconda3\envs\.venv\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '&quot;'&quot;'C:\\Users\\brenm\\AppData\\Local\\Temp\\pip-install-8_a1i30j\\numpy_78428c83c4dd4130b43d0502153b50e8\\setup.py'&quot;'&quot;'; __file__='&quot;'&quot;'C:\\Users\\brenm\\AppData\\Local\\Temp\\pip-install-8_a1i30j\\numpy_78428c83c4dd4130b43d0502153b50e8\\setup.py'&quot;'&quot;';f=getattr(tokenize, '&quot;'&quot;'open'&quot;'&quot;', open)(__file__);code=f.read().replace('&quot;'&quot;'\r\n'&quot;'&quot;', '&quot;'&quot;'\n'&quot;'&quot;');f.close();exec(compile(code, __file__, '&quot;'&quot;'exec'&quot;'&quot;'))' install --record 'C:\Users\brenm\AppData\Local\Temp\pip-record-yymyimu0\install-record.txt' --single-version-externally-managed --prefix 'C:\Users\brenm\AppData\Local\Temp\pip-build-env-2psge951\overlay' --compile --install-headers 'C:\Users\brenm\AppData\Local\Temp\pip-build-env-2psge951\overlay\Include\numpy' Check the logs for full command output. ---------------------------------------- ERROR: Command errored out with exit status 1: 'C:\Users\brenm\anaconda3\envs\.venv\python.exe' 'C:\Users\brenm\anaconda3\envs\.venv\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\brenm\AppData\Local\Temp\pip-build-env-2psge951\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel 'Cython&gt;=0.28.5' 'numpy==1.13.3; python_version=='&quot;'&quot;'3.6'&quot;'&quot;' and platform_system!='&quot;'&quot;'AIX'&quot;'&quot;' and platform_python_implementation == '&quot;'&quot;'CPython'&quot;'&quot;'' 'numpy==1.14.0; python_version=='&quot;'&quot;'3.6'&quot;'&quot;' and platform_system!='&quot;'&quot;'AIX'&quot;'&quot;' and platform_python_implementation != '&quot;'&quot;'CPython'&quot;'&quot;'' 'numpy==1.14.5; python_version=='&quot;'&quot;'3.7'&quot;'&quot;' and platform_system!='&quot;'&quot;'AIX'&quot;'&quot;'' 'numpy==1.17.3; python_version&gt;='&quot;'&quot;'3.8'&quot;'&quot;' and platform_system!='&quot;'&quot;'AIX'&quot;'&quot;'' 'numpy==1.16.0; python_version=='&quot;'&quot;'3.6'&quot;'&quot;' and platform_system=='&quot;'&quot;'AIX'&quot;'&quot;'' 'numpy==1.16.0; python_version=='&quot;'&quot;'3.7'&quot;'&quot;' and platform_system=='&quot;'&quot;'AIX'&quot;'&quot;'' 'numpy==1.17.3; python_version&gt;='&quot;'&quot;'3.8'&quot;'&quot;' and platform_system=='&quot;'&quot;'AIX'&quot;'&quot;'' 'scipy&gt;=0.19.1' Check the logs for full command output. </code></pre>
<p>The issue is that you're running on Python 3.9. The currently released versions of Qiskit do not support Python 3.9 and therefore don't include precompiled binaries for 3.9 environments. So what is happening is that pip falls back to building Qiskit from source, because compatible binaries were not found, and that build fails because you're missing build dependencies. The next Qiskit release will include Python 3.9 support and publish binaries for Python 3.9 too. That being said, you can fix this in the short term by changing <code>conda create -n &lt;environment-name&gt; python=3</code> to a more specific Python version, like <code>conda create -n &lt;environment-name&gt; python=3.8</code>, so that you use a version of Python for which the current release has compatible binaries.</p>
python|numpy|pip|anaconda|qiskit
3
9,958
65,280,541
operating on pairs of columns in R (or numpy)
<p>I have two matrices: A (k rows, m columns) and B (k rows, n columns).</p> <p>I want to operate on all pairs of columns (one from A and one from B); the result should be a matrix C (m rows, n columns) where C[i,j] = f(A[,i],B[,j]). Now, if the function f were the sum of products (i.e. the dot product), then the whole thing would just be a simple matrix multiplication (C = t(A) %*% B), but my f is different; specifically, I count the number of equal entries:</p> <pre><code>f = function(x,y) sum(x==y)
</code></pre> <p>My question is whether there is a simple (and fast, because my matrices are big) way to compute the result.</p> <p>Preferably in R, but possibly in python (numpy). I thought about using outer(A,B,&quot;==&quot;), but this results in a 4-dimensional array and I haven't figured out what exactly to do with it.</p> <p>Any help is appreciated.</p>
<p>In <code>R</code>, we can <code>split</code> them into <code>list</code> and apply the function <code>f</code> with a nested <code>lapply/sapply</code></p> <pre><code>lapply(asplit(A, 2), function(x) sapply(asplit(B, 2), function(y) f(x, y))) </code></pre> <hr /> <p>Or using <code>outer</code> after converting to <code>data.frame</code> because the unit will be column, while for <code>matrix</code>, it is a single element (as <code>matrix</code> is a <code>vector</code> with <code>dim</code> attributes)</p> <pre><code>outer(as.data.frame(A), as.data.frame(B), FUN = Vectorize(f)) </code></pre> <h3>data</h3> <pre><code>A &lt;- cbind(1:5, 6:10) B &lt;- cbind(c(1:3, 1:2), c(5:7, 6:7)) </code></pre>
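<p>Since the question also mentions numpy, a minimal sketch of the same pairwise count there via broadcasting (an addition, not part of the R answer above):</p> <pre><code>import numpy as np

# A: (k, m), B: (k, n)  --&gt;  C: (m, n) with C[i, j] = sum(A[:, i] == B[:, j])
C = (A[:, :, None] == B[:, None, :]).sum(axis=0)
</code></pre>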
r|numpy|matrix
1
9,959
65,303,876
Reshape Pandas DataFrames by binary column values
<p>I can't figure out how to reshape my DataFrame into a new one based on the values of several binary columns.</p> <p>Input:</p> <pre><code>data        code  a  b  c
2016-01-07  foo   0  0  0
2016-01-12  bar   0  0  1
2016-01-03  gar   0  1  0
2016-01-22  foo   1  1  0
2016-01-26  bar   1  1  0
</code></pre> <p>I want to reshape based on the binary columns <strong>a/b/c</strong>: wherever a column's value == 1, the output should get a row with that column's name alongside the corresponding <code>data</code> and <code>code</code>.</p> <p>Expected output:</p> <pre><code>   data        code
a  2016-01-22  foo
a  2016-01-26  bar
b  2016-01-03  gar
b  2016-01-22  foo
b  2016-01-26  bar
c  2016-01-12  bar
</code></pre> <p>I've been stuck on this since the morning and will appreciate help very much!</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.melt.html" rel="nofollow noreferrer"><code>DataFrame.melt</code></a> with filtering <code>1</code> in <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>, <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pop.html" rel="nofollow noreferrer"><code>DataFrame.pop</code></a> is used for removing column after filtration:</p> <pre><code>df = df.melt(['data','code'], var_name='type') df = df[df.pop('value').eq(1)] print (df) data code type 3 2016-01-22 foo a 4 2016-01-26 bar a 7 2016-01-03 gar b 8 2016-01-22 foo b 9 2016-01-26 bar b 11 2016-01-12 bar c </code></pre>
python|pandas|dataframe|numpy|reshape
2
9,960
65,364,061
Create dynamic ranges and calculate mean
<p>I would like to create an additional column with averages based on column A, using dynamic ranges.</p> <pre><code>import numpy as np import pandas as pd test = {'A' : [100, 120, 70, 300, 190, 70, 300, 190, 70], 'B' : [80, 50, 64, 288, 172, 64, 288, 172, 64], 'C' : ['NO', 'NO', 'YES', 'NO', 'YES', 'YES', 'NO', 'YES', 'YES'], 'D' : [0, 1, 0, 3, 2, 2, 3, 1, 4] } df = pd.DataFrame(data=test) A B C D 0 100 80 NO 0 1 120 50 NO 1 2 70 64 YES 0 3 300 288 NO 3 4 190 172 YES 2 5 70 64 YES 2 6 300 288 NO 3 7 190 172 YES 1 8 70 64 YES 4 </code></pre> <p>When item in column <code>C</code> is <code>YES</code> get average value from dynamic range in column <code>A</code> by using value in column <code>D</code> as starting row index and row index of current row <code>-1</code> as highest row index.</p> <p>Below is the outcome I am seeking to achieve.</p> <pre><code> A B C D Dyn_Ave 0 100 80 NO 0 NaN 1 120 50 NO 1 NaN 2 70 64 YES 0 110 3 300 288 NO 3 NaN 4 190 172 YES 2 185 5 70 64 YES 2 187 6 300 288 NO 3 NaN 7 190 172 YES 1 175 8 70 64 YES 4 188 </code></pre> <p>My attempt at creating the column has lead me to a np.where approach, although I am coming across the following error - TypeError: Cannot index by location index with a non-integer key</p> <pre><code>df['Dyn_Ave'] = np.where(df['C'] == 'YES', df['A'].iloc[df['D']:df.loc['C'][-1]].mean(), np.nan) </code></pre>
<p>Let's try:</p> <pre><code>s = df['A'].cumsum().shift(fill_value=0) df['Dyn_Ave'] = np.where(df['C'] == 'YES', (s - s.reindex(df['D']).values) / (np.arange(len(df)) - df['D']), np.nan) </code></pre> <p>Output:</p> <pre><code> A B C D Dyn_Ave 0 100 80 NO 0 NaN 1 120 50 NO 1 NaN 2 70 64 YES 0 110.000000 3 300 288 NO 3 NaN 4 190 172 YES 2 185.000000 5 70 64 YES 2 186.666667 6 300 288 NO 3 NaN 7 190 172 YES 1 175.000000 8 70 64 YES 4 187.500000 </code></pre> <hr /> <p><strong>Explanation</strong>: Let's first forget about <code>C=='YES'</code> for a moment and paying attention to dynamic average. The average from row <code>df['D']</code> to row <code>j-1</code> can be seen as</p> <pre><code>(cumsum[j-1] - cumsum[df['D']-1])/(j-df['D']) </code></pre> <p>or:</p> <pre><code>(cumsum.shift()[j] - cumsum.shift()[df['D']) / (j-df['D']) </code></pre> <p>That's why we first calculate the cumsum, then shift it:</p> <pre><code>s = df['A'].cumsum().shift(fill_value=0) </code></pre> <p>To get the cumsum at <code>df['D']</code>, we use reindex and pass the underlying numpy array for subtraction:</p> <pre><code>(s - s.reindex(df['D']).values) </code></pre> <p>The number of rows can be easily seen as:</p> <pre><code>(np.arange(len(df)) - df['D']) </code></pre> <p>The last part is just filling in where <code>C=='YES'</code>, just as you were trying to accomplish.</p>
python|pandas|dataframe|mean
2
9,961
65,390,694
Running TensorFlow with XLA tf.function throws error
<p>When I try to compile this <a href="https://www.tensorflow.org/xla/tutorials/jit_compile" rel="nofollow noreferrer">code</a>, I get the error below.</p> <pre><code>File &quot;xla_test.py&quot;, line 25, in &lt;module&gt;
    @tf.function(jit_compile=True)
TypeError: function() got an unexpected keyword argument 'jit_compile'
</code></pre>
<p>Without switching to tf-nightly, just use:</p> <pre><code>@tf.function(experimental_compile=True)
</code></pre> <p>From the <a href="https://www.tensorflow.org/api_docs/python/tf/function#args_1" rel="nofollow noreferrer">tensorflow docs</a>:</p> <blockquote> <p>experimental_compile: If True, the function is always compiled by XLA. XLA may be more efficient in some cases (e.g. TPU, XLA_GPU, dense tensor computations).</p> </blockquote> <p>In my case, MCMC sampling took ~1 min 37 sec without that parameter and ~6 sec with <code>experimental_compile=True</code> (Tensorflow built from sources, r2.4 branch).</p>
tensorflow2.0|tensorflow-xla
1
9,962
50,037,654
Tensorflow: load images into memory only when needed
<p>I am using TensorFlow V1.7 with the new high-level Estimator interface. I was able to create and train my own network with my own dataset. </p> <p>However, the policy I use to I load images just doesn't seem right to me. The approach I have used so far (largely inspired by the MNIST tutorial) is to <strong>load all images into memory since the beginning</strong> (here is a tiny code snippet to give you an idea):</p> <pre><code> for filename in os.listdir(folder): filepath = os.path.join(folder, filename) # using OpenCV to read image images.append(cv2.imread(filepath, cv2.IMREAD_GRAYSCALE)) labels.append(&lt;corresponding label&gt;) # shuffle samples and labels in the same way temp = list(zip(images, labels)) random.shuffle(temp) images, labels = zip(*temp) return images, labels </code></pre> <p>This means that I have to load into memory all my training set, containing something like 32k images, before training the net. However since my batch size is 100 the net will not need more than 100 images at a time.</p> <p>This approach seems quite weird to me. I understand that this way secondary memory is only accessed once, maximizing the performances; however, if my dataset was really big, this could overload my RAM, couldn't it?</p> <p>As a consequence, I would like to use a lazy approach, only loading images when they are needed (i.e. when they happen to be in a batch). How can I do this? I have searched the TF documentation, but I have not found anything so far. </p> <p>Is there something I'm missing?</p>
<p>It's advised to use the <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset" rel="nofollow noreferrer">Dataset</a> module, which (among other things) gives you input queueing, prefetching of a small number of examples into memory, control over the number of loader threads, and much more.</p>
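<p>A minimal sketch of a lazy <code>input_fn</code> built on <code>tf.data</code> for an Estimator. The file format, image size and the <code>parse</code> helper below are assumptions for illustration, not taken from the question, and the API names follow TF 1.x:</p> <pre><code>import tensorflow as tf

def make_input_fn(filepaths, labels, batch_size=100):
    def parse(filepath, label):
        # The image stays on disk until this map function runs for the element
        image = tf.image.decode_png(tf.read_file(filepath), channels=1)
        image = tf.image.resize_images(image, [64, 64])  # size is an assumption
        image = image / 255.0
        return image, label

    def input_fn():
        dataset = tf.data.Dataset.from_tensor_slices((filepaths, labels))
        dataset = dataset.shuffle(buffer_size=len(filepaths))
        dataset = dataset.map(parse, num_parallel_calls=4)
        dataset = dataset.batch(batch_size)
        dataset = dataset.prefetch(1)  # keep one batch ready ahead of time
        return dataset
    return input_fn

# estimator.train(input_fn=make_input_fn(train_paths, train_labels))
</code></pre>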
python|image|memory|tensorflow|load
2
9,963
64,126,592
tensorflow versioning mismatch in python
<p>I have no idea why my Python is acting so weirdly. I had tensorflow-2.3, but as that doesn't go with cuda-10.1, I had to go back to tensorflow-2.1. So my commands were:</p> <pre><code>pip uninstall tensorflow tensorflow-gpu
pip install tensorflow==2.1.0 tensorflow-gpu==2.1.0
</code></pre> <p>But when I try to use tensorflow in python, it says it's version 2.3.</p> <p>When I tried to uninstall a second time, it said I have 2.1.0.</p> <p><strong>Then why is Python showing version 2.3 instead of 2.1?</strong></p> <p>Here is a snap:</p> <p><a href="https://i.stack.imgur.com/XogsT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XogsT.png" alt="enter image description here" /></a></p>
<p>You are installing to python3.6, but your “python” command is executing python3.7. Try typing <code>python3.6</code> instead.</p> <p>PS: Use conda for Python environment management.</p>
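<p>To make sure pip installs into the same interpreter you actually run, you can also call pip through that interpreter explicitly, e.g. (a sketch of the idea):</p> <pre><code>python3.6 -m pip install tensorflow==2.1.0 tensorflow-gpu==2.1.0
</code></pre>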
python|tensorflow|tensorflow2.0
1
9,964
63,869,177
Fetch Bitcoin Data Information through Pandas DataReader
<p>I would like to ask whether Pandas DataReader can be used to extract Bitcoin information from blockchain.com.</p> <p>I am aware we can use it together with an Alpha Vantage API key to extract stocks through:</p> <pre><code>import pandas as pd
import pandas_datareader as dr

reader = dr.DataReader('AAPL', 'av-daily', start = '2020-08-01', end = '2020-08-05', api_key = '')
print(reader)
</code></pre> <p>But can this same style of function/code be used to extract Bitcoin data? I know of one method but am not a big fan of it:</p> <pre><code>cc = CryptoCurrencies(key='', output_format='pandas')
btc, meta_data = cc.get_digital_currency_daily(symbol='BTC', market='CNY')
print(btc)
</code></pre> <p>I am pretty new to coding and BTC, so I would appreciate something straightforward if possible. Thank you!</p>
<p>Querying Bitcoin prices with <code>pandas_datareader</code> should be straightforward:</p> <pre><code>import datetime
import pandas_datareader as pdr

btc_data = pdr.get_data_yahoo(['BTC-USD'],
                              start=datetime.datetime(2018, 1, 1),
                              end=datetime.datetime(2020, 12, 2))['Close']
</code></pre> <p>Results:</p> <pre><code>Symbols          BTC-USD
Date
2018-01-01  13657.200195
2018-01-02  14982.099609
2018-01-03  15201.000000
2018-01-04  15599.200195
2018-01-05  17429.500000
...                  ...
2020-11-29  18177.484375
2020-11-30  19625.835938
2020-12-01  18802.998047
2020-12-02  19201.091797
2020-12-03  19445.398438
</code></pre>
python|pandas|bitcoin|alpha-vantage
6
9,965
46,862,722
Import statements when using Tensorflow contrib keras
<p>I have a bunch of code written using Keras that was installed as a separate pip install and the import statements are written like <code>from keras.models import Sequential</code>, etc..</p> <p>On a new machine, I have Tensorflow installed which now includes Keras inside the <em>contrib</em> directory. In order to keep the versions consistent I thought it would be best to use what's in <em>contrib</em> instead of installing Keras separately, however this causes some import issues.</p> <p>I can import Keras using <code>import tensorflow.contrib.keras as keras</code> but doing something like <code>from tensorflow.contrib.keras.models import Sequential</code> gives <strong>ImportError: No module named models</strong>, and <code>from keras.models import Sequential</code> gives a similar <strong>ImportError: No module named keras.models</strong>. </p> <p>Is there a simple method to get the <code>from x.y import z</code> statements to work? If not it means changing all the instances to use the verbose naming (ie.. <code>m1 = keras.models.Sequential()</code>) which isn't my preferred syntax but is do-able.</p>
<p>Try this with recent versions of tensorflow:</p> <pre><code>from tensorflow.python.keras.models import Sequential from tensorflow.python.keras.layers import LSTM, TimeDistributed, Dense, ... </code></pre>
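<p>A short usage sketch with those imports; in recent releases the public <code>tensorflow.keras</code> path usually exposes the same modules, though availability depends on the exact TF version:</p> <pre><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(10, activation='relu', input_shape=(4,)))
</code></pre>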
python|tensorflow|keras
2
9,966
46,758,620
Convert string to float pandas
<p>Simple question: I have a dataset, imported from a csv file, containing a string column with numeric values. After the comma are decimal places. </p> <p>I want to convert to float. Basically,it's just this:</p> <pre><code>x = ['27,10083'] df = pd.DataFrame(x) df.astype(float) </code></pre> <p>Why does this not work and how to fix this simple issue?</p> <p>Thanks in advance. </p>
<p>Assign output with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html" rel="noreferrer"><code>replace</code></a>:</p> <pre><code>df = df.replace(',','.', regex=True).astype(float) </code></pre> <p>If want specify columns for converting:</p> <pre><code>cols = ['col1','col2'] df[cols] = df[cols].replace(',','.', regex=True).astype(float) </code></pre> <p>Another solution is use parameter <code>decimal=','</code> in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="noreferrer"><code>read_csv</code></a>, then columns are correctly parsed to floats:</p> <pre><code>df = pd.read_csv(file, decimal=',') </code></pre>
python|pandas
15
9,967
62,909,908
Why do I have this problem with index range? Why does it not work?
<p>I have got this error when try split my one column to few columns. But it split on just on one or two columns.If you wanna split on 3,4,5 columns it writes:</p> <pre><code>ValueError Traceback (most recent call last) /usr/local/Cellar/jupyterlab/2.1.5/libexec/lib/python3.8/site-packages/pandas/core/indexes/range.py in get_loc(self, key, method, tolerance) 349 try: --&gt; 350 return self._range.index(new_key) 351 except ValueError: ValueError: 2 is not in range During handling of the above exception, another exception occurred: KeyError Traceback (most recent call last) &lt;ipython-input-19-d4e6a4d03e69&gt; in &lt;module&gt; 22 data_old[Col_1_Label] = newz[0] 23 data_old[Col_2_Label] = newz[1] ---&gt; 24 data_old[Col_3_Label] = newz[2] 25 #data_old[Col_4_Label] = newz[3] 26 #data_old[Col_5_Label] = newz[4] /usr/local/Cellar/jupyterlab/2.1.5/libexec/lib/python3.8/site-packages/pandas/core/frame.py in __getitem__(self, key) 2798 if self.columns.nlevels &gt; 1: 2799 return self._getitem_multilevel(key) -&gt; 2800 indexer = self.columns.get_loc(key) 2801 if is_integer(indexer): 2802 indexer = [indexer] /usr/local/Cellar/jupyterlab/2.1.5/libexec/lib/python3.8/site-packages/pandas/core/indexes/range.py in get_loc(self, key, method, tolerance) 350 return self._range.index(new_key) 351 except ValueError: --&gt; 352 raise KeyError(key) 353 return super().get_loc(key, method=method, tolerance=tolerance) 354 KeyError: 2 </code></pre> <p>There is my code.I have csv file.And when pandas read it - create one column with value 'Контракт'.Then. I split it on another columns. But it split till two columns.I wanna 7 columns!Help please to understand this logic!</p> <pre><code>import pandas as pd from pandas import Series, DataFrame import re dframe1 = pd.read_csv('po.csv') columns = ['Контракт'] data_old = pd.read_csv('po.csv', header=None, names=columns) data_old # The thing you want to split the column on SplitOn = ':' # Name of Column you want to split Split_Col = 'Контракт' newz = data_old[Split_Col].str.split(pat=SplitOn, n=-1, expand=True) # Column Labels (you can add more if you will have more) Col_1_Label = 'Номер телефону' Col_2_Label = 'Тарифний пакет' Col_3_Label = 'Вихідні дзвінки з України за кордон' Col_4_Label = 'ВАРТІСТЬ ПАКЕТА/ЩОМІСЯЧНА ПЛАТА' Col_5_Label = 'ЗАМОВЛЕНІ ДОДАТКОВІ ПОСЛУГИ ЗА МЕЖАМИ ПАКЕТА' Col_6_Label = 'Вартість послуги &quot;Корпоративна мережа' Col_7_Label = 'ЗАГАЛОМ ЗА КОНТРАКТОМ (БЕЗ ПДВ ТА ПФ)' data_old[Col_1_Label] = newz[0] data_old[Col_2_Label] = newz[1] data_old[Col_3_Label] = newz[2] #data_old[Col_4_Label] = newz[3] #data_old[Col_5_Label] = newz[4] #data_old[Col_6_Label] = newz[5] #data_old[Col_7_Label] = newz[6] data_old </code></pre>
<p>Pandas does not support &quot;unstructured text&quot;; you should convert it to a standard format or Python objects and then create a dataframe from it.</p> <p>Imagine that you have a file with this text, named <code>data.txt</code>:</p> <pre><code>Contract № 12345679
Number of phone: +7984563774
Total price for month : 00.00000
Total price: 10.0000
</code></pre> <p>You can load and process it with Python like this:</p> <pre class="lang-py prettyprint-override"><code>import re
import pandas as pd

with open('data.txt') as f:
    content = f.readlines()

# First line contains the contract number: extract it with a regex
contract = re.findall(r'\d+', content[0])[0]

# Second line contains the phone number after the colon
phone = content[1].split(':')[1].strip()

# Third and fourth lines hold the prices
total_month_price = float(content[2].split(':')[1].strip())
total_price = float(content[3].split(':')[1].strip())
</code></pre> <p>Then with these variables you can create a dataframe:</p> <pre><code>df = pd.DataFrame([dict(N_of_contract=contract, total_price=total_price, total_month_price=total_month_price)])
</code></pre> <p>Repeat the same for all files.</p>
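<p>A sketch of the &quot;repeat for all files&quot; step, assuming the parsing above is wrapped in a helper that returns one dict per file (the helper name <code>parse_contract</code> is hypothetical):</p> <pre><code>import glob
import pandas as pd

# parse_contract(path) would run the parsing shown above and return a dict
rows = [parse_contract(path) for path in glob.glob('contracts/*.txt')]
df = pd.DataFrame(rows)
</code></pre>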
python|pandas|numpy|jupyter-notebook
0
9,968
63,238,862
Animate label with bar chart - matplotlib
<p>The code below animates a bar chart and associated label values. The issue I'm having is positioning the label when the integer is negative. Specifically, I want the label to be positioned on top of the bar, not inside it. It's working for the first frame but the subsequent frames of animation revert back to plotting the label inside the bar chart for negative integers.</p> <pre><code>def autolabel(rects, ax): # Get y-axis height to calculate label position from. ts = [] (y_bottom, y_top) = ax.get_ylim() y_height = y_top - y_bottom for rect in rects: height = 0 if rect.get_y() &lt; 0: height = rect.get_y() else: height = rect.get_height() p_height = (height / y_height) if p_height &gt; 0.95: label_position = height - (y_height * 0.05) if (height &gt; -0.01) else height + (y_height * 0.05) else: label_position = height + (y_height * 0.01) if (height &gt; -0.01) else height - (y_height * 0.05) t = ax.text(rect.get_x() + rect.get_width() / 2., label_position, '%d' % int(height), ha='center', va='bottom') ts.append(t) return ts def gradientbars(bars, ax, cmap, vmin, vmax): g = np.linspace(vmin,vmax,100) grad = np.vstack([g,g]).T xmin,xmax = ax.get_xlim() ymin,ymax = ax.get_ylim() ims = [] for bar in bars: bar.set_facecolor('none') im = ax.imshow(grad, aspect=&quot;auto&quot;, zorder=0, cmap=cmap, vmin=vmin, vmax=vmax, extent=(xmin,xmax,ymin,ymax)) im.set_clip_path(bar) ims.append(im) return ims vmin = -6 vmax = 6 cmap = 'PRGn' data = np.random.randint(-5,5, size=(10, 4)) x = [chr(ord('A')+i) for i in range(4)] fig, ax = plt.subplots() ax.grid(False) ax.set_ylim(vmin, vmax) rects = ax.bar(x,data[0]) labels = autolabel(rects, ax) imgs = gradientbars(rects, ax, cmap=cmap, vmin=vmin, vmax=vmax) def animate(i): for rect,label,img,yi in zip(rects, labels, imgs, data[i]): rect.set_height(yi) label.set_text('%d'%int(yi)) label.set_y(yi) img.set_clip_path(rect) anim = animation.FuncAnimation(fig, animate, frames = len(data), interval = 500) plt.show() </code></pre>
<blockquote> <p>It's working for the first frame.</p> </blockquote> <p>You call <code>autolabel(rects, ax)</code> in the first plot, so the label is well placed.</p> <blockquote> <p>The subsequent frames of animation revert back to plotting the label inside the bar chart for negative integers.</p> </blockquote> <p>The label position of subsequent frames is set by <code>label.set_y(yi)</code>. <code>yi</code> is from <code>data[i]</code>, you didn't consider the negative value here.</p> <p>I create a function named <code>get_label_position(height)</code> to calculate the right label position for give height. It uses a global variable <code>y_height</code>. And call this function before <code>label.set_y()</code>.</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt from matplotlib import animation import pandas as pd import numpy as np def get_label_position(height): p_height = (height / y_height) label_position = 0 if p_height &gt; 0.95: label_position = height - (y_height * 0.05) if (height &gt; -0.01) else height + (y_height * 0.05) else: label_position = height + (y_height * 0.01) if (height &gt; -0.01) else height - (y_height * 0.05) return label_position def autolabel(rects, ax): # Get y-axis height to calculate label position from. ts = [] (y_bottom, y_top) = ax.get_ylim() y_height = y_top - y_bottom for rect in rects: height = 0 if rect.get_y() &lt; 0: height = rect.get_y() else: height = rect.get_height() p_height = (height / y_height) if p_height &gt; 0.95: label_position = height - (y_height * 0.05) if (height &gt; -0.01) else height + (y_height * 0.05) else: label_position = height + (y_height * 0.01) if (height &gt; -0.01) else height - (y_height * 0.05) t = ax.text(rect.get_x() + rect.get_width() / 2., label_position, '%d' % int(height), ha='center', va='bottom') ts.append(t) return ts def gradientbars(bars, ax, cmap, vmin, vmax): g = np.linspace(vmin,vmax,100) grad = np.vstack([g,g]).T xmin,xmax = ax.get_xlim() ymin,ymax = ax.get_ylim() ims = [] for bar in bars: bar.set_facecolor('none') im = ax.imshow(grad, aspect=&quot;auto&quot;, zorder=0, cmap=cmap, vmin=vmin, vmax=vmax, extent=(xmin,xmax,ymin,ymax)) im.set_clip_path(bar) ims.append(im) return ims vmin = -6 vmax = 6 cmap = 'PRGn' data = np.random.randint(-5,5, size=(10, 4)) x = [chr(ord('A')+i) for i in range(4)] fig, ax = plt.subplots() ax.grid(False) ax.set_ylim(vmin, vmax) rects = ax.bar(x,data[0]) labels = autolabel(rects, ax) imgs = gradientbars(rects, ax, cmap=cmap, vmin=vmin, vmax=vmax) (y_bottom, y_top) = ax.get_ylim() y_height = y_top - y_bottom def animate(i): for rect,label,img,yi in zip(rects, labels, imgs, data[i]): rect.set_height(yi) label.set_text('%d'%int(yi)) label.set_y(get_label_position(yi)) img.set_clip_path(rect) anim = animation.FuncAnimation(fig, animate, frames = len(data), interval = 500) plt.show() </code></pre>
python|pandas|matplotlib|animation
1
9,969
62,978,582
ValueError: only one element tensors can be converted to Python scalars
<p>I'm following <a href="https://medium.com/datadriveninvestor/deep-learning-and-medical-imaging-how-to-provide-an-automatic-diagnosis-f0138ea824d" rel="nofollow noreferrer">this tutorial</a>.</p> <p>I'm at the last part where we combine the models in a regression.</p> <p>I'm coding this in jupyter as follows:</p> <pre><code>import shutil import os import time from datetime import datetime import argparse import pandas import numpy as np from tqdm import tqdm from tqdm import tqdm_notebook import torch import torch.nn as nn import torch.optim as optim from torch.autograd import Variable from torchsample.transforms import RandomRotate, RandomTranslate, RandomFlip, ToTensor, Compose, RandomAffine from torchvision import transforms import torch.nn.functional as F from tensorboardX import SummaryWriter import dataloader from dataloader import MRDataset import model from sklearn import metrics def extract_predictions(task, plane, train=True): assert task in ['acl', 'meniscus', 'abnormal'] assert plane in ['axial', 'coronal', 'sagittal'] models = os.listdir('models/') model_name = list(filter(lambda name: task in name and plane in name, models))[0] model_path = f'models/{model_name}' mrnet = torch.load(model_path) _ = mrnet.eval() train_dataset = MRDataset('data/', task, plane, transform=None, train=train, ) train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=1, shuffle=False, num_workers=10, drop_last=False) predictions = [] labels = [] with torch.no_grad(): for image, label, _ in tqdm_notebook(train_loader): logit = mrnet(image.cuda()) prediction = torch.sigmoid(logit) predictions.append(prediction.item()) labels.append(label.item()) return predictions, labels task = 'acl' results = {} for plane in ['axial', 'coronal', 'sagittal']: predictions, labels = extract_predictions(task, plane) results['labels'] = labels results[plane] = predictions X = np.zeros((len(predictions), 3)) X[:, 0] = results['axial'] X[:, 1] = results['coronal'] X[:, 2] = results['sagittal'] y = np.array(labels) logreg = LogisticRegression(solver='lbfgs') logreg.fit(X, y) task = 'acl' results_val = {} for plane in ['axial', 'coronal', 'sagittal']: predictions, labels = extract_predictions(task, plane, train=False) results_val['labels'] = labels results_val[plane] = predictions y_pred = logreg.predict_proba(X_val)[:, 1] metrics.roc_auc_score(y_val, y_pred) </code></pre> <p>However I get this error:</p> <pre><code>ValueError Traceback (most recent call last) &lt;ipython-input-2-979acb314bc5&gt; in &lt;module&gt; 3 4 for plane in ['axial', 'coronal', 'sagittal']: ----&gt; 5 predictions, labels = extract_predictions(task, plane) 6 results['labels'] = labels 7 results[plane] = predictions &lt;ipython-input-1-647731b6b5c8&gt; in extract_predictions(task, plane, train) 54 logit = mrnet(image.cuda()) 55 prediction = torch.sigmoid(logit) ---&gt; 56 predictions.append(prediction.item()) 57 labels.append(label.item()) 58 ValueError: only one element tensors can be converted to Python scalars </code></pre> <p>Here's the MRDataset code in case:</p> <pre><code>class MRDataset(data.Dataset): def __init__(self, root_dir, task, plane, train=True, transform=None, weights=None): super().__init__() self.task = task self.plane = plane self.root_dir = root_dir self.train = train if self.train: self.folder_path = self.root_dir + 'train/{0}/'.format(plane) self.records = pd.read_csv( self.root_dir + 'train-{0}.csv'.format(task), header=None, names=['id', 'label']) else: transform = None self.folder_path = self.root_dir + 
'valid/{0}/'.format(plane) self.records = pd.read_csv( self.root_dir + 'valid-{0}.csv'.format(task), header=None, names=['id', 'label']) self.records['id'] = self.records['id'].map( lambda i: '0' * (4 - len(str(i))) + str(i)) self.paths = [self.folder_path + filename + '.npy' for filename in self.records['id'].tolist()] self.labels = self.records['label'].tolist() self.transform = transform if weights is None: pos = np.sum(self.labels) neg = len(self.labels) - pos self.weights = torch.FloatTensor([1, neg / pos]) else: self.weights = torch.FloatTensor(weights) def __len__(self): return len(self.paths) def __getitem__(self, index): array = np.load(self.paths[index]) label = self.labels[index] if label == 1: label = torch.FloatTensor([[0, 1]]) elif label == 0: label = torch.FloatTensor([[1, 0]]) if self.transform: array = self.transform(array) else: array = np.stack((array,)*3, axis=1) array = torch.FloatTensor(array) # if label.item() == 1: # weight = np.array([self.weights[1]]) # weight = torch.FloatTensor(weight) # else: # weight = np.array([self.weights[0]]) # weight = torch.FloatTensor(weight) return array, label, self.weights </code></pre> <p>I've only trained my models using 1 and 2 epochs for each plane of the MRI instead of 35 as in the tutorial, not sure if that has anything to do with it. Other than that I'm stranded as to what this could be? I also removed <code>normalize=False</code> in the options for <code>train_dataset</code> as it kept giving me an error and I read that it could be removed, but I'm not so sure?</p>
<p>Only a tensor that contains a single value can be converted to a scalar with <code>item()</code>. Try printing the contents of <code>prediction</code>; it is presumably a vector of probabilities indicating which label is most likely. Using <code>argmax</code> on <code>prediction</code> will give you your actual predicted label (assuming your labels are <code>0-n</code>).</p>
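<p>A minimal sketch of that change inside the evaluation loop, assuming both the sigmoid output and the one-hot style label hold exactly two entries (as the dataset code above suggests):</p> <pre><code>with torch.no_grad():
    for image, label, _ in tqdm_notebook(train_loader):
        logit = mrnet(image.cuda())
        prediction = torch.sigmoid(logit)
        # reduce the two-element vectors to a single class index before calling .item()
        predictions.append(prediction.argmax(dim=-1).item())
        labels.append(label.argmax(dim=-1).item())
</code></pre>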
python|python-3.x|deep-learning|pytorch
1
9,970
67,644,706
Pandas "read_excel" : How to read multi-line cell from "ods" file?
<p>I have a simple &quot;ods&quot; file (Test01.ods) with the data below in &quot;sheet1&quot;:</p> <p><a href="https://i.stack.imgur.com/1lLUk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1lLUk.png" alt="enter image description here" /></a></p> <p>I also saved it as &quot;xlsx&quot; (Test01.xlsx), so I have two files containing exactly the same data.</p> <p>Now when I try to read them using Pandas &quot;read_excel&quot; with the code below,</p> <p><a href="https://i.stack.imgur.com/QCRT5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QCRT5.png" alt="enter image description here" /></a></p> <p>the &quot;xlsx&quot; file shows the line break char &quot;\n&quot; while the &quot;ods&quot; file does not.</p> <p><a href="https://i.stack.imgur.com/NUV5x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NUV5x.png" alt="enter image description here" /></a></p> <p>Any idea why? And how can I force the &quot;odf&quot; engine to output the &quot;\n&quot; to the dataframe?</p> <p>Thanks in advance</p>
<p>As per <a href="https://github.com/pandas-dev/pandas/issues/41625" rel="nofollow noreferrer">this issue</a> on Pandas's GitHub, this is an issue with the upstream &quot;odfpy&quot; package, so our options are one of the following:</p> <ol> <li>fix it upstream (ideal) in odfpy</li> <li>modify the _get_cell_string_value method.</li> </ol> <p>My workaround: save the &quot;ods&quot; file as &quot;xlsx&quot;, then work with that in pandas instead.</p>
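<p>A minimal sketch of that workaround (the column name <code>Num</code> below is only a placeholder for whichever column holds the multi-line cells):</p> <pre><code>import pandas as pd

# read the copy that was re-saved as xlsx instead of the original ods file
df = pd.read_excel('Test01.xlsx', engine='openpyxl')

# the line-break characters survive here, so splitting on them works as expected
parts = df['Num'].str.split('\n')
</code></pre>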
python|pandas|ods|odf
0
9,971
61,390,861
Resolving a Multi index in a Dataframe for better clarity
<p>So, I split the sentences in a row into single words, and thus the rows of my dataframe got lengthened. However, I am not satisfied with the new indexes.</p> <pre><code>0 0 I 1 don 2 ' 3 t 4 think 5 any </code></pre> <p>What I would like is to make the index on the left swap places with the next one, and then pass an argument like (Sentence: 'X'), where X would denote the original index of the sentence which the word belonged to.</p> <pre><code> 0 Sentence:0 I 1 Sentence:0 don 2 Sentence:0 ' 3 Sentence:0 t 4 Sentence:0 think 5 any </code></pre> <p>However,</p> <pre><code>df['index1'] = df.index </code></pre> <p>only lets me access the second index, and so I cannot get the indexes to swap positions.</p> <p>How might I go about this?</p>
<p>Do I understand right that you want to swap the levels of your Multiindex?</p> <p>Maybe this helps you:</p> <p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.swaplevel.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.swaplevel.html</a></p> <p>Edit: Answering the second part of your question:</p> <pre><code>df.rename(lambda x: "Sentence "+str(x), level=1) </code></pre> <p>does the renaming you asked for</p>
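<p>Putting both suggestions together, a small sketch (this assumes the sentence id is currently the outer index level, as in the example above, and uses &quot;Sentence:&quot; to match the desired output):</p> <pre><code># make the word position the outer level and the sentence id the inner level
df = df.swaplevel(0, 1)

# relabel the sentence level so each entry reads like &quot;Sentence:0&quot;
df = df.rename(lambda x: 'Sentence:' + str(x), level=1)
</code></pre>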
python|pandas|csv|dataframe|indexing
1
9,972
61,200,977
Need to create list using list value which are present in dataframe column
<p>This is my dataframe:</p> <pre><code>prod_sheet: Product ID 0 Prod1 00P000000000101 1 Prod2 00P000000000105 2 Prod3 00P000000000109 3 Prod4 00P000000000119 4 Prod5 00P000000000120 L=[Prod2,Prod4,Prod5] </code></pre> <p>I need a list of the IDs of the products that are present in the list:</p> <pre><code>needed_list=[00P000000000105,00P000000000119,00P000000000120] </code></pre>
<p>If order is important, use a list comprehension:</p> <pre><code>L = ['Prod5','Prod4','Prod3']
s = prod_sheet.set_index('Product')['ID']
needed_list = [s[p] for p in L]

print (needed_list)
['00P000000000120', '00P000000000119', '00P000000000109']
</code></pre> <p>If order is not important, use:</p> <pre><code>needed_list = prod_sheet.loc[prod_sheet['Product'].isin(L), 'ID'].tolist()

print (needed_list)
['00P000000000109', '00P000000000119', '00P000000000120']
</code></pre>
python|pandas|list|dataframe
0
9,973
61,376,922
How to get the index of a particular row using column values in numpy?
<p>So if I have the following array arr:</p> <pre><code>&gt;&gt;&gt; arr
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14],
       [ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14]])
</code></pre> <p>Now if I want to acquire the first row, I would do something like this:</p> <pre><code>&gt;&gt;&gt; arr[0]
array([0, 1, 2, 3, 4])
</code></pre> <p>However, when I use np.where to locate a particular row, e.g.:</p> <pre><code>&gt;&gt;&gt; np.where(arr == [0,1,2,3,4])
</code></pre> <p>I get this output!</p> <pre><code>(array([0, 0, 0, 0, 0, 3, 3, 3, 3, 3], dtype=int64), array([0, 1, 2, 3, 4, 0, 1, 2, 3, 4], dtype=int64))
</code></pre> <p>However, this is not what I am after. I would like to get the row indices instead, e.g.:</p> <pre><code>(array([0, 3], dtype=int64)
</code></pre> <p>Is there a way to achieve that? Any advice is very much appreciated!</p>
<p>I think you want to check if the rows are equal to a given array. In which case, you need <code>all</code>:</p> <pre><code>np.where((arr == [0,1,2,3,4]).all(1)) # (array([0, 3]),) </code></pre>
python|numpy|indexing|numpy-slicing
2
9,974
68,604,281
How to take two out of order CSV's and match by ID columns from each CSV and Inject the mac address to the matched ID from another CSV?
<p>How to take two out of order CSV's and match by ID columns from each CSV and Inject the mac address to the matched ID from another CSV?</p>
<blockquote> <p>This takes the two CSVs and builds a lookup from the first one's <code>Unique Card ID</code> and <code>MAC</code> columns,</p> <p>then checks whether each <code>Bluetooth code</code> in the second CSV matches one of those card IDs.</p> <p>If a match is found, it injects the first CSV's <code>MAC</code> into the empty <code>client_mac</code> column</p> <p>next to the <code>Bluetooth code</code> it matched on.</p> </blockquote> <pre><code>import pandas as pd

df_firstDict = pd.read_csv(&quot;seeds/Card Test Data - Sheet1.csv&quot;)
dict_compare = {row[&quot;Unique Card ID&quot;]: row[&quot;MAC&quot;]
                for index, row in df_firstDict[['Unique Card ID', 'MAC']].iterrows()}

df_secondDict = pd.read_csv(&quot;seeds/SalasTestData - Sheet1.csv&quot;)
df_compare = df_secondDict[&quot;Bluetooth code&quot;].map(lambda x: dict_compare.get(x))

df_secondDict.drop(columns=[&quot;client_mac&quot;], inplace=True)
df_secondDict[&quot;client_mac&quot;] = df_compare

print(dict_compare)
</code></pre>
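<p>For reference, a sketch of the same join done with a pandas <code>merge</code> instead of a dict lookup (using the same file and column names as above):</p> <pre><code>import pandas as pd

cards = pd.read_csv('seeds/Card Test Data - Sheet1.csv')
salas = pd.read_csv('seeds/SalasTestData - Sheet1.csv').drop(columns=['client_mac'])

# left-join the MAC addresses onto the rows whose Bluetooth code matches a card ID
out = salas.merge(
    cards[['Unique Card ID', 'MAC']].rename(
        columns={'Unique Card ID': 'Bluetooth code', 'MAC': 'client_mac'}),
    on='Bluetooth code', how='left')
</code></pre>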
python|pandas|dataframe|csv
0
9,975
68,759,373
How to get new records from second dataframe?
<p>df1:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>StationName</th> <th>RunID</th> <th>ScheduledDate</th> </tr> </thead> <tbody> <tr> <td>AAA</td> <td>12345</td> <td>2021-08-12</td> </tr> <tr> <td>BBB</td> <td>23456</td> <td>2021-08-12</td> </tr> <tr> <td>DDD</td> <td>91273</td> <td>2021-07-15</td> </tr> </tbody> </table> </div> <p>df2:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>StationName</th> <th>RunID</th> <th>ScheduledDate</th> </tr> </thead> <tbody> <tr> <td>AAA</td> <td>12345</td> <td>2021-08-12</td> </tr> <tr> <td>BBB</td> <td>23456</td> <td>2021-08-12</td> </tr> <tr> <td>AAA</td> <td>65323</td> <td>2021-07-20</td> </tr> <tr> <td>MMM</td> <td>14526</td> <td>2021-05-20</td> </tr> </tbody> </table> </div> <p>I would like to get the new records from df2 using the RunID column while eliminating any duplicate records found from df1.</p> <p>Expected output, df3:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>StationName</th> <th>RunID</th> <th>ScheduledDate</th> </tr> </thead> <tbody> <tr> <td>AAA</td> <td>65323</td> <td>2021-07-20</td> </tr> <tr> <td>MMM</td> <td>14526</td> <td>2021-05-20</td> </tr> </tbody> </table> </div>
<p>Use <code>merge()</code> with <code>indicator=True</code>, then <code>query()</code> to filter the result and <code>drop()</code> to drop the extra column:</p> <pre><code>out = (df1[['RunID']].merge(df2, on='RunID', how='outer', indicator=True)
       .query(&quot;_merge=='right_only'&quot;).drop(columns='_merge'))
</code></pre> <p><strong>Note:</strong> you can also do a right merge by passing <code>how='right'</code>.</p> <p>Output of <code>out</code>:</p> <pre><code>   RunID StationName ScheduledDate
3  65323         AAA    2021-07-20
4  14526         MMM    2021-05-20
</code></pre>
python|pandas
1
9,976
65,528,954
Display number of images per class using Pytorch
<p>I am using PyTorch with the FashionMNIST dataset. I would like to display 8 sample images from each of the 10 classes. However, I could not figure out how to split the training set by label, since I need to loop over the labels (classes) and print 8 images of each class. Any idea how I can achieve this?</p> <pre><code>classes = ('T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
           'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot')

# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
                                # transforms.Lambda(lambda x: x.repeat(3,1,1)),
                                transforms.Normalize((0.5, ), (0.5,))])

# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True)

# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=True)

print('Training set size:', len(trainset))
print('Test set size:', len(testset))
</code></pre>
<p>If I understand you correctly, you want to group your dataset by labels and then display them.</p> <p>You can start by constructing a dictionary to store examples by label:</p> <pre><code>examples = {i: [] for i in range(len(classes))}
</code></pre> <p>Then iterate over the trainset and append to the list using the label's index:</p> <pre><code>for x, i in trainset:
    examples[i].append(x)
</code></pre> <p>However, this will go over the whole set. If you'd like to stop early and avoid gathering more than <em>8</em> per class, you can do so by adding conditions:</p> <pre><code>n_examples = 8
for x, i in trainset:
    if all([len(ex) == n_examples for ex in examples.values()]):
        break
    if len(examples[i]) &lt; n_examples:
        examples[i].append(x)
</code></pre> <p>The only thing left is to display with <a href="https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.ToPILImage" rel="nofollow noreferrer"><code>torchvision.transforms.ToPILImage</code></a>:</p> <pre><code>transforms.ToPILImage()(examples[3][0])
</code></pre> <p>If you want to show more than one, you could use two consecutive <a href="https://pytorch.org/docs/stable/generated/torch.cat.html" rel="nofollow noreferrer"><code>torch.cat</code></a> calls, one on <code>dim=1</code> (by rows) then one on <code>dim=2</code> (by columns), to create a grid.</p> <pre><code>grid = torch.cat([torch.cat(examples[i], dim=1) for i in range(len(classes))], dim=2)
transforms.ToPILImage()(grid)
</code></pre> <p>Possible result:</p> <p><a href="https://i.stack.imgur.com/Eek2h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Eek2h.png" alt="enter image description here" /></a></p>
pytorch|tensor|pytorch-dataloader
1
9,977
65,796,919
valueerror: time data does not match dataframe of pandas
<p>My dataframe df has data as below in pandas:</p> <pre><code>&quot;val1&quot; 6972.75 01-AUG-18 08.11.51.319 AM
&quot;val2&quot; 6974.25 01-OCT-18 08.12.22.322 AM
</code></pre> <p>I am using the code</p> <p><code>pd.to_datetime(df['TIME'], format=&quot;%d-%m-%Y %H.%M.%S.%f&quot;)</code></p> <p>When I run the code it gives the error below:</p> <pre><code>ValueError: time data '01-AUG-18 08.11.51.319 AM' does not match format '%d-%b-%Y %H.%M.%S.%f' (match)
</code></pre>
<p>Your format string is wrong in several places: you used incorrect format codes for the month and the year, and you can deal with the AM/PM with <code>%p</code>. (Most of the format codes can be found under the <a href="https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes" rel="nofollow noreferrer"><code>datetime</code> docs</a>.)</p> <pre><code>%b  # Month as locale’s abbreviated name: [Jan, Feb, …, Dec (en_US)]
%y  # Year without century as a zero-padded decimal number: [00, 01, …, 99]
%p  # Locale’s equivalent of either AM or PM: [AM, PM (en_US)]
</code></pre> <hr /> <pre><code>import pandas as pd

df = pd.DataFrame({'TIME': ['01-AUG-18 08.11.51.319 AM', '01-OCT-18 08.12.22.322 AM']})
pd.to_datetime(df['TIME'], format=&quot;%d-%b-%y %H.%M.%S.%f %p&quot;, errors='coerce')

#0   2018-08-01 08:11:51.319
#1   2018-10-01 08:12:22.322
#Name: TIME, dtype: datetime64[ns]
</code></pre>
python|pandas|dataframe|datetime|runtime-error
2
9,978
65,906,171
Is there a way to retrieve the specific parameters used in a random torchvision transform?
<p>I can augment my data during training by applying a random transform (rotation/translation/rescaling) but I don't know the value that was selected.</p> <p>I need to know what values were applied. I can manually set these values, but then I lose a lot of the benefits that torch vision transforms provide.</p> <p>Is there an easy way to get these values are implement them in a sensible way to apply during training?</p> <p>Here is an example. I would love to be able print out the rotation angle, translation/rescaling being applied at each image:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt from torchvision import transforms RandAffine = transforms.RandomAffine(degrees=0, translate=(0.1, 0.1), scale=(0.8, 1.2)) rotate = transforms.RandomRotation(degrees=45) shift = RandAffine composed = transforms.Compose([rotate, shift]) # Apply each of the above transforms on sample. fig = plt.figure() sample = np.zeros((28,28)) sample[5:15,7:20] = 255 sample = transforms.ToPILImage()(sample.astype(np.uint8)) title = ['None', 'Rot','Aff','Comp'] for i, tsfrm in enumerate([None,rotate, shift, composed]): if tsfrm: t_sample = tsfrm(sample) else: t_sample = sample ax = plt.subplot(1, 5, i + 2) plt.tight_layout() ax.set_title(title[i]) ax.imshow(np.reshape(np.array(list(t_sample.getdata())), (-1,28)), cmap='gray') plt.show() </code></pre>
<p>I'm afraid there is no easy way around it: Torchvision's random transforms utilities are built in such a way that the transform parameters will be sampled when called. They are <em>unique</em> random transforms, in the sense that <em>(1)</em> parameters used are not accessible by the user and <em>(2)</em> the same random transformation is <strong>not</strong> repeatable.</p> <p>As of Torchvision <em>0.8.0</em>, random transforms are generally built with two main functions:</p> <ul> <li><p><code>get_params</code>: which will sample based on the transform's hyperparameters (what you have provided when you initialized the transform operator, namely the parameters' range of values)</p> </li> <li><p><code>forward</code>: the function that gets executed when applying the transform. The important part is it gets its parameters from <code>get_params</code> then applies it to the input using the associated deterministic function. For <a href="https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.RandomRotation" rel="nofollow noreferrer"><code>RandomRotation</code></a>, <a href="https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.functional.rotate" rel="nofollow noreferrer"><code>F.rotate</code></a> will get called. Similarly, <a href="https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.RandomAffine" rel="nofollow noreferrer"><code>RandomAffine</code></a> will use <a href="https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.functional.affine" rel="nofollow noreferrer"><code>F.affine</code></a>.</p> </li> </ul> <p>One solution to your problem is sampling the parameters from <code>get_params</code> yourself and calling the functional - <em>deterministic</em> - API instead. So you wouldn't be using <code>RandomRotation</code>, <code>RandomAffine</code>, nor any other <code>Random*</code> transformation for that matter.</p> <hr /> <p>For instance, let's look at <a href="https://github.com/pytorch/vision/blob/d732562f30b8a83af9ab074592af3dd01fe708ee/torchvision/transforms/transforms.py#L1159" rel="nofollow noreferrer"><code>T.RandomRotation</code></a> (I have removed the comments for conciseness).</p> <pre><code>class RandomRotation(torch.nn.Module): def __init__( self, degrees, interpolation=InterpolationMode.NEAREST, expand=False, center=None, fill=None, resample=None): # ... @staticmethod def get_params(degrees: List[float]) -&gt; float: angle = float(torch.empty(1).uniform_(float(degrees[0]), \ float(degrees[1])).item()) return angle def forward(self, img): fill = self.fill if isinstance(img, Tensor): if isinstance(fill, (int, float)): fill = [float(fill)] * F._get_image_num_channels(img) else: fill = [float(f) for f in fill] angle = self.get_params(self.degrees) return F.rotate(img, angle, self.resample, self.expand, self.center, fill) def __repr__(self): # ... 
</code></pre> <p>With that in mind, here is a possible override to modify <code>T.RandomRotation</code>:</p> <pre><code>class RandomRotation(T.RandomRotation):
    def __init__(self, *args, **kwargs):
        super(RandomRotation, self).__init__(*args, **kwargs)  # let super do all the work
        self.angle = self.get_params(self.degrees)  # initialize your random parameters

    def forward(self, img):  # override T.RandomRotation's forward
        fill = self.fill
        if isinstance(img, Tensor):
            if isinstance(fill, (int, float)):
                fill = [float(fill)] * F._get_image_num_channels(img)
            else:
                fill = [float(f) for f in fill]
        return F.rotate(img, self.angle, self.resample, self.expand, self.center, fill)
</code></pre> <p>I've essentially copied <code>T.RandomRotation</code>'s <code>forward</code> function; the only difference is that the parameters are sampled in <code>__init__</code> (<em>i.e.</em> once) instead of inside the <code>forward</code> (<em>i.e.</em> on every call). Torchvision's implementation covers all cases, so you generally won't need to copy the full <code>forward</code>. In some cases, you can just call the functional version pretty much straight away. For example, if you don't need to set the <code>fill</code> parameters, you can just discard that part and only use:</p> <pre><code>class RandomRotation(T.RandomRotation):
    def __init__(self, *args, **kwargs):
        super(RandomRotation, self).__init__(*args, **kwargs)  # let super do all the work
        self.angle = self.get_params(self.degrees)  # initialize your random parameters

    def forward(self, img):  # override T.RandomRotation's forward
        return F.rotate(img, self.angle, self.resample, self.expand, self.center)
</code></pre> <hr /> <p>If you want to override other random transforms you can look at <a href="https://github.com/pytorch/vision/blob/master/torchvision/transforms/transforms.py" rel="nofollow noreferrer">the source code</a>. The API is fairly self-explanatory and you shouldn't have too many issues implementing an override for each transform.</p>
python|pytorch|affinetransform|data-augmentation|torchvision
4
9,979
63,656,858
pandas combine stock data if it falls between specific time only in dataframe
<p>I have minute-by-minute stock data from 2017 to 2019. I want to keep only data after 9:16 for each day therefore I want to convert any data between 9:00 to 9:16 as value of 9:16 ie:</p> <p><strong>value of 09:16 should be</strong></p> <ul> <li><code>open</code> : value of 1st data from 9:00 - 9:16 , here 116.00</li> <li><code>high</code> : highest value from 9:00 - 9:16, here 117.00</li> <li><code>low</code> : lowest value from 9:00 - 9:16, here 116.00</li> <li><code>close</code>: this will be value at 9:16 , here 113.00</li> </ul> <br> <pre><code> open high low close date 2017-01-02 09:08:00 116.00 116.00 116.00 116.00 2017-01-02 09:16:00 116.10 117.80 117.00 113.00 2017-01-02 09:17:00 115.50 116.20 115.50 116.20 2017-01-02 09:18:00 116.05 116.35 116.00 116.00 2017-01-02 09:19:00 116.00 116.00 115.60 115.75 ... ... ... ... ... 2029-12-29 15:56:00 259.35 259.35 259.35 259.35 2019-12-29 15:57:00 260.00 260.00 260.00 260.00 2019-12-29 15:58:00 260.00 260.00 259.35 259.35 2019-12-29 15:59:00 260.00 260.00 260.00 260.00 2019-12-29 16:36:00 259.35 259.35 259.35 259.35 </code></pre> <p>Here is what I tried :</p> <pre><code>#Get data from/to 9:00 - 9:16 and create only one data item convertPreTrade = df.between_time(&quot;09:00&quot;, &quot;09:16&quot;) #09:00 - 09:16 #combine modified value to original data df.loc[df.index.strftime(&quot;%H:%M&quot;) == &quot;09:16&quot; , [&quot;open&quot;,&quot;high&quot;,&quot;low&quot;,&quot;close&quot;] ] = [convertPreTrade[&quot;open&quot;][0], convertPreTrade[&quot;high&quot;].max(), convertPreTrade[&quot;low&quot;].min(), convertPreTrade['close'][-1] ] </code></pre> <p>but this won't give me accurate data</p>
<hr /> <pre><code>d = {'date': 'last', 'open': 'last', 'high': 'max', 'low': 'min', 'close': 'last'} # df.index = pd.to_datetime(df.index) s1 = df.between_time('09:00:00', '09:16:00') s2 = s1.reset_index().groupby(s1.index.date).agg(d).set_index('date') df1 = pd.concat([df.drop(s1.index), s2]).sort_index() </code></pre> <h3>Details:</h3> <p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.between_time.html" rel="nofollow noreferrer"><code>DataFrame.between_time</code></a> to filter the rows in the dataframe <code>df</code> that falls between the time <code>09:00</code> to <code>09:16</code>:</p> <pre><code>print(s1) open high low close date 2017-01-02 09:08:00 116.0 116.0 116.0 116.0 2017-01-02 09:16:00 116.1 117.8 117.0 113.0 </code></pre> <p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>DataFrame.groupby</code></a> to group this filtered dataframe <code>s1</code> on <code>date</code> and aggregate using dictionary <code>d</code>:</p> <pre><code>print(s2) open high low close date 2017-01-02 09:16:00 116.1 117.8 116.0 113.0 </code></pre> <p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html" rel="nofollow noreferrer"><code>DataFrame.drop</code></a> to drop the rows from the original datframe <code>df</code> that falls between the time <code>09:00-09:16</code>, then use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>pd.concat</code></a> to concat it with <code>s2</code>, finally use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_index.html" rel="nofollow noreferrer"><code>DataFrame.sort_index</code></a> to sort the index:</p> <pre><code>print(df1) open high low close date 2017-01-02 09:16:00 116.10 117.80 116.00 113.00 2017-01-02 09:17:00 115.50 116.20 115.50 116.20 2017-01-02 09:18:00 116.05 116.35 116.00 116.00 2017-01-02 09:19:00 116.00 116.00 115.60 115.75 2019-12-29 15:57:00 260.00 260.00 260.00 260.00 2019-12-29 15:58:00 260.00 260.00 259.35 259.35 2019-12-29 15:59:00 260.00 260.00 260.00 260.00 2019-12-29 16:36:00 259.35 259.35 259.35 259.35 2029-12-29 15:56:00 259.35 259.35 259.35 259.35 </code></pre> <hr />
python|pandas|dataframe|stock
5
9,980
63,640,051
Remove rows from a panda dataframe with unsorted index
<p>This is how my data looks:</p> <pre><code>print(len(y_train),len(index_1)) index_1 = pd.DataFrame(data=index_1) print(&quot;y_train: &quot;) print(y_train) print(&quot;index_1: &quot;) print(index_1) </code></pre> <p>Output:</p> <pre><code>1348 555 y_train: 1677 1 1519 0 1114 0 690 1 1012 1 .. 1893 1 1844 0 1027 1 1649 1 1789 1 Name: Team 1 Win, Length: 1348, dtype: int64 index_1: 0 0 0 1 2 2 6 3 7 4 8 .. ... 550 1335 551 1341 552 1342 553 1344 554 1346 </code></pre> <p>I want to remove a number of rows (index_1) from a panda dataframe (y_train). So the values in the index_1 df are the rows I want to remove. Problem is that the dataframe is not in order, so when index_1's first item is 0, I want it to remove the first row in y_train (i.e. index 1677), instead of the row with index 0. This is my attempt:</p> <pre><code>y_train_short = y_train.drop(index_1) </code></pre> <p>This is what I get:</p> <pre><code>--------------------------------------------------------------------------- KeyError Traceback (most recent call last) &lt;ipython-input-57-49f2cce7bac0&gt; in &lt;module&gt; 22 print(index_1) 23 print(index_1) ---&gt; 24 y_train_short = y_train.drop(index_1) 25 26 ~/miniconda3/lib/python3.7/site-packages/pandas/core/series.py in drop(self, labels, axis, index, columns, level, inplace, errors) 4137 level=level, 4138 inplace=inplace, -&gt; 4139 errors=errors, 4140 ) 4141 ~/miniconda3/lib/python3.7/site-packages/pandas/core/generic.py in drop(self, labels, axis, index, columns, level, inplace, errors) 3934 for axis, labels in axes.items(): 3935 if labels is not None: -&gt; 3936 obj = obj._drop_axis(labels, axis, level=level, errors=errors) 3937 3938 if inplace: ~/miniconda3/lib/python3.7/site-packages/pandas/core/generic.py in _drop_axis(self, labels, axis, level, errors) 3968 new_axis = axis.drop(labels, level=level, errors=errors) 3969 else: -&gt; 3970 new_axis = axis.drop(labels, errors=errors) 3971 result = self.reindex(**{axis_name: new_axis}) 3972 ~/miniconda3/lib/python3.7/site-packages/pandas/core/indexes/base.py in drop(self, labels, errors) 5016 if mask.any(): 5017 if errors != &quot;ignore&quot;: -&gt; 5018 raise KeyError(f&quot;{labels[mask]} not found in axis&quot;) 5019 indexer = indexer[~mask] 5020 return self.delete(indexer) KeyError: '[0] not found in axis' </code></pre> <p>Independently of the fact that index 0 doesn't exist in y_train, I imagine that if it did, it would not do what I want it to do. So how do I remove the right rows from this dataframe?</p>
<p>Note that <code>y_train.iloc[index_1[0]]</code> retrieves rows from <em>y_train</em> at the indicated integer positions.</p> <p>When you run <code>y_train.iloc[index_1[0]].index</code>, you will get the <strong>indices</strong> of these rows.</p> <p>So to drop these rows, you can run:</p> <pre><code>y_train.drop(y_train.iloc[index_1[0]].index, inplace=True)
</code></pre>
python|pandas|numpy
1
9,981
53,757,429
Crop a square shape around a centroid (numpy)
<p>I have a numpy array image which contains circles. I extracted all the x,y centroids (in pixels) of these circles (as a numpy array as well). Now, I want to crop a square around each x,y centroid. Can someone instruct me how to solve this? Note that I didn't find any question on Stack Overflow that deals with cropping around a specific coordinate.</p> <p>Thank you!</p>
<p>If your centroid has indices <code>i,j</code> and you want to crop a square of size <code>2*w+1</code> around it on a numpy array <code>a</code>, you can do </p> <pre><code>a[i-w:i+w+1,j-w:j+w+1] </code></pre> <p>This is provided your indices are always more than <code>w</code> from the boundary. </p> <p>If they're not, you can do </p> <pre><code>imin = max(0,i-w) imax = min(a.shape[0],i+w+1) jmin = max(0,j-w) jmax = min(a.shape[1],j+w+1) a[imin:imax,jmin:jmax] </code></pre>
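<p>To apply this to every detected circle, a small sketch (here <code>centroids</code> is assumed to be an <code>(N, 2)</code> array of row/column pixel positions, and <code>w</code> the chosen half-width):</p> <pre><code>import numpy as np

w = 10  # half-width of the square crop
crops = []
for i, j in np.round(centroids).astype(int):
    imin, imax = max(0, i - w), min(a.shape[0], i + w + 1)
    jmin, jmax = max(0, j - w), min(a.shape[1], j + w + 1)
    crops.append(a[imin:imax, jmin:jmax])
</code></pre>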
python|numpy|coordinates|crop|centroid
2
9,982
72,128,434
Group Dataframe rows while deleting specific columns
<p>I have a dataframe with several columns. I want to group rows based on multiple column values.</p> <p>My source dataframe looks like this:</p> <pre><code>category code color property_value price A xx01 white 128 $10.00 B xx01 white 128 $5.00 A xx02 black 128 $10.00 B xx02 black 128 $5.00 A xx03 white 256 $15.00 B xx03 white 256 $25.00 </code></pre> <p>The purpose of the grouping is to delete columns <code>color</code> and <code>code</code> and only use <code>property_value</code> while saving <code>categories</code>.</p> <p>target dataframe should look like :</p> <pre><code>category property_value price A 128 $10.00 B 128 $5.00 A 256 $15.00 B 256 $25.00 </code></pre> <p>Any leads on how I can achieve this result using pandas ?</p>
<p>This seems more like a drop duplicate operation than a grouping operation:</p> <pre><code># suppose your DataFrame is df df = df[['category', 'property_value', 'price']].drop_duplicates(keep='first') </code></pre>
python|pandas|dataframe|grouping
1
9,983
71,886,048
Look in df list, return Boolean if all list elements have a substring
<p>I have a dataframe with a string column that contains a sequence of author names and their affiliations.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Address</th> </tr> </thead> <tbody> <tr> <td>'Smith, Jane (University of X); Doe, Betty (Institute of Y)'</td> </tr> <tr> <td>'Walter, Bill (Z University); Albertson, John (Z University); Chen, Hilary (University of X)'</td> </tr> <tr> <td>'Note, Joe (University of X); Cal, Stephanie (University of X)'</td> </tr> </tbody> </table> </div> <p>I want to create a new column with a Boolean TRUE/FALSE that tests if <em>all</em> authors are from University X. Note there can be any number of authors in the string.</p> <p>Desired output:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>T/F</th> </tr> </thead> <tbody> <tr> <td>FALSE</td> </tr> <tr> <td>FALSE</td> </tr> <tr> <td>TRUE</td> </tr> </tbody> </table> </div> <p>I think I can split the <code>Address</code> column using</p> <p><code>df['Address_split'] = df['Address'].str.split(';', expand=False)</code></p> <p>which then creates the <em>list</em> of names in the cell.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Address_split</th> </tr> </thead> <tbody> <tr> <td>['Smith, Jane (University of X)', 'Doe, Betty (University of Y)']</td> </tr> </tbody> </table> </div> <p>I even think I can use the <code>all()</code> function to test for a Boolean for one cell at a time.</p> <p><code>all([(&quot;University X&quot; in i) for i in df['Address_split'][2]])</code> returns <code>TRUE</code></p> <p>But I am struggling to think through how I can do this on each cell's list individually. I think I need some combination of <code>map</code> and/or <code>apply</code>.</p>
<p>You can use <code>str.extractall</code> to extract all the universities in parentheses and check if matches with <code>University of X</code>.</p> <pre class="lang-py prettyprint-override"><code>df['T/F'] = df['Address'].str.extractall(r&quot;\(([^)]*)\)&quot;).eq('University of X').groupby(level=0).all() </code></pre> <pre><code> Address T/F 0 'Smith, Jane (University of X); Doe, Betty (In... False 1 'Walter, Bill (Z University); Albertson, John ... False 2 'Note, Joe (University of X); Cal, Stephanie (... True </code></pre>
python|pandas
1
9,984
66,789,485
Foward filling moving limit
<p>I have a dataframe with a lot of blanks; the first table in the image below shows it. I want to reach the table on the right. My idea is to use ffill() with a moving limit. The limit would adjust to what is on the right: first we count the consecutive elements on the right and fill Level 2 (yellow), and then do the same for Level 1 (green). Is it even possible?</p> <p><a href="https://i.stack.imgur.com/kFvmf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kFvmf.png" alt="enter image description here" /></a></p>
<p>Assuming the empty cells are empty strings (<code>&quot;&quot;</code>), you can try:</p> <pre><code>df[df == &quot;&quot;] = np.nan m = ~df[&quot;Level 1&quot;].isna() df.loc[m, &quot;Level 2&quot;] = &quot;&quot; df.loc[m, &quot;Level 3&quot;] = &quot;&quot; df.loc[:, [&quot;Level 1&quot;, &quot;Level 2&quot;]] = df.loc[:, [&quot;Level 1&quot;, &quot;Level 2&quot;]].ffill() print(df.fillna(&quot;&quot;)) </code></pre> <p>Prints:</p> <pre><code> Level 1 Level 2 Level 3 0 President 1 President Office 2 President Office Lucien 3 President Office Theresa 4 MEP 5 MEP Bureau 6 MEP Bureau Martin 7 MEP Bureau Juliette 8 MEP Bureau Romeo 9 Groups 10 Groups Comittee 11 Groups Comittee Paul 12 Groups Comittee Marc 13 Groups Sub Com 14 Groups Sub Com Julius 15 Groups Sub Com Marcus 16 Groups Sub Com Aurelius </code></pre>
python|pandas|dataframe|variables|fill
0
9,985
67,177,129
How to speed up for loop subsetting a DataFrame by a given value in a column and applying a formula in Python
<p>I was wondering whether there was a way to speed up this code:</p> <pre><code> alphas = [] origins = flows[&quot;OrigCodeNew&quot;].unique() for origin in origins: df = flows[flows[&quot;OrigCodeNew&quot;] == origin] alpha = sum(df[&quot;DestSal&quot;] ** gamma * df[&quot;Dist&quot;] ** beta]) alphas.append(1/alpha) alphas = pd.DataFrame(zip(origins, alphas), columns = [&quot;OrigCodeNew&quot;, &quot;alpha&quot;]) </code></pre> <p>Where the input is a DataFrame of the form:</p> <pre><code>OrigCodeNew Destination DestSal Dist A C 20000 6 A D 30000 8 A E 25000 10 A F 35000 2 B C 20000 7 B D 30000 5 B E 25000 20 B F 35000 13 </code></pre> <p>With an output:</p> <pre><code>OrigCodeNew Alpha (example) A 0.034 B 0.064 </code></pre> <p>I know this is inefficient code and it could be sped up but I'm not sure how. I have been using this for a while and it works but I am trying to refactor code to make it more efficient. I have tried to figure out pandas.groupby with the agg function but haven't been able to figure out how to do it with this sort of equation yet. Any advice would be appreciated.</p>
<p>Theoretically, you should be able to group by origin and then use <code>transform</code>, which will assign the group value to each row in the group. If you are more comfortable using <code>agg</code>, you can calculate the group value per origin and then join the original dataframe and the aggregates on 'OrigCodeNew'.</p> <pre><code>term = flows['DestSal'] ** gamma * flows['Dist'] ** beta
# transform('sum') broadcasts each group's total back onto every row of that group
flows['alpha'] = 1 / term.groupby(flows['OrigCodeNew']).transform('sum')
</code></pre>
python|pandas|dataframe|pandas-groupby
0
9,986
66,868,480
DF column with only numeric values
<p>I have a DataFrame with two columns. Both contain numeric values together with symbols and letters.</p> <p>I need to create two new columns with numbers only. Nothing can be dropped or deleted - it is important to keep the initial row order the same.</p> <p>I tried</p> <pre><code>&gt; df[&quot;PRIMARY&quot;]=df['PRIMARY PHONE'].filter(i for i in s if i.isdigit()) </code></pre> <p>It does not work, as my column is initially of object dtype. As an example, from <code>PL73)67</code> I need to get only <code>7367</code> for each cell.</p> <p>Please help.</p>
<p>You are looking for <code>map</code>:</p> <pre class="lang-py prettyprint-override"><code>def extract_num(s): return &quot;&quot;.join(x for x in str(s) if x.isdigit()) df[&quot;PRIMARY&quot;] = df[&quot;PRIMARY PHONE&quot;].map(extract_num) </code></pre>
python|pandas
2
9,987
66,875,831
Federated reinforcement learning
<p>I am implementing <strong>federated deep Q-learning</strong> by PyTorch, using multiple agents, each running DQN. My problem is that when I use multiple replay buffers for agents, each appending experiences at the corresponding agent, <strong>two elements of experiences in each agent replay buffer, i. e., &quot;current_state&quot; and &quot;next_state&quot;</strong> becomes the same after the first time slot. I mean in each buffer, we see <strong>the same values for current states and the same values for next states</strong>. I have included simplified parts of the code and results below. Whay is it changing the current states and next states already exixting in the buffer when doing append? Is there something wrong with defining the buffers as a global variable? or do you have another idea?</p> <pre><code>&lt;&lt;&lt; time 0 and agent 0: current_state[0] = [1,2] next_state[0] = [11,12] *** experience: (array([ 1., 2.]), 2.0, array([200]), array([ 11., 12.]), 0) *** buffer: deque([(array([ 1., 2.]), 2.0, array([200]), array([ 11., 12.]), 0)], maxlen=10000) &lt;&lt;&lt; time 0 and agent 1: current_state[1] = [3, 4] next_state[1] = [13, 14] *** experience: (array([ 3., 4.]), 4.0, array([400]), array([ 13., 14.]), 0) *** buffer: deque([(array([ 1., 2.]), 4.0, array([400]), array([ 11., 12.]), 0)], maxlen=10000) &lt;&lt;&lt; time 1 and agent 0: current_state = [11,12] next_state[0] = [110, 120] *** experience: (array([ 11., 12.]), 6.0, array([600]), array([ 110., 120.]), 0) *** buffer: deque([(array([ 11., 12.]), 2.0, array([200]), array([ 110., 120.]), 0),(array([ 11., 12.]), 6.0, array([600]), array([ 110., 120.]), 0)], maxlen=10000) &lt;&lt;&lt; time 1 and agent 1: current_state = [13, 14] next_state[1] = [130, 140] *** experience: (array([ 13., 14.]), 8.0, array([800]), array([ 130., 140.]), 0) *** buffer: deque([(array([ 13., 14.]), 4.0, array([400]), array([ 130., 140.]), 0),(array([ 13., 14.]), 8.0, array([800]), array([ 130., 140.]), 0)], maxlen=10000) </code></pre> <pre><code>class BasicBuffer: def __init__(self, max_size): self.max_size = max_size self.buffer = deque(maxlen=10000) def add(self, current_state, action, reward, next_state, done): ## &quot;&quot;&quot;&quot;Add a new experience to buffer.&quot;&quot;&quot;&quot; experience = (current_state, action, np.array([reward]), next_state, done) self.buffer.append(experience) def DQNtrain(env, state_size, agent): for time in range(time_max): for e in range(agents_numbers): current_state[e,:] next_state_edge[e, :] ## &quot;&quot;&quot;&quot;Add a new experience to buffer.&quot;&quot;&quot;&quot; replay_buffer_t[e].add(current_state, action, reward, next_state, done) current_state[e, :] = next_state[e, :] if __name__ == '__main__': DQNtrain(env, state_size, agent) replay_buffer_t = [[] for _ in range(edge_max)] for e in range(edge_max): replay_buffer_t[e] = BasicBuffer(max_size=agent_buffer_size) </code></pre>
<p>I just found what is causing the problem. I should have used copy.deepcopy() for experiences:</p> <pre><code>experience = copy.deepcopy((current_state, action, np.array([reward]), next_state, done)) self.buffer.append(experience) </code></pre>
python|pytorch|reinforcement-learning|deque|federated-learning
1
9,988
67,071,953
Incompatible Shapes Keras NN
<p>I am trying to make use of NN for 28x28 grey scale images. My training data is shaped as follows:</p> <p>Reshape data</p> <pre><code>out: x_train.shape (24000, 28, 28, 1) y_train.shape (24000, 1) </code></pre> <p>Define the keras model</p> <pre><code>model = Sequential() model.add(layers.Conv2D(28, (1, 1), activation='relu', input_shape=(28, 28, 1))) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(56, (1, 1), activation='relu')) model.add(layers.Conv2D(56, (1, 1), activation='relu')) #model.add(layers.Flatten()) #model.add(layers.Dense(56, activation='relu')) #model.add(layers.Dense(10)) model.summary() model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(input_x, input_y, epochs=20, batch_size=28) _, accuracy = model.evaluate(X_validation, y_validation) print('Accuracy: %.2f' % (accuracy*100)) </code></pre> <p>Model Summary</p> <pre><code>conv2d_86 (Conv2D) (None, 28, 28, 28) 56 max_pooling2d_52 (MaxPooling (None, 14, 14, 28) 0 conv2d_87 (Conv2D) (None, 14, 14, 56) 1624 conv2d_88 (Conv2D) (None, 14, 14, 56) 3192 Total params: 4,872 Trainable params: 4,872 Non-trainable params: 0 </code></pre> <p>Error InvalidArgumentError: Incompatible shapes: [28,14,14] vs. [28,1] I've applied a simple NN structure and gives out this error, ill next add flatten and dense layers once I am able to run it with these ones.</p>
<p>The last Conv2D layer's output, of shape [14, 14, 56], cannot be compared with the target shape [1] to calculate the loss. Hence the error. Generally, Conv2D layers need to be flattened and passed through a dense network (the part that you have commented out) for the model architecture to be complete. The number of units (neurons) in the last Dense layer is determined by the nature of the problem that you are trying to solve. Here, you have set it to 10, but then you use <code>binary_crossentropy</code> as the loss, which is generally used for a binary classification problem. If you are looking to solve a multi-class classification problem (where the output in your case is one of 10 classes), then you need to use <code>sparse_categorical_crossentropy</code> or <code>categorical_crossentropy</code> as the loss. You would also need to use the <code>softmax</code> activation function in the last Dense layer for multi-class classification, or the <code>sigmoid</code> activation function for binary classification.</p> <p>I would suggest that a basic understanding of Convolutional Neural Networks would be beneficial before proceeding with your task.</p> <p>One great source that I can suggest, and from which I personally benefited, is <a href="http://introtodeeplearning.com/" rel="nofollow noreferrer">http://introtodeeplearning.com/</a>. Lecture 3 deals with CNNs, but it would be good to take a look at lecture 1 before you get to lecture 3. All the best.</p>
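<p>As a rough sketch of one way to complete the architecture (not the only valid one), assuming the labels are integer class ids 0-9; use <code>categorical_crossentropy</code> instead if they are one-hot encoded:</p> <pre><code>model = Sequential()
model.add(layers.Conv2D(28, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(56, (3, 3), activation='relu'))
model.add(layers.Flatten())                          # collapse the feature maps to a vector
model.add(layers.Dense(56, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))    # one probability per class
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
</code></pre>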
python|tensorflow|machine-learning|keras|neural-network
2
9,989
47,327,713
Pandas calculate average number of words in groupby
<p>Lets say I have a dataframe that looks like this:</p> <pre><code>df = pd.DataFrame({'id': [1,1,1,1,2,2,2,3,4,4,4,4,4], 'feedback': ['one word', np.nan, np.nan, 'test', 'second', np.nan, 'test 2', np.nan, 'fourth', 'multiple words', 'test 1 2 3', 'things', np.nan]}) print(df) id feedback 0 1 one word 1 1 NaN 2 1 NaN 3 1 test 4 2 second 5 2 NaN 6 2 test 2 7 3 NaN 8 4 fourth 9 4 multiple words 10 4 test 1 2 3 11 4 things 12 4 NaN </code></pre> <p>I want to calculate some aggregated values: </p> <ul> <li>row count for each <code>id</code> </li> <li>number of rows where feedback was provided for each <code>id</code></li> <li>average number of words of provided feedback for each <code>id</code></li> </ul> <p>My desired output is:</p> <pre><code> id count complete avg_words 0 1 4 2 1.5 1 2 3 2 1.5 2 3 1 0 NaN 3 4 5 4 2.0 </code></pre> <p>I have the following code which accomplishes all but the final point:</p> <pre><code>df.groupby(['id']).agg({'id': 'count', 'feedback': ['count', lambda x: len(x)]}).reset_index() </code></pre> <p>Which gives me:</p> <pre><code> id feedback count count &lt;lambda&gt; 0 1 4 2 4 1 2 3 2 3 2 3 1 0 1 3 4 5 4 5 </code></pre> <p>Everything is correct apart from the final column (the indexing is also a bit strange, but thats a minor issue)</p> <p>The lambda function is a placeholder. How do I calculate the average number of words of only the provided feedback for each <code>id</code>?</p>
<p>Try this:</p> <pre><code>In [96]: df.assign(avg_words=df['feedback'].str.split().str.len()) \ ...: .groupby('id') \ ...: .agg({'id': 'count','feedback': 'count', 'avg_words': 'mean'}) \ ...: .rename(columns={'id':'count', 'feedback':'complete'}) \ ...: .reset_index() Out[96]: id count complete avg_words 0 1 4 2 1.5 1 2 3 2 1.5 2 3 1 0 NaN 3 4 5 4 2.0 </code></pre>
python|pandas|aggregate|pandas-groupby
1
9,990
47,394,572
Simple network for arbitrary shape input
<p>I am trying to create an autoencoder in <code>Keras</code> with <code>Tensorflow</code> backend. I followed <a href="https://blog.keras.io/building-autoencoders-in-keras.html" rel="nofollow noreferrer">this tutorial</a> in order to make my own. Input to the network is kind of arbitrary i.e. each sample is a 2d array with fixed number of columns (<code>12</code> in this case) but rows range between <code>4</code> and <code>24</code>. </p> <p>What I have tried so far is:</p> <pre><code># Generating random data myTraces = [] for i in range(100): num_events = random.randint(4, 24) traceTmp = np.random.randint(2, size=(num_events, 12)) myTraces.append(traceTmp) myTraces = np.array(myTraces) # (read Note down below) </code></pre> <p>and here is my sample model</p> <pre><code>input = Input(shape=(None, 12)) x = Conv1D(64, 3, padding='same', activation='relu')(input) x = MaxPool1D(strides=2, pool_size=2)(x) x = Conv1D(128, 3, padding='same', activation='relu')(x) x = UpSampling1D(2)(x) x = Conv1D(64, 3, padding='same', activation='relu')(x) x = Conv1D(12, 1, padding='same', activation='relu')(x) model = Model(input, x) model.compile(optimizer='adadelta', loss='binary_crossentropy') model.fit(myTraces, myTraces, epochs=50, batch_size=10, shuffle=True, validation_data=(myTraces, myTraces)) </code></pre> <p><strong>NOTE</strong>: As per <a href="https://keras.io/getting-started/sequential-model-guide/#training" rel="nofollow noreferrer">Keras Doc</a>, it says that input should be a numpy array, if I do so I get following error:</p> <p><code>ValueError: Error when checking input: expected input_1 to have 3 dimensions, but got array with shape (100, 1)</code></p> <p>And if I dont convert it in to numpy array and let it be a list of numpy arrays I get following error:</p> <p><code>ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 100 arrays: [array([[0, 1, 0, 0 ...</code></p> <p>I don't know what I am doing wrong here. Also I am kind of new to <code>Keras</code>. I would really appreciate any help regarding this. </p>
<p>Numpy does not know how to handle a list of arrays with varying row sizes (see <a href="https://stackoverflow.com/questions/3386259/how-to-make-a-multidimension-numpy-array-with-a-varying-row-size">this answer</a>). When you call np.array with traceTmp, it will return a list of arrays, not a 3D array (An array with shape (100, 1) means a list of 100 arrays). Keras will need a homogeneous array as well, meaning all input arrays should have the same shape.</p> <p>What you can do is pad the arrays with zeroes such that they all have the shape (24,12): then np.array can return a 3-dimensional array and the keras input layer does not complain. </p>
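<p>As a minimal sketch of the padding idea (zero-padding each trace to 24 rows, where 24 simply matches the maximum number of events used above):</p> <pre><code>max_rows = 24
padded = np.zeros((len(myTraces), max_rows, 12), dtype=np.float32)
for k, trace in enumerate(myTraces):
    padded[k, :trace.shape[0], :] = trace

# padded now has the homogeneous shape (100, 24, 12) that Keras expects
model.fit(padded, padded, epochs=50, batch_size=10, shuffle=True,
          validation_data=(padded, padded))
</code></pre>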
python|numpy|tensorflow|deep-learning|keras
3
9,991
68,178,765
convolutional layer - trainable weights TensorFlow2
<p>I am using TF2.5 &amp; Python3.8 where a conv layer is defined as:</p> <pre><code>Conv2D( filters = 64, kernel_size = (3, 3), activation='relu', kernel_initializer = tf.initializers.GlorotNormal(), strides = (1, 1), padding = 'same', ) </code></pre> <p>Using a batch of 60 CIFAR-10 dataset as input:</p> <pre><code>x.shape # TensorShape([60, 32, 32, 3]) </code></pre> <p>Output volume of this layer preserves the spatial width and height (32, 32) and has 64 filters/kernel maps applied to the 60 images as batch-</p> <pre><code>conv1(x).shape # TensorShape([60, 32, 32, 64]) </code></pre> <p>I understand this output.</p> <p>Can you explain the output of:</p> <pre><code>conv1.trainable_weights[0].shape # TensorShape([3, 3, 3, 64]) </code></pre>
<p>The number of trainable parameters in a conv layer is given by [{(m x n x d) + 1} x k],</p> <p>where m -&gt; width of the filter; n -&gt; height of the filter; d -&gt; number of channels in the input volume; k -&gt; number of filters in the current layer. The +1 is the bias for each filter.</p> <p><code>conv1.trainable_weights[0]</code> is only the kernel tensor, whose shape is (filter height, filter width, input channels, filters) = (3, 3, 3, 64) here: 3 x 3 filters, 3 input channels (RGB), 64 filters. The biases are not missing; they live in a separate tensor, <code>conv1.trainable_weights[1]</code>, with shape (64,), since <code>use_bias</code> defaults to <code>True</code> for <code>Conv2D</code> in TF2.x.</p>
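<p>A quick way to check this yourself (a sketch, assuming the <code>conv1</code> layer from the question has already been built by calling it on <code>x</code>; the exact tensor names may differ):</p>

<pre><code># Inspect both trainable tensors of the layer
for w in conv1.trainable_weights:
    print(w.name, w.shape)
# conv2d/kernel:0 (3, 3, 3, 64)   -- m x n x d x k
# conv2d/bias:0   (64,)           -- one bias per filter

# Total parameters: (3*3*3 + 1) * 64 = 1792
print(conv1.count_params())
</code></pre>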
python-3.x|tensorflow2.0
0
9,992
68,317,044
What is the advantage of output_shape argument in Keras Lambda layer
<pre><code>def euclidean_distance(vects):
    x, y = vects
    sum_square = K.sum(K.square(x - y), axis=1, keepdims=True)
    return K.sqrt(K.maximum(sum_square, K.epsilon()))

def eucl_dist_output_shape(shapes):
    shape1, shape2 = shapes
    return (shape1[0], 1)

input_a = Input(shape=(28,28,), name=&quot;left_input&quot;)
vect_output_a = base_network(input_a)

input_b = Input(shape=(28,28,), name=&quot;right_input&quot;)
vect_output_b = base_network(input_b)

# measure the similarity of the two vector outputs
output = Lambda(euclidean_distance, name=&quot;output_layer&quot;, output_shape=eucl_dist_output_shape)([vect_output_a, vect_output_b])
</code></pre>

<p>I am passing the function <code>output_shape=eucl_dist_output_shape</code> as an argument to the Lambda layer in the last line, but it doesn't seem to do anything: the output shape is simply inferred from the function. Can anybody tell me what the advantage or purpose of the <code>output_shape</code> argument in the Lambda layer is?</p>
<p>A few points about the Lambda layer first:</p>
<ol>
<li>The main purpose of a Lambda layer is to apply some operation to the output of the previous layer without adding any trainable weights.</li>
<li>A Lambda layer is an easy way to wrap simple arithmetic as a custom layer. Suppose you want to use your own activation function that is not built into Keras: define a function that takes the output of the previous layer as input and applies the activation to it, then pass that function to the Lambda layer.</li>
<li>Lambda layers are saved by serializing the Python bytecode; if you instead subclass <code>Layer</code> and override <code>get_config</code>, the resulting model is more portable.</li>
</ol>
<p>As for <code>output_shape</code> itself: it tells Keras the shape of the tensor the wrapped function returns when that shape cannot be inferred automatically (this was mainly needed with the Theano backend). With the TensorFlow backend the shape is usually inferred from the function, which is why passing <code>output_shape</code> often appears to make no difference.</p>
<p>Example 1</p>
<pre><code>from keras.layers import Lambda
from keras import backend as K

# defining a custom non-linear function
def activation_relu(inputs):
    return K.maximum(0., inputs)

# call the function using a Lambda layer
squashed_output = Lambda(activation_relu)(inputs) # where inputs is the output of the previous layer
</code></pre>
<p>Example 2</p>
<pre><code># add a layer that returns the concatenation
# of the positive part of the input and
# the opposite of the negative part
def antirectifier(x):
    x -= K.mean(x, axis=1, keepdims=True)
    x = K.l2_normalize(x, axis=1)
    pos = K.relu(x)
    neg = K.relu(-x)
    return K.concatenate([pos, neg], axis=1)

model.add(Lambda(antirectifier))
</code></pre>
<p>Example 3: if you want to build a custom layer that computes the element-wise Euclidean distance between two input tensors, define the function that computes the value itself, as well as one that returns the output shape of that function.</p>
<pre><code>def euclidean_distance(vecs):
    x, y = vecs
    return K.sqrt(K.sum(K.square(x - y), axis=1, keepdims=True))

def euclidean_distance_output_shape(shapes):
    shape1, shape2 = shapes
    return (shape1[0], 1)

# call these functions using the Lambda layer
lhs_input = Input(shape=(VECTOR_SIZE,))
lhs = Dense(1024, kernel_initializer=&quot;glorot_uniform&quot;, activation=&quot;relu&quot;)(lhs_input)
rhs_input = Input(shape=(VECTOR_SIZE,))
rhs = Dense(1024, kernel_initializer=&quot;glorot_uniform&quot;, activation=&quot;relu&quot;)(rhs_input)
sim = Lambda(euclidean_distance, output_shape=euclidean_distance_output_shape)([lhs, rhs])
</code></pre>
keras|tensorflow2.0|tf.keras|keras-layer
0
9,993
59,441,993
Changing json file format
<p>The script scrapes prices, addresses, suburbs and postcodes of houses and then writes them to a csv file.</p>

<p>The csv file is imported into pandas (only postcode and price) and grouped by postcode to get the mean price per postcode. This grouped result is then written to a json file.</p>

<p>The csv file looks like this in Excel:</p>

<pre><code> ________________________
|Postcode|Price          |
 ________________________
|5061    |        205000 |
 ________________________
|5063    |        930000 |
 ________________________
</code></pre>

<p>The code looks like this:</p>

<pre><code>import pandas as pd
from pandas import DataFrame

df = pd.read_csv('House_Prices.csv', usecols=[ 'Postcode' , ' Price' ], index_col=False)
grouped = df.groupby(['Postcode']).mean()
grouped.to_json('average_house_price.json')
</code></pre>

<p>The code above outputs the json file as:</p>

<pre><code>{" Price":{"5061":2025000.0,"5063":930000.0}}
</code></pre>

<p>I want the json file to be output like this:</p>

<pre><code>{"5061":2025000.0,"5063":930000.0}
</code></pre>

<p>Is there a way with the pandas library (or another) to remove the leading Price index?</p>
<p>Select the single column to aggregate, <code>&quot;Price&quot;</code> (or <code>&quot; Price&quot;</code>, matching the header in the CSV), so that the groupby returns a <code>Series</code> instead of a <code>DataFrame</code>:</p>

<pre><code>grouped = df.groupby(['Postcode'])[&quot;Price&quot;].mean()
#grouped = df.groupby(['Postcode'])[&quot; Price&quot;].mean()

grouped.to_json('average_house_price.json')
</code></pre>
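<p>A small self-contained sketch of why this works (the numbers are made up for illustration): <code>to_json</code> on a <code>Series</code> writes a flat mapping of index to value, while on a one-column <code>DataFrame</code> it nests that mapping under the column name.</p>

<pre><code>import pandas as pd

df = pd.DataFrame({'Postcode': [5061, 5063], ' Price': [2025000.0, 930000.0]})

print(df.groupby('Postcode').mean().to_json())            # {" Price":{"5061":2025000.0,"5063":930000.0}}
print(df.groupby('Postcode')[' Price'].mean().to_json())  # {"5061":2025000.0,"5063":930000.0}
</code></pre>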
python|json|pandas
1
9,994
57,242,167
How to remove duplicates based on two columns, removing the largest of the 3rd column, in a pandas dataframe?
<p>Suppose I have a pandas dataframe that looks like this:</p>

<pre><code>df=

A B 6 2
A C 4 2
D F 9 3
K L 8 9
A B 4 3
D F 8 2
</code></pre>

<p>How can I say: if columns A and B have duplicates, remove the row that has the larger value in column C?</p>

<p>So, for instance, we can see that lines 1 and 5 have the same values in columns A and B.</p>

<pre><code>A B 6 2 (Line 1)
A B 4 3 (Line 5)
</code></pre>

<p>I want to remove line 1, as 6 is greater than 4.</p>

<p>So my output should be:</p>

<pre><code>A C 4 2
K L 8 9
A B 4 3
D F 8 2
</code></pre>
<p>Sort by column <code>C</code> in ascending order with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>sort_values</code></a>, so that the smallest value of each group comes first.</p>

<p>Then <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>drop_duplicates</code></a> on columns <code>A</code> and <code>B</code> keeps that first (smallest) row and drops the one with the larger <code>C</code>:</p>

<pre><code>df.sort_values(by=['C'],ascending=[True],inplace=True)
df.drop_duplicates(subset=['A','B'],inplace=True)
</code></pre>
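<p>For reference, a small check with the sample data from the question (the column names are assumed, since the frame in the question is unnamed):</p>

<pre><code>import pandas as pd

df = pd.DataFrame({'A': ['A', 'A', 'D', 'K', 'A', 'D'],
                   'B': ['B', 'C', 'F', 'L', 'B', 'F'],
                   'C': [6, 4, 9, 8, 4, 8],
                   'D': [2, 2, 3, 9, 3, 2]})

out = (df.sort_values('C')
         .drop_duplicates(subset=['A', 'B'])
         .sort_index())   # restore the original row order if needed

print(out)
</code></pre>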
python|python-3.x|pandas
1
9,995
45,782,683
Faster method for changing row entries?
<p>I have a pandas dataframe as follows:</p>

<pre><code>In [55]: df.head()
Out[55]:
          Country  Energy Supply  Energy Supply per Capita  % Renewable
0     Afghanistan   3.210000e+08                      10.0    78.669280
1         Albania   1.020000e+08                      35.0   100.000000
2        Algeria1   1.959000e+09                      51.0     0.551010
3  American Samoa            NaN                       NaN     0.641026
4         Andorra   9.000000e+06                     121.0    88.695650
</code></pre>

<p>and suppose I want to remove every numeric character from each entry in <code>df['Country']</code>. I wrote the following code:</p>

<pre><code>In [15]: for c in df['Country']:
   ....:     c = ''.join([i for i in c if not i.isdigit()])
   ....:
</code></pre>

<p>and when I call <code>df.head()</code>, the output is the same, i.e. no changes at all. As far as I know this method just assigns a new value to the variable c but doesn't change the dataframe (am I right?)</p>

<p>So I tried new code:</p>

<pre><code>In [51]: k = 0

In [52]: for c in df['Country']:
   ....:     df.loc[k, &quot;Country&quot;] = ''.join([i for i in c if not i.isdigit()])
   ....:     k += 1
   ....:
</code></pre>

<p>and it worked. I know that this second method is very slow; is there any faster method available?</p>
<p>You can utilize pandas' built-in string operation <code>str.replace()</code> with a regex (use a raw string for the pattern; in newer pandas versions you may also need to pass <code>regex=True</code> explicitly):</p>

<pre><code>df['Country'] = df['Country'].str.replace(r'\d', '')
</code></pre>
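<p>A quick demonstration on a frame like the one in the question (written for a recent pandas, hence the explicit <code>regex=True</code>):</p>

<pre><code>import pandas as pd

df = pd.DataFrame({'Country': ['Afghanistan', 'Albania', 'Algeria1', 'Andorra']})
df['Country'] = df['Country'].str.replace(r'\d', '', regex=True)
print(df['Country'].tolist())   # ['Afghanistan', 'Albania', 'Algeria', 'Andorra']
</code></pre>

<p>This is vectorized in pandas, so it avoids the Python-level loop and should be noticeably faster on a large column.</p>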
pandas|python-3.5
1
9,996
45,837,456
Generic string comparison with numpy
<p>If you have multiple numpy arrays of different string types, such as:</p> <pre><code>In [411]: x1.dtype Out[411]: dtype('S3') In [412]: x2.dtype Out[412]: dtype('&lt;U3') In [413]: x3.dtype Out[413]: dtype('&gt;U5') </code></pre> <p>Is there any way that I can check whether they are <em>all</em> strings without having to compare with each individual type explicitly?</p> <p>For example, I would like to do</p> <pre><code>In [415]: x1.dtype == &lt;something&gt; Out[415]: True In [416]: x2.dtype == &lt;something&gt; # same as above Out[416]: True In [417]: x3.dtype == &lt;something&gt; # same as above Out[417]: True </code></pre> <p>Comparing to <code>str</code> = no bueno:</p> <pre><code>In [410]: x3.dtype == str Out[410]: False </code></pre>
<p>One way would be to use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.issubdtype.html" rel="nofollow noreferrer"><code>np.issubdtype</code></a> with <a href="https://docs.scipy.org/doc/numpy/reference/arrays.scalars.html#scalars" rel="nofollow noreferrer"><code>np.character</code></a>:</p> <pre><code>np.issubdtype(your_array.dtype, np.character) </code></pre> <p>For example:</p> <pre><code>&gt;&gt;&gt; np.issubdtype('S3', np.character) True &gt;&gt;&gt; np.issubdtype('&lt;U3', np.character) True &gt;&gt;&gt; np.issubdtype('&gt;U5', np.character) True </code></pre> <p>This is the NumPy dtype hierarchy (as image!) taken from the NumPy documentation. It's very helpful if you want to check for common dtype classes:</p> <p><a href="https://i.stack.imgur.com/m1GAp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m1GAp.png" alt="enter image description here"></a></p> <p>As you can see <code>np.str_</code> and <code>np.unicode_</code> both "subclass" <code>np.character</code>.</p>
python|numpy
4
9,997
51,101,791
Get indices from array where element in row satisfies condition
<p>I want to find the indexes of an array which satisfy a condition.</p> <p>I have a numpy.ndarray B: (m = number of rows = 8 and 3 columns)</p> <pre><code>array([[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 1.], [ 0., 1., 1.], [ 0., 1., 0.], [ 1., 1., 0.], [ 1., 1., 1.], [ 1., 0., 1.], [ 1., 0., 0.]]) </code></pre> <p>For each column I want to find the index of the rows for which the elements satisfy the following condition: for col in columns: B(row,col)=1 and B(row+1,col)=1 for all rows=1,2,..,m-1 and B(row,col)=1 for rows=0 and m. </p> <p>So the desired outcome is:</p> <pre><code>Sets1=[[5, 6, 7, 8], [3, 4, 5], [2, 6]] </code></pre> <p>So far I've tried this:</p> <pre><code>Sets1=[] for j in range(3): Sets1.append([i for i, x in enumerate(K[1:-1]) if B[x,j]==1 and B[x+1,j]==1]) </code></pre> <p>But this is only the first part of the condition and gives the following wrong output because it takes the index of new set.. So it should be plus 1 actually..</p> <pre><code>Sets1= [[4, 5, 6], [2, 3, 4], [1, 5]] </code></pre> <p>Also the second part of the condition which goes for indexes 0 and m. Isn't included yet..</p> <p>Edit: I fixed the plus 1 part by writing i+1 and I tried the second part of the condition by adding these if statements:</p> <pre><code>Sets1=[] for j in range(3): Sets1.append([i+1 for i, xp in enumerate(K[1:-1]) if B[xp,j]==1 and B[xp+1,j]==1]) if B[0,j]==1: Sets1[j].append(0) if B[(x-1),j]==1: Sets1[j].append(x-1) </code></pre> <p>Which does work since it gives the following output:</p> <pre><code>Sets1= [[5, 6, 7, 8], [3, 4, 5], [2, 6]] </code></pre> <p>So now i just need to add +1 to the elements of the list for the first part of the condition (before the if statements)...</p> <p>I would really appreciate the help!</p>
<p>There is a vectorized way to do that with numpy. First, create a mask for where <code>a</code> equals 1:</p>

<pre><code>mask=a.T==1.0
</code></pre>

<p>A second mask tells whether the next element also equals 1. Since we only want elements that satisfy both conditions, we multiply both masks:</p>

<pre><code>mask_next=np.ones_like(mask).astype(bool)
mask_next[:,:-1]=mask[:,1:]
fin_mask=mask*mask_next
</code></pre>

<p>Get the indexes:</p>

<pre><code>idx=np.where(fin_mask)
</code></pre>

<p>The first index array tells us where to split the row indexes:</p>

<pre><code>split=np.where(np.diff(idx[0]))[0]+1
out=np.split(idx[1],split)
</code></pre>

<p><code>out</code> produces the desired outcome. If I understood correctly, you want the indexes of elements equal to one where the next (column-wise) element is also one?</p>
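<p>Putting the pieces together as one runnable sketch (here <code>a</code> is the array <code>B</code> from the question):</p>

<pre><code>import numpy as np

a = np.array([[0., 0., 0.],
              [0., 0., 0.],
              [0., 0., 1.],
              [0., 1., 1.],
              [0., 1., 0.],
              [1., 1., 0.],
              [1., 1., 1.],
              [1., 0., 1.],
              [1., 0., 0.]])

mask = a.T == 1.0                      # one row of the mask per original column
mask_next = np.ones_like(mask).astype(bool)
mask_next[:, :-1] = mask[:, 1:]        # "is the next row also 1?" (last row always passes)
fin_mask = mask * mask_next

idx = np.where(fin_mask)
split = np.where(np.diff(idx[0]))[0] + 1
out = np.split(idx[1], split)
print(out)   # [array([5, 6, 7, 8]), array([3, 4, 5]), array([2, 6])]
</code></pre>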
python|arrays|list|numpy|conditional-statements
1
9,998
66,491,926
Pandas apply unique random number to nan else go to next row
<pre><code>import pandas as pd
import random
import numpy as np

data = {'Item_No':['001', '002', '003','004', '005', '006','007','008','009'],
        'Group_code':[331, 332, 333, 333, 333, 331, 331, np.nan, np.nan]}

df = pd.DataFrame(data)
</code></pre>

<p>I would like to apply a unique random number to 'nan' and keep the group code where a group code exists. I've tried the following, but I can't seem to get the syntax right. What am I doing wrong?</p>

<pre><code>df['Group_Code'] = df['Group_Code'].apply(lambda v: (random.random() * 1000) if pd.isnull(v['Group_Code'] else v['Group_Code'], axis = 1))
</code></pre>
<p><strong>Step 0:</strong></p>

<p>Your dataframe:</p>

<pre><code>data = {'Item_No':['001', '002', '003','004', '005', '006','007','008','009'],
        'Group_code':[331, 332, 333, 333, 333, 331, 331, np.nan, np.nan]}

df = pd.DataFrame(data)
</code></pre>

<p><strong>Step 1:</strong></p>

<p>First, define a function:</p>

<pre><code>def func(val):
    if pd.isnull(val):
        return random.random() * 1000
    else:
        return val
</code></pre>

<p><strong>Step 2:</strong></p>

<p>Then just use the <code>apply()</code> method:</p>

<pre><code>df['Group_code']=df['Group_code'].apply(func).astype(int)
</code></pre>

<p>Now if you print <code>df</code> you will get your expected output.</p>
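<p>Note that <code>random.random() * 1000</code> truncated to an int is very likely, but not guaranteed, to be unique. If uniqueness really matters, one alternative (a sketch, with an arbitrarily chosen value range) is to fill the missing entries from a pool of values that are distinct by construction:</p>

<pre><code>import numpy as np
import pandas as pd

mask = df['Group_code'].isna()
# draw as many distinct values as there are NaNs (range chosen arbitrarily here)
fill_values = np.random.choice(np.arange(10_000, 99_999), size=mask.sum(), replace=False)

df.loc[mask, 'Group_code'] = fill_values
df['Group_code'] = df['Group_code'].astype(int)
</code></pre>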
python|pandas|numpy
0
9,999
66,572,414
pandas.read_html returns wrong table contents
<p>I am trying to scrape two tables (assets and liabilities) from:</p>

<p><a href="https://www.marketwatch.com/investing/stock/aapl/financials/balance-sheet" rel="nofollow noreferrer">https://www.marketwatch.com/investing/stock/aapl/financials/balance-sheet</a></p>

<p>The first table looks like this: <a href="https://i.stack.imgur.com/qES2e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qES2e.png" alt="enter image description here" /></a></p>

<p>The following is my code:</p>

<pre><code>tables = pd.read_html(&quot;https://www.marketwatch.com/investing/stock/spg/financials/balance-sheet&quot;)
</code></pre>

<p><a href="https://i.stack.imgur.com/VTmTL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VTmTL.png" alt="enter image description here" /></a></p>

<p>As you can see, the scraped table is completely wrong.</p>

<p>How could I scrape the table correctly?</p>

<p>Thank you in advance for any help:-)</p>
<p>Let's look at Selenium for this one; you might also be able to do it with bs4 and some careful requests work.</p>

<pre><code>from selenium import webdriver
import time

url = &quot;https://www.marketwatch.com/investing/stock/spg/financials/balance-sheet&quot;
driver = webdriver.Firefox()
driver.get(url)
time.sleep(10)

tables = driver.find_elements_by_class_name(&quot;table&quot;)
tables[3].text.splitlines() # splits the table text on newline characters
tables[4].text.splitlines()
</code></pre>

<p>After that you could pair up the labels and values and build a dataframe from them, or use numpy.</p>
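<p>As a rough, hedged sketch of that last step (the exact table indices and line layout depend on how the page renders, so treat the splitting below as an assumption to verify):</p>

<pre><code>import pandas as pd

lines = tables[3].text.splitlines()   # assumed: a header line followed by one line per row
header = lines[0].split()             # column labels, layout assumed
rows = [line.rsplit(None, len(header) - 1) for line in lines[1:]]  # label plus one value per year, assumed

assets_df = pd.DataFrame(rows)
print(assets_df.head())
</code></pre>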
python|pandas|web-scraping
0