Dataset columns: Unnamed: 0 (int64, 0 to 378k); id (int64, 49.9k to 73.8M); title (string, lengths 15 to 150); question (string, lengths 37 to 64.2k); answer (string, lengths 37 to 44.1k); tags (string, lengths 5 to 106); score (int64, -10 to 5.87k)
2,500
56,818,822
How can I weigh columns in data frame and add them up
<p>I have a data frame with 5 columns, I only want to add the second and the third, but each point in the third column has to be <strong>multiplied by 3</strong>,<br><br> so I need to add a new column called <br> <code>"Total score" which is df['Second'] + 3* df['Third']</code></p> <p>I have tried with sum but I don't know how to indicate that I want weigh and select only two columns</p>
<p>Make sure your columns are ordered correctly; then we can use <code>dot</code>: </p> <pre><code>df['Total Score'] = df.dot([0,1,3,0,0]) </code></pre> <p>Or, to be safe: </p> <pre><code>df['Total Score'] = df[['Second','Third']].dot([1,3]) </code></pre>
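<p>For illustration, a minimal runnable sketch of the safer second form; the column names here are hypothetical stand-ins for the real frame's five columns:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'First': [1, 2], 'Second': [10, 20], 'Third': [3, 4],
                   'Fourth': [0, 0], 'Fifth': [0, 0]})
df['Total Score'] = df[['Second', 'Third']].dot([1, 3])
print(df['Total Score'].tolist())  # [19, 32]
</code></pre>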
python|python-3.x|pandas|dataframe
1
2,501
56,841,451
Why is my neural net only predicting one class (binary classification)?
<p>I am having some trouble with my ANN. It is only predicting '0.' The dataset is imbalanced (10:1), ALTHOUGH, I undersampled the training dataset, so I am unsure of what is going on. I am getting 92-93% accuracy on the balanced training set, although on testing (on an unbalanced test set) it just predicts zeroes. Unsure of where to go from here. Anything helps. The data has been one hot encoded and scaled. </p> <pre><code>#create 80/20 train-test split train, test = train_test_split(selection, test_size=0.2) # Class count count_class_0, count_class_1 = train.AUDITED_FLAG.value_counts() # Divide by class df_class_0 = train[train['AUDITED_FLAG'] == 0] df_class_1 = train[train['AUDITED_FLAG'] == 1] df_class_0_under = df_class_0.sample(count_class_1) train_under = pd.concat([df_class_0_under, df_class_1], axis=0) print('Random under-sampling:') print(train_under.AUDITED_FLAG.value_counts()) train_under.AUDITED_FLAG.value_counts().plot(kind='bar', title='Count (target)'); Random under-sampling: 1.0 112384 0.0 112384 #split features and labels y_train = np.array(train_under['AUDITED_FLAG']) X_train = train_under.drop('AUDITED_FLAG', axis=1) y_test = np.array(test['AUDITED_FLAG']) X_test = test.drop('AUDITED_FLAG', axis=1) y_train = y_train.astype(int) y_test = y_test.astype(int) # define model model = Sequential() model.add(Dense(6, input_dim=179, activation='relu')) model.add(Dense(30, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) # fit model history = model.fit(X_train, y_train, epochs=5, batch_size=16, verbose=1) #validate test_loss, test_acc = model.evaluate(X_test, y_test) # evaluate the model _, train_acc = model.evaluate(X_train, y_train, verbose=0) _, test_acc = model.evaluate(X_test, y_test, verbose=0) print('Train: %.3f, Test: %.3f' % (train_acc, test_acc)) print('test_acc:', test_acc) # plot history pyplot.plot(history.history['acc'], label='train') #pyplot.plot(history.history['val_acc'], label='test') </code></pre> <p>Train: 0.931, Test: 0.921</p> <pre><code>#preds y_pred = model.predict(X_test) y_pred_bool = np.argmax(y_pred, axis=1) # #plot confusion matrix y_actu = pd.Series(y_test, name='Actual') y_pred_bool = pd.Series(y_pred_bool, name='Predicted') print(pd.crosstab(y_actu, y_pred_bool)) </code></pre> <p>'''</p> <pre><code>Predicted 0 Actual 0 300011 1 28030 </code></pre>
<p>This is not right:</p> <pre><code>y_pred_bool = np.argmax(y_pred, axis=1) </code></pre> <p>Argmax is only used with categorical cross-entropy loss and softmax outputs. For binary cross-entropy and sigmoid outputs, you should round the outputs, which is equivalent to thresholding predictions > 0.5:</p> <pre><code>y_pred_bool = np.round(y_pred) </code></pre> <p>This is what Keras does to compute binary accuracy.</p>
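<p>A small sketch of why <code>argmax</code> degenerates here (illustrative values only): with a single sigmoid unit the prediction array has shape (N, 1), so argmax along axis 1 is always 0, while rounding applies the intended 0.5 threshold:</p> <pre><code>import numpy as np

y_pred = np.array([[0.1], [0.7], [0.4], [0.9]])  # (N, 1) sigmoid outputs
print(np.argmax(y_pred, axis=1))   # [0 0 0 0] -- only one column to pick from
print(np.round(y_pred).ravel())    # [0. 1. 0. 1.] -- thresholded at 0.5
</code></pre>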
python|tensorflow|keras|neural-network
2
2,502
56,704,844
How to exclude current date in a group by value rolling window execution in pandas?
<p>I have a dataframe containing IDs, a date and numerical values. I group the data for each ID, and then I calculate the cumulative amount of the previous rows, with a time window of 30 days. In the dataframe below this has been accomplished using the code below (the actual dataframe contains more than one ID and more dates).</p> <p>So in short the column SUM_AMOUNT is being created based on the other columns.</p> <p>Code:</p> <pre><code>def get_rolling_amount(grp, freq, on_name, column_name): return grp.rolling(freq, on=on_name, closed='left')[column_name].sum() df[new_column_name] = df.groupby('ID', as_index=False, group_keys=False)\ .apply(get_rolling_amount, '30D', 'DATE', 'AMOUNT') </code></pre> <p>Dataframe:</p> <pre><code> ID DATE AMOUNT SUM_AMOUNT 111935 100000 2015-02-18 455.00 NaN 111936 100000 2015-02-18 455.00 455.00 111937 100000 2015-04-02 455.00 NaN 111938 100000 2015-04-02 925.00 455.00 111939 100000 2015-04-02 2780.00 1380.00 111940 100000 2015-04-09 895.00 4160.00 111941 100000 2015-04-09 425.00 5055.00 111942 100000 2015-04-09 425.00 5480.00 111943 100000 2015-04-09 925.00 5905.00 111944 100000 2015-04-09 455.00 6830.00 111947 100000 2015-05-21 1003.00 NaN 111945 100000 2015-05-26 455.00 1003.00 111946 100000 2015-05-26 925.00 1458.00 111948 100000 2015-05-26 455.00 2383.00 111949 100000 2015-05-26 2780.00 2838.00 111950 100000 2015-05-26 425.00 5618.00 111951 100000 2015-05-26 1000.00 6043.00 111952 100000 2015-05-26 455.00 7043.00 111953 100000 2015-05-26 455.00 7498.00 111954 100000 2015-06-19 925.00 7953.00 111955 100000 2015-06-19 1820.00 8878.00 111956 100000 2015-06-19 925.00 10698.00 </code></pre> <p>As you can see, per ID there are rows that have the same date. I cannot get the dates in a more detailed form. I don't want to take the values of same dates into account in the calculation, because I don't know what their order is if they are on the same date and the order is important.</p> <p><strong>What I actually want</strong></p> <p>I want to be able to get the cumulative sum of all data points that fall in the range of the last 30 days, <strong>excluding the date of the current row</strong>. I have changed the dataframe to reflect what I would like to have: </p> <pre><code> ID DATE AMOUNT SUM_AMOUNT 111935 100000 2015-02-18 455.00 NaN 111936 100000 2015-02-18 455.00 NaN 111937 100000 2015-04-02 455.00 NaN 111938 100000 2015-04-02 925.00 NaN 111939 100000 2015-04-02 2780.00 NaN 111940 100000 2015-04-09 895.00 4160.00 111941 100000 2015-04-09 425.00 4160.00 111942 100000 2015-04-09 425.00 4160.00 111943 100000 2015-04-09 925.00 4160.00 111944 100000 2015-04-09 455.00 4160.00 111947 100000 2015-05-21 1003.00 NaN 111945 100000 2015-05-26 455.00 1003.00 111946 100000 2015-05-26 925.00 1003.00 111948 100000 2015-05-26 455.00 1003.00 111949 100000 2015-05-26 2780.00 1003.00 111950 100000 2015-05-26 425.00 1003.00 111951 100000 2015-05-26 1000.00 1003.00 111952 100000 2015-05-26 455.00 1003.00 111953 100000 2015-05-26 455.00 1003.00 111954 100000 2015-06-19 925.00 7953.00 111955 100000 2015-06-19 1820.00 7953.00 111956 100000 2015-06-19 925.00 7953.00 </code></pre> <p>So if the row's date is 2015-06-19, I want to have the sum of all previous rows in a 30 day window, but rows that have the date of 2015-06-19 should not be included in that window. </p> <p>One other important thing to mention is that I cannot collapse the rows to make one row per ID and DATE.</p> <p>How can I do this?</p>
<p>Because you have several values for a same day, I would say you should first <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.resample.html" rel="nofollow noreferrer"><code>resample</code></a> daily to get the <code>sum</code> per day and then <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rolling.html" rel="nofollow noreferrer"><code>rolling</code></a> over the last 30 values prior to the date with the use of <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.shift.html" rel="nofollow noreferrer"><code>shift</code></a> to not include today. Perform these operations per ID with a <code>groupby</code> and then <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>merge</code></a> on the ID and DATE back in <code>df</code>. </p> <pre><code>df = df.merge( (df.groupby('ID').resample('1D', on='DATE').sum()['AMOUNT'].shift() .rolling(30, min_periods=1).sum().fillna(0).reset_index()), on = ['ID', 'DATE'], how='left', suffixes=('', '_SUM')) </code></pre> <p>and you get you <code>df</code> such as:</p> <pre><code> DATE ID AMOUNT AMOUNT_SUM 0 2015-02-18 100000 455.0 0.0 1 2015-02-18 100000 455.0 0.0 2 2015-04-02 100000 455.0 0.0 3 2015-04-02 100000 925.0 0.0 4 2015-04-02 100000 2780.0 0.0 5 2015-04-09 100000 895.0 4160.0 6 2015-04-09 100000 425.0 4160.0 7 2015-04-09 100000 425.0 4160.0 8 2015-04-09 100000 925.0 4160.0 9 2015-04-09 100000 455.0 4160.0 10 2015-05-21 100000 1003.0 0.0 11 2015-05-26 100000 455.0 1003.0 12 2015-05-26 100000 925.0 1003.0 13 2015-05-26 100000 455.0 1003.0 14 2015-05-26 100000 2780.0 1003.0 15 2015-05-26 100000 425.0 1003.0 16 2015-05-26 100000 1000.0 1003.0 17 2015-05-26 100000 455.0 1003.0 18 2015-05-26 100000 455.0 1003.0 19 2015-06-19 100000 925.0 7953.0 20 2015-06-19 100000 1820.0 7953.0 21 2015-06-19 100000 925.0 7953.0 </code></pre>
python|pandas|dataframe
4
2,503
25,464,589
RcppCNPy not usable after Rcpp update
<p>I am trying to use some older code that relies on RcppCNPy, which used to work on my machine. At some point in the past few months I updated Rcpp and now when I try to attach the RcppCNPy library (<code>library() or require()</code>) I get the following:</p> <pre><code>*** caught segfault *** address 0x0, cause 'memory not mapped' Segmentation fault: 11 </code></pre> <p>and then R crashes. I have tried updating both packages (from source and CRAN for Rcpp) and no luck. Any ideas how I could figure out what is going on? </p> <p>Here are some details that may help:</p> <pre><code>R&gt; sessionInfo() R version 3.0.3 (2014-03-06) Platform: x86_64-apple-darwin10.8.0 (64-bit) locale: [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8 </code></pre>
<p>Recompilation ought to work. Check that you do not have an old version in your <code>.libPaths()</code>.</p> <p>CRAN runs checks on packages and would alert the respective maintainer (me, in this case) if RcppCNPy were broken on the Mac. See</p> <ul> <li><a href="http://cran.rstudio.com/web/checks/check_results_RcppCNPy.html" rel="nofollow">Page with all checks</a></li> <li><a href="http://www.r-project.org/nosvn/R.check/r-release-osx-x86_64-mavericks/RcppCNPy-00check.html" rel="nofollow">OS X Mavericks results</a></li> </ul> <p>OS X Mavericks shows a 'Note' but that is simply due to vignette processing and a LaTeX style file.</p>
r|numpy|rcpp
0
2,504
66,906,422
Unable to create a version in Cloud AI Platform using custom containers for prediction
<p>Because of certain VPC restrictions I am forced to use custom containers for predictions for a model trained on Tensorflow. According to the <a href="https://cloud.google.com/ai-platform/prediction/docs/custom-container-requirements" rel="nofollow noreferrer">documentation</a> requirements I have created a HTTP server using Tensorflow Serving. The Dockerfile used to <code>build</code> the image is as follows:</p> <pre><code>FROM tensorflow/serving:2.3.0-gpu # copy the model file ENV MODEL_NAME=my_model COPY my_model /models/my_model </code></pre> <p>Where <code>my_model</code> contains the <code>saved_model</code> inside a folder named <code>1/</code>.</p> <p>I have then pushed the container image to Artifact Registry and then created a <code>Model</code>. To create a <code>Version</code> I have selected <code>Customer Container</code> on the Cloud Console UI and and added the path to the <code>Container Image</code>. I have then mentioned the <strong>Prediction route</strong> and the <strong>Health route</strong> to be <code>/v1/models/my_model:predict</code> and have changed the <strong>Port</strong> to <code>8501</code>. I have also selected the machine type to be a single compute node of type n1-standard-16 and 1 P100 GPU and kept scaling <code>Auto scaling</code>.</p> <p>After clicking on <strong>Save</strong> I can see the Tensorflow Server starting and while viewing the logs we can see the following messages:</p> <pre><code>Successfully loaded servable version {name: my_model version: 1} Running gRPC ModelServer at 0.0.0.0:8500 Exporting HTTP/REST API at:localhost:8501 NET_LOG: Entering the event loop </code></pre> <p>However after about 20-25 minutes the <code>version</code> creation just stops throwing out the following error:</p> <pre><code>Error: model server never became ready. Please validate that your model file or container configuration are valid. </code></pre> <p>I am unable to figure why this is happening. I am able to run the same docker image on my local machine and I am able to successfully get predictions by hitting the endpoint that is created: http://localhost:8501/v1/models/my_model:predict</p> <p>Any help is this regard will be appreciated.</p>
<p>Answering this myself after working with the Google Cloud Support Team to figure out the error.</p> <p>Turns out the port I was creating a <code>Version</code> on was conflicting with the Kubernetes deployment on Cloud AI Platform's side. So I changed the <code>Dockerfile</code> to the following and was able to successfully run Online Predictions on both Classic AI Platform and Unified AI Platform:</p> <pre><code>FROM tensorflow/serving:2.3.0-gpu # Set where models should be stored in the container ENV MODEL_BASE_PATH=/models RUN mkdir -p ${MODEL_BASE_PATH} # copy the model file ENV MODEL_NAME=my_model COPY my_model /models/my_model EXPOSE 5000 EXPOSE 8080 CMD [&quot;tensorflow_model_server&quot;, &quot;--rest_api_port=8080&quot;, &quot;--port=5000&quot;, &quot;--model_name=my_model&quot;, &quot;--model_base_path=/models/my_model&quot;] </code></pre>
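<p>For what it's worth, a sketch of one way to smoke-test such a container locally before creating the <code>Version</code>, assuming the image is running with port 8080 published (the instance payload below is hypothetical and depends entirely on the model's signature):</p> <pre><code>import requests

# Assumes something like: docker run -p 8080:8080 IMAGE
url = 'http://localhost:8080/v1/models/my_model:predict'
payload = {'instances': [[0.1, 0.2, 0.3]]}  # placeholder feature vector
resp = requests.post(url, json=payload)
print(resp.status_code, resp.json())
</code></pre>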
docker|tensorflow|google-cloud-platform|tensorflow-serving|google-cloud-ml
2
2,505
67,101,069
Efficient Way to Repeatedly Split Large NumPy Array and Record Middle
<p>I have a large NumPy array <code>nodes = np.arange(100_000_000)</code> and I need to rearrange this array by:</p> <ol> <li>Recording and then removing the middle value in the array</li> <li>Split the array into the <code>left</code> half and <code>right</code> half</li> <li>Repeat Steps 1-2 for each half</li> <li>Stop when all values are exhausted</li> </ol> <p>So, for a smaller input example <code>nodes = np.arange(10)</code>, the output would be:</p> <pre><code>[5 2 8 1 4 7 9 0 3 6] </code></pre> <p>This was accomplished by naively doing:</p> <pre><code>import numpy as np def split(node, out): mid = len(node) // 2 out.append(node[mid]) return node[:mid], node[mid+1:] def reorder(a): nodes = [a.tolist()] out = [] while nodes: tmp = [] for node in nodes: for n in split(node, out): if n: tmp.append(n) nodes = tmp return np.array(out) if __name__ == &quot;__main__&quot;: nodes = np.arange(10) print(reorder(nodes)) </code></pre> <p>However, this is way too slow for <code>nodes = np.arange(100_000_000)</code> and so I am looking for a much faster solution.</p>
<p>Edit: The question has been updated to have a much smaller input array so I leave the below for historical reasons. Basically it was likely a typo but we often get accustomed to computers working with insanely large numbers and when memory is involved they can be a real problem.</p> <p>There is already a numpy based solution submitted by someone else that I think fits the bill.</p> <p>Your code requires an insane amount of RAM just to hold 100 billion 64 bit integers. Do you have 800GB of RAM? Then you convert the numpy array to a list which will be substantially larger than the array (each packed 64 bit int in the numpy array will become a much less memory efficient python int object and the list will have a pointer to that object). Then you make a lot of slices of the list which will not duplicate the data but will duplicate the pointers to the data and use even more RAM. You also append all the result values to a list a single value at a time. Lists are very fast for adding items generally but with such an extreme size this will not only be slow but the way the list is allocated is likely to be extremely wasteful RAM wise and contribute to major problems (I believe they double in size when they get to a certain level of fullness so you will end up allocating more RAM than you need and doing many allocations and likely copies). What kind of machine are you running this on? There are ways to improve your code but unless you're running it on a super computer I don't know that you're going to ever finish that calculation. I only..only? have 32GB of RAM and I'm not going to even try to create a 100B int_64 numpy array as I don't want to use up ssd write life for a mass of virtual memory.</p> <p>As for improving your code stick to numpy arrays don't change to a python list it will greatly increase the RAM you need. Preallocate a numpy array to put the answer in. Then you need a new algorithm. Anything recursive or recursive like (ie a loop splitting the input,) will require tracking a lot of state, your nodes list is going to be extraordinarily gigantic and again use a lot of RAM. You could use len(a) to indicate values that are removed from your list and scan through the entire array each time to figure out what to do next but that will save RAM in favour of a tremendous amount of searching a gigantic array. I feel like there is an algorithm to cut numbers from each end and place them in the output and just track the beginning and end but I haven't figured it out at least not yet.</p> <p>I also think there is a simpler algorithm where you just track the number of splits you've done instead of making a giant list of slices and keeping it all in memory. Take the middle of the left half and then the middle of the right then count up one and when you take the middle of the left half's left half you know you have to jump to the right half then the count is one so you jump over to the original right half's left half and on and on... Based on the depth into the halves and the length of the input you should be able to jump around without scanning or tracking all of those slices though I haven't been able to dedicate much time to thinking this through in my head.</p> <p>With a problem of this nature if you really need to push the limits you should consider using C/C++ so you can be as efficient as possible with RAM usage and because you're doing an insane number of tiny things which doesn't map well to python performance.</p>
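<p>A sketch of the index-only idea hinted at above, under the assumption that breadth-first order over (start, stop) ranges is what the original loop produces: write each midpoint into a preallocated array and never build Python lists of values. For <code>np.arange(n)</code> the output equals the midpoints themselves; for other data you would index with the result:</p> <pre><code>import numpy as np
from collections import deque

def reorder_indices(n):
    out = np.empty(n, dtype=np.int64)  # preallocated result
    queue = deque([(0, n)])            # half-open (start, stop) ranges
    k = 0
    while queue:
        start, stop = queue.popleft()
        mid = (start + stop) // 2
        out[k] = mid
        k += 1
        if start &lt; mid:
            queue.append((start, mid))
        if mid + 1 &lt; stop:
            queue.append((mid + 1, stop))
    return out

nodes = np.arange(10)
print(nodes[reorder_indices(len(nodes))])  # [5 2 8 1 4 7 9 0 3 6]
</code></pre>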
python|performance|numpy
1
2,506
66,922,956
How to convert list of dictionaries from CSV to dataframe?
<p>I have list of dictionaries from CSV with header test as follows:</p> <pre><code>[{'points': 50, 'time': '5:00', 'year': 2010}, {'points': 25, 'time': '6:00', 'month': &quot;february&quot;}, {'points':90, 'time': '9:00', 'month': 'january'}, {'points_h1':20, 'month': 'june'}] </code></pre> <p>when I use pd.DataFrame[Dataset[&quot;test&quot;]], the output is the same, which is :</p> <pre><code>[{'points': 50, 'time': '5:00', 'year': 2010}, {'points': 25, 'time': '6:00', 'month': &quot;february&quot;}, {'points':90, 'time': '9:00', 'month': 'january'}, {'points_h1':20, 'month': 'june'}] </code></pre> <p>How do I explode this dictionary?</p> <p>Edit: It only works when I copy and paste the output manually as a new variable and then pd.dataframe it again.</p> <pre><code>a = [{'points': 50, 'time': '5:00', 'year': 2010}, {'points': 25, 'time': '6:00', 'month': &quot;february&quot;}, {'points':90, 'time': '9:00', 'month': 'january'}, {'points_h1':20, 'month': 'june'}] </code></pre> <p>Then the dataframe table appears.</p> <p>I'm sorry, I'm new to this.</p>
<p>The reason your code failed is that your input file is actually <strong>not</strong> any CSV file. It is a <strong>string representation</strong> of your list of dictionaries (not a list of dictionaries).</p> <p>I assume that your input file contains what you put as the first sample.</p> <p>To handle such an input file properly, you can:</p> <ul> <li><strong>read</strong> the whole file into a variable (which will be of <em>str</em> type),</li> <li><strong>evaluate</strong> it to create an actual <strong>list</strong>, but for security reasons it is advisable to use <em>ast.literal_eval</em>,</li> <li>only then create your DataFrame from this list.</li> </ul> <p>The code to do it is:</p> <pre><code>import ast with open('Input.txt') as reader: txt = reader.read() lst = ast.literal_eval(txt) df = pd.DataFrame(lst) </code></pre> <p>The result is:</p> <pre><code> points time year month points_h1 0 50.0 5:00 2010.0 NaN NaN 1 25.0 6:00 NaN february NaN 2 90.0 9:00 NaN january NaN 3 NaN NaN NaN june 20.0 </code></pre> <h1>Another approach</h1> <p>Or maybe your input file is a CSV file anyway, something like:</p> <pre><code>{'points': 50, 'time': '5:00', 'year': 2010};x1 {'points': 25, 'time': '6:00', 'month': &quot;february&quot;};x2 {'points':90, 'time': '9:00', 'month': 'january'};x3 {'points_h1':20, 'month': 'june'};x4 </code></pre> <p>I put above actual CSV content, with 2 columns with no names. In order not to break the first colum into several columns, I assumed that the column separator is some <strong>other</strong> char than a comma, e.g. a semicolon.</p> <p>You can read such file as:</p> <pre><code>Dataset = pd.read_csv('Input.txt', sep=';', names=['test', 'test2']) </code></pre> <p>Then, to create your DataFrame from <em>test</em> column of <em>Dataset</em> (actually a source DataFrame), apply the following function to each element of <em>Dataset.test</em>:</p> <pre><code>df = Dataset.test.apply(lambda tt: pd.Series(ast.literal_eval(tt))) </code></pre> <p>The result is just the same as above.</p>
python|pandas|jupyter-notebook
0
2,507
68,132,060
Drop only Nan values from a row in a dataframe
<p>I have a dataframe which looks something like this:</p> <pre><code>Df lev1 lev2 lev3 lev4 lev5 description RD21 Nan Nan Nan Nan Oil Nan RD32 Nan Nan Nan Oil/Canola Nan Nan RD33 Nan Nan Oil/Canola/Wheat Nan Nan RD34 Nan Nan Oil/Canola/Flour Nan Nan Nan RD55 Nan Oil/Canola/Flour/Thick ED54 Nan Nan Nan Nan Rice Nan ED66 Nan Nan Nan Rice/White Nan Nan ED88 Nan Nan Rice/White/Jasmine Nan Nan ED89 Nan Nan Rice/White/Basmati Nan ED68 Nan Nan Nan Rice/Brown </code></pre> <p>I want to remove all the NaN values and just keep the non Nan values, something like this:</p> <pre><code>DF2 code description RD21 Oil RD32 Oil/Canola RD33 Oil/Canola/Wheat RD34 Oil/Canola/Flour RD55 Oil/Canola/Flour/Thick . . . </code></pre> <p>How do I do this? I tried using notna() method, but it returns a boolean value of the dataframe. Any help would be appreciated.</p>
<p>You can use stack and groupby like this to find the first non-null value,</p> <pre><code>df['code'] = df[['lev1', 'lev2', 'lev3', 'lev4', 'lev5']].stack().groupby(level=0).first().reindex(df.index) </code></pre> <p>Now, you can select the code and description columns:</p> <pre><code>df[['code', 'description']] code description 0 RD21 Oil 1 RD32 Oil/Canola 2 RD33 Oil/Canola/Wheat 3 RD34 Oil/Canola/Flour 4 RD55 Oil/Canola/Flour/Thick 5 ED54 Rice 6 ED66 Rice/White 7 ED88 Rice/White/Jasmine 8 ED89 Rice/White/Basmati 9 ED68 Rice/Brown </code></pre>
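<p>An alternative sketch of the same first-non-null idea using a row-wise back-fill; the miniature frame below is hypothetical, just to keep the example self-contained:</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'lev1': [np.nan, np.nan],
                   'lev2': [np.nan, 'Oil/Canola'],
                   'lev3': ['Oil', np.nan]},
                  index=['RD21', 'RD32'])
lev_cols = ['lev1', 'lev2', 'lev3']
# Back-filling along the columns pushes each row's first non-null value into the first column.
df['code'] = df[lev_cols].bfill(axis=1).iloc[:, 0]
print(df['code'].tolist())  # ['Oil', 'Oil/Canola']
</code></pre>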
python|pandas|dataframe
1
2,508
59,309,566
In a Pandas dataframe how do I calculate the median value for each decile within each month
<p>I have a dataframe with 50 data points per month. I'd like to calculate the median value for each decile within each month. In my groupby call I lead with the date, then qcut. But qcut calculates the bins over the whole dataset, not by month. Here's what I have so far:</p> <pre><code>import numpy as np import pandas as pd datecol = pd.date_range('12/31/2018','12/31/2019', freq='M') for ii in range(0,49): datecol = datecol.append(pd.date_range('12/31/2018','12/31/2019', freq='M')) datecol = datecol.sort_values() df = pd.DataFrame(np.random.randn(len(datecol), 1), index=datecol, columns=['Data']) dfg = df.groupby([df.index, pd.qcut(df['Data'], 10)])['Data'].median() </code></pre> <p>I've tried to run a qcut on the monthly grouping, but that hasn't worked.</p>
<p>First, <code>groupby</code> month to create the quantile labels within month. Then <code>groupby</code> month and quantile to find the median. </p> <pre><code>df['q'] = df.groupby(df.index).Data.apply(lambda x: pd.qcut(x, 10, labels=False)) df.groupby([df.index, 'q']).median() </code></pre> <hr> <pre><code> Data q 2018-12-31 0 -1.592383 1 -0.959931 2 -0.662911 3 -0.421994 4 -0.098636 5 0.394583 6 0.578562 ... ... 2019-12-31 5 0.022384 6 0.398127 7 0.562900 8 0.765605 9 1.355345 [130 rows x 1 columns] </code></pre>
python|pandas|group-by
0
2,509
59,105,414
Pandas to_html does not show the appended data
<p>When trying to export my pandas DataFrame to a html page, through the to_html() functionality, the output html page does not show the appended data-rows.</p> <pre><code>import pandas as pd df_test = pd.DataFrame(columns=['TEST1', 'TEST2']) df_test.append({'TEST1':11, 'TEST2':22}, ignore_index=True) df_test.append({'TEST1':33, 'TEST2':44}, ignore_index=True) return df_test.to_html() </code></pre> <p><a href="https://i.stack.imgur.com/mKZ5L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mKZ5L.png" alt="enter image description here"></a></p>
<p>Because pandas <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.append.html" rel="nofollow noreferrer"><code>DataFrame.append</code></a> does not work in place, it is necessary to assign the output back:</p> <pre><code>df_test = df_test.append({'TEST1':11, 'TEST2':22}, ignore_index=True) df_test = df_test.append({'TEST1':33, 'TEST2':44}, ignore_index=True) </code></pre>
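<p>A sketch of an alternative that sidesteps repeated <code>append</code> calls altogether: collect the rows in a plain list and build the frame once.</p> <pre><code>import pandas as pd

rows = [{'TEST1': 11, 'TEST2': 22},
        {'TEST1': 33, 'TEST2': 44}]
df_test = pd.DataFrame(rows, columns=['TEST1', 'TEST2'])
html = df_test.to_html()  # now contains both data rows
</code></pre>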
python|pandas
1
2,510
59,416,760
How to write pandas dataframe into Databricks dbfs/FileStore?
<p><a href="https://i.stack.imgur.com/Waxvu.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Waxvu.png" alt="enter image description here"></a><a href="https://i.stack.imgur.com/YSh53.png" rel="noreferrer"><img src="https://i.stack.imgur.com/YSh53.png" alt="enter image description here"></a>I'm new to the Databricks, need help in writing a pandas dataframe into databricks local file system. </p> <p>I did search in google but could not find any case similar to this, also tried the help guid provided by databricks (attached) but that did not work either. Attempted the below changes to find my luck, the commands goes just fine, but the file is not getting written in the directory (expected wrtdftodbfs.txt file gets created)</p> <ol> <li><code>df.to_csv("/dbfs/FileStore/NJ/wrtdftodbfs.txt")</code></li> </ol> <p>Result: throws the below error</p> <blockquote> <p>FileNotFoundError: [Errno 2] No such file or directory: '/dbfs/FileStore/NJ/wrtdftodbfs.txt'</p> </blockquote> <ol start="2"> <li><code>df.to_csv("\\dbfs\\FileStore\\NJ\\wrtdftodbfs.txt")</code></li> </ol> <p>Result: No errors, but nothing written either</p> <ol start="3"> <li><code>df.to_csv("dbfs\\FileStore\\NJ\\wrtdftodbfs.txt")</code></li> </ol> <p>Result: No errors, but nothing written either</p> <ol start="4"> <li><code>df.to_csv(path ="\\dbfs\\FileStore\\NJ\\",file="wrtdftodbfs.txt")</code></li> </ol> <blockquote> <p>Result: TypeError: to_csv() got an unexpected keyword argument 'path'</p> </blockquote> <ol start="5"> <li><code>df.to_csv("dbfs:\\FileStore\\NJ\\wrtdftodbfs.txt")</code></li> </ol> <p>Result: No errors, but nothing written either</p> <ol start="6"> <li><code>df.to_csv("dbfs:\\dbfs\\FileStore\\NJ\\wrtdftodbfs.txt")</code></li> </ol> <p>Result: No errors, but nothing written either</p> <p>The directory exists and the files created manually shows up but pandas to_csv never writes nor error out.</p> <pre><code>dbutils.fs.put("/dbfs/FileStore/NJ/tst.txt","Testing file creation and existence") dbutils.fs.ls("dbfs/FileStore/NJ") </code></pre> <blockquote> <p>Out[186]: [FileInfo(path='dbfs:/dbfs/FileStore/NJ/tst.txt', name='tst.txt', size=35)]</p> </blockquote> <p>Appreciate your time and pardon me if the enclosed details are not clear enough.</p>
<p>Try this in your Databricks notebook:</p> <pre><code>import pandas as pd from io import StringIO data = """ CODE,L,PS 5d8A,N,P60490 5d8b,H,P80377 5d8C,O,P60491 """ df = pd.read_csv(StringIO(data), sep=',') #print(df) df.to_csv('/dbfs/FileStore/NJ/file1.txt') pandas_df = pd.read_csv("/dbfs/FileStore/NJ/file1.txt", header='infer') print(pandas_df) </code></pre>
python|pandas|dataframe|amazon-s3|databricks
9
2,511
57,170,571
Python - GeoPandas Does Not Work After Opening .DXF With Adobe Illustrator
<p>I'm attempting to plot a CAD file (.dxf) using GeoPandas then save it as a KML file. When I attempt to do so - the CAD file ends up showing up in the wrong place (in the middle of the ocean - when it should be in Florida). The strange part is this only occurs after opening the .dxf then saving it with Adobe Illustrator (in order to perform cleanup). If I run the same process without opening and saving with Illustrator - the files plot correctly. </p> <p>I've done a considerable amount of research - but it appears I'm doing everything correctly using GeoPandas (I've reduced my code to the following few lines for simplicity - but the result is the same - once the .dxf has been opened with Illustrator - it ends up in the middle of the ocean when opening the .kml!) </p> <pre><code>import geopandas as gpd from geopandas import GeoDataFrame import os import fiona from fiona.crs import from_epsg # Enable Fiona KML driver gpd.io.file.fiona.drvsupport.supported_drivers['KML'] = 'rw' # Read (and display) Data from CAD File plano = gpd.read_file('C:/Users/dev/Desktop/ ... 2000.dxf') # Add the Coordinate Reference System plano.crs = {'init':'epsg:3517'} plano.plot() # Write KML file with fiona.Env(): plano.to_file('C:/Users/dev/Desktop/ ... /2000.kml', driver='KML') </code></pre> <p><a href="https://i.stack.imgur.com/GxBi5.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GxBi5.jpg" alt="enter image description here"></a></p> <p>I have no idea why this is happening - any suggestions are GREATLY appreciated. </p>
<p>The fix / solution to this issue is to use an Adobe Illustrator plugin which allows for the preservation of GIS / Geo-spatial data. We've decided to use: <a href="https://www.avenza.com/mapublisher/" rel="nofollow noreferrer">https://www.avenza.com/mapublisher/</a></p> <p>Thank you to everyone who provided input regarding this issue. </p>
python|gis|geopandas|epsg
0
2,512
35,568,605
Why is the cumulative sum not being carried over in the following numerical integration to calculate the area between two curves?
<p>Description:</p> <p>In the following python code, I am producing a Gaussian PDF, namely p(y). I am trying to find the area confined between the curve and any horizontal line in the range of [min_p, max_p] through the method of rectangular summation. My main problem is in the implementation of the function that is supposed to calculate this area iteratively for any particular element in p array (as defined in the code) and plot it as a function of a given value in the above range.</p> <p>Issue:</p> <p>Code runs perfectly but it is not producing the desired monotonically increasing/decreasing function.</p> <pre><code>import matplotlib from scipy.stats import norm from scipy import integrate import matplotlib.pyplot as pyplot import numpy as np N = 10 # Number of sigmas away from central value M, K = 2**10, 2**10 # Number of grid points along y and p(y) mean, sigma = 10.0, 1.0 #Mean value and standard deviation of a Gaussian probability distribuiton (PDF) ymin, ymax = -N*sigma+mean, N*sigma+mean #Minimum and maximum and spacing between grid points along y-axis ylims = [ymin, ymax] y = np.linspace(ylims[0],ylims[1],M) #The values of y-axis on grid points pdf = norm.pdf(y,loc=mean,scale=sigma) # Definiton of the normalized probability distribuiton function (PDF) min_p , max_p = min(pdf), max(pdf) #The maximum, minimum value of p(y) p = np.linspace(min_p, max_p, K) #The values of p(y)-axis on grid points def Area(p): #Calculating the area under the PDF for which probability is more than pj-value for a particular jth index in pj_array above Areas = [] for yi in y.flat: for p_j in p.flat: Area = 0.0 delta_y = (ymax - ymin) / (M-1) #The spacing beteen grid points along y-axis if (pdf[yi] &gt; p_j): Area += (delta_y * pdf.sum()) else: Area += 0.0 Areas.append(Area) return Areas pyplot.plot(p, Area(p), '-') pyplot.axis([min_p, max_p, 0, 1]) pyplot.show() </code></pre>
<p>After making quite a few changes here is the code I think you are trying to produce:</p> <pre><code>from scipy.stats import norm import numpy as np import pylab as p %matplotlib inline N = 10 # Number of sigmas away from central value M, K = 2**10, 2**10 # Number of grid points along x and y mean, sigma = 10.0, 1.0 #Mean value and standard deviation of a Gaussian probability distribuiton (PDF) xmin, xmax = -N*sigma+mean, N*sigma+mean #Minimum and maximum and spacing between grid points along y-axis xx = np.linspace(xmin,xmax,M) #The values of x-axis on grid points pdf = norm.pdf(xx,loc=mean,scale=sigma ) # Definiton of the normalized probability distribuiton function (PDF) min_p , max_p = min(pdf), max(pdf) pp = np.linspace(min_p,max_p, K) #The values of y-axis on grid points delta_x = xx[1]-xx[0] # spacing beteen grid points along x-axis delta_p = pp[1]-pp[0] # spacing between grid points on y-axis def Area(p): #Calculating the area under the PDF for which probability is more than pj-value for a particular jth index in pj_array above areas = [] area=0 for i in range(nx): for pi in pp : if (pi&lt; pdf[i]): area += delta_x * delta_p areas.append(area) return areas p.subplot(121) p.plot(xx,pdf) p.subplot(122) p.plot(xx, Area(p), '-') </code></pre> <p>It shows the probability density functions and it's integral. The thing is that you don't need to actually add up little boxes of delta_x* delta_p. For each step in x (I changed your variables for clarity) you just need to add the rectangle delta_x * pdf[i]. </p> <pre><code>from scipy.stats import norm import numpy as np import pylab as p %matplotlib inline N = 10 # Number of sigmas away from central value M, K = 2**10, 2**10 # Number of grid points along x and y mean, sigma = 10.0, 1.0 #Mean value and standard deviation of a Gaussian probability distribuiton (PDF) xmin, xmax = -N*sigma+mean, N*sigma+mean #Minimum and maximum and spacing between grid points along y-axis xx = np.linspace(xmin,xmax,M) #The values of x-axis on grid points pdf = norm.pdf(xx,loc=mean,scale=sigma ) # Definiton of the normalized probability distribuiton function (PDF) delta_x = xx[1]-xx[0] # spacing between grid points along x-axis def Area(p): #Calculating the area under the PDF for which probability is more than pj-value for a particular jth index in pj_array above areas = [] area=0 for i in range(nx): area += delta_x * pdf[i] areas.append(area) return areas p.subplot(121) p.plot(xx,pdf) p.subplot(122) p.plot(xx, Area(p), '-') </code></pre> <p>To actually just do the integral above any given line p=const, you only need to subtract that fixed value from <code>pdf[i]</code> , and possibly add a conditional depending of what you mean with integral (subtract negatives or ignore them) in your particular context. </p>
python|arrays|for-loop|numpy|sympy
0
2,513
28,505,008
numpy.polyfit: How to get 1-sigma uncertainty around the estimated curve?
<p>I use numpy.polyfit to fit observations. polyfit gives me the estimated coefficients of the polynomial and can also provides me the error covariance matrix of the estimated coefficients. Fine. Now, I would like to know if there is a way to estimate the +/- 1sigma uncertainty around the estimated curve.</p> <p>I know MatLab can do it (<a href="https://stats.stackexchange.com/questions/56596/finding-uncertainty-in-coefficients-from-polyfit-in-matlab">https://stats.stackexchange.com/questions/56596/finding-uncertainty-in-coefficients-from-polyfit-in-matlab</a>) but I did not found a way to make it in python.</p>
<p>If you have enough data points, you can get with the parameter <code>cov=True</code> an estimated covariance matrix from <code>polyfit()</code>. Remember that you can write a polynomial <code>p[0]*t**n + p[1]*t**(n-1) + ... + p[n]</code> as the matrix product <code>np.dot(tt, p)</code> with <code>tt=[t**n, tt*n-1, ..., 1]</code>. <code>t</code> can be either a single value or a column vector. Since this a linear equation, with the covariance matrix <code>C_p</code> of <code>p</code>, the covariance matrix of the values is <code>np.dot(tt, np.dot(C_p tt.T))</code>.</p> <p>So a simple example</p> <pre><code>import numpy as np import matplotlib.pyplot as plt # sample data: x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]) y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0, -3.0]) n = 3 # degree of polynomial p, C_p = np.polyfit(x, y, n, cov=True) # C_z is estimated covariance matrix # Do the interpolation for plotting: t = np.linspace(-0.5, 6.5, 500) # Matrix with rows 1, t, t**2, ...: TT = np.vstack([t**(n-i) for i in range(n+1)]).T yi = np.dot(TT, p) # matrix multiplication calculates the polynomial values C_yi = np.dot(TT, np.dot(C_p, TT.T)) # C_y = TT*C_z*TT.T sig_yi = np.sqrt(np.diag(C_yi)) # Standard deviations are sqrt of diagonal # Do the plotting: fg, ax = plt.subplots(1, 1) ax.set_title("Fit for Polynomial (degree {}) with $\pm1\sigma$-interval".format(n)) ax.fill_between(t, yi+sig_yi, yi-sig_yi, alpha=.25) ax.plot(t, yi,'-') ax.plot(x, y, 'ro') ax.axis('tight') fg.canvas.draw() plt.show() </code></pre> <p>gives <img src="https://i.stack.imgur.com/x2Kic.png" alt="Polyfit with 1-sigma intervals"></p> <p>Note that calculating the complete matrix <code>C_yi</code> is computationally and memorywise not very efficient.</p> <p><strong>Update</strong> - on the request on @oliver-w a couple words on the methodology:</p> <p><code>polyfit</code> assumes that the parameters <code>x_i</code> are deterministic and <code>y_i</code> are uncorrelated random variables with the expected value <code>y_i</code> and identical variance <code>sigma</code>. So it is a linear estimation problem and an ordinary least squares method can be used. By determining the sample variance of the residues, <code>sigma</code> can be approximated. Based on<code>sigma</code> the covariance matrix of <code>pp</code> can be calculated as shown in the <a href="https://en.wikipedia.org/wiki/Least_squares#Least_squares.2C_regression_analysis_and_statistics" rel="noreferrer">Wikipedia article on Least Squares</a>. That is almost the method that <code>polyfit()</code> uses: There for <code>sigma</code> the more conservative factor <code>S/(n-m-2)</code> instead of <code>S/(n-m)</code> is used. </p>
python|numpy
8
2,514
28,571,741
Retrieve approximate Hessian inverse from L-BFGS-B
<p>With the L-BFGS-B minimizer in scipy, is it possible to retrieve the approximate inverse Hessian that's calculated internally?</p> <p>Having it in the implicit factored form, so that it's possible to compute arbitrary inverse Hessian matrix - vector products, would be fine.</p>
<p>It doesn't appear so. I'm not an expert on these algorithms but it seems that with L-BFGS specifically it is not possible. According to <a href="http://en.wikipedia.org/wiki/Limited-memory_BFGS" rel="nofollow">Wikipedia</a>:</p> <blockquote> <p>Instead of the inverse Hessian H_k, L-BFGS maintains a history of the past m updates of the position x and gradient ∇f(x), where generally the history size m can be small (often m&lt;10). These updates are used to implicitly do operations requiring the H_k-vector product.</p> </blockquote> <p>However, if you use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_bfgs.html#scipy.optimize.fmin_bfgs" rel="nofollow"><code>scipy.fmin_bfgs</code></a> it <em>does</em> return the approximate (inverse of the) Hessian matrix, at the cost of the greater memory needed to maintain it.</p>
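<p>A minimal sketch of the <code>fmin_bfgs</code> route mentioned above; with <code>full_output=True</code> the fourth return value is the dense approximation of the inverse Hessian at the solution (the quadratic test function here is just an assumption for illustration):</p> <pre><code>import numpy as np
from scipy import optimize

f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
xopt, fopt, gopt, Bopt = optimize.fmin_bfgs(f, np.zeros(2), full_output=True)[:4]
print(Bopt)  # approximate inverse Hessian
</code></pre>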
python|numpy|scipy|mathematical-optimization|hessian-matrix
2
2,515
50,674,011
Replace the year in pandas.datetime column
<p>I have a dataframe with a date column converted using pd.to_datetime(). When I inspected the data I found a few of these dates with the year given as 2216, which should have been 2016. Can you please help me change the year for these dates from 2216 to 2016?</p> <pre><code> Date 0 2216-12-21 1 2216-12-23 2 2216-01-31 3 2016-12-23 4 2216-12-27 5 2216-12-25 6 2016-12-23 </code></pre> <p>I tried using str.replace</p> <pre><code> df['Date'] = df['Date'].str.replace("2216","2016") </code></pre> <p>but got the following error</p> <pre><code> Can only use .str accessor with string values, which use np.object_ dtype in pandas </code></pre> <p>Thanks in advance</p>
<p>Use:</p> <pre><code>df['Date'] = df['Date'].mask(df['Date'].dt.year == 2216, df['Date'] + pd.offsets.DateOffset(year=2016)) print (df) Date 0 2016-12-21 1 2016-12-23 2 2016-01-31 3 2016-12-23 4 2016-12-27 5 2016-12-25 6 2016-12-23 </code></pre> <p>For better performance:</p> <pre><code>df['Date'] = df['Date'].mask(df['Date'].dt.year == 2216, df['Date'] - pd.to_timedelta(200, unit='y') + pd.to_timedelta(12, unit='h')) print (df) Date 0 2016-12-21 1 2016-12-23 2 2016-01-31 3 2016-12-23 4 2016-12-27 5 2016-12-25 6 2016-12-23 </code></pre>
python|pandas
14
2,516
51,090,580
Pandas dataframe adding zero-padding before the datetime
<p>I'm using pandas, and I have a DataFrame <code>df</code> like the following:</p> <pre><code>time id ------------- 5:13:40 1 16:20:59 2 ... </code></pre> <p>For the first row, the time <code>5:13:40</code> has no zero padding in front, and I want to convert it to <code>05:13:40</code>. So my expected <code>df</code> would be like:</p> <pre><code>time id ------------- 05:13:40 1 16:20:59 2 ... </code></pre> <p>The type of <code>time</code> is <code>&lt;class 'datetime.timedelta'&gt;</code>. Could anyone give me some hints to handle this problem? Thanks so much!</p>
<p>Use <code>pd.to_timedelta</code>:</p> <pre><code>df['time'] = pd.to_timedelta(df['time']) </code></pre> <p>Before:</p> <pre><code>print(df) time id 1 5:13:40 1.0 2 16:20:59 2.0 df.info() &lt;class 'pandas.core.frame.DataFrame'&gt; Int64Index: 2 entries, 1 to 2 Data columns (total 2 columns): time 2 non-null object id 2 non-null float64 dtypes: float64(1), object(1) memory usage: 48.0+ bytes </code></pre> <p>After:</p> <pre><code>print(df) time id 1 05:13:40 1.0 2 16:20:59 2.0 df.info() d&lt;class 'pandas.core.frame.DataFrame'&gt; Int64Index: 2 entries, 1 to 2 Data columns (total 2 columns): time 2 non-null timedelta64[ns] id 2 non-null float64 dtypes: float64(1), timedelta64[ns](1) memory usage: 48.0 bytes </code></pre>
python|pandas
1
2,517
33,147,411
Adding a pandas column without creating a list
<p>I have 2 datasets of more than 1 million rows each and I am analyzing them with pandas (they are both <code>pd.DataFrame</code> objects, noted <code>df1</code> and <code>df2</code>). I need to add a column to df1 depending on the values in df2. I used a Python list, but it is incredibly slow. Any advice on making it quicker?</p> <pre><code>import pandas as pd, numpy as np numObs = [] for line in np.array(df1): numObs.append([num for i,num,exp in df2 if i==line[0]][0]) df1['NumObs'] = pd.Series(np.array(numObs),index = df1.index) </code></pre>
<p>It's not so much that you are creating a list, but that you have a nested loop, taking you over all combinations of <code>df1</code> and <code>df2</code>. Roughly</p> <pre><code>for line in np.array(df1): numObs.append([num for i,num,exp in df2 if i==line[0]][0]) </code></pre> <p>expands to</p> <pre><code>for line in np.array(df1): for i, num, exp in df2: finds = [] if i==line[0]: finds.append(num) numObs.append(finds[0]) </code></pre> <p>Normally a list comprehension is faster than an explicit loop, but here you are throwing away most of what the inner loop finds. Simply breaking from the inner loop when a match is found, could save a lot of time (depending on how far it has to iterate in df2 to find the match.</p> <pre><code>for line in np.array(df1): for i, num, exp in df2: finds = [] if i==line[0]: numObs.append(num) break </code></pre> <p>I'm not that familiar with Pandas. Is 'i' the row count, and 'num' the value? So that if <code>line[0]</code> is 10, you want <code>df2[10]</code> (or some such expression)? One way or other you are looking up values in <code>df2</code> based on the first 'column' of <code>df1</code>, right?</p>
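<p>If <code>df2</code> really does hold one (key, value, extra) row per key, the lookup can also be written without an explicit Python loop; a sketch with hypothetical column names <code>'i'</code> and <code>'num'</code> for the key and value columns of <code>df2</code>:</p> <pre><code># Hypothetical column names: 'i' is the key column in df2, 'num' is the value to fetch.
lookup = df2.set_index('i')['num']
df1['NumObs'] = df1.iloc[:, 0].map(lookup)
</code></pre>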
python|list|numpy|pandas
0
2,518
66,685,526
Function to select pandas dataframe rows based on list of tuples of columns and cutoffs?
<p>I'm trying to create a Python function that takes 2 arguments: a pandas dataframe, and a list of tuples, where each tuple in the list has 3 elements: a column name, a min value and a max value. So each tuple represents a condition to be applied to a column in the dataframe. The function would then return a sub-dataset for which all the conditions are true.</p> <p>I have tried to create boolean conditions by looping over each tuple in the list, but then I couldn't figure out how to make the function return a selection based on all the conditions being true, also because I couldn't give appropriate names to each condition since I'm looping on the tuples and could not change the names of the conditions on each pass of the loop.</p> <p>I think I'm not approaching it in the correct way. Do you have any suggestions?</p>
<h1>Dynamic Query function</h1> <p>Since you want to check for all the conditions, these will be AND. So we can start filtering them one by one.</p> <pre><code>import pandas as pd def sub_df(dx,cuts): for cx in cuts: col = cx[0] minval = cx[1] maxval = cx[2] dx = dx[(dx[col] &gt;= minval) &amp; (dx[col] &lt;= maxval)] #or you can also give it like this # #dx = dx[dx[col].between(minval, maxval)] # return dx df = pd.DataFrame( {&quot;A&quot;: [100, 200, 300, 400],&quot;B&quot;: [10,20,30,40], &quot;C&quot;: [200, 400, 600, 800],&quot;D&quot;: [20,40,60,80], &quot;E&quot;: [150, 300, 450, 600],&quot;F&quot;: [15,30,45,60], &quot;G&quot;: [500, 600, 700, 800],&quot;H&quot;: [50,60,70,80]}) print (df) cutoffs = [('A',150, 350),('G',650, 750)] df1 = sub_df(df,cutoffs) print (df1) cutoffs = [('B',10, 30),('C',50, 350),('F',10, 50)] df1 = sub_df(df,cutoffs) print (df1) cutoffs = [('B',10, 30),('D',50, 100),('H',10, 50)] df1 = sub_df(df,cutoffs) print (df1) </code></pre> <p>Outputs for these are as follows:</p> <p>Original DataFrame:</p> <pre><code> A B C D E F G H 0 100 10 200 20 150 15 500 50 1 200 20 400 40 300 30 600 60 2 300 30 600 60 450 45 700 70 3 400 40 800 80 600 60 800 80 </code></pre> <p>Results for condition 1: <code>[('A',150, 350),('G',650, 750)]</code></p> <pre><code> A B C D E F G H 2 300 30 600 60 450 45 700 70 </code></pre> <p>Results for condition 2: <code>[('B',10, 30),('C',50, 350),('F',10, 50)]</code></p> <pre><code> A B C D E F G H 0 100 10 200 20 150 15 500 50 </code></pre> <p>Results for condition 3: <code>[('B',10, 30),('D',50, 100),('H',10, 50)]</code></p> <pre><code>Empty DataFrame Columns: [A, B, C, D, E, F, G, H] Index: [] </code></pre> <h1>Prev Answer</h1> <p>I think you are looking for this:</p> <pre><code>import pandas as pd def sub_df(dx,tup_vals): return dx[(dx[tup_vals[0]] &gt;= tup_vals[1]) &amp; (dx[tup_vals[0]] &lt;= tup_vals[2])] </code></pre> <p>Here <code>dx</code> is the dataframe passed to the function <code>tup_vals</code> will have <code>(colname,min,max)</code></p> <p>Example of usage of this function:</p> <pre><code>df = pd.DataFrame( {&quot;A&quot;: [200, 400, 600, 800],&quot;B&quot;: [10,20,30,40]}) print (df) tups = ('A',300, 700) df1 = sub_df(df,tups) print (df1) </code></pre> <p>Output of this will be:</p> <p>Original DF:</p> <pre><code> A B 0 200 10 1 400 20 2 600 30 3 800 40 </code></pre> <p>Returned DF: (values in col A between 300 and 700)</p> <pre><code> A B 1 400 20 2 600 30 </code></pre>
python|pandas
2
2,519
16,207,023
Python pandas read_csv like functionality from list to a DataFrame?
<p>I have a list with values like the following:</p> <pre><code>[['2013-04-02 19:42:00.474', '1'], ['2013-04-02 19:42:00.529', '2'], ['2013-04-02 19:42:00.543', '3'], ['2013-04-02 19:42:00.592', '4'], ['2013-04-02 19:42:16.671', '5'], ['2013-04-02 19:42:16.686', '6'], ['2013-04-02 19:42:16.708', '7'], ['2013-04-02 19:42:16.912', '8'], ['2013-04-02 19:42:16.941', '9'], ['2013-04-02 19:42:19.721', '10'], ['2013-04-02 19:42:22.826', '11'], ['2013-04-02 19:42:25.609', '8'], ['2013-04-02 19:42:58.225', '5']] </code></pre> <p>I know if this were in a csv file, I could read it into a DataFrame with the Date and Timestamps put into index to make the DataFrame a time series.</p> <p>How to achieve this without saving the list to a csv file?</p> <p>I tried df=pd.DataFrame(tlist, columns=['date_time', 'count']) and then df=df.set_index('date_time')</p> <p>But the index values are coming out as objects, rather than TimeStamps:</p> <pre><code>df.index Index([2013-04-02 19:42:00.474, 2013-04-02 19:42:00.529, 2013-04-02 19:42:00.543, ............], dtype=object) </code></pre>
<pre><code>In [40]: df.index = df.index.to_datetime() In [41]: df.index Out[41]: &lt;class 'pandas.tseries.index.DatetimeIndex'&gt; [2013-04-02 19:42:00.474000, ..., 2013-04-02 19:42:58.225000] Length: 13, Freq: None, Timezone: None </code></pre>
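<p>Equivalently, and on newer pandas versions where <code>Index.to_datetime</code> may no longer be available, the conversion can be done before setting the index; this sketch assumes the list of pairs is named <code>tlist</code> as in the question:</p> <pre><code>import pandas as pd

df = pd.DataFrame(tlist, columns=['date_time', 'count'])
df['date_time'] = pd.to_datetime(df['date_time'])
df = df.set_index('date_time')
print(df.index.dtype)  # datetime64[ns]
</code></pre>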
python|datetime|csv|pandas
3
2,520
16,329,218
Face Recognition - How to return the correct image?
<p>I am trying to make hand gesture recognition (similar to face recognition) using Principal Component Analysis(PCA) in python. I have a Test image and I want to get its nearest match from a set of Training images.</p> <p>Here is my code:</p> <pre><code>import os, sys import numpy as np import PIL.Image as Image def read_images(path, sz=None): c = 0 X,y = [], [] for dirname, dirnames, filenames in os.walk(path): for subdirname in dirnames: subject_path = os.path.join(dirname, subdirname) for filename in os.listdir(subject_path): try: im = Image.open(os.path.join(subject_path, filename)) im = im.convert("L") # resize to given size (if given) if (sz is not None): im = im.resize(sz, Image.ANTIALIAS) X.append(np.asarray(im, dtype=np.uint8)) y.append(c) except IOError: print "I/O error({0}): {1}".format(errno, strerror) except: print "Unexpected error:", sys.exc_info()[0] raise c = c+1 return [X,y] def asRowMatrix(X): if len(X) == 0: return np.array([]) mat = np.empty((0, X[0].size), dtype=X[0].dtype) for row in X: mat = np.vstack((mat, np.asarray(row).reshape(1,-1))) return mat def asColumnMatrix(X): if len(X) == 0: return np.array([]) mat = np.empty((X[0].size, 0), dtype=X[0].dtype) for col in X: mat = np.hstack((mat, np.asarray(col).reshape(-1,1))) return mat def pca(X, y, num_components=0): [n,d] = X.shape if (num_components &lt;= 0) or (num_components&gt;n): num_components = n mu = X.mean(axis=0) X = X - mu if n&gt;d: C = np.dot(X.T,X) [eigenvalues,eigenvectors] = np.linalg.eigh(C) else: C = np.dot(X,X.T) [eigenvalues,eigenvectors] = np.linalg.eigh(C) eigenvectors = np.dot(X.T,eigenvectors) for i in xrange(n): eigenvectors[:,i] = eigenvectors[:,i]/np.linalg.norm(eigenvectors[:,i]) # or simply perform an economy size decomposition # eigenvectors, eigenvalues, variance = np.linalg.svd(X.T, full_matrices=False) # sort eigenvectors descending by their eigenvalue idx = np.argsort(-eigenvalues) eigenvalues = eigenvalues[idx] eigenvectors = eigenvectors[:,idx] # select only num_components eigenvalues = eigenvalues[0:num_components].copy() eigenvectors = eigenvectors[:,0:num_components].copy() return [eigenvalues, eigenvectors, mu, X] #Get eigenvalues, eigenvectors, mean and shifted images (Training) [a, b] = read_images('C:\\Users\\Karim\\Desktop\\Training &amp; Test images\\AT&amp;T\\att_faces', (90,90)) [evalues, evectors, mean_image, shifted_images] = pca(asRowMatrix(a), b) #Input(Test) image input_image = Image.open('C:\\Users\\Karim\\Desktop\\Training &amp; Test images\\AT&amp;T\\Test\\4.pgm').convert('L').resize((90, 90)) input_image = np.asarray(input_image).flatten() #Normalizing input image shifted_in = input_image - mean_image #Finding weights w = evectors.T * shifted_images w = np.asarray(w) w_in = evectors.T * shifted_in w_in = np.asarray(w_in) #Euclidean distance df = np.asarray(w - w_in) # the difference between the images dst = np.sqrt(np.sum(df**2, axis=1)) # their euclidean distances </code></pre> <p>Now I have an array of distances <code>dst</code> containing the euclidean distance between the Test image and each image in the set of Training image.</p> <p>How to get the image with the nearest (minimum) distance and its path (or subdirectory name)? Not the value of the minimum distance nor its index in the array <code>dst</code></p>
<p><code>dst.argmin()</code> will tell you the index of the element in <code>dst</code> which is smallest.</p> <p>So the closest image would be</p> <pre><code>idx = dst.argmin() closest = a[idx] </code></pre> <p>since <code>a</code> is a list of arrays representing training faces.</p> <p>To display the closest image, you could use:</p> <pre><code>img = Image.fromarray(closest, 'L') img.show() </code></pre> <hr> <p>To find the file path of the closest image, I would alter <code>read_images</code> to return a list of all the file paths, so it could be indexed just like the list of images.</p> <pre><code>def read_images(path, sz=None): X, y = [], [] for dirname, dirnames, filenames in os.walk(path): for filename in filenames: subject_path = os.path.join(dirname, filename) try: im = Image.open(subject_path) except IOError as err: print "I/O error: {e}: {f}".format(e=err, f=subject_path) except: print "Unexpected error:", sys.exc_info()[0] raise else: im = im.convert("L") # resize to given size (if given) if (sz is not None): im = im.resize(sz, Image.ANTIALIAS) X.append(np.asarray(im, dtype=np.uint8)) y.append(subject_path) return [X, y] </code></pre> <p>Below, call it like this:</p> <pre><code>images, paths = read_images(TRAINING_DIR, (90, 90)) </code></pre> <p>You can then obtain the full path to the closest image with</p> <pre><code>idx = dst.argmin() filename = paths[idx] </code></pre> <p>If you want just the path to the subdirectory, use </p> <pre><code>os.path.dirname(filename) </code></pre> <p>And for the name of the subdirectory, use</p> <pre><code>os.path.basename(os.path.dirname(filename)) </code></pre>
python|image-processing|numpy|face-recognition|pca
3
2,521
57,662,437
How to check if every item in a pandas column of lists is an int?
<p>I have a pandas column of lists. I need to check if every item in those lists are ints. </p> <p>For a regular list, I can find if an item is an int using</p> <pre><code>all(isinstance(x, int) for x in lst) </code></pre> <p>and for a regular pandas column, I can check if they're all ints using </p> <pre><code>df.loc[~df['Field1'].str.isdigit(), 'Field1'] </code></pre> <p>But what if the column contains a list in each row? </p> <p>Edit:</p> <p>He is a minimal reproducable example</p> <pre><code>A = np.random.randint(0,40,20) B = [np.random.randint(0,40,k) for k in np.random.randint(2,20,20)] A32 = A.astype(np.int32) from itertools import chain sizes = np.fromiter(chain((0,),map(len,B)),np.int32,len(B)+1) boundaries = sizes.cumsum() # force int32 B_all = np.empty(boundaries[-1],np.int32) B32 = np.split(B_all, boundaries[1:-1]) df = pd.DataFrame([A32, B32]).T df[1] = df[1].apply(lambda x: x.tolist() ) df.columns = ['a', 'b'] df.at[10,'b'] = [ 3, 5, 2, 1, 'a', 4, 4] </code></pre>
<p>You can use <code>apply</code> with your current list check:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import random # create random df x = [{'A': [random.randint(0,300) for i in range(10)]} for i in range(10)] df = pd.DataFrame(x) df.A.apply(lambda x: all(isinstance(y, int) for y in x)) 0 True 1 True 2 True 3 True 4 True 5 True 6 True 7 True 8 True 9 True # add non-int row x = [{'A': [random.randint(0,300) for i in range(10)]} for i in range(10)] + [{'A':[chr(a) for a in range(100,120)]}] df = pd.DataFrame(x) df.A.apply(lambda x: all(isinstance(y, int) for y in x)) 0 True 1 True 2 True 3 True 4 True 5 True 6 True 7 True 8 True 9 True 10 False Name: A, dtype: bool </code></pre>
python|pandas
2
2,522
24,183,101
Pandas: Bar-Plot with two bars and two y-axis
<p>I have a DataFrame looking like this:</p> <pre><code> amount price age A 40929 4066443 B 93904 9611272 C 188349 19360005 D 248438 24335536 E 205622 18888604 F 140173 12580900 G 76243 6751731 H 36859 3418329 I 29304 2758928 J 39768 3201269 K 30350 2867059 </code></pre> <p>Now I'd like to plot a bar-plot with the age on the x-axis as labels. For each x-tick there should be two bars, one bar for the amount, and one for the price. I can get this working by using simply:</p> <pre><code>df.plot(kind='bar') </code></pre> <p>The problem is the scaling. The prices are so much higher that I can not really identify the amount in that graph, see:</p> <p><img src="https://i.stack.imgur.com/8PZSi.png" alt="enter image description here"></p> <p>Thus I'd like a second y-axis. I tried it using:</p> <pre><code>df.loc[:,'amount'].plot(kind='bar') df.loc[:,'price'].plot(kind='bar',secondary_y=True) </code></pre> <p>but this just overwrites the bars and does NOT place them side-by-side. Is there any way to do this without having to access the lower-level matplotlib (which would be possible obviously by placing the bars side by side manually)?</p> <p>For now, I'm using two single plots within subplots:</p> <pre><code>df.plot(kind='bar',grid=True,subplots=True,sharex=True); </code></pre> <p>resulting in:</p> <p><img src="https://i.stack.imgur.com/oSz6a.png" alt="enter image description here"></p>
<p>Using the new pandas release (0.14.0 or later) the below code will work. To create the two axis I have manually created two matplotlib axes objects (<code>ax</code> and <code>ax2</code>) which will serve for both bar plots.</p> <p>When plotting a Dataframe you can choose the axes object using <code>ax=...</code>. Also in order to prevent the two plots from overlapping I have modified where they align with the <code>position</code> keyword argument, this defaults to <code>0.5</code> but that would mean the two bar plots overlapping.</p> <pre><code>import matplotlib.pyplot as plt import numpy as np import pandas as pd from io import StringIO s = StringIO(""" amount price A 40929 4066443 B 93904 9611272 C 188349 19360005 D 248438 24335536 E 205622 18888604 F 140173 12580900 G 76243 6751731 H 36859 3418329 I 29304 2758928 J 39768 3201269 K 30350 2867059""") df = pd.read_csv(s, index_col=0, delimiter=' ', skipinitialspace=True) fig = plt.figure() # Create matplotlib figure ax = fig.add_subplot(111) # Create matplotlib axes ax2 = ax.twinx() # Create another axes that shares the same x-axis as ax. width = 0.4 df.amount.plot(kind='bar', color='red', ax=ax, width=width, position=1) df.price.plot(kind='bar', color='blue', ax=ax2, width=width, position=0) ax.set_ylabel('Amount') ax2.set_ylabel('Price') plt.show() </code></pre> <p><img src="https://i.stack.imgur.com/cmffl.png" alt="Plot"></p>
python|matplotlib|plot|pandas
96
2,523
43,894,828
Group data based on column name pandas
<p>In the example below, I want to first sort based on UID and then the TSTAMP for each TID.</p> <p>In this context, here is a minimal working example I generated:</p> <pre><code>df = pd.read_csv(dataset_path, names = ['TID','UID','TSTAMP'], delimiter=';') df = df.sort_values(by=['TID'], ascending=[True]) print df #print df.groupby('UID').describe() </code></pre> <p>However, this does not <code>groupby('UID')</code> the way want to sort it.</p> <pre><code> TID UID TSTAMP 22267 77 (!?} 1494417075666 9095 77 U|X^ 1494415815098 15266 77 ~Mb{ 1494416401082 15263 77 ~Mb{ 1494416401061 15255 77 Qh9~ 1494416398799 15252 77 Qh9~ 1494416398786 15239 77 xF#u 1494416397542 15236 77 xF#u 1494416397540 9105 77 U|X^ 1494415815197 </code></pre> <p>Something like this as the final result:</p> <pre><code> TID UID TSTAMP 22267 77 (!?} 1494417075666 15263 77 ~Mb{ 1494416401061 15266 77 ~Mb{ 1494416401082 15252 77 Qh9~ 1494416398786 15255 77 Qh9~ 1494416398799 9095 77 U|X^ 1494415815098 9105 77 U|X^ 1494415815197 15236 77 xF#u 1494416397540 15239 77 xF#u 1494416397542 </code></pre> <p>I'm a learning <code>pandas</code>.. any help will be appreciated.</p> <p>Thanks to @jezrael, here is the final solution</p> <pre><code>df = pd.read_csv(dataset_path, names = ['TID','UID','TSTAMP'], delimiter=';') df = df.sort_values(['TID', 'TSTAMP', 'UID'], ascending=[True, False, True]) d = {'min':'TSTAMP-INIT','max':'TSTAMP-FIN'} df = df.groupby(['UID','TID'])['TSTAMP'].agg([min, max]).reset_index().rename(columns=d) for i, row in df.T.iteritems(): print row </code></pre>
<p>It seems you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>sort_values</code></a>:</p> <pre><code>df = df.sort_values(['TID', 'TSTAMP', 'UID'], ascending=[True, False, True]) print (df) TID UID TSTAMP 22267 77 (!?} 1494417075666 15266 77 ~Mb{ 1494416401082 15263 77 ~Mb{ 1494416401061 15255 77 Qh9~ 1494416398799 15252 77 Qh9~ 1494416398786 15239 77 xF#u 1494416397542 15236 77 xF#u 1494416397540 9105 77 U|X^ 1494415815197 9095 77 U|X^ 1494415815098 </code></pre> <p>If first column is not necessary sort, omit it:</p> <pre><code>df = df.sort_values(['TSTAMP', 'UID'], ascending=[False, True]) print (df) TID UID TSTAMP 22267 77 (!?} 1494417075666 15266 77 ~Mb{ 1494416401082 15263 77 ~Mb{ 1494416401061 15255 77 Qh9~ 1494416398799 15252 77 Qh9~ 1494416398786 15239 77 xF#u 1494416397542 15236 77 xF#u 1494416397540 9105 77 U|X^ 1494415815197 9095 77 U|X^ 1494415815098 </code></pre>
python|pandas
2
2,524
43,879,875
concat specific rows of two pandas dataframe using data in two columns as reqs
<p>I have two dataframes DF1 and DF2, where </p> <p>both have subframes "data" and "metadata," and DF1 has substantially more rows than DF2</p> <pre><code>DF1 DATA METADATA 0 1 2 3 4 5 attr1 attr2 .. attrN 11 1 1 1 1 1 1 000 apple 13 1 1 1 1 1 1 140 orange 19 1 1 1 1 1 1 199 pineapple 25 1 5 1 1 1 2 000 apple .. DF2 DATA METADATA x y z k attr1 attr2 .. attrK 000 2 2 2 2 000 bean 001 2 2 2 2 001 bean 002 2 2 2 2 002 bean 003 2 2 2 2 003 bean .. 199 2 2 2 2 199 bean 200 2 2 2 2 000 orange 201 2 2 2 2 001 orange .. 340 1 2 3 4 140 orange .. 500 4 3 2 1 000 apple .. 700 2 2 2 2 350 bread .. 999 5 5 5 5 199 pineapple </code></pre> <p>I want to concatenate columnwise specific rows in DF2 to rows in DF1, based off attributes in DF2. </p> <p>Specifically:</p> <p>For every row in DF1, I want to concatenate just the DATA from the row in DF2 such that the entry in DF1.METADATA.attr1 &amp; DF2.METADATA.attr1 and DF1.METADATA.attr2 &amp; DF2.METADATA.attr2 are the same, for each row. The result here would be:</p> <pre><code> DF3 (desired result) DATA METADATA 0 1 2 3 4 5 x y z k attr1 attr2 .. attr N 11 1 1 1 1 1 1 4 3 2 1 000 apple 13 1 1 1 1 1 1 1 2 3 4 140 orange 19 1 1 1 1 1 1 5 5 5 5 199 pineapple 25 1 5 1 1 1 2 4 3 2 1 000 apple </code></pre> <p>I have managed to do it by looping through, but I get a terrible runtime and having a lot of data I need to make it run faster and there should be a quick and easy way to do this through pandas (i think!)</p>
<p>It sounds like you want to do a merge on attr1, something like:</p> <pre><code>df1.merge(df2, how='left') </code></pre> <p>For example (slightly tweaked):</p> <pre><code>In [11]: df1 Out[11]: DATA METADATA 0 1 2 3 4 5 attr1 attr2 11 1 1 1 1 1 1 0 bean 13 1 1 1 1 1 1 140 orange 19 1 1 1 1 1 1 199 pineapple 25 1 5 1 1 1 2 0 apple In [12]: df2 Out[12]: DATA METADATA x y z k attr1 attr2 0 2 2 2 2 0 bean 1 7 2 2 2 0 apple 2 2 2 2 2 2 bean 3 7 2 2 2 3 bean In [13]: df1.merge(df2, how="left") Out[13]: DATA METADATA DATA 0 1 2 3 4 5 attr1 attr2 x y z k 0 1 1 1 1 1 1 0 bean 2.0 2.0 2.0 2.0 1 1 1 1 1 1 1 140 orange NaN NaN NaN NaN 2 1 1 1 1 1 1 199 pineapple NaN NaN NaN NaN 3 1 5 1 1 1 2 0 apple 7.0 2.0 2.0 2.0 </code></pre> <p>Note: this merges on the shared columns, in this case METADATA attr1 and attr2. See the <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging" rel="nofollow noreferrer">merge section of the docs</a>.</p>
python|python-3.x|pandas
0
2,525
43,771,023
Interpolate a curve on itself using NumPy
<p>I have the following curve as two arrays, of x and y positions. </p> <p><img src="https://i.stack.imgur.com/yS0Bp.png" alt="curve"></p> <p>Imagine if you were to draw vertical lines going through each point, and add points on the curve wherever these lines intersect the curve. This is what I want. </p> <p>I tried using <code>np.interp(x, x, y)</code>, but I ended up with the following mess: </p> <p><img src="https://i.stack.imgur.com/lGZMX.png" alt="mess"></p> <p>How can I do this? Is it possible with <code>np.interp</code>? </p> <p>This might be something that should be asked in a different question, but I would also like there to be points added where the curve crosses over itself.</p>
<p>According to the <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.interp.html" rel="nofollow noreferrer">docs</a> the array of X values should be sorted (or periodic), otherwise "the result is nonsense". You can try to split your curve into sections, and then interpolate each part on the others. You can find the correct splitting places by looking at where <code>np.diff(x)</code> changes sign.</p>
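<p>In case it is useful, a rough sketch of that splitting idea could look like the following -- it is only a starting point, and both the way the turning points are detected and the choice of query points are assumptions on my part:</p> <pre><code>import numpy as np

# x, y describe the curve, in the order it is drawn
dx = np.diff(x)
turns = np.where(np.sign(dx[1:]) != np.sign(dx[:-1]))[0] + 1   # indices where x reverses direction
bounds = np.concatenate(([0], turns, [len(x) - 1]))

new_x, new_y = [], []
for a, b in zip(bounds[:-1], bounds[1:]):
    xs, ys = x[a:b + 1], y[a:b + 1]            # one monotonic piece (endpoints shared)
    if xs[0] &gt; xs[-1]:                         # np.interp needs increasing x
        xs, ys = xs[::-1], ys[::-1]
    q = x[(x &gt;= xs.min()) &amp; (x &lt;= xs.max())]   # original x positions covered by this piece
    new_x.append(q)
    new_y.append(np.interp(q, xs, ys))

new_x, new_y = np.concatenate(new_x), np.concatenate(new_y)
</code></pre>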
python|numpy|interpolation
0
2,526
73,159,568
Multi-channel, 2D mask weights using BCEWithLogitsLoss in Pytorch
<p>I have a set of 256x256 images that are each labeled with nine, binary 256x256 masks. I am trying to calculate the <code>pos_weight</code> in order to weight the <code>BCEWithLogitsLoss</code> using Pytorch.</p> <p>The shape of my masks tensor is <code>tensor([1000, 9, 256, 256])</code> where 1000 is the number of training images, 9 is the number of mask channels (all encoded to 0/1), and 256 is the size of each image side.</p> <p>To calculate pos_weight, I have summed the zeros in each mask, and divided that number by the sum of all of the ones in each mask (following the advice suggested <a href="https://discuss.pytorch.org/t/shape-for-multiple-channel-2d-mask-weights-using-bcewithlogitsloss/157643" rel="nofollow noreferrer">here</a>.):</p> <pre class="lang-py prettyprint-override"><code>(masks[:,channel,:,:]==0).sum()/masks[:,channel,:,:].sum() </code></pre> <p>Calculating the weight for every mask channel provides a tensor with the shape of <code>tensor([9])</code>, which seems intuitive to me, since I want a pos_weight value for each of the nine mask channels. However when I try to fit my model, I get the following error message:</p> <pre class="lang-py prettyprint-override"><code>RuntimeError: The size of tensor a (9) must match the size of tensor b (256) at non-singleton dimension 3 </code></pre> <p>This error message is surprising because it suggests that the weights need to be the size of one of the image sides, but not the number of mask channels. What shape should <code>pos_weight</code> be and how do I specify that it should be providing weights for the mask channels instead of the image pixels?</p>
<p>TLDR; This is a broadcasting issue which is surprisingly not handled by PyTorch's <a href="https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html?highlight=bce#torch.nn.BCEWithLogitsLoss" rel="nofollow noreferrer"><code>nn.BCEWithLogitsLoss</code></a> namely <a href="https://github.com/pytorch/pytorch/blob/master/torch/nn/functional.py#L3092" rel="nofollow noreferrer"><code>F.binary_cross_entropy_with_logits</code></a>. It might actually be worth putting out a Github issue linking to this SO thread to notify the developers of this undesirable behaviour.</p> <p>In the documentation page of <a href="https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html?highlight=bce#torch.nn.BCEWithLogitsLoss" rel="nofollow noreferrer"><code>nn.BCEWithLogitsLoss</code></a>, it is stated that the provided positive weights tensor <code>pos_weight</code>:</p> <blockquote> <p>Must be a vector with <em><strong>length</strong></em> equal to the number of classes.</p> </blockquote> <p>This is of course what you were expecting (rightly so) since positive weights refer to the weight given to the positive instances <em>for every single</em> class. Since your prediction and target tensors are multi-dimensional this seems to not be handled properly by PyTorch.</p> <hr /> <p>Anyhows, here is a minimal example showing how you can bypass this error and also showing the manual computation of the binary cross-entropy, as reference.</p> <p>Here is the setup of the prediction and target tensors <code>pred</code> and <code>label</code> respectively:</p> <pre><code>&gt;&gt;&gt; c=2;b=5;h=3;w=3 &gt;&gt;&gt; pred = torch.rand(b,c,h,w) &gt;&gt;&gt; label = torch.randint(0,2, (b,c,h,w), dtype=float) </code></pre> <p>Now for the definition of the positive weight, notice the leading singletons dimensions:</p> <pre><code>&gt;&gt;&gt; pos_weight = torch.rand(c,1,1) </code></pre> <p>In your case, with your existing 1D tensor of length <code>c</code>, you would simply have to unsqueeze two extra dimensions for the height and width dimensions. This means doing something like: <code>pos_weight = pos_weight[:,None,None]</code>.</p> <p>Calling the bce with logits function or its oop equivalent:</p> <pre><code>&gt;&gt;&gt; F.binary_cross_entropy_with_logits(pred, label, pos_weight=pos_weight).mean() </code></pre> <p>Which is equivalent, in plain code to:</p> <pre><code>&gt;&gt;&gt; z = torch.sigmoid(pred) &gt;&gt;&gt; bce = -(pos_weight*label*torch.log(z) + (1-label)*torch.log(1-z)) </code></pre> <p>Note, that the built-in function would have the desired behaviour (<em>i.e.</em> no error message) if the <em>class</em> dimension was last in your prediction and target tensors.</p> <pre><code>&gt;&gt;&gt; pos_weight = torch.rand(c) &gt;&gt;&gt; F.binary_cross_entropy_with_logits( ... pred.transpose(1,-1), ... label.transpose(1,-1), ... pos_weight=pos_weight) </code></pre> <p>In other words, we are applying the function with format <code>NHWC</code> which means the <code>pos_weight</code> of format <code>C</code> can be multiplied properly. So the result above effectively yields the same result as:</p> <pre><code>&gt;&gt;&gt; F.binary_cross_entropy_with_logits( ... pred, ... label, ... pos_weight=pos_weight[:,None,None]) </code></pre> <p>You can read more about the <code>pos_weight</code> in <code>BCEWithLogitsLoss</code> <a href="https://stackoverflow.com/questions/68611397/pos-weight-in-binary-cross-entropy-calculation/68769210#68769210">in another thread here</a></p>
python|deep-learning|pytorch|loss-function|weighted
1
2,527
73,146,875
How to select values out of many in pandas dataframe using conditions?
<p>I have a CSV with multiple values for a single value and I have to filter them out based on several conditions. Below is an example of my data.</p> <pre><code>df1 = pd.DataFrame( data=[['Afghanistan','2.7;2.7','27.0;26.7','','22.9;22.8'], ['Bahrain','6.3;6.3;6.4','13.0;13.0;13.0','16.8;17.0',''], ['Djibouti','3.0;3.0;3.0','2.0','','23.1;24']], columns=['Country', '2019', '2018', '2017', '2016']) </code></pre> <p>Following are the conditions to use to filter:</p> <ol> <li>if the values are duplicated, select one.</li> <li>if the values differ and the difference is less than 0.5, for eg. 26.7 and 27.0, we select 26.7 as we want to preserve the precision and would discard the rounding offs eg. 6.7 and 6.8, preserving both as both give precision. However, this contradicts the 0.5 rule, so taking any is also okay</li> <li>If the values differ and the difference is more than 0.5, select both eg. 23.1 and 24, select both</li> </ol> <p>Below is my desired output for this example.</p> <pre><code>desired_op = pd.DataFrame( data=[['Afghanistan','2.7','26.7','','22.9;22.8'], ['Bahrain','6.3;6.4','13.0','16.8',''], ['Djibouti','3.0','2.0','','23.1;24']], columns=['Country', '2019', '2018', '2017', '2016']) </code></pre> <p>This is a small example of the dataset. To conduct this operation, I have to convert the values to numeric format first, However, the row headings (country) and column headings(year) still have to be a string. I have more than 20 columns, and more than 50 datasets so converting each column's data to numeric is also not feasible. Please Help!</p>
<p>Use the <code>apply</code> method on each column:</p> <pre class="lang-py prettyprint-override"><code>def f(x):
    a = x.split(';')
    if cond1:
        return ...
    if cond2:
        return ...
    if cond3:
        return ...

df['2019'] = df['2019'].apply(f)
...
</code></pre> <p>For your many columns you can do:</p> <pre class="lang-py prettyprint-override"><code>for i in df.columns:
    if i != 'country':
        df[i] = df[i].apply(f)
</code></pre> <p>You can also put your dataframes into a list and iterate over them with a for loop, applying the same operations to each one.</p> <p>For example, you can remove duplicates like below (returning the cell unchanged when there is nothing to collapse):</p> <pre class="lang-py prettyprint-override"><code>def f(x):
    a = x.split(';')
    if len(a) &gt; len(set(a)):
        return ';'.join(set(a))
    return x
</code></pre> <p>It returns a string value.</p>
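<p>For completeness, here is one possible way to fill in all three rules from the question -- treat it as a sketch, since the exact tie-breaking (which of two close values to keep) and the handling of non-numeric cells are assumptions on my part:</p> <pre class="lang-py prettyprint-override"><code>def collapse(cell):
    parts = [p for p in str(cell).split(';') if p]
    if not parts:
        return cell
    try:
        values = sorted(set(float(p) for p in parts))
    except ValueError:
        return cell                          # leave non-numeric cells untouched
    if len(values) == 1:                     # rule 1: duplicates -&gt; keep one
        return parts[0]
    if max(values) - min(values) &lt; 0.5:      # rule 2: close values -&gt; keep one of them
        return parts[0]
    return ';'.join(sorted(set(parts)))      # rule 3: far apart -&gt; keep the distinct values

for col in df.columns:
    if col != 'Country':
        df[col] = df[col].apply(collapse)
</code></pre>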
python|pandas|dataframe|data-cleaning
1
2,528
72,866,905
Create line from list of points while ignoring outliers
<p>I have a list of points that almost create a straight line (but they are not perfectly align on that line). I want to create a line that best describes those points.</p> <p>For example, for points:</p> <pre><code>points = [(150, 250),(180, 220), (200, 195), (225, 180), (250, 150), (275, 115), (300, 100)] </code></pre> <p>I want to create line similar to this:</p> <p><img src="https://i.stack.imgur.com/C4GTT.png" alt="enter image description here" /></p> <p>The problem is that sometimes there are points that are very far from that line (outliers). I want to ignore those outliers while creating the line:</p> <p><img src="https://i.stack.imgur.com/MdONd.png" alt="enter image description here" /></p> <p><strong>How can I create this line?</strong></p> <p>P.S. this is the code for colab to generate the points:</p> <pre><code>import numpy as np import cv2 from google.colab.patches import cv2_imshow img = np.zeros([400,500,3],dtype=np.uint8) points = [(150, 250),(180, 225), (200, 200), (225, 100), (250, 150), (275, 115), (300, 100)] #points = [(150, 250),(180, 220), (200, 195), (225, 180), (250, 150), (275, 115), (300, 100)] for idx, p in enumerate(points): img = cv2.circle(img, p, radius=0, color=(0, 0, 255), thickness=10) text_x, text_y = p p = round(text_x-20), round(text_y+5) img = cv2.putText(img=img, text=str(idx), fontFace=cv2.FONT_HERSHEY_SCRIPT_COMPLEX, org=p, fontScale=0.5, color=(0,255,0)) image = cv2.line(img, points[0], points[-1], (255, 0, 255), 1) cv2_imshow(img) </code></pre> <p><strong>In my code, I generate the line between first and last element of the list of points, so of course if the last point is outlier, all the line is disrupted:</strong></p> <p><img src="https://i.stack.imgur.com/L8EGz.png" alt="enter image description here" /></p>
<p>Thanks for <code>@Christoph Rackwitz</code>'s answer, I followed sklearn's doc for <a href="https://scikit-learn.org/stable/auto_examples/linear_model/plot_ransac.html" rel="nofollow noreferrer">RANSAC</a>, and created a simple script to calculate the <code>RANSAC</code> fit (of course it still needs to be polished):</p> <pre><code>import numpy as np
from matplotlib import pyplot as plt

from sklearn import linear_model, datasets

&quot;&quot;&quot;
Add points:
&quot;&quot;&quot;
points = [(150, 250),(175, 225), (200, 200), (225, 175), (250, 150), (275, 115), (300, 150)]

Y = []
X = []
for x,y in points:
    Y.append(y)
    X.append(x)
Y = np.array(Y)
X = np.array(X)

lr = linear_model.LinearRegression()
lr.fit(X.reshape(-1, 1), Y)

# Robustly fit linear model with RANSAC algorithm
ransac = linear_model.RANSACRegressor()
ransac.fit(X.reshape(-1, 1), Y)
inlier_mask = ransac.inlier_mask_
outlier_mask = np.logical_not(inlier_mask)

# Predict data of estimated models
line_X = np.arange(X.min(), X.max())[:, np.newaxis]
line_y = lr.predict(line_X)
line_y_ransac = ransac.predict(line_X)

# Compare estimated coefficients
print(&quot;Estimated coefficients (linear regression, RANSAC):&quot;)
print(lr.coef_, ransac.estimator_.coef_)

lw = 2

plt.gca().invert_yaxis() # Mirror points

plt.scatter(
    X[inlier_mask], Y[inlier_mask], color=&quot;yellowgreen&quot;, marker=&quot;.&quot;, label=&quot;Inliers&quot;
)
plt.scatter(
    X[outlier_mask], Y[outlier_mask], color=&quot;gold&quot;, marker=&quot;.&quot;, label=&quot;Outliers&quot;
)
plt.plot(line_X, line_y, color=&quot;navy&quot;, linewidth=lw, label=&quot;Linear regressor&quot;)
plt.plot(
    line_X,
    line_y_ransac,
    color=&quot;cornflowerblue&quot;,
    linewidth=lw,
    label=&quot;RANSAC regressor&quot;,
)
plt.legend(loc=&quot;lower right&quot;)
plt.xlabel(&quot;Input&quot;)
plt.ylabel(&quot;Response&quot;)
plt.show()
</code></pre> <p>And I got the following image (which looks great):</p> <p><a href="https://i.stack.imgur.com/46urX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/46urX.png" alt="enter image description here" /></a></p>
python-3.x|numpy|opencv|outliers
1
2,529
73,159,673
Using PANDAS to conditionally manipulate specific cells based on another cell and getting it to change original df
<pre><code>andrew_ramirez[andrew_ramirez['Datacenter'].isin(['ATL2','ACT1'])]['payout'] *= 0.25
</code></pre> <p>This is not changing my dataframe (named andrew_ramirez) based on the criteria. Am I missing something?</p>
<p>You can use <code>np.select</code> for this:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

df['payout'] = np.select([df.Datacenter.isin(['ATL2', 'ACT1'])], [df.payout * 0.25], df.payout)
</code></pre>
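<p>Alternatively, the original attempt fails because the chained indexing (<code>andrew_ramirez[mask]['payout'] *= ...</code>) operates on a copy, so the assignment never reaches the original frame. Doing the selection and assignment in a single <code>.loc</code> avoids that:</p> <pre class="lang-py prettyprint-override"><code>mask = andrew_ramirez['Datacenter'].isin(['ATL2', 'ACT1'])
andrew_ramirez.loc[mask, 'payout'] *= 0.25
</code></pre>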
pandas
0
2,530
72,903,643
Filter a string value from a column in Python?
<p>I need to extract the Diabetes value from the column named Chronic in a df in Python.</p> <p>Can anyone please help me retrieve this in Python?</p> <pre class="lang-none prettyprint-override"><code>Patients Chronic
1        Diabetes
2        Diabetes
3        Hypertension
4        Hypertension
5        Diabetes
</code></pre>
<p>If your <code>df</code> is:</p> <pre class="lang-py prettyprint-override"><code> Patients Chronic 0 1 Diabetes 1 2 Diabetes 2 3 Hypertension 3 4 Hypertension 4 5 Diabetes type 1 </code></pre> <p>Then:</p> <pre class="lang-py prettyprint-override"><code>mask = df[&quot;Chronic&quot;].str.contains(&quot;Diabetes&quot;) print(df.loc[mask, &quot;Patients&quot;].to_list()) </code></pre> <p>Will print list of all Patient IDs who has <code>&quot;Diabetes&quot;</code> in <code>Chronic</code> column:</p> <pre class="lang-py prettyprint-override"><code>[1, 2, 5] </code></pre>
python|regex|pandas
-1
2,531
10,817,360
Array order in pytables
<p>With <a href="http://www.pytables.org/moin" rel="nofollow">pytables</a>'s <a href="http://pytables.github.com/usersguide/libref.html#carrayclassdescr" rel="nofollow"><code>CArray</code></a>, is there a way to specify the order in which the data is stored on disk (Fortran/C)?</p> <p>I am looking for something similar to <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html" rel="nofollow"><code>ndarray</code></a>'s <code>order</code> parameter.</p>
<p>You can use the <code>chunkshape</code> parameter that in effect specifies the data order:</p> <p><a href="http://pytables.github.com/usersguide/libref.html#tables.File.createCArray" rel="nofollow">http://pytables.github.com/usersguide/libref.html#tables.File.createCArray</a></p> <p>For instance, for 2-D data, <code>chunkshape=(2000, 1)</code> would be efficient if data is accessed in Fortran order, and <code>chunkshape=(1, 2000)</code> if it is accessed in C order. You may need to play with the numbers a bit: <a href="http://pytables.github.com/usersguide/optimization.html" rel="nofollow">http://pytables.github.com/usersguide/optimization.html</a></p>
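<p>A minimal sketch of what that could look like (using the PyTables 2.x camelCase API from the linked docs; the array and chunk shapes here are arbitrary assumptions and would need tuning to your real access pattern):</p> <pre><code>import numpy as np
import tables

f = tables.openFile('data.h5', mode='w')
atom = tables.Float64Atom()
shape = (10000, 2000)

# chunks laid out along columns: cheap single-column reads (Fortran-like access)
col_chunks = f.createCArray(f.root, 'col_chunks', atom, shape,
                            chunkshape=(10000, 1))

# chunks laid out along rows: cheap single-row reads (C-like access)
row_chunks = f.createCArray(f.root, 'row_chunks', atom, shape,
                            chunkshape=(1, 2000))

col_chunks[:, 0] = np.random.rand(10000)   # touches only one chunk
f.close()
</code></pre>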
python|numpy|pytables
2
2,532
70,733,261
Joining dataframes using rust polars in Python
<p>I am experimenting with <code>polars</code> and would like to understand why using <code>polars</code> is slower than using <code>pandas</code> on a particular example:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import polars as pl n=10_000_000 df1 = pd.DataFrame(range(n), columns=['a']) df2 = pd.DataFrame(range(n), columns=['b']) df1p = pl.from_pandas(df1.reset_index()) df2p = pl.from_pandas(df2.reset_index()) # takes ~60 ms df1.join(df2) # takes ~950 ms df1p.join(df2p, on='index') </code></pre>
<p>A pandas <code>join</code> uses the indexes, which are cached.</p> <p>A comparison where they do the same:</p> <pre class="lang-py prettyprint-override"><code># pandas # CPU times: user 1.64 s, sys: 867 ms, total: 2.5 s # Wall time: 2.52 s df1.merge(df2, left_on=&quot;a&quot;, right_on=&quot;b&quot;) # polars # CPU times: user 5.59 s, sys: 199 ms, total: 5.79 s # Wall time: 780 ms df1p.join(df2p, left_on=&quot;a&quot;, right_on=&quot;b&quot;) </code></pre>
python|pandas|dataframe|python-polars|rust-polars
4
2,533
70,514,988
taking out specific indexes from array
<p>I have and array which I am trying to slice/split, small part of the array is as follow:</p> <pre><code>[(2008, b'2-room', 82000, 107000) (2008, b'3-room', 135000, 211000) (2008, b'4-room', 223000, 327000) (2008, b'5-room', 305000, 428000) (2008, b'3-room', 142000, 160000) (2008, b'4-room', 211000, 253000) ........ (2019, b'5-room', 409000, 510000) (2019, b'2-room', 86000, 128000) (2019, b'3-room', 165000, 194000) (2019, b'4-room', 244000, 295000) (2019, b'5-room', 336000, 383000)] </code></pre> <pre><code>dataprice = np.loadtxt(price,skiprows=1,usecols=(0,2,3,4),dtype=[('financial_year','i8'),('room_type','S8'), ('min_selling_price','i8'),('max_selling_price','i8')] ,delimiter=&quot;,&quot;) list2019 =[] list_rest=[] for y in dataprice['financial_year']: if y == 2019 : list2019.append(???) else: list_rest.append(???) </code></pre> <p>I would like to take out rows that have 2019 only, is there a specific code to take out those rows?</p>
<p>As suggested in the comment to the question by @Tim Roberts, using fancy indexing can help:</p> <pre class="lang-py prettyprint-override"><code>mask = dataprice['financial_year']==2019 list_2019 = dataprice[mask] list_rest = dataprice[~mask] </code></pre>
python|numpy
-1
2,534
42,875,356
Selection with pandas multiIndexed dataframe
<p>I have a multiIndexed dataframe that looks like this:</p> <pre><code>df.head(): </code></pre> <p><a href="https://i.stack.imgur.com/y2yUo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y2yUo.png" alt="enter image description here"></a></p> <p>How can I select all of the rows where the first index == "particular school name" and all of second indices, where the Month column == "Jan"?</p> <p>I haven't worked with multiIndexed dataframes before, I can select all rows where Month == "Jan" like this: </p> <pre><code>df[df['Month'] == 'Jan'] </code></pre> <p>but this gives me all of the schools. I've been playing with it, but haven't been able to add indexing for just one school. So how does this work?</p> <p><strong>Edit:</strong> Ok so this works: <code>df[df['Month'] == 'Mar'].loc["School Name"]</code>, but is there a more idiomatic way this is done or would this be the standard way?</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html" rel="nofollow noreferrer"><code>query</code></a>:</p> <pre><code>print (df.query('ilevel_0 == "School Name" and Month == "Jan"')) </code></pre> <p>Sample:</p> <pre><code>df = pd.DataFrame({'A':['School Name','Agona','another'], 'B':[0,1,2], 'Month':['Jan', 'Jan', 'Feb']}).set_index(['A','B']) df.index.names = [None, None] print (df) Month School Name 0 Jan Agona 1 Jan another 2 Feb print (df.query('ilevel_0 == "School Name" and Month == "Jan"')) Month School Name 0 Jan print (df.query('ilevel_0 == "School Name" &amp; Month == "Jan"')) Month School Name 0 Jan </code></pre> <p>Solution with <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>:</p> <pre><code>mask = (df.index.get_level_values(0) == 'School Name') &amp; (df['Month'] == 'Jan') print (df[mask]) Month School Name 0 Jan </code></pre> <p>EDIT:</p> <p>For working with variable use <code>@</code>:</p> <pre><code>var = 'School Name' print (df.query('ilevel_0 == @var &amp; Month == "Jan"')) Month School Name 0 Jan </code></pre>
python-3.x|pandas|dataframe|indexing|multi-index
4
2,535
42,650,230
Pandas pivot on column
<p>My CSV looks like:</p> <pre><code>"a","b","c","d"
1, "x", 1, 1
1, "y", 2, 2
</code></pre> <p>and I want to convert it, based on column "b", to</p> <pre><code>"a", "x_c", "y_c", "x_d", "y_d"
1, 1, 2, 1, 2
</code></pre> <p>I've tried it with pivot and unstack. Is there a shortcut in pandas?</p> <p>EDIT: I have multiple columns, therefore I need to append a suffix/prefix.</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="noreferrer"><code>pivot_table</code></a>:</p> <pre><code>df = df.pivot_table(index='a',columns='b', values=['c', 'd'], aggfunc=np.mean) #Multiindex to columns df.columns = df.columns.map(lambda x: '{}_{}'.format(x[1], x[0])) df = df.reset_index() print (df) a x_c y_c x_d y_d 0 1 1 2 1 2 </code></pre> <p>Also if duplicates, then aggfunc is applied:</p> <pre><code>print (df) a b c d 0 1 x 1 1 &lt;-duplicates for 1, x 1 1 y 2 2 2 1 x 4 2 &lt;-duplicates for 1, x 3 2 y 2 3 df = df.pivot_table(index='a',columns='b', values=['c', 'd'], aggfunc=np.mean) df.columns = df.columns.map(lambda x: '{}_{}'.format(x[1], x[0])) df = df.reset_index() print (df) a x_c y_c x_d y_d 0 1 2.5 2.0 1.5 2.0 &lt;-x_c, x_d aggregated mean 1 2 NaN 2.0 NaN 3.0 </code></pre>
python|csv|pandas
5
2,536
25,057,977
Defining a function with a loop in Theano
<p>I want to define the following function of two variables in Theano and compute its Jacobian:</p> <pre><code>f(x1,x2) = sum((2 + 2k - exp(k*x1) - exp(k*x2))^2, k = 1..10) </code></pre> <p>How do I make a Theano function for the above expression - and eventually minimize it using its Jacobian?</p>
<p>Since your function is scalar, the Jacobian reduces to the gradient. Assuming your two variables <code>x1, x2</code> are scalar (looks like it from the formula, easily generalizable to other objects), you can write</p> <pre><code>import theano import theano.tensor as T x1 = T.fscalar('x1') x2 = T.fscalar('x2') k = T.arange(1, 10) expr = ((2 + 2 * k - T.exp(x1 * k) - T.exp(x2 * k)) ** 2).sum() func = theano.function([x1, x2], expr) </code></pre> <p>You can call <code>func</code> on two scalars</p> <pre><code>In [1]: func(0.25,0.25) Out[1]: array(126.5205307006836, dtype=float32) </code></pre> <p>The gradient (Jacobian) is then</p> <pre><code>grad_expr = T.grad(cost=expr, wrt=[x1, x2]) </code></pre> <p>And you can use <code>updates</code> in <code>theano.function</code> in the standard way (see theano tutorials) to make your gradient descent, setting <code>x1, x2</code> as shared variables in givens, by hand on the python level, or using <code>scan</code> as indicated by others.</p>
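<p>If it helps, a bare-bones sketch of that gradient-descent step could look like the following. The learning rate, starting point and iteration count are arbitrary assumptions and will likely need tuning (plain gradient descent can be unstable on this cost because of the exponentials), and note it uses <code>T.arange(1, 11)</code> so that k runs over 1..10 as in the original formula:</p> <pre><code>import numpy as np
import theano
import theano.tensor as T

# shared variables hold the current point of the descent
x1 = theano.shared(np.float64(0.1), name='x1')
x2 = theano.shared(np.float64(0.1), name='x2')

k = T.arange(1, 11)
cost = ((2 + 2 * k - T.exp(x1 * k) - T.exp(x2 * k)) ** 2).sum()
g1, g2 = T.grad(cost, [x1, x2])

lr = 1e-5  # step size -- an assumption, adjust as needed
step = theano.function([], cost,
                       updates=[(x1, x1 - lr * g1), (x2, x2 - lr * g2)])

for _ in range(10000):
    c = step()
print(x1.get_value(), x2.get_value(), c)
</code></pre>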
python|numpy|scipy|theano
3
2,537
25,260,000
scikit-learn's GridSearchCV stops working when n_jobs>1
<p>I have previously asked <a href="https://stackoverflow.com/questions/25249212/scikit-grid-search-for-knn-regression-valueerror-array-contains-nan-or-infinity">here</a> come up with following lines of code:</p> <pre><code>parameters = [{'weights': ['uniform'], 'n_neighbors': [5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]}] clf = GridSearchCV(neighbors.KNeighborsRegressor(), parameters, n_jobs=4) clf.fit(features, rewards) </code></pre> <p>But when I've run this there has appeared another problem that was not related to the previously asked question. Python ends up with following OS error message:</p> <pre><code>Process: Python [1327] Path: /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python Identifier: Python Version: 2.7.2.5 (2.7.2.5.r64662-trunk) Code Type: X86-64 (Native) Parent Process: Python [1316] Responsible: Sublime Text 2 [308] User ID: 501 Date/Time: 2014-08-12 10:27:24.640 +0200 OS Version: Mac OS X 10.9.4 (13E28) Report Version: 11 Anonymous UUID: D10CD8B7-221F-B121-98D4-4574A1F2189F Sleep/Wake UUID: 0B9C4AE0-26E6-4DE8-B751-665791968115 Crashed Thread: 0 Dispatch queue: com.apple.main-thread Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000110 VM Regions Near 0x110: --&gt; __TEXT 0000000100000000-0000000100001000 [ 4K] r-x/rwx SM=COW /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python Application Specific Information: *** multi-threaded process forked *** crashed on child side of fork pre-exec Thread 0 Crashed:: Dispatch queue: com.apple.main-thread 0 libdispatch.dylib 0x00007fff91534c90 dispatch_group_async_f + 141 1 libBLAS.dylib 0x00007fff9413f791 APL_sgemm + 1061 2 libBLAS.dylib 0x00007fff9413cb3f cblas_sgemm + 1267 3 _dotblas.so 0x0000000102b0236e dotblas_matrixproduct + 5934 4 org.activestate.ActivePython27 0x00000001000c552d PyEval_EvalFrameEx + 23949 5 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118 6 org.activestate.ActivePython27 0x00000001000c5d10 PyEval_EvalFrameEx + 25968 7 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118 8 org.activestate.ActivePython27 0x000000010003d390 function_call + 176 9 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98 10 org.activestate.ActivePython27 0x00000001000c098a PyEval_EvalFrameEx + 4586 11 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118 12 org.activestate.ActivePython27 0x00000001000c5d10 PyEval_EvalFrameEx + 25968 13 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118 14 org.activestate.ActivePython27 0x00000001000c5d10 PyEval_EvalFrameEx + 25968 15 org.activestate.ActivePython27 0x00000001000c7137 PyEval_EvalFrameEx + 31127 16 org.activestate.ActivePython27 0x00000001000c7137 PyEval_EvalFrameEx + 31127 17 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118 18 org.activestate.ActivePython27 0x000000010003d390 function_call + 176 19 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98 20 org.activestate.ActivePython27 0x00000001000c098a PyEval_EvalFrameEx + 4586 21 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118 22 org.activestate.ActivePython27 0x000000010003d390 function_call + 176 23 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98 24 org.activestate.ActivePython27 0x000000010001d36d instancemethod_call + 365 25 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98 26 
org.activestate.ActivePython27 0x0000000100077dfa slot_tp_call + 74 27 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98 28 org.activestate.ActivePython27 0x00000001000c098a PyEval_EvalFrameEx + 4586 29 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118 30 org.activestate.ActivePython27 0x000000010003d390 function_call + 176 31 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98 32 org.activestate.ActivePython27 0x00000001000c098a PyEval_EvalFrameEx + 4586 33 org.activestate.ActivePython27 0x00000001000c7137 PyEval_EvalFrameEx + 31127 34 org.activestate.ActivePython27 0x00000001000c7137 PyEval_EvalFrameEx + 31127 35 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118 36 org.activestate.ActivePython27 0x000000010003d390 function_call + 176 37 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98 38 org.activestate.ActivePython27 0x000000010001d36d instancemethod_call + 365 39 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98 40 org.activestate.ActivePython27 0x0000000100077a28 slot_tp_init + 88 41 org.activestate.ActivePython27 0x0000000100074e25 type_call + 245 42 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98 43 org.activestate.ActivePython27 0x00000001000c267d PyEval_EvalFrameEx + 11997 44 org.activestate.ActivePython27 0x00000001000c7137 PyEval_EvalFrameEx + 31127 45 org.activestate.ActivePython27 0x00000001000c7137 PyEval_EvalFrameEx + 31127 46 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118 47 org.activestate.ActivePython27 0x000000010003d390 function_call + 176 48 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98 49 org.activestate.ActivePython27 0x000000010001d36d instancemethod_call + 365 50 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98 51 org.activestate.ActivePython27 0x0000000100077a28 slot_tp_init + 88 52 org.activestate.ActivePython27 0x0000000100074e25 type_call + 245 53 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98 54 org.activestate.ActivePython27 0x00000001000c267d PyEval_EvalFrameEx + 11997 55 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118 56 org.activestate.ActivePython27 0x00000001000c5d10 PyEval_EvalFrameEx + 25968 57 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118 58 org.activestate.ActivePython27 0x000000010003d390 function_call + 176 59 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98 60 org.activestate.ActivePython27 0x000000010001d36d instancemethod_call + 365 61 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98 62 org.activestate.ActivePython27 0x0000000100077dfa slot_tp_call + 74 63 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98 64 org.activestate.ActivePython27 0x00000001000c267d PyEval_EvalFrameEx + 11997 65 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118 66 org.activestate.ActivePython27 0x00000001000c5d10 PyEval_EvalFrameEx + 25968 67 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118 68 org.activestate.ActivePython27 0x00000001000c5d10 PyEval_EvalFrameEx + 25968 69 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118 70 org.activestate.ActivePython27 0x00000001000c5d10 PyEval_EvalFrameEx + 25968 71 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118 72 org.activestate.ActivePython27 0x00000001000c7bf6 PyEval_EvalCode + 54 
73 org.activestate.ActivePython27 0x00000001000ed31e PyRun_FileExFlags + 174 74 org.activestate.ActivePython27 0x00000001000ed5d9 PyRun_SimpleFileExFlags + 489 75 org.activestate.ActivePython27 0x00000001001041dc Py_Main + 2940 76 org.activestate.ActivePython27.app 0x0000000100000ed4 0x100000000 + 3796 Thread 0 crashed with X86 Thread State (64-bit): rax: 0x0000000000000100 rbx: 0x00007fff7cd43640 rcx: 0x0000000000000000 rdx: 0x0000000105e00000 rdi: 0x0000000000000008 rsi: 0x0000000105e01000 rbp: 0x00007fff5fbfa370 rsp: 0x00007fff5fbfa350 r8: 0x0000000000000001 r9: 0x0000000105e00000 r10: 0x0000000105e01000 r11: 0x0000000000000000 r12: 0x000000010ba10530 r13: 0x000000010b000000 r14: 0x00000001066d1970 r15: 0x00007fff915311af rip: 0x00007fff91534c90 rfl: 0x0000000000010206 cr2: 0x0000000000000110 Logical CPU: 2 Error Code: 0x00000006 Trap Number: 14 ......... VM Region Summary: ReadOnly portion of Libraries: Total=183.7M resident=97.0M(53%) swapped_out_or_unallocated=86.7M(47%) Writable regions: Total=1.3G written=142.8M(11%) resident=503.6M(39%) swapped_out=0K(0%) unallocated=791.7M(61%) </code></pre> <p>When I have replaced the second line in my code by:</p> <pre><code>clf = GridSearchCV(neighbors.KNeighborsRegressor(), parameters, n_jobs=1) </code></pre> <p>Then everything works fine except I don't use multiple threads.</p> <p>My operating system is OSX 10.9.4</p> <p>My python version is 2.7.8 |Anaconda 2.0.1 (x86_64)| (default, Jul 2 2014, 15:36:00) [GCC 4.2.1 (Apple Inc. build 5577)]</p> <p>My scikit-lern version is 0.14.1</p> <p>My numpy version is 1.8.1</p> <p>And my scipy version is 0.14.0</p> <p>My question is if anybody has an idea how to make GridSearchCV run on more than one thread? </p> <p><strong>EDIT:</strong></p> <p>I have realized that actually this error happens only for some of my input data sets. Unfortunately the problematic datasets (its X) are too big so it is not possible to copy them in here. 
Input features data is basically tf-idf vectors and y vectors are floats > 0, particularly:</p> <pre><code>[60.0, 7.0, 12.0, 21.0, 5.5, 3.0, 0.0, 2.5, 11.0, 3.0, 16.0, 2.0, 0.0, 4.5, 2.5, 6.0, 9.5, 2.5, 15.0, 7.0, 8.0, 13.0, 14.0, 8.0, 3.5, 6.0, 22.5, 7.0, 4.0, 3.5, 4.5, 6.0, 5.5, 7.0, 2.0, 0.0, 0.0, 0.0, 14.5, 8.0, 7.5, 2.5, 11.5, 1.0, 3.0, 14.5, 10.0, 14.5, 8.0, 8.0, 7.0, 2.5, 3.5, 3.0, 13.5, 7.0, 6.5, 2.5, 9.0, 8.0, 11.0, 17.5, 12.5, 4.5, 5.5, 8.0, 2.0, 7.0, 4.0, 1.5, 3.0, 21.5, 4.5, 4.0, 7.0, 9.0, 13.5, 8.0, 10.5, 4.5, 1.5, 11.5, 7.5, 11.5, 4.5, 5.0, 7.0, 9.5, 4.0, 4.0, 6.0, 3.5, 4.5, 7.5, 3.5, 3.5, 3.5, 6.0, 5.0, 5.5, 25.0, 6.5, 5.0, 2.0, 2.0, 10.5, 0.0, 6.5, 19.0, 9.0, 1.0, 1.5, 1.0, 0.0, 1.0, 4.5, 2.5, 17.5, 39.5, 7.5, 5.5, 8.0, 1.0, 6.0, 12.0, 10.0, 5.5, 19.0, 4.5, 1.5, 25.5, 4.0, 10.0, 18.5, 9.5, 10.5, 2.5, 6.0, 1.0, 10.0, 8.5, 12.5, 13.5, 5.0, 6.5, 11.0, 4.5, 8.0, 7.5, 11.5, 14.5, 9.0, 3.0, 1.5, 3.5, 5.5, 2.5, 12.5, 6.5, 5.5, 5.0, 0.0, 8.0, 3.0, 14.5, 5.0, 14.0, 7.0, 13.5, 12.5, 4.0, 1.5, 6.5, 10.5, 9.0, 16.5, 4.0, 4.0, 15.0, 11.5, 2.5, 8.5, 3.0, 5.0, 4.0, 8.5, 6.0, 5.0, 5.0, 5.0, 5.5, 8.0, 11.0, 4.0, 0.0, 5.5, 0.0, 4.5, 1.5, 0.0, 6.5, 11.0, 2.5, 8.0, 15.5, 5.5, 4.5, 5.0, 4.0, 5.5, 10.5, 7.5, 6.5, 8.5, 2.5, 1.5, 1.5, 18.0, 15.0, 14.0, 9.5, 5.5, 7.5, 14.5, 2.5, 5.0, 60.0, 6.5, 14.5, 6.5, 4.0, 1.5, 2.0, 4.0, 27.0, 3.0, 5.0, 4.0, 2.5, 1.0, 1.5, 1.5, 9.0, 4.0, 8.5, 4.0, 4.0, 0.0, 1.5, 7.5, 1.5, 7.5, 1.0, 28.5, 15.5, 7.5, 1.0, 2.5, 2.5, 2.5, 16.0, 5.5, 8.5, 4.0, 2.5, 5.0, 2.5, 6.0, 11.0, 10.0, 4.5, 6.5, 8.0, 6.0, 4.5, 15.5, 4.0, 5.0] </code></pre> <p>The version with 1 job works for all of my input data sets, even for this one.</p>
<p><code>libdispatch.dylib</code> from Grand Central Dispatch is used internally by OSX's builtin implementation of BLAS called Accelerate when you do a <code>numpy.dot</code> calls. The GCD runtime does not work when programs call the POSIX <code>fork</code> syscall without using an <code>exec</code> syscall afterwards and therefore makes all Python programs that use the <code>multiprocessing</code> module prone to crash. sklearn's <code>GridsearchCV</code> uses the Python <code>multiprocessing</code> module for parallelization.</p> <p>Under Python 3.4 and later you can force Python multiprocessing to use the <a href="https://docs.python.org/dev/library/multiprocessing.html#contexts-and-start-methods" rel="nofollow">forkserver start method</a> instead of the default <code>fork</code> mode to workaround this problem, for instance at the beginning of the main file of your program:</p> <pre><code>if __name__ == "__main__": import multiprocessing as mp; mp.set_start_method('forkserver') </code></pre> <p>Alternatively, you can rebuild numpy from source and make it link against ATLAS or OpenBLAS instead of OSX Accelerate. The numpy developers are working on binary distributions that include either ATLAS or OpenBLAS by default.</p>
python|numpy|scikit-learn
4
2,538
30,471,509
How can I create an array of 1-element arrays from an array?
<p>I would like to be able to convert arrays, such as </p> <pre><code>a = np.array([[1,2], [3,4]]) </code></pre> <p>into the same array BUT each element as a 1-element array instead of a number. The desired output would be: </p> <pre><code>np.array([[np.array([1]), np.array([2])], [np.array([3]), np.array([4])]]) </code></pre>
<p>The operation you describe is very rarely useful. More likely, it would be a better idea to add an extra dimension of length 1 to the end of your array:</p> <pre><code>a = a[..., np.newaxis] # or a = a.reshape(a.shape + (1,)) </code></pre> <p>Then <code>a[0, 1]</code> will be a 1D array, but all the nice NumPy features like broadcasting and ufuncs will work right. Note that this creates a view of the original array; if you need an independent copy, you can call the <code>copy()</code> method.</p> <p>If you actually want a 2D array whose elements are 1D arrays, NumPy doesn't make that easy for you. (It's almost never a good way to organize your data, so there isn't much reason for the NumPy developers to provide an easy way to do it.) Most of the things you might expect to create such an array will instead create a 3D array. The most straightforward way to do it I know of is to create an empty array of object dtype and fill the cells one by one, using ordinary Python loops:</p> <pre><code>b = numpy.empty(a.shape, dtype=object) for i in range(a.shape[0]): for j in range(a.shape[1]): b[i, j] = numpy.array([a[i, j]]) </code></pre>
python|arrays|numpy
0
2,539
13,116,394
pandas: flatten df with delimiter
<p>My goal is to load a dataframe into a DB using a stdin pipe to a load statement executed at the command line (e.g. cat {file_loc} | /path/to/sql --command "COPY table FROM STDIN WITH DELIMITER ',';"). I'm aware that this approach is suboptimal; it's a workaround due to pyodbc issues ;)</p> <p>What's the most efficient way to condense a dataframe so that each row is a string that contains delimiter-separated values with line breaks at the end? My solution, below, seems inefficient. </p> <pre><code>from pandas import * import numpy as np df = DataFrame(np.random.randint(low=0, high=100, size=(5,3)),columns=['A','B','C']) df2 = df.apply(lambda d: ','.join([`x` for x in d])) </code></pre> <p>Writing the dataframe using df.to_csv() or similar is too slow...</p> <pre><code>import timeit m1="""df2=df.apply(lambda d: ','.join([`x` for x in d]))""" met1t = timeit.Timer(stmt=m1,setup="from pandas import *; import numpy as np; df = DataFrame(np.random.randint(low=0, high=100, size=(5,3)),columns=['A','B','C'])") print "Method 1: %.2f usec/pass" % (1000000 * met1t.timeit(number=100000)/100000) # 381.82 usec/pass m2="""df.to_csv('testout.csv', index=False, header=False)""" met2t = timeit.Timer(stmt=m2,setup="from pandas import *; import numpy as np; df = DataFrame(np.random.randint(low=0, high=100, size=(5,3)),columns=['A','B','C'])") print "Method 2:%.2f usec/pass" % (1000000 * met2t.timeit(number=100000)/100000) # 551.30 usec/pass </code></pre>
<p>Could you describe the pyodbc issues?</p> <p>I created an issue here. To get the ultimate perf you'd want to drop down into C or Cython and build the raw byte string yourself using C string functions. Not very satisfying, I know. At some point we should build a better-performing to_csv for pandas, too:</p> <p><a href="http://github.com/pydata/pandas/issues/2210" rel="nofollow">http://github.com/pydata/pandas/issues/2210</a></p>
python|numpy|pandas
0
2,540
28,946,964
How to be a faster Panda with groupbys
<p>I have a Pandas dataframe with 150 million rows. Within that there are about 1 million groups I'd like to do some very simple calculations on. For example, I'd like to take some existing column <code>'A'</code> and make a new column, <code>'A_Percentile'</code> that expresses the values of '<code>A'</code> as percentile ranks, within the group. Here's a little function that does it:</p> <pre><code>from scipy.stats import percentileofscore def rankify(column_name,data=my_data_frame): f = lambda x: [percentileofscore(x, y) for y in x] data[column_name+'_Percentile'] = data.groupby(['Group_variable_1', 'Group_variable_2'])[column_name].transform(f) return </code></pre> <p>Then you can call it like so:</p> <pre><code>rankify('Column_to_Rank', my_data_frame) </code></pre> <p>And wait for...quite a long time. </p> <p>There are some obvious things I could do to speed this up (for instance, I'm sure there's a way to vectorize <code>[percentileofscore(x, y) for y in x]</code>). However, I have the feeling that there are some Pandas tricks I could be doing to speed this up immensely. Is there something I could be doing with the <code>groupby</code> logic? I thought about breaking it apart and parallelizing it, but 1. I'm not sure of a good way to do it and 2. the communication time to write out the data and read in the results seems like it would take nearly as long (perhaps I think that because of point #1).</p>
<p>As you are probably aware, the speed of groupby operations can vary tremendously -- especially as the number of groups gets high. Here's a really simple alternate approach that is quite a bit faster on some test datasets I tried (anywhere from 2x to 40x faster). Usually it is faster if you can avoid user-written functions (in combination with groupby) and stick to built-in functions (which are usually cythonized):</p> <pre><code>In [163]: %timeit rankify('x',df) 1 loops, best of 3: 7.38 s per loop In [164]: def rankify2(column_name,data): ...: r1 = data.groupby('grp')[column_name].rank() ...: r2 = data.groupby('grp')[column_name].transform('count') ...: data[column_name+'_Percentile2'] = 100. * r1 / r2 In [165]: %timeit rankify2('x',df) 10 loops, best of 3: 178 ms per loop </code></pre> <p>Note that my method gives ever so slightly different results (like a difference of <code>10e-15</code>) compared to <code>percentileofscore()</code>. So if you test the results with <code>x == y</code> most will be True but some will be False, but <code>x.round() == y.round()</code> will pass.</p> <p>For results above, this was my test dataset (for other cases I tried, the difference was smaller but always 2x or better speedup):</p> <pre><code>df = pd.DataFrame( { "grp" : np.repeat( np.arange(1000), 100 ), "x" : np.random.randn(100000) } ) </code></pre> <p>I'm sure you could do better than that if you want. Really all you need to do here is sort and rank. I suspect the basic approach I took will be a good way to do it but if you did some or all of it in numpy or numba you might be able to speed it up. Also, you could might be able to use some of pandas indexing tricks to speed things up.</p>
python|performance|pandas|bigdata|dataframe
2
2,541
23,705,113
How to make random Beta in python like normal between two value ?
<p>I want to make random beta in python like normal between two extreme values (ex : 800 / 1000 ). I use this code with numpy random.beta. My problem, I don't have min and max value with normalize and I want keep shape of value. </p> <pre><code>#!/usr/bin/env python # -*- coding: iso-8859-1 -*- import numpy as np import math from random import gauss ##################################### # Parameters definition n=100. #deviation de 20 % n_min = n - n*0.2 n_max = n + n*0.2 var_n_min = 100000. var_n_max = 0. m=100000 for i in range (0,m): alea_var_n.append(np.random.beta(50,70)) alea_var_tau.append(np.random.beta(50,70)) if var_n_min &gt; alea_var_n[i] : var_n_min = alea_var_n[i] if var_n_max &lt; alea_var_n[i] : var_n_max = alea_var_n[i] for i in range (0,m): # Normalize alea_var_n[i] = n_min + alea_var_n[i] * (n_max- n_min)/(var_n_max + var_n_min) </code></pre>
<p><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.beta.html" rel="nofollow"><code>numpy</code>'s <code>random.beta</code></a> will give a value between zero and one, so to apply the same distribution between <code>x</code> and <code>y</code> you simply do:</p> <pre><code>z = x + (np.random.beta(a, b) * (y - x)) </code></pre>
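<p>For the example in the question (values between 800 and 1000) that would be, for instance:</p> <pre><code>import numpy as np

# a=50, b=70 as in the question's code; size is just for illustration
samples = 800 + np.random.beta(50, 70, size=1000) * (1000 - 800)
</code></pre>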
python|numpy|random
2
2,542
15,149,265
pandas Timedelta error
<p>I'm getting errors when running the code samples from the pandas documentation. </p> <p>I suspect it might be related to the version of pandas I'm using, but I haven't been able to confirm that. </p> <pre><code>pandas VERSION 0.10.1 numpy VERSION 1.7.0 scipy VERSION 0.12.0.dev-14b1e07 </code></pre> <p>The below examples are taken directly from the pandas documentation here: </p> <p><a href="http://pandas.pydata.org/pandas-docs/dev/timeseries.html#time-deltas" rel="nofollow">pandas - Time Deltas</a><br> This works </p> <pre><code>from datetime import datetime, timedelta from pandas import * s = Series(date_range('2012-1-1', periods=3, freq='D')) s Out[52]: 0 2012-01-01 00:00:00 1 2012-01-02 00:00:00 2 2012-01-03 00:00:00 </code></pre> <p>as does this </p> <pre><code>td = Series([ timedelta(days=i) for i in range(3) ]) td Out[53]: 0 0:00:00 1 1 day, 0:00:00 2 2 days, 0:00:00 df = DataFrame(dict(A = s, B = td)) df Out[54]: A B 0 2012-01-01 00:00:00 0:00:00 1 2012-01-02 00:00:00 1 day, 0:00:00 2 2012-01-03 00:00:00 2 days, 0:00:00 </code></pre> <p>This seems to be consistent with the expected output according to the documentation. </p> <p>The next line in the sample code yields an error: </p> <pre><code>df['C'] = df['A'] + df['B'] </code></pre> <p>... </p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-55-7057e174d79e&gt; in &lt;module&gt;() ----&gt; 1 df['C'] = df['A'] + df['B'] /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pandas/core/series.pyc in wrapper(self, other) 91 if self.index.equals(other.index): 92 name = _maybe_match_name(self, other) ---&gt; 93 return Series(wrap_results(na_op(lvalues, rvalues)), 94 index=self.index, name=name) 95 /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pandas/core/series.pyc in na_op(x, y) 63 if isinstance(y, np.ndarray): 64 mask = notnull(x) &amp; notnull(y) ---&gt; 65 result[mask] = op(x[mask], y[mask]) 66 else: 67 mask = notnull(x) TypeError: ufunc add cannot use operands with types dtype('&lt;M8[ns]') and dtype('O') </code></pre> <p>Datatypes: </p> <pre><code>df.dtypes Out[56]: A datetime64[ns] B object </code></pre> <p>Similarly, I get an error when I do addition/subtraction:</p> <pre><code>s - s.max() &lt;ipython-input-57-8d53e24db927&gt; in &lt;module&gt;() ----&gt; 1 s - s.max() /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pandas/core/series.pyc in wrapper(self, other) 78 79 if (com.is_datetime64_dtype(self) and ---&gt; 80 com.is_datetime64_dtype(other)): 81 lvalues = lvalues.view('i8') 82 rvalues = rvalues.view('i8') /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pandas/core/common.pyc in is_datetime64_dtype(arr_or_dtype) 1003 tipo = arr_or_dtype.type 1004 else: -&gt; 1005 tipo = arr_or_dtype.dtype.type 1006 return issubclass(tipo, np.datetime64) 1007 AttributeError: 'Timestamp' object has no attribute 'dtype' </code></pre> <p>This code is in a gist for easy reference. </p> <p><a href="https://gist.github.com/hernamesbarbara/5061972" rel="nofollow">https://gist.github.com/hernamesbarbara/5061972</a></p> <p>Thanks for any help or suggestions; it is much appreciated.</p> <p>-Austin</p>
<p>If you look at the title of the page (top of your browser window) you are linking to, you can see that it's the development version of pandas: <a href="http://pandas.pydata.org/pandas-docs/dev/timeseries.html#time-deltas" rel="nofollow">http://pandas.pydata.org/pandas-docs/dev/timeseries.html#time-deltas</a></p> <p>So, today, that's for version </p> <pre><code>'0.11.0.dev-13ae597' </code></pre> <p>where this code is working fine.</p> <p>The docs for the stable version are here:</p> <p><a href="http://pandas.pydata.org/pandas-docs/stable/" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/</a></p> <p>where you will see in at the top of the browser window </p> <pre><code>pandas 0.10.1 </code></pre> <p>which is your version.</p>
datetime|pandas|time-series|series|timedelta
1
2,543
62,143,149
Plotting sorted data
<p>I need to plot accounts through time, sorted by their opening date.</p> <p>I have the following two columns, one for the Accounts and one for OpenTime (it is datetime):</p> <pre><code>Account Name OpenTime
ABC          2002/05/20
BAB          2012/07/24
CMN          2012/07/24
GKS          2001/12/05
EIR          2018/04/21
</code></pre> <p>I would like to see the Account Names on the chart in the following order:</p> <pre><code>GKS
ABC
BAB,CMN
EIR
</code></pre> <p>How can I get this result?</p>
<p>First we need to convert the date to datetime, then <code>sort_values</code>:</p> <pre><code>df.OpenTime = pd.to_datetime(df.OpenTime)
df = df.sort_values('OpenTime')
print(df['Account Name'].tolist())
</code></pre>
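<p>If the goal is the chart itself, one way to draw the sorted accounts against their opening dates could be the sketch below (it relies on matplotlib's handling of string categories, which keeps the y-axis in the sorted order; adjust the plot type to taste):</p> <pre><code>import matplotlib.pyplot as plt

plt.scatter(df['OpenTime'], df['Account Name'])
plt.xlabel('OpenTime')
plt.tight_layout()
plt.show()
</code></pre>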
python|pandas|matplotlib|seaborn
1
2,544
62,083,446
Make date_range of hourly frequency over multiple years for a selected month
<p>I understand how to make a date_range in pandas using the freq option. However, I do not know how to use it to do two frequencies at once (or do I need a loop for this)?</p> <p>I am trying to make an hourly date range for only july for a span over some years.</p> <p>I have tried:</p> <pre><code>In: pd.date_range('1951-07-01','1955-07-01',freq='AS') Out: DatetimeIndex(['1952-01-01', '1953-01-01', '1954-01-01', '1955-01-01'], dtype='datetime64[ns]', freq='AS-JAN') </code></pre> <p>And also the hourly frequency.. But what I want is a date range that spans over multiple years, but only an hourly frequency for the month of july. </p> <p>I do not want any months other than july in my date_range.</p> <p>Any hints are appreciated, and if a loop is necessary?</p>
<p>You can create hours frequency with start and end <code>year</code> and then filter only <code>july</code>s:</p> <pre><code>d = pd.date_range('1951-07-01','1955-07-01',freq='H') d = d[d.month == 7] print (d) DatetimeIndex(['1951-07-01 00:00:00', '1951-07-01 01:00:00', '1951-07-01 02:00:00', '1951-07-01 03:00:00', '1951-07-01 04:00:00', '1951-07-01 05:00:00', '1951-07-01 06:00:00', '1951-07-01 07:00:00', '1951-07-01 08:00:00', '1951-07-01 09:00:00', ... '1954-07-31 15:00:00', '1954-07-31 16:00:00', '1954-07-31 17:00:00', '1954-07-31 18:00:00', '1954-07-31 19:00:00', '1954-07-31 20:00:00', '1954-07-31 21:00:00', '1954-07-31 22:00:00', '1954-07-31 23:00:00', '1955-07-01 00:00:00'], dtype='datetime64[ns]', length=2977, freq=None) </code></pre>
python|pandas|datetime|date-range
2
2,545
62,246,851
Differential Privacy decreases the model performance significantly
<p><strong>Background Information</strong></p> <p>I trained a classifier to predict three labels: COVID/Pneumonia/Healthy based on chest X-Ray images. It's a PyTorch implementation of <a href="https://github.com/lindawangg/COVID-Net" rel="nofollow noreferrer">COVID-Net</a>. I use a training set to train on, validation set to save the best performing model, and then a test set to measure the "real" performance of the model. However, I noticed that my model "learned" to classify normal/pneumonia really good, but it just ignored the underpopulated COVID set. Therefore I choose to undersample (reduce the number of training instances of the other classes (normal and pneumonia) in order to get equal populations). This worked well, but my sample set has been reduced to ~1500 samples (low!). The results are somewhat worse than COVID-Net, I achieve an accuracy of ~80% and lower sensitivity on underpopulated classes (COVID) then they report. I suppose that they report better performance because they do not use a validation-set and use the test-set each epoch. I figured that they might indirectly overfit on the test-set because of that. I have chosen to explain this so that the reader gets a context.</p> <p><strong>Question</strong></p> <p>I tried adding privacy to the training procedure by using Differential Privacy. Specifically, I used <a href="https://github.com/facebookresearch/pytorch-dp" rel="nofollow noreferrer">Facebook's PyTorch-DP module</a>. Training works just as well if I choose to add almost no-privacy (this can be achieved by choosing a really low noise multiplier value (sigma), i.e. 1e-7) and a really high delta. So it's not that the module itself is not working/faulty, but, if I use a lower sigma (so I add more noise) then I get more privacy (epsilon decreases) but the model fails to fit the data at all. <strong>The question is:</strong> <em>how do I manage to add privacy to a somewhat meaningful degree while making sure that my model somewhat fits the data still?</em> </p> <p><strong>Performance differences</strong></p> <p><a href="https://i.stack.imgur.com/2qqkc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2qqkc.png" alt="Confusion Matrix of Model without Differential Privacy added"></a> Confusion Matrix of Model without Differential Privacy added. It's not "good" but it's at least somewhat meaningful and the model reaches an accuracy of ~80%.</p> <p><a href="https://i.stack.imgur.com/3vJxy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3vJxy.png" alt="Confusion Matrix of Model with Differential Privacy (epsilon: 2.3) after 100 epochs"></a> Confusion Matrix of Model with Differential Privacy (epsilon: 2.3) after 100 epochs. It looks as if the model does not know what to do, at all. </p> <p><strong>Possible explanations</strong></p> <p>I read a <a href="https://arxiv.org/abs/1905.12101" rel="nofollow noreferrer">paper</a> that stated that adding Differential Privacy can cause bad performance in the sense that the accuracy decreases for underpopulated classes. But, I used undersampling and I think this should've solved that, but the accuracy stays bad (for all classes!).</p> <p>Maybe because my sample set is so small, differential privacy is much harder to achieve, and therefore the performance is bad? However, even if add a really tiny bit of privacy, with an epsilon value >20000, the model still struggles in learning how to classify. So I'm not sure. </p>
<p>It seems the PyTorch Differential Privacy library from Facebook Research is built on the concept of the Rényi differential privacy guarantee, which is well suited for expressing guarantees of privacy-preserving algorithms and for composition of heterogeneous mechanisms. We need to have a good estimate of the heterogeneity in this COVID-Net dataset. </p> <p>In particular, Rényi divergence satisfies the data processing inequality. It seems the current library is more suitable for machine learning problems with more heterogeneity in the datasets. The library uses an implementation of the differentially private stochastic gradient descent (SGD) algorithm. It follows the sequence of random initialisation, gradient computation, gradient clipping, noise addition and the descent step. The clipping and noise parameters may vary with the number of training steps and epochs. </p> <p>The success of differential privacy on deep learning problems is driven by the extent of pre-processing of the gradient to protect privacy and by privacy accounting, which keeps track of the privacy budget spent over the course of training. It has been highlighted that in differentially private deep learning the model accuracy is more sensitive to training parameters such as batch size and noise level than to the structure of the neural network.</p> <p>In the PyTorch library, we can see examples on ImageNet, MNIST, DCGAN, etc. In all these examples we can see how each of the mentioned parameters, such as clipping and batch size, can be varied to reach the required accuracy levels. Kindly refer to the following example scripts in the PyTorch DP library. </p> <p><a href="https://github.com/facebookresearch/pytorch-dp/tree/master/examples" rel="nofollow noreferrer">PyTorch DP Example Scripts for Various Models</a></p>
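<p>To make the clip-then-noise mechanism concrete, here is a minimal, illustrative PyTorch sketch of a single DP-SGD-style training step. It is only a sketch, not the library's actual API: real DP-SGD clips <em>per-example</em> gradients, which pytorch-dp handles for you, and the function and parameter names here are made up for illustration.</p>
<pre><code>import torch

# Hypothetical helper: one DP-SGD-style step (clip the gradient, add Gaussian noise, descend).
def dp_sgd_step(model, loss_fn, optimizer, x, y, max_grad_norm=1.0, noise_multiplier=1.1):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # bound the gradient norm so any single batch has limited influence
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    # add calibrated Gaussian noise before the descent step
    for p in model.parameters():
        if p.grad is not None:
            p.grad += noise_multiplier * max_grad_norm * torch.randn_like(p.grad) / x.size(0)
    optimizer.step()
    return loss.item()
</code></pre>
<p>A larger <code>noise_multiplier</code> buys a smaller epsilon but makes the gradients noisier, which is exactly the fit-versus-privacy trade-off described in the question; tuning the clipping norm and batch size is usually where accuracy is recovered.</p>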
python|machine-learning|pytorch|privacy|confusion-matrix
1
2,546
62,204,867
Pandas Create New Column Based Off of Condition and Value in Other Column
<p>I have a data set like the following:</p> <pre><code>ID Type 1 a 2 a 3 b 4 b 5 c </code></pre> <p>And I'm trying to create the column URL as shown by specifying a different URL based on the "Type" and appending the "ID".</p> <pre><code>ID Type URL 1 a http://example.com/examplea/id=1 2 a http://example.com/examplea/id=2 3 b http://example.com/bbb/id=3 4 b http://example.com/bbb/id=4 5 c http://example.com/testc/id=5 </code></pre> <p>I'm using something like this for the code but it is not pulling in the ID for just that row, instead it is appending all the IDs that have Type = a. </p> <pre><code>df.loc[df['Type'] == 'a', 'URL']= 'http://example.com/examplea/id='+str(df['ID']) df.loc[df['Type'] == 'b', 'URL']= 'http://example.com/bbb/id='+str(df['ID']) </code></pre>
<p>You should alter the command a bit:</p> <pre><code>df.loc[df['Type'] == 'a', 'URL']= 'http://example.com/examplea/id='+df['ID'].astype(str) df.loc[df['Type'] == 'b', 'URL']= 'http://example.com/bbb/id='+df['ID'].astype(str) </code></pre> <p>Or you can use <code>map</code> like this:</p> <pre><code>url_dict = { 'a':'http://example.com/examplea/id=', 'b':'http://example.com/bbb/id=', 'c':'http://example.com/testc/id=' } df['URL'] = df['Type'].map(url_dict) + df['ID'].astype(str) </code></pre> <p>Output:</p> <pre><code> ID Type URL 0 1 a http://example.com/examplea/id=1 1 2 a http://example.com/examplea/id=2 2 3 b http://example.com/bbb/id=3 3 4 b http://example.com/bbb/id=4 4 5 c http://example.com/testc/id=5 </code></pre>
python|pandas|pandas-loc
2
2,547
62,184,063
Too many values to unpack using apply()
<p>Here is the code I have:</p> <pre><code>def f(row): if row['CountInBedDate'] == 1 and row['CountOutBedDate'] == 1: SleepDate = row['DateInBed'] InBedTimeFinal = row['InBedTime'] OutBedTimeFinal = row['OutBedTime'] else: SleepDate = -1 InBedTimeFinal = -1 OutBedTimeFinal = -1 return SleepDate, InBedTimeFinal, OutBedTimeFinal s1['SleepDate'], s1['InBedTimeFinal'], s1['OutBedTimeFinal'] = s1.apply(f, axis=1) </code></pre> <p>I would like to create 3 new columns with apply() but there is </p> <pre><code>ValueError: too many values to unpack (expected 3) </code></pre> <p>If I use this, only one column was created with 3 values combined:</p> <pre><code>s1['SleepDate', 'InBedTimeFinal', 'OutBedTimeFinal'] = s1.apply(f, axis=1) </code></pre> <p>Could you please help? Thanks. </p>
<p>If <code>f</code> is your real function, then you should consider using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.where.html" rel="nofollow noreferrer"><code>where</code></a> instead of apply; it will be much faster.</p> <pre><code>mask = s1['CountInBedDate'].eq(1) &amp; s1['CountOutBedDate'].eq(1) s1[['SleepDate', 'InBedTimeFinal', 'OutBedTimeFinal']] = \ s1[['DateInBed','InBedTime','OutBedTime']].where(mask, other=-1) </code></pre> <p>The mask reproduces the condition from your function: both <code>CountInBedDate</code> and <code>CountOutBedDate</code> must equal 1.</p> <p>If you want to use apply, then you can try:</p> <pre><code>s1[['SleepDate', 'InBedTimeFinal', 'OutBedTimeFinal']] = \ pd.DataFrame(s1.apply(f, axis=1).tolist(), s1.index) </code></pre>
python|pandas
2
2,548
51,475,435
Python find most common value in array
<pre><code>import numpy as np x = ([1,2,3,3]) y = ([1,2,3]) z = ([6,6,1,2,9,9]) </code></pre> <p>(only positive values) In each array I need to return the most common value, or, if values come up the same number of times, return the minimum. This is a homework assignment and I can't use anything but numpy.</p> <p>outputs:</p> <pre><code>f(x) = 3, f(y) = 1, f(z) = 6 </code></pre>
<p>For a numpy-exclusive solution, something like this will work:</p> <pre><code>occurrences = np.bincount(x) print(np.argmax(occurrences)) </code></pre> <p>Because <code>np.bincount</code> counts by value and <code>np.argmax</code> returns the first maximum, ties are resolved in favour of the smallest value, which matches the requirement. The above method won't work if there is a negative number in the list; to account for that case, use <code>np.unique</code> instead:</p> <pre><code>values, counts = np.unique(x, return_counts=True) print(values[np.argmax(counts)]) </code></pre> <p><code>np.unique</code> returns the values in ascending order, so <code>argmax</code> again picks the smallest value among ties.</p>
python|numpy
2
2,549
48,081,743
Python utilizing file paths
<pre><code>sound_file_paths =[ "/Users/ferhatkaygun/Desktop/UrbanSound8K/audio/fold1/57320-0-0-7.wav", "/Users/ferhatkaygun/Desktop/UrbanSound8K/audio/fold1/24074-1-0-3.wav", "/Users/ferhatkaygun/Desktop/UrbanSound8K/audio/fold1/15564-2-0-1.wav", "/Users/ferhatkaygun/Desktop/UrbanSound8K/audio/fold1/31323-3-0-1.wav", "/Users/ferhatkaygun/Desktop/UrbanSound8K/audio/fold1/46669-4-0-35.wav", ] sound_names = [ "air conditioner", "car horn", "children playing", "dog bark", "drilling", "engine idling", "gun shot", "jackhammer", "siren", "street music" ] raw_sounds = load_sound_files(sound_file_paths) </code></pre> <p>Hey there, I am using librosa and I have not dealt much with importing files into Python programs.</p> <p>How can I import the files without getting an error that the sound file paths are not defined? The program was first coded in Python 2.7 but I use the current version on macOS. Could that be a problem?</p>
<p>You are getting the error that the sound files are not defined because the program cannot find them. Most likely the paths you are using do not exist on your machine.</p> <p>You need to put the files into a directory on your machine, e.g. <code>/Users/me/files/</code>, and then replace the file paths in your script so they point at that directory instead.</p>
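<p>As a sketch (the directory below is a hypothetical location; point it at wherever you actually copied the <code>.wav</code> files), you could also build the path list programmatically instead of hard-coding it:</p>
<pre><code>import glob
import os

base_dir = "/Users/me/files"  # hypothetical directory holding your .wav files
sound_file_paths = sorted(glob.glob(os.path.join(base_dir, "*.wav")))

if not sound_file_paths:
    raise FileNotFoundError("No .wav files found in {}".format(base_dir))

raw_sounds = load_sound_files(sound_file_paths)  # your existing loader function
</code></pre>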
python|audio|tensorflow|filepath|librosa
0
2,550
48,325,478
Excel export using While loop
<p>I am new to Python. I am working on a large analytic program, and this is a snippet of it. Right now, this snippet exports multiple excel files. Is it possible to save what is done per loop on a sheet within a single excel document? So basically right now, it exports 5 files, rather than exporting 5 separate files, can I use this loop and export 1 file, that has 5 sheets?</p> <pre><code>x = 0 y = 0 #these are empty variables for the while loop #while loop that loops up to the system amount #breaks up df into systems #exports excel for each system while x &lt; int(SystemCount): x += 1 y += 1 System = minus4[minus4['System'] == "System " + str(y)] System.to_excel('U4Sys' + str(y) + '.xlsx', sheet_name='sheet1', index=False) print(System.head()) </code></pre> <p>the print at the end prints this </p> <pre><code> email System test1@test.com System 1 test2@test.com System 1 test3@test.com System 1 test4@test.com System 1 test5@test.com System 1 email System test1@test.com System 2 test2@test.com System 2 test3@test.com System 2 test4@test.com System 2 test5@test.com System 2 email System test1@test.com System 3 test2@test.com System 3 test3@test.com System 3 test4@test.com System 3 test5@test.com System 3 </code></pre> <p>Thank you for taking your time to read this!</p>
<p>EDIT (to account for OP using <code>pandas</code> and <code>ExcelWriter</code>):</p> <p>You need to define your target file with <code>pd.ExcelWriter</code>, write each system to it with a variable sheet name, and save the writer at the end. Also offering some Python clean-up for your iteration:</p> <pre><code>#breaks up df into systems #exports one sheet per system into a single workbook writer = pd.ExcelWriter('U4SysOutput.xlsx') for x in range(1, int(SystemCount)+1): System = minus4[minus4['System'] == "System " + str(x)] System.to_excel(writer, sheet_name='sheet{}'.format(x), index=False) print(System.head()) writer.save() </code></pre>
python|excel|pandas|while-loop|export-to-excel
1
2,551
48,235,916
Cropping a minibatch of images in Pytorch -- each image differently
<p>I have a tensor named <code>input</code> with dimensions 64x21x21. It is a minibatch of 64 images, each 21x21 pixels. I'd like to crop each image down to 11x11 pixels. So the output tensor I want would have dimensions 64x11x11.</p> <p>I'd like to crop each image around a different "center pixel." The center pixels are given by a 2-dimensional long tensor named <code>center</code> with dimensions 64x2. For image i, <code>center[i][0]</code> gives the row index and <code>center[i][1]</code> gives the column index for the pixel that should be at the center in the output. We can assume that the center pixel is always at least 5 pixels away from the border.</p> <p>Is there an efficient way to do this in pytorch (on the gpu)?</p> <p>UPDATE: Let me clarify that the <code>center</code> tensor is formed by a deep neural network. It acts as a "hard attention mechanism," to use the reinforcement learning term for it. After I "crop" an image, that subimage becomes the input to another neural network. That's why I want to do the cropping in Pytorch: because the operations before and after the cropping are in Pytorch. I'd like to avoid having to transfer anything from the GPU back to the CPU.</p>
<p>I raised the question over on the pytorch forums, and got an answer there from smth. The <code>grid_sample</code> function should totally solve the problem.</p> <p><a href="https://discuss.pytorch.org/t/cropping-a-minibatch-of-images-each-image-a-bit-differently/12247" rel="nofollow noreferrer">https://discuss.pytorch.org/t/cropping-a-minibatch-of-images-each-image-a-bit-differently/12247</a></p>
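<p>For reference, here is a minimal sketch of that approach (it assumes a reasonably recent PyTorch where <code>grid_sample</code> takes an <code>align_corners</code> flag; on very old versions drop that argument):</p>
<pre><code>import torch
import torch.nn.functional as F

N, H, W, crop = 64, 21, 21, 11
images = torch.rand(N, H, W)                    # the 64x21x21 minibatch
center = torch.randint(5, 16, (N, 2))           # (row, col) per image, at least 5 px from the border

offsets = torch.arange(crop) - crop // 2        # -5 .. 5
rows = center[:, 0].unsqueeze(1) + offsets.unsqueeze(0)   # (N, 11)
cols = center[:, 1].unsqueeze(1) + offsets.unsqueeze(0)   # (N, 11)

# grid_sample expects normalised coordinates in [-1, 1], last dim ordered (x, y)
ys = 2.0 * rows.float() / (H - 1) - 1.0
xs = 2.0 * cols.float() / (W - 1) - 1.0
grid = torch.stack((xs.view(N, 1, crop).expand(N, crop, crop),
                    ys.view(N, crop, 1).expand(N, crop, crop)), dim=-1)   # (N, 11, 11, 2)

crops = F.grid_sample(images.unsqueeze(1), grid, align_corners=True).squeeze(1)   # (N, 11, 11)
</code></pre>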
pytorch
2
2,552
48,192,177
keras with tensorflow runs fine, until I add callbacks
<p>I'm running a model using Keras with the TensorFlow backend. Everything works perfectly:</p> <pre><code>model = Sequential() model.add(Dense(dim, input_dim=dim, activation='relu')) model.add(Dense(200, activation='relu')) model.add(Dense(1, activation='linear')) model.compile(loss='mse', optimizer='Adam', metrics=['mae']) history = model.fit(X, Y, epochs=12, batch_size=100, validation_split=0.2, shuffle=True, verbose=2) </code></pre> <p>But as soon as I include the logger callback so I can log for TensorBoard, I get </p> <blockquote> <p>InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'input_layer_input_2' with dtype float and shape [?,1329]...</p> </blockquote> <p>Here's my code (and actually, it worked once, the very first time; ever since I have been getting that error):</p> <pre><code>model = Sequential() model.add(Dense(dim, input_dim=dim, activation='relu')) model.add(Dense(200, activation='relu')) model.add(Dense(1, activation='linear')) model.compile(loss='mse', optimizer='Adam', metrics=['mae']) logger = keras.callbacks.TensorBoard(log_dir='/tf_logs', write_graph=True, histogram_freq=1) history = model.fit(X, Y, epochs=12, batch_size=100, validation_split=0.2, shuffle=True, verbose=2, callbacks=[logger]) </code></pre>
<p>A <code>tensorboard</code> callback uses <code>tf.summary.merge_all</code> function in order to collect all tensors for histogram computations. Because of that - your summary is collecting tensors from previous models not cleared from previous model runs. In order to clear these previous models try:</p> <pre><code>from keras import backend as K K.clear_session() model = Sequential() model.add(Dense(dim, input_dim=dim, activation='relu')) model.add(Dense(200, activation='relu')) model.add(Dense(1, activation='linear')) model.compile(loss='mse', optimizer='Adam', metrics=['mae']) logger = keras.callbacks.TensorBoard(log_dir='/tf_logs', write_graph=True, histogram_freq=1) history = model.fit(X, Y, epochs=12, batch_size=100, validation_split=0.2, shuffle=True, verbose=2, callbacks=[logger]) </code></pre>
tensorflow|machine-learning|neural-network|keras|tensorboard
2
2,553
48,578,272
Why is the model size so different between different optimizers?
<p>With TensorFlow, my model size (model.ckpt.data) is 88M when the optimizer is <code>tf.train.GradientDescentOptimizer</code>, but it grew to 220M when the optimizer was changed to <code>tf.train.AdamOptimizer</code>.</p> <p>Why is there such a huge difference?</p>
<p>Adam adds two running means (one for the gradient and one for the squared gradient) as additional non-trainable variables for each trainable parameter, meaning it roughly triples the total number of stored parameters. These non-trainable variables are also saved, as they are required to resume the learning process. That's why the model checkpoint is bigger.</p>
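<p>As a rough back-of-the-envelope check (a sketch only, assuming 32-bit floats and ignoring checkpoint overhead), the reported sizes are consistent with this tripling:</p>
<pre><code># Rough size estimate, assuming float32 (4 bytes) per stored value.
ckpt_sgd_mb = 88                                  # checkpoint size with plain gradient descent
n_params = ckpt_sgd_mb * 1024 * 1024 / 4          # roughly 23 million parameters
ckpt_adam_mb = n_params * 3 * 4 / (1024 * 1024)   # weights + two Adam slots per weight
print(round(ckpt_adam_mb))                        # ~264, same ballpark as the observed 220M
</code></pre>
<p>The observed 220M is a bit below the 3x estimate because not every saved variable necessarily gets Adam slot variables.</p>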
tensorflow|neural-network|deep-learning
2
2,554
48,770,411
how to convert columns to numeric while keeping those that fail intact in pandas
<p>I read my text file into a pandas dataframe. All columns are of object datatype. What I need to do is convert all those columns that appear 'numeric' to numeric columns. If there are just a few columns, it's very easy. But my real dataframe has over two hundred columns. I wonder if there is any way to convert those columns to numeric while keeping those which cannot be converted intact. For example, I have the dataframe below.</p> <pre><code>df = pd.DataFrame({'a': ['1', '2', 'NA', '4'], 'b': ['a', 'b', 'c', 'd'], 'c': ['aa', 'bb', 'cc', 'dd'], 'd': ['11', '22', '33', '44']}) df[['a', 'b', 'c', 'd']] = df[['a', 'b', 'c', 'd']].astype(int) </code></pre> <p>I got an error. How can I convert columns a and d to numeric while keeping b and c as object? Again, my real dataframe has many columns; this is just an example to illustrate my point. I don't want to hardcode the dtype conversion for every column. Thanks a lot. </p>
<p>Option 1. I usually use <code>to_numeric</code> then <code>fillna</code> (the reason: I usually have some mixed dtypes within one column)</p> <pre><code>df=df[['a', 'b', 'c', 'd']].apply(pd.to_numeric,errors='coerce').fillna(df) df.dtypes Out[605]: a int64 b object c object d int64 dtype: object </code></pre> <p>Option 2. Or you can use <code>to_numeric</code> + <code>errors='ignore'</code></p> <pre><code>df[['a', 'b', 'c', 'd']].apply(pd.to_numeric,errors='ignore').dtypes Out[608]: a int64 b object c object d int64 dtype: object </code></pre> <p>Update </p> <pre><code>df[['a', 'b', 'c', 'd']].apply(pd.to_numeric,errors='coerce').fillna(df).applymap(type) Out[652]: a b c d 0 &lt;class 'float'&gt; &lt;class 'str'&gt; &lt;class 'str'&gt; &lt;class 'int'&gt; 1 &lt;class 'float'&gt; &lt;class 'str'&gt; &lt;class 'str'&gt; &lt;class 'int'&gt; 2 &lt;class 'str'&gt; &lt;class 'str'&gt; &lt;class 'str'&gt; &lt;class 'int'&gt; 3 &lt;class 'float'&gt; &lt;class 'str'&gt; &lt;class 'str'&gt; &lt;class 'int'&gt; </code></pre> <p>If you want, you can add <code>df = df.replace('NA',np.nan)</code> before running the first option</p> <p>Update 2 </p> <pre><code>s=df.apply(pd.to_numeric,errors='coerce').dropna(axis=1,thresh=1) pd.concat([s,df.loc[:,~df.columns.isin(s.columns)]],1).dtypes Out[668]: a float64 d int64 b object c object dtype: object </code></pre>
python|pandas
4
2,555
70,883,944
Print multiple columns from a matrix
<p>I have a list of column vectors and I want to print only those column vectors from a matrix. Note: the list can be of random length, and the indices can also be random.</p> <p>For instance, the following does what I want:</p> <pre><code>import numpy as np column_list = [2,3] a = np.array([[1,2,6,1],[4,5,8,2],[8,3,5,3],[6,5,4,4],[5,2,8,8]]) new_matrix = [] for i in column_list: new_matrix.append(a[:,i]) new_matrix = np.array(new_matrix) new_matrix = new_matrix.transpose() print(new_matrix) </code></pre> <p>However, I was wondering if there is a shorter method?</p>
<p>Yes, there's a shorter way. You can pass a list (or numpy array) to an array's indexer. Therefore, you can pass <code>column_list</code> to the columns indexer of <code>a</code>:</p> <pre><code>&gt;&gt;&gt; a[:, column_list] array([[6, 1], [8, 2], [5, 3], [4, 4], [8, 8]]) # This is your new_matrix produced by your original code: &gt;&gt;&gt; new_matrix array([[6, 1], [8, 2], [5, 3], [4, 4], [8, 8]]) &gt;&gt;&gt; np.all(a[:, column_list] == new_matrix) True </code></pre>
python-3.x|numpy
1
2,556
70,887,198
Pandas assign - passing column in a user defined function
<p>Given an input dataframe and string:</p> <pre><code>df = pd.DataFrame({&quot;A&quot; : [10, 20, 30], &quot;B&quot; : [0, 1, 8]}) colour = &quot;green&quot; #or &quot;red&quot;, &quot;blue&quot; etc. </code></pre> <p>I want to add a new column <code>df[&quot;C&quot;]</code> conditional on the values in <code>df[&quot;A&quot;]</code>, <code>df[&quot;B&quot;]</code> and <code>colour</code> so it looks like:</p> <pre><code>df = pd.DataFrame({&quot;A&quot; : [4, 2, 10], &quot;B&quot; : [1, 4, 3], &quot;C&quot; : [True, True, False]}) </code></pre> <p>So far, I have a function that works for just the input values alone:</p> <pre><code>def check_passing(colour, A, B): if colour == &quot;red&quot;: if B &lt; 5: return True else: return False if colour == &quot;blue&quot;: if B &lt; 10: return True else: return False if colour == &quot;green&quot;: if B &lt; 5: if A &lt; 5: return True else: return False else: return False </code></pre> <p>How would you go about using this function in <code>df.assign()</code> so that it calculates this for each row? Specifically, how do you pass each column to <code>check_passing()</code>?</p> <p><code>df.assign()</code> allows you to refer to the columns directly or in a lambda, but doesn't work within a function as you're passing in the entire column:</p> <pre><code>df = df.assign(C = check_passing(colour, df[&quot;A&quot;], df[&quot;B&quot;])) </code></pre> <p>Is there a way to avoid a long and incomprehensible lambda? Open to any other approaches or suggestions!</p>
<p>Applying a function like that can be inefficient, especially when dealing with dataframes with many rows. Here is a one-liner that mirrors the conditions in <code>check_passing</code>:</p> <pre><code>colour = &quot;green&quot; #or &quot;red&quot;, &quot;blue&quot; etc. df['C'] = ((colour == 'red') &amp; df['B'].lt(5)) | ((colour == 'blue') &amp; df['B'].lt(10)) | ((colour == 'green') &amp; df['B'].lt(5) &amp; df['A'].lt(5)) </code></pre>
python|pandas|dataframe
4
2,557
70,910,193
How can I add CSV logging mechanism in case of Multivariable Linear Regression using TensorFlow?
<p>Suppose, the following is my Multivariable Linear Regression source code in Python:</p> <pre><code>import os os.environ[&quot;TF_CPP_MIN_LOG_LEVEL&quot;] = &quot;2&quot; import sys, random import time import tensorflow as tf from tensorflow import keras from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.optimizers import Adam import matplotlib.pyplot as plt import numpy as np def load_data_k(fname: str, yyy_index: int, **selection): i = 0 file = open(fname) if &quot;top_n_lines&quot; in selection: lines = [next(file) for _ in range(int(selection[&quot;top_n_lines&quot;]))] elif &quot;random_n_lines&quot; in selection: tmp_lines = file.readlines() lines = random.sample(tmp_lines, int(selection[&quot;random_n_lines&quot;])) else: lines = file.readlines() data_x, data_y = [], [] for l in lines: row = l.strip().split() x = [float(ix) for ix in row[yyy_index+1:]] y = float(row[yyy_index]) data_x.append(x) data_y.append(y) # END for l in lines... num_rows = len(data_x) print(&quot;row size = &quot;, len(data_x[0])) given_fraction = selection.get(&quot;validation_part&quot;, 1.0) if given_fraction &gt; 0.9999: valid_x, valid_y = data_x, data_y else: n = int(num_rows * given_fraction) data_x, data_y = data_x[n:], data_y[n:] valid_x, valid_y = data_x[:n], data_y[:n] # END of if-else block print(&quot;size of x = &quot;, len(data_x)) print(&quot;size of y = &quot;, len(data_y)) tx = tf.convert_to_tensor(data_x, dtype=tf.float32) ty = tf.convert_to_tensor(data_y, dtype=tf.float32) vx = tf.convert_to_tensor(valid_x, dtype=tf.float32) vy = tf.convert_to_tensor(valid_y, dtype=tf.float32) return tx, ty, vx, vy # END of the function # load training data from the disk train_x, train_y, validate_x, validate_y = \ load_data_k( fname=&quot;data_file.csv&quot;, yyy_index=6, random_n_lines=90000, validation_part=0.2 ) print(&quot;training data size : &quot;, len(train_x)) print(&quot;validation data size : &quot;, len(validate_x)) predict_data = np.array([[7.042, 5.781, 5.399, 5.373, 5.423, -9.118, 5.488, 5.166, 4.852, 7.470, 6.452, 6.069, 0, 0, 0, 1, 0, 1, 1, 3, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) # Create Keras model model = Sequential() model.add(Dense(1, input_dim=40)) model.add(Dense(128)) model.add(Dense(128)) model.add(Dense(1)) # Gradient descent algorithm adam_opt = Adam(0.1) model.compile(loss='mse', optimizer=adam_opt) history = model.fit(train_x, train_y, epochs=500) prediction = model.predict(predict_data) print(prediction) </code></pre> <p>I want to add CSV logging for the training loss, validation loss, training accuracy, and validation accuracy.</p> <p>How can I do that?</p>
<p>Just use the <code>tf.keras.callbacks.CSVLogger</code> and any regression metric you want to log during training:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf model = tf.keras.Sequential() model.add(tf.keras.layers.Dense(1, input_dim=40)) model.add(tf.keras.layers.Dense(128)) model.add(tf.keras.layers.Dense(128)) model.add(tf.keras.layers.Dense(1)) adam_opt = tf.keras.optimizers.Adam(0.1) model.compile(loss='mse', optimizer=adam_opt, metrics=tf.keras.metrics.MeanSquaredError(name=&quot;mean_squared_error&quot;, dtype=None)) train_x = tf.random.normal((50, 40)) train_y = tf.random.normal((50, 1)) val_x = tf.random.normal((50, 40)) val_y = tf.random.normal((50, 1)) csv_logger = tf.keras.callbacks.CSVLogger('model_training.csv') history = model.fit(train_x, train_y, epochs=5, validation_data=(val_x, val_y), callbacks=[csv_logger]) </code></pre> <p><code>model_training.csv</code>:</p> <pre><code>epoch loss mean_squared_error val_loss val_mean_squared_error 0 304.349060 304.349060 69.584991 69.584991 1 105.304787 105.304787 170.063126 170.063126 2 175.232788 175.232788 7.874812 7.874812 3 104.159607 104.159607 320.626556 320.626556 4 194.709763 194.709763 1.438866 1.438866 </code></pre>
python|tensorflow|keras|logging|deep-learning
2
2,558
70,824,180
Get array with another array indexing with NumPy
<pre><code>arr_1 = np.array([5, 1, 6, 3, 3, 10, 3, 6, 12]) arr_2 = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90]) arr_idx_num_3 = np.where(arr_1 == 3)[0] print(arr_idx_num_3) ## [3 4 6] </code></pre> <p>How do I get the following NumPy array using &quot;arr_idx_num_3&quot;?</p> <pre><code>arr_2 = [40 50 70] </code></pre>
<p>Just use it like:</p> <pre><code>print(arr_2[arr_idx_num_3]) </code></pre> <p>output:</p> <pre><code>&gt;&gt;&gt; [40 50 70] </code></pre>
python|numpy
1
2,559
70,786,121
Why is my prediction function giving an error? ValueError: not enough values to unpack (expected 2, got 1)
<p>I'm trying to make prediction using the pre-trained model for binary segmentation using UNET and pytorch. Here is my code: model.eval() # Set model to evaluate mode</p> <pre><code>class SimDataset(Dataset): def __init__(self, path, transform=None, isMask=False): self.m = (&quot;test&quot;) self.path = path self.transform = transform self.isMask = isMask def __len__(self): return len(self.path) def __getitem__(self, idx): one_image = os.path.join(self.m, self.path[idx]) # preparing image path/location img_temp = Image.open(one_image) # load RGB input image if self.transform: image = self.transform(img_temp) input_image = np.array(img_temp).astype('float32') # converting one image to np array input_image = np.transpose(input_image, (2, 0 ,1)) # converting from hwc to chw [(256,256,3) =&gt; (3, 256, 256)] return [input_image] testlist = list(os.listdir(r&quot;test&quot;)) len(testlist) image_datasets = { 'testlist': testlist } dataset_sizes = { x: len(image_datasets[x]) for x in image_datasets.keys() } test_dataset = SimDataset(testlist, transform = trans, isMask=False) test_loader = DataLoader(test_dataset, batch_size=3, shuffle=False, num_workers=0) inputs, labels = next(iter(test_loader)) inputs = inputs.to(device) labels = labels.to(device) pred = model(inputs) pred = torch.sigmoid(pred) pred = pred.data.cpu().numpy() print(pred.shape) </code></pre> <p>But it is showing error saying -&gt; ValueError: not enough values to unpack (expected 2, got 1).</p>
<p>Your code expects <em>two</em> outputs from the data loader:</p> <pre class="lang-py prettyprint-override"><code>inputs, labels = next(iter(test_loader)) </code></pre> <p>However, your <code>__getitem__</code> method in your dataset, returns only a <em>single</em> output:</p> <pre class="lang-py prettyprint-override"><code>return [input_image] </code></pre> <p>Either you return <em>two</em> outputs from <code>__getitem__</code>, both images and labels. Or expect only a <em>single</em> output from the <code>test_loader</code>.</p>
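<p>As a sketch of the two options (a minimal illustration; it assumes your test images have no masks to load, and the variable names simply mirror the question):</p>
<pre><code># Option 1: return the image alone from __getitem__ ...
#     return input_image
# ... and expect a single batched tensor from the loader:
inputs = next(iter(test_loader))
inputs = inputs.to(device)
pred = torch.sigmoid(model(inputs))

# Option 2: if the dataset can also load a ground-truth mask, return a pair instead
#     return input_image, mask
# and keep the original two-value unpacking:
#     inputs, labels = next(iter(test_loader))
</code></pre>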
python|testing|pytorch|image-segmentation
0
2,560
51,999,924
Tensorflow Object Detection API - showing loss for training and validation on one graph
<p>I am playing with the <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" rel="nofollow noreferrer">Tensorflow Object Detection API</a> and training the Faster R-CNN network on my own dataset. I am checking the progress of learning in TensorBoard. All metrics are there, but is there a way to have both loss plots, for training and validation data, on one graph? Or do I have to dive into the TOD API code and modify it? I would like to avoid the second option because with every update of the API I would have to keep in mind that some of the code has been changed locally.</p> <p><a href="https://i.stack.imgur.com/NfU3b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NfU3b.png" alt="Loss plot"></a></p>
<p>The underlying data for the plots is saved under different tag names (<code>loss</code> vs <code>loss_1</code>). I believe TensorBoard does not natively support displaying different tags in one plot. There might be third-party extensions to do this.</p> <p>If different models used the same tag, the graphs would be combined by default (see: <a href="https://stackoverflow.com/questions/48951136/plot-multiple-graphs-in-one-plot-using-tensorboard">Plot multiple graphs in one plot using Tensorboard</a>).</p>
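<p>If you are willing to write the scalars yourself (a sketch using the TF1-style summary API, outside the Object Detection API's own training loop), the usual trick is to log both curves under the <em>same</em> tag into two run directories; TensorBoard then overlays them on one graph:</p>
<pre><code>import tensorflow as tf

loss_placeholder = tf.placeholder(tf.float32, shape=(), name="loss_value")
summary_op = tf.summary.scalar("loss", loss_placeholder)   # same tag for both runs

train_writer = tf.summary.FileWriter("logs/train")
eval_writer = tf.summary.FileWriter("logs/eval")

with tf.Session() as sess:
    # dummy loss values, stand-ins for the real training/validation losses
    for step, (train_loss, eval_loss) in enumerate([(1.0, 1.2), (0.8, 1.0), (0.6, 0.9)]):
        train_writer.add_summary(sess.run(summary_op, {loss_placeholder: train_loss}), step)
        eval_writer.add_summary(sess.run(summary_op, {loss_placeholder: eval_loss}), step)
</code></pre>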
tensorflow|tensorboard
1
2,561
51,577,885
Converting a list of numpy arrays to a normal array for CNN-Keras
<p>I have some images separated by folders. So I imported them and converted to them array of pixels. When I type in:</p> <pre><code>In [9]: X_train.shape out [9]: (7467,60,80,3) </code></pre> <p>I wanted to append this with the no. of classes, create a dataset and save as <code>.json</code> file and import in a fresh notebook and do image processing for my own project purpose. So I typed in this code:</p> <pre><code>In [10]: dataset = pd.DataFrame({'label': y_train, 'images': list(X_train)}, columns=['label', 'images']) </code></pre> <p>However when I type in:</p> <pre><code>In [11]: X_train Out [11]: array([[[0., 0., 0.], [0., 0., 0.], [0., 0., 0.], ..., [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], [[0., 0., 0.], [0., 0., 0.], [0., 0., 0.], ..., [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], [[0., 0., 0.], [0., 0., 0.], [0., 0., 0.], ..., [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], ..., </code></pre> <p>But when I import the json file and show:</p> <pre><code>In [2]: train=pd.read_json('train_file.json') train.head() Out [2]: image_no images 0 7468 [[[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0039215... 1 7469 [[[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0,... 10 7478 [[[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0,... 100 7568 [[[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0,... 1000 8468 [[[0.27058823530000004, 0.1843137255, 0.247058. </code></pre> <p>..</p> <p>And when I type in:</p> <pre><code>In [3]: train['images].values Out [3]: array([list([[[0.7411764706, 0.7607843137, 0.8274509804], `[0.7215686275000001, 0.7058823529, 0.7882352941], [0.7019607843, 0.6823529412, 0.7843137255], [0.7176470588, 0.7215686275000001, 0.8196078431], [0.8, 0.8352941176, 0.8549019608], [0.8352941176, 0.8666666667, 0.8666666667], [0.8509803922, 0.8745098039, 0.8666666667], [0.8549019608, 0.8745098039, 0.8666666667], [0.8431372549, 0.8666666667, 0.8666666667], [0.8235294118, 0.8705882353000001, 0.8588235294000001], [0.831372549, 0.8705882353000001, 0.8627450980000001], [0.8352941176, 0.831372549, 0.8549019608], [0.7686274510000001, 0.7686274510000001, 0.8117647059], [0.7098039216, 0.7254901961, 0.7803921569000001], [0.7019607843, 0.7333333333000001, 0.8], [0.7254901961, 0.7686274510000001, 0.8392156863], [0.7647058824, 0.7803921569000001, 0.8509803922], [0.7372549020000001, 0.7411764706, 0.8117647059], [0.7098039216, 0.7019607843, 0.7960784314], [0.6980392157, 0.6705882353, 0.8039215686000001], [0.6901960784000001, 0.6823529412, 0.8117647059], [0.6901960784000001, 0.6901960784000001, 0.8196078431], [0.6941176471, 0.6980392157, 0.831372549], [0.6980392157, 0.7058823529, 0.8352941176], [0.7254901961, 0.7490196078, 0.8352941176], [0.8, 0.831372549, 0.8745098039], [0.8431372549, 0.8784313725, 0.8862745098], [0.8509803922, 0.8823529412000001, 0.8862745098], [0.831372549, 0.8352941176, 0.8745098039], [0.7725490196, 0.7411764706, 0.8392156863], [0.7529411765, 0.7294117647, 0.8392156863], [0.7607843137, 0.7764705882, 0.8352941176], [0.8078431373, 0.8392156863, 0.8705882353000001], [0.8274509804, 0.8549019608, 0.8862745098], [0.8117647059, 0.8431372549, 0.8705882353000001], [0.7725490196, 0.8, 0.8352941176], [0.7529411765, 0.7764705882, 0.8431372549], [0.8117647059, 0.8352941176, 0.8862745098], [0.8745098039, 0.8980392157, 0.9176470588000001], [0.8862745098, 0.9098039216, 0.9058823529000001], [0.8823529412000001, 0.9058823529000001, 0.9019607843], [0.8784313725, 0.9098039216, 0.9058823529000001], [0.8666666667, 0.9137254902, 0.9058823529000001], [0.8627450980000001, 0.9176470588000001, 0.9098039216], [0.86274509....` </code></pre> 
<p>And when I type in:</p> <pre><code>In [4]: train['images'].shape Out [4]: (7467,) </code></pre> <p>I'm still able to plot these images using <code>plt.imshow()</code>, but when I try to directly do <code>model.fit(train['images'],y_train)</code> I get this error:</p> <blockquote> <p>ValueError: setting an array element with a sequence</p> </blockquote> <p>So where am I going wrong? Is it while dumping to the <code>.json</code> file, or how can I convert the column back to an array after importing the <code>json</code> file so the error is fixed?</p>
<p>Your <code>np arrays</code> are converted to lists when storing the dataframe as a <code>.json</code>. To feed them to your Keras model, you need to have them in one <code>array</code> of shape <code>(images, height, width, channels)</code>:</p> <pre><code>X_train = np.array(train['images'].tolist()) </code></pre>
python|numpy|keras|deep-learning|conv-neural-network
0
2,562
51,657,913
Tensorflow building error
<p>I got this error while building Tensorflow 1.1.0</p> <pre><code>Starting local Bazel server and connecting to it... ERROR: /home/bishal/.cache/bazel/_bazel_bishal/798d6395d959361055d9b5ddcd7dcd45/external/io_bazel_rules_closure/closure/testing/phantomjs_test.bzl:31:10: name 'set' is not defined ERROR: /home/bishal/.cache/bazel/_bazel_bishal/798d6395d959361055d9b5ddcd7dcd45/external/io_bazel_rules_closure/closure/private/defs.bzl:27:16: name 'set' is not defined ERROR: /home/bishal/.cache/bazel/_bazel_bishal/798d6395d959361055d9b5ddcd7dcd45/external/io_bazel_rules_closure/closure/compiler/closure_js_binary.bzl:216:13: name 'set' is not defined ERROR: /home/bishal/.cache/bazel/_bazel_bishal/798d6395d959361055d9b5ddcd7dcd45/external/io_bazel_rules_closure/closure/filegroup_external.bzl:23:16: name 'set' is not defined ERROR: error loading package '': Extension 'closure/filegroup_external.bzl' has errors Building: no action </code></pre> <p>I've used <code>bazel 0.16.0</code> for this. If this issue is because of not proper version of bazel, which version do I have to use to solve this issue?</p>
<p>You'll need to use <a href="https://github.com/bazelbuild/bazel/releases/tag/0.5.4" rel="nofollow noreferrer">Bazel 0.5.4</a> to build Tensorflow 1.1.0. Please note that 0.5.4 is very old -- the current release is 0.16.0 as of the time of writing this answer.</p> <p>Do you need to specifically build Tensorflow 1.1.0?</p>
tensorflow|bazel
2
2,563
64,568,948
Generating a dictionary of column names based on a condition among columns of a dataframe
<p>I have the following data frame:</p> <pre><code> a_11 b_14 c_13 d_12 AC True False False False BA True False False True AA False False False False </code></pre> <p>I want a dictionary with the index as the key and the list of column names which have True values as the value, i.e.</p> <pre><code>{ AC : [a_11], BA : [a_11,d_12], AA : [] } </code></pre> <p>How am I supposed to proceed with this problem?</p> <h3>edit: the column names are strings, not single characters.</h3>
<p>If performance is important, use a dictionary comprehension over the transposed DataFrame and convert the matching column names to a list:</p> <pre><code>d = {k: v.index[v].tolist() for k, v in df.T.items()} print (d) {'AC': ['a_11'], 'BA': ['a_11', 'd_12'], 'AA': []} </code></pre> <p>Another idea with <code>zip</code>, converting the values to a 2d numpy array with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_numpy.html" rel="nofollow noreferrer"><code>DataFrame.to_numpy</code></a>:</p> <pre><code>d = {k: df.columns[v].tolist() for k, v in zip(df.index, df.to_numpy())} print (d) {'AC': ['a_11'], 'BA': ['a_11', 'd_12'], 'AA': []} </code></pre>
python|pandas|dataframe|dictionary
1
2,564
64,603,437
Handyspark Dataframe works on driver or executor
<p>A HandySpark dataframe in PySpark is a bridge between a PySpark dataframe and a pandas dataframe. So does it reside on the executor nodes or the driver node?</p>
<p>HandySpark isn't a &quot;bridge&quot; - it's a wrapper around a Spark DataFrame which gives it a pandas-like API. Therefore it executes on the executors; there would be little point in the project if it executed on the driver, as you could always just call <code>toPandas</code> on your DataFrame to pull it back to the driver (<em>don't do this!</em>)</p> <p>That all said, HandySpark seems to be abandoned, the last commit being eighteen months ago in May 2019. I'd suggest looking at <a href="https://koalas.readthedocs.io/en/latest/" rel="nofollow noreferrer">Koalas</a> instead, which does the same thing but is actively developed.</p>
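<p>As a minimal sketch of what the switch looks like (assuming the <code>koalas</code> package is installed on top of PySpark; the column names are made up):</p>
<pre><code>import databricks.koalas as ks

# A Koalas DataFrame is backed by Spark but exposes a pandas-like API,
# so the heavy lifting still happens on the executors.
kdf = ks.DataFrame({"value": [1, 2, 3, 4], "group": ["a", "b", "a", "b"]})
print(kdf.groupby("group")["value"].mean())
</code></pre>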
pandas|dataframe|pyspark
0
2,565
64,425,696
Equivalent of np.resize in TensorFlow
<p>I have a 1D array <code>x</code> and want to reshape it to the requested shape in the same way that <a href="https://numpy.org/doc/stable/reference/generated/numpy.resize.html" rel="nofollow noreferrer">np.resize</a> does, i.e. if there are too many elements in <code>x</code> they are dropped, and if there are too few, they are repeated cyclically, e.g.</p> <pre><code>x = np.array([1, 2, 3, 4, 5, 6]) y = np.resize(x, shape=(2, 2)) assert y == np.array([[1, 2], [3, 4]]) z = np.resize(x, shape=(3, 3)) assert z == np.array([[1, 2, 3], [4, 5, 6], [1, 2, 3]]) </code></pre> <p>I wonder how to do it using only tensor operators from TensorFlow.</p>
<p>I am not sure that this is doable in a single operation in TF, but one can write a function using <code>tf.tile</code> and a crop, and then reshape the result.</p>
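<p>A minimal sketch of that idea (assuming TF 2.x eager mode and a 1D input, mirroring the example in the question):</p>
<pre><code>import tensorflow as tf

def tf_resize(x, shape):
    # Mimic np.resize for a 1D tensor: repeat cyclically, then truncate and reshape.
    total = 1
    for dim in shape:
        total *= dim
    n = int(x.shape[0])
    reps = (total + n - 1) // n               # ceiling division
    tiled = tf.tile(x, [reps])                # repeat the vector enough times
    return tf.reshape(tiled[:total], shape)   # crop to the exact size and reshape

x = tf.constant([1, 2, 3, 4, 5, 6])
print(tf_resize(x, (2, 2)))   # [[1 2] [3 4]]
print(tf_resize(x, (3, 3)))   # [[1 2 3] [4 5 6] [1 2 3]]
</code></pre>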
tensorflow|tensor
1
2,566
64,401,900
Python, x-axis title is overlapping the tick labels in matplotlib
<p>I'm plotting a graph and the x-axis label is not visible in the graph.</p> <p><a href="https://i.stack.imgur.com/WDuWN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WDuWN.png" alt="enter image description here" /></a></p> <p>I have tried to solve it by adding the</p> <pre><code>ax.xaxis.labelpad = -10 # Adjust x-axis label position </code></pre> <p>Instead the x-label will overlap the ticker label</p> <p><a href="https://i.stack.imgur.com/DGuL4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DGuL4.png" alt="enter image description here" /></a></p> <p><strong>How can this be adjusted to show both x-axis label and x-ticker labels within the plot figure?</strong></p> <hr /> <p><strong>Full Code to replicate graph:</strong></p> <pre><code>################################# ### Modules imported used ### ################################# import pandas as pd import numpy as np from datetime import datetime from datetime import date import time import matplotlib.pyplot as plt import matplotlib import matplotlib.dates as mdates # file_path_setup = 'G:/Stocks/PowerPivotApps/Price download/' # Performance_History = pd.read_csv(file_path_setup + 'Performance.txt', dtype=str, sep=',') # Portfolio = Performance_History.loc[Performance_History['ExecutionType'] == 'All Portfolios'] # Portfolio = Performance_History.loc[Performance_History['ExecutionType'] == 'Selected Portfolios'] # remove &quot;# set minimum level for performance time&quot; #Portfolios_Nr_of_Stocks = Portfolio['NrOfStocks'] #Portfolio_Performance_Time = Portfolio['PerformanceTime'] #Portfolio_Date = Portfolio['Date'] Portfolio_Date = ['2020-08-31','2020-09-01','2020-09-02','2020-09-02','2020-09-03','2020-09-04','2020-09-07','2020-09-08','2020-09-09','2020-09-09','2020-09-10','2020-09-11','2020-09-14','2020-09-15','2020-09-16','2020-09-17','2020-09-18','2020-09-21','2020-09-22','2020-09-22','2020-09-23','2020-09-24','2020-09-25','2020-09-28','2020-09-29','2020-09-30','2020-10-01','2020-10-02','2020-10-05','2020-10-06','2020-10-07','2020-10-08','2020-10-08','2020-10-09','2020-10-12','2020-10-13','2020-10-14','2020-10-15','2020-10-16'] Portfolio_Performance_Time =['00:11:11','00:11:07','00:11:16','00:10:42','00:10:54','00:10:46','00:10:27','00:11:23','00:11:35','00:10:23','00:10:51','00:41:22','00:11:05','00:11:15','00:10:50','00:10:41','00:19:47','00:10:43','00:10:48','00:11:12','00:11:05','00:10:45','00:11:02','00:10:57','00:11:01','00:15:17','00:14:33','00:18:49','00:14:28','00:20:45','00:14:29','00:14:45','00:17:52','00:14:37','00:14:08','00:15:05','00:14:46','00:14:39','00:14:40'] Portfolios_Nr_of_Stocks = ['621','619','617','619','622','622','622','621','622','622','622','613','622','621','621','607','621','622','621','622','620','620','622','620','620','680','679','680','681','488','681','681','680','678','678','676','678','676','676'] # Convert To integer numberofstocks = [int(stock) for stock in Portfolios_Nr_of_Stocks] # Convert to time def get_sec(time_str): &quot;&quot;&quot;Get Seconds from time.&quot;&quot;&quot; h, m, s = time_str.split(':') return int(h) * 3600 + int(m) * 60 + int(s) PerformanceTime = [get_sec(t) for t in Portfolio_Performance_Time] # print(type(numberofstocks)) # print type # convert to date series date_portfolio = [datetime.strptime(d, '%Y-%m-%d') for d in Portfolio_Date] # https://matplotlib.org/gallery/api/two_scales.html # https://cmdlinetips.com/2019/10/how-to-make-a-plot-with-two-different-y-axis-in-python-with-matplotlib/ # create figure and axis objects with 
subplots() fig,ax = plt.subplots(figsize=(12, 8)) # figsize -&gt; size of the plot window # make a plot ax.plot(date_portfolio, PerformanceTime, color=&quot;red&quot;, marker=&quot;x&quot;) # set x-axis label ax.set_xlabel(&quot;Date&quot;, fontsize=14) # set y-axis label ax.set_ylabel(&quot;Performance Time&quot;,color=&quot;red&quot;,fontsize=14) # set title ax.set_title(&quot;Execution History&quot;,fontsize=20, loc=&quot;center&quot;, pad=10) # format y-axis label to hh:mm:ss formatter_yx1 = matplotlib.ticker.FuncFormatter(lambda s, x: time.strftime('%H:%M:%S', time.gmtime(s))) ax.yaxis.set_major_formatter(formatter_yx1) # rotate x-axis lables and adjust size plt.xticks(rotation=90, ha='right') # plt.xticks(rotation=90, ha='right', fontsize='x-small') # Small font text # set minimum level for performance time, y-axis 1 ax.set_ylim([min(PerformanceTime)-100,25*60]) # -100 -&gt; set minimum. 25*60 -&gt; Set maximum # twin object for two different y-axis on the sample plot ax2=ax.twinx() # make a plot with different y-axis using second axis object ax2.plot(date_portfolio, numberofstocks,color=&quot;blue&quot;,marker=&quot;o&quot;) # ax2.set_ylim([620, 680]) ax2.set_ylabel(&quot;Nr Of Stocks&quot;,color=&quot;blue&quot;,fontsize=14) # set minimum level for performance time, y-axis 2 ax2.set_ylim([600, max(numberofstocks)+10]) # -100 -&gt; set minimum. 25*60 -&gt; Set maximum # set date interval ax.xaxis.set_major_locator(mdates.DayLocator(interval=7)) # max interval ax.xaxis.set_minor_locator(mdates.DayLocator(interval=1)) # minimum interval ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d')) # set date format ax.xaxis.labelpad = -10 # Adjust x-axis label position # Plot graph plt.show() </code></pre>
<p>You could use &quot;<a href="https://matplotlib.org/tutorials/intermediate/tight_layout_guide.html" rel="nofollow noreferrer">Tight Layout</a>&quot; function in matplotlib to solve the issue.</p> <p>Add the line before you plot the graph, where <code>h_pad</code> will adjust the height, <code>w_pad</code> will adjust the width.</p> <pre><code># Adjust x-axis margins plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=5.0) </code></pre> <p>And remove this part:</p> <pre><code>ax.xaxis.labelpad = -10 # Adjust x-axis label position </code></pre> <p><strong>Result:</strong></p> <p><a href="https://i.stack.imgur.com/0Gfvi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0Gfvi.png" alt="enter image description here" /></a></p>
python|pandas|matplotlib|graph|adjustment
3
2,567
64,212,463
Combine series by date
<p>I have the following 2 series of stocks in a single Excel file:</p> <p><a href="https://i.stack.imgur.com/nY0bj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nY0bj.png" alt="enter image description here" /></a></p> <p>Can they be combined using the date as the index?</p> <p>The result should look like this:</p> <p><a href="https://i.stack.imgur.com/JlBV3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JlBV3.png" alt="enter image description here" /></a></p>
<p>I am trying this:</p> <pre><code>df3 = pd.concat([df1, df2]).sort_values('Date').reset_index(drop=True) </code></pre> <p>or</p> <pre><code>df3 = df1.append(df2).sort_values('Date').reset_index(drop=True) </code></pre>
python|pandas|dataframe|indexing
1
2,568
47,718,865
How to apply a function to multiple columns of a pandas DataFrame in parallel
<p>I have a pandas DataFrame with hundreds of thousands of rows, and I want to apply a time-consuming function on multiple columns of that DataFrame in parallel.</p> <p>I know how to apply the function serially. For example:</p> <pre><code>import hashlib import pandas as pd df = pd.DataFrame( {'col1': range(100_000), 'col2': range(100_000, 200_000)}, columns=['col1', 'col2']) def foo(col1, col2): # This function is actually much more time consuming in real life return hashlib.md5(f'{col1}-{col2}'.encode('utf-8')).hexdigest() df['md5'] = df.apply(lambda row: foo(row.col1, row.col2), axis=1) df.head() # Out[5]: # col1 col2 md5 # 0 0 100000 92e2a2c7a6b7e3ee70a1c5a5f2eafd13 # 1 1 100001 01d14f5020a8ba2715cbad51fd4c503d # 2 2 100002 c0e01b86d0a219cd71d43c3cc074e323 # 3 3 100003 d94e31d899d51bc00512938fc190d4f6 # 4 4 100004 7710d81dc7ded13326530df02f8f8300 </code></pre> <p>But how would I apply function <code>foo</code> parallel, utilizing all available cores on my machine?</p>
<p>The easiest way to do this is using <a href="https://docs.python.org/3/library/concurrent.futures.html" rel="nofollow noreferrer"><code>concurrent.futures</code></a>.</p> <pre><code>import concurrent.futures with concurrent.futures.ProcessPoolExecutor(16) as pool: df['md5'] = list(pool.map(foo, df['col1'], df['col2'], chunksize=1_000)) df.head() # Out[10]: # col1 col2 md5 # 0 0 100000 92e2a2c7a6b7e3ee70a1c5a5f2eafd13 # 1 1 100001 01d14f5020a8ba2715cbad51fd4c503d # 2 2 100002 c0e01b86d0a219cd71d43c3cc074e323 # 3 3 100003 d94e31d899d51bc00512938fc190d4f6 # 4 4 100004 7710d81dc7ded13326530df02f8f8300 </code></pre> <p>Specifying <code>chunksize=1_000</code> makes this run faster because each process will process <code>1000</code> rows at a time (i.e. you will pay the overhead of initializing a process only once per 1000 rows).</p> <p>Note that this will only work in Python 3.2 or newer.</p>
python|pandas|concurrent.futures
2
2,569
47,898,147
Tensorflow Module Import error: AttributeError: module 'tensorflow.python.ops.nn' has no attribute 'rnn_cell'
<p>When attempting to run my RNN call, I call tf.nn.rnn_cell and I receive the following error: </p> <pre><code>AttributeError: module 'tensorflow.python.ops.nn' has no attribute 'rnn_cell' </code></pre> <p>Which is odd, because I'm sure I imported everything correctly: </p> <pre><code>from __future__ import print_function, division from tensorflow.contrib import rnn import numpy as np import tensorflow as tf import matplotlib.pyplot as plt </code></pre> <p>But looking at the docs, things have moved around between tensorflow versions. </p> <p>What would you all recommend to fix this? </p> <p>The line I'm getting the error against: </p> <pre><code>state_per_layer_list = tf.unstack(init_state, axis=0) rnn_tuple_state = tuple( [tf.nn.rnn_cell.LSTMStateTuple(state_per_layer_list[idx][0], state_per_layer_list[idx][1]) for idx in range(num_layers)] ) </code></pre> <p>Specifically: </p> <pre><code>tf.nn.rnn_cell </code></pre> <p>I'm using Anaconda 3 to manage all of this, so the dependencies should all be taken care of. I have already tried working around a rank/shape error with Tensor shapes which took ages to resolve. </p> <p>Cheers in advance. </p>
<p>Replace <code>tf.nn.rnn_cell</code> with <code>tf.contrib.rnn</code></p> <p>Since version 1.0, <code>rnn</code> is implemented as part of the contrib module.</p> <p>More information can be found here: <a href="https://www.tensorflow.org/api_guides/python/contrib.rnn" rel="nofollow noreferrer">https://www.tensorflow.org/api_guides/python/contrib.rnn</a></p>
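<p>Applied to the line from the question, the replacement would look like the sketch below (the placeholder shape is an assumption made only so the snippet runs stand-alone; in your code <code>init_state</code> and <code>num_layers</code> already exist):</p>
<pre><code>import tensorflow as tf

num_layers, batch_size, state_size = 2, 4, 8
init_state = tf.placeholder(tf.float32, [num_layers, 2, batch_size, state_size])

state_per_layer_list = tf.unstack(init_state, axis=0)
rnn_tuple_state = tuple(
    tf.contrib.rnn.LSTMStateTuple(state_per_layer_list[idx][0],
                                  state_per_layer_list[idx][1])
    for idx in range(num_layers)
)
</code></pre>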
python|tensorflow|python-import|attributeerror|rnn
3
2,570
48,899,041
Separate letters and digits using regex with pandas
<p>I have a column called 'value' from a pandas dataframe, df, that has a mixture of numbers and words. It looks something like this:</p> <pre><code> VALUE 0 done 1 Yes 2 3.45 3 2bc </code></pre> <p>I want to split the column up to 2 columns where the left one only has letters and the right one only numbers. Ideally, the result should be:</p> <pre><code> 0 1 0 done NaN 1 Yes NaN 2 NaN 3.45 3 bc 2 </code></pre> <p>I tried using the .str.extract pandas function like so:</p> <pre><code>df['value'].str.extract('([A-Za-z]+)?([0-9]*[.]?[0-9]+)') </code></pre> <p>The result I get is similar to the following:</p> <pre><code> 0 1 0 NaN NaN 1 NaN NaN 2 NaN 3.45 3 NaN NaN </code></pre> <p>where the words do not show up in column 0 as they should.</p> <p>Does anyone know the reason why or a better way to do such an operation in pandas/python?</p>
<p>Fix your pattern, and use <code>str.extractall</code>:</p> <pre><code>(df.VALUE.str.extractall('(\d+(?:\.\d+)?)|([^\d.]+)') .unstack() .groupby(level=0, axis=1) .first()) 0 1 0 NaN done 1 NaN Yes 2 3.45 NaN 3 2 bc </code></pre>
python|regex|string|pandas
3
2,571
58,845,305
Pandas - date range with monthly rollover, weekmask and list of holidays
<p>I looked for a similar problem but I could not find an answer to my issue. I am trying to generate a date range in Pandas with a monthly or quarterly rollover with respect to a weekmask and a list of holidays. So far I have managed to make a range, but with daily frequency. Is there any way I could make these dates roll over monthly or quarterly (not daily)?</p> <pre><code>import pandas as pd import numpy as np weekmask_pd = 'Mon Tue Wed Thu Fri' holidays_pd = ['2019-11-15', '2019-12-13'] bday_pd = pd.offsets.CustomBusinessDay(holidays=holidays_pd, weekmask=weekmask_pd) start_date = pd.Timestamp('2019-11-13') end_date = pd.Timestamp('2020-11-13') dts = pd.bdate_range(start_date, end_date, freq=(bday_pd)) </code></pre> <p>The result of the above code is as follows:</p> <pre><code>DatetimeIndex(['2019-11-13', '2019-11-14', '2019-11-18', '2019-11-19', '2019-11-20', '2019-11-21', '2019-11-22', '2019-11-25', '2019-11-26', '2019-11-27', ... '2020-11-02', '2020-11-03', '2020-11-04', '2020-11-05', '2020-11-06', '2020-11-09', '2020-11-10', '2020-11-11', '2020-11-12', '2020-11-13'], dtype='datetime64[ns]', length=261, freq='C') </code></pre> <p>What I would like to receive is:</p> <pre><code>DatetimeIndex(['2019-11-13', '2019-12-16', '2020-01-13', '2020-02-13', '2020-03-13', '2020-04-13', '2020-05-13', '2020-06-15', '2020-07-13', '2020-08-13', ... </code></pre> <p>Any help please?</p>
<p>I think I found a nice solution provided by @MaxU at <a href="https://stackoverflow.com/questions/48454189/pandas-date-range-for-six-monthly-values">Pandas date_range for six-monthly values</a> However it does not behave as expected because it skips start_date in 1) and 2) solution while it returns an error in 3) solution.</p> <p>1) </p> <pre><code>dts2 = pd.bdate_range(start_date, end_date, freq='CBMS', weekmask=weekmask_pd, holidays=holidays_pd) + pd.offsets.Day(start_date.day-1) </code></pre> <p>2)</p> <pre><code>dts3 = pd.bdate_range(start_date, end_date, freq='CBMS', weekmask=weekmask_pd, holidays=holidays_pd)+ datetime.timedelta(days=start_date.day-1) </code></pre> <p>1) and 2) return the same date range:</p> <pre><code>DatetimeIndex(['2019-12-14', '2020-01-13', '2020-02-15', '2020-03-14', '2020-04-13', '2020-05-13', '2020-06-13', '2020-07-13', '2020-08-15', '2020-09-13', '2020-10-13', '2020-11-14'], dtype='datetime64[ns]', freq=None) </code></pre> <p>3)</p> <pre><code>dts3 = pd.bdate_range(start_date, end_date, freq='CBMS-{}'.format(start_date.day), weekmask=weekmask_pd, holidays=holidays_pd) </code></pre> <p>returns:</p> <pre><code>ValueError: invalid custom frequency string: CBMS-13 </code></pre> <p>I would appreciate any comments or hint how to make this date range include start_date and how to make quarterly, semiannually or annually rollovers as I do not see any custom frequency string for these.</p>
python|pandas|time-series
0
2,572
58,912,108
Continuously calculating averages over past intervals w/ Pandas DataFrame
<p>I believe that my problem is really straightforward and there must be a really easy way to solve this issue; however, as I don't feel very confident working with timestamps, I could not sort out the problem on my own.</p> <p>I made the following example, which represents a simple case of what I have been working on. There, you can see that I have made up a dataframe consisting of a speed signal (which is an input) over a period of one hour.</p> <pre><code>import pandas as pd import numpy as np start = pd.Timestamp('2019-11-15T16:00') end = pd.Timestamp('2019-11-15T17:00') t = np.linspace(start.value, end.value, 60*60+1) data = pd.DataFrame([]) data['Timestamp'] = pd.to_datetime(t) noise = np.random.normal(0,1,3601) data['Speed'] = 10*abs(np.random.randn(3601))+noise </code></pre> <p>I am going to implement a controller, which will limit that speed signal, but that is not in the scope of the question. So what I am looking for is a way in which I can loop through the column <code>for i,val in enumerate(data['Speed'].values):</code> and calculate the mean speed over the last 10 seconds for each loop. So the idea is, for each new iteration, to calculate the mean over the past 10 values.</p> <p>Hope that I managed to be succinct and precise. I would really appreciate your help on this one! Suggestions of what to look up are also welcome.</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.window.Rolling.mean.html#pandas.core.window.Rolling.mean" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.window.Rolling.mean.html#pandas.core.window.Rolling.mean</a></p> <pre><code>data['Speed_10s_mean'] = data['Speed'].rolling(10).mean() </code></pre> <p>result</p> <pre><code> Timestamp Speed Speed_10s_mean 0 2019-11-15 16:00:00 6.467616 NaN 1 2019-11-15 16:00:01 1.233462 NaN 2 2019-11-15 16:00:02 9.136592 NaN 3 2019-11-15 16:00:03 18.617069 NaN 4 2019-11-15 16:00:04 7.628102 NaN 5 2019-11-15 16:00:05 11.840941 NaN 6 2019-11-15 16:00:06 7.788474 NaN 7 2019-11-15 16:00:07 13.069130 NaN 8 2019-11-15 16:00:08 5.549147 NaN 9 2019-11-15 16:00:09 0.596765 8.192730 10 2019-11-15 16:00:10 13.273170 8.873285 11 2019-11-15 16:00:11 19.339124 10.683851 12 2019-11-15 16:00:12 18.659298 11.636122 13 2019-11-15 16:00:13 4.094160 10.183831 14 2019-11-15 16:00:14 13.240686 10.745089 15 2019-11-15 16:00:15 17.535431 11.314539 16 2019-11-15 16:00:16 28.936041 13.429295 17 2019-11-15 16:00:17 6.081373 12.730520 18 2019-11-15 16:00:18 16.009562 13.776561 19 2019-11-15 16:00:19 1.101115 13.826996 ... </code></pre>
python|pandas|dataframe
1
2,573
58,647,340
Finding intersection of pandas data frame index in groupby
<p>I am using Python and have a data frame with a datetime index, a grouping variable (gvar) and a value variable (x). I would like to find all the common datetimes between the groups.</p> <p>I already have a solution using functools, but I am seeking a way to do it using pandas functionalities only (if possible).</p> <pre><code>import functools import pandas as pd gvar = ['A', 'A', 'A', 'B', 'B', 'B'] x = [100, 200, 100, 200 , 100, 200] ind = ['2018-01-01','2018-01-02', '2018-01-03', '2018-01-03', '2018-01-04', '2018-01-05' ] df = pd.DataFrame(data={'gvar':gvar, 'x': x}, index=pd.to_datetime(ind)) common_time = functools.reduce(lambda x, y: pd.np.intersect1d(x, y), [df[df.gvar == x].index for x in set(df.gvar)]) common_time Out[39]: array(['2018-01-03T00:00:00.000000000'], dtype='datetime64[ns]') </code></pre> <p>All suggestions are welcome.</p>
<p>This should do it:</p> <pre><code>&gt;&gt;&gt; df.reset_index().loc[df['gvar'].reset_index().drop_duplicates().duplicated('index'),'index'].tolist() </code></pre> <p>Returning:</p> <pre><code>[Timestamp('2018-01-03 00:00:00')] </code></pre> <p>And if you need the corresponding groups or values:</p> <pre><code>&gt;&gt;&gt;df[df.index.isin(df.reset_index().loc[df['gvar'].reset_index().drop_duplicates().duplicated('index'),'index'].tolist())] </code></pre> <p>Giving you:</p> <pre><code> gvar x 2018-01-03 A 100 2018-01-03 B 200 </code></pre>
python|pandas
1
2,574
59,034,759
Count how many times value A exists in dataframe rows, how many times value B and how many times value A and B
<p>I have a dataframe &quot;dfTags&quot; with 140.000 rows (all lowercase), number of comma separated values in column &quot;tags&quot; can range from 71 to 1. But column tags is one single string, Pandas does not know arrays or lists:</p> <pre><code>index tags 0 a, b, c, aa, bb, 2019 1 a, d, 18, gb 2 aa, a, dd, fb, la 3 aa, d, ddaa, b, k, l </code></pre> <p>and a set &quot;tagTuples&quot; containing 850.000 sorted tuples (all lowercase) build from the tags in each row like:</p> <pre><code>(a, b), (b, c), (aa, c), (aa, bb), (2019, bb), (a, d), (18, d), (18, gb), (a, aa), (a, dd), (dd, fb), (fb, la), (aa, d), (d, ddaa), ... </code></pre> <p>I used a set because I removed every tag that occurs only once and then just added every created tuple, automatically removing duplicates.</p> <p>For every tuple in &quot;tagTuples&quot; I need:</p> <ul> <li><p>e.g. (a, b)</p> </li> <li><p>how many rows in column &quot;tags&quot; contain &quot;a&quot;? (3)</p> </li> <li><p>how many rows in column &quot;tags&quot; that contain &quot;a&quot; also contain &quot;b&quot;? (1)</p> </li> <li><p>= 1/3 =&gt; 0,33</p> </li> <li><p>how many rows in column &quot;tags&quot; contain &quot;b&quot;? (2)</p> </li> <li><p>how many rows in column &quot;tags&quot; that contain &quot;b&quot; also contain &quot;a&quot;? (1)</p> </li> <li><p>= 1/2 =&gt; 0,5</p> <p>resulting in an edge weight between a&lt;&gt;b = (0,33 + 0,5)*100 = 83% (modified Jaccard index)</p> </li> </ul> <p>each result should than be pushed into a dataframe dfTagTuple</p> <pre><code>dfTagTuple = pd.DataFrame(columns=[&quot;Source&quot;, &quot;Target&quot;, &quot;Weight&quot;]) </code></pre> <p>where Source = tuple[0], Target = tuple[1], Weight = Edge Weight</p> <p>so that I get Edge connections between each tag with the edge weight to visualize them in Gephi, creating a tag network.</p> <p>But the tags are of type &quot;object&quot; because Pandas doesn't know arrays. So how can I check each tuple for that formula without counting &quot;aa&quot;/&quot;ddaa&quot;/&quot;la&quot; when I check if row[&quot;tags&quot;] contains &quot;a&quot;?</p> <p>And how can I perform those 4 checks and getting the endresult (0,833..) per tuple in a performant way?</p> <pre><code>def calc_distance(tagLeft, tagRight): # how many times does &quot;a&quot; appear in tags per row? onlyTagLeft = ?? # # how many times does &quot;b&quot; appear in tags per row? onlyTagRight = ?? # how many times does &quot;a&quot; and &quot;b&quot; appear together in tags per row? bothTags = ?? edgeWeight = ((bothTags / onlyTagLeft) + (bothTags / onlyTagRight)) * 100 # print(tagLeft, &quot;#&quot;, tagRight, edgeWeight) print(&quot;{}: {}, {}: {}, bothTags: {}, weight: {}&quot;.format(tagLeft, onlyTagLeft, tagRight, onlyTagRight, bothTags, edgeWeight)) df = pd.DataFrame([[&quot;a, b, c, aa, bb, 2019&quot;], [&quot;a, d, 18, gb&quot;], [&quot;aa, a, dd, fb, la&quot;], [&quot;aa, d, ddaa, b, k, l&quot;]], columns=[&quot;tags&quot;]) tagSet = {('aa', 'd'), ('a', 'aa'), ('a', 'd'), ('a', 'b')} for tagTuple in tagSet: calc_distance(tagTuple[0], tagTuple[1]) </code></pre>
<p>This is not a complete answer, but for every tag tuple (<code>tt</code>) it will give you how many times the first element appears and how many times both elements appear together, and from there you can do your calculations:</p> <pre><code>import pandas as pd df = pd.DataFrame({'tags': [['a', 'b', 'c', 'aa'], ['a', 'd'], ['aa', 'a', 'dd']]}) tt = [('a', 'b'), ('aa', 'c')] for t in tt: el_1 = t[0] el_2 = t[1] only_el_1 = df.tags.apply(lambda x: el_1 in x).sum() both_el = df.tags.apply(lambda x: (el_1 in x) and (el_2 in x)).sum() print("First element of tuple {} is contained {} times and both elements are contained {} times".format(t, only_el_1, both_el)) </code></pre> <p>Hope it helps</p>
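<p>Building on the snippet above, here is a rough, untested sketch of how the full edge weight from the question could be assembled (the variable names are illustrative, and it assumes the tags column already holds lists):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'tags': [['a', 'b', 'c', 'aa'], ['a', 'd'], ['aa', 'a', 'dd']]})
tag_tuples = [('a', 'b'), ('aa', 'c')]

rows = []
for t1, t2 in tag_tuples:
    n1 = df.tags.apply(lambda tags: t1 in tags).sum()                      # rows containing t1
    n2 = df.tags.apply(lambda tags: t2 in tags).sum()                      # rows containing t2
    n12 = df.tags.apply(lambda tags: (t1 in tags) and (t2 in tags)).sum()  # rows containing both
    if n1 and n2:
        weight = (n12 / n1 + n12 / n2) * 100   # modified Jaccard index from the question
        rows.append((t1, t2, weight))

dfTagTuple = pd.DataFrame(rows, columns=['Source', 'Target', 'Weight'])
print(dfTagTuple)
</code></pre>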
string|pandas|dataframe|count|csv
0
2,575
58,808,798
pandas dataframe, regrouping
<p>I have the following sample dataset:</p> <pre><code>import pandas as pd data = {'Sentences':['Sentence1', 'Sentence2', 'Sentence3', 'Sentences4', 'Sentences5', 'Sentences6','Sentences7', 'Sentences8'],\ 'Start_Time':[10,15,77,120,150,160,176,188],\ 'End_Time': [12,17,88,128,158,168,182,190],\ 'cps': [3,4,5,6,2,4,5,6]} df = pd.DataFrame(data) print(df) </code></pre> <p><a href="https://i.stack.imgur.com/gUM99.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gUM99.png" alt="enter image description here"></a></p> <p>Basically: Sentences, their start and end time, and the character per second.</p> <p>Now, I also have a list:</p> <pre><code>time_list = [9,80,161,200] </code></pre> <p>Based on that list, I would like to regroup the sentences. The list lists the start and end-times of each group, i.e. </p> <ul> <li>9 to 90: Sentences 1-3 (3 because majority of its time in that group)</li> <li>90 to 161: Sentences 4-5 (sentence 6 does not belong in this group since the majority of its time is not in the group)</li> <li>161 to 200: Sentences 6 (majority in the group), and Sentences 7-8</li> </ul> <p>This is what I have done so far:</p> <pre><code>text = df["Sentences"].tolist() df_text = pd.DataFrame(columns=['Start', 'End', 'Text']) switch = 1 collect_sentence = "" for i_start, time_start in enumerate(df["Start_Time"]): time_end = df["End_Time"][i_start] if i_start &gt; 0: time_list_start = time_list[switch-1] time_list_end = time_list[switch] if time_start &gt;= time_list_start and time_end &lt;= time_list_end: collect_sentence= collect_sentence + text[i_start] if time_start &gt;= time_list_start and time_end &gt; time_list_end and time_start &lt; time_list_end: duration_before = time_list_end - time_start duration_after = time_end - time_list_end if duration_after &lt; duration_before: collect_sentence + text[i_start] else: df_text = df_text.append({ 'Start': int(time_list_start), 'End': int(time_list_end), \ 'Text': collect_sentence}, ignore_index = True) switch += 1 collect_sentence = text[i_start] if time_start &gt; time_list_end: df_text = df_text.append({ 'Start': int(time_list_start), 'End': int(time_list_end), \ 'Text': collect_sentence}, ignore_index = True) switch += 1 collect_sentence = text[i_start] </code></pre> <p><a href="https://i.stack.imgur.com/Z65ID.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z65ID.png" alt="enter image description here"></a></p> <p>As you can see the result is not what it should be. I feel like this is a bit of a mess currently. </p>
<p>Use:</p> <pre><code>mean_time=df[['Start_Time','End_Time']].mean(axis=1).rename('Interval Time') labels = ["{0}-{1}".format(time_list[i], time_list[i+1]) for i in range(len(time_list)-1)] new_df= ( df.groupby(pd.cut(mean_time,bins=time_list, labels=labels,include_lowest=True)) .Sentences .agg(','.join) .reset_index()) print(new_df) Interval Time Sentences 0 9-90 Sentence1,Sentence2,Sentence3 1 90-161 Sentences4,Sentences5 2 161-200 Sentences6,Sentences7,Sentences8 </code></pre> <hr> <p>Using <code>time_list = [9,80,161,200]</code>:</p> <pre><code> Interval Time Sentences 0 9-80 Sentence1,Sentence2 1 80-161 Sentence3,Sentences4,Sentences5 2 161-200 Sentences6,Sentences7,Sentences8 </code></pre> <p>If you prefer create a list:</p> <pre><code>new_df= ( df.groupby(pd.cut(mean_time,time_list,right=False, labels=labels,include_lowest=True)) .Sentences .agg(list) .reset_index()) print(new_df) </code></pre> <p><strong>Output:</strong></p> <pre><code> Interval Time Sentences 0 9-80 [Sentence1, Sentence2] 1 80-161 [Sentence3, Sentences4, Sentences5] 2 161-200 [Sentences6, Sentences7, Sentences8] </code></pre>
python-3.x|pandas
2
2,576
58,662,187
Pandas promotes int to float when filtering
<p>Pandas seems to be promoting an <code>int</code> to a <code>float</code> when filtering. I've provided a simple snippet below but I've got a much more complex example which I believe this promotion leads to incorrect filtering because it compares <code>floats</code>. Is there a way around this? I read that this is a change of behaviour between different versions of pandas - it certainly didn't use to be the case.</p> <p>Below you can see, it changes <code>[4 13]</code> and <code>[5 14]</code> to <code>[4.0 13.0]</code> and <code>[5.0 14.0]</code>.</p> <pre><code>In [53]: df1 = pd.DataFrame(data = {'col1' : [1, 2, 3, 4, 5], 'col2' : [10, 11, 12, 13, 14]}) ...: df2 = pd.DataFrame(data = {'col1' : [1, 2, 3], 'col2' : [10, 11, 12]}) In [54]: df1 Out[54]: col1 col2 0 1 10 1 2 11 2 3 12 3 4 13 4 5 14 In [55]: df2 Out[55]: col1 col2 0 1 10 1 2 11 2 3 12 In [56]: df1[~df1.isin(df2)] Out[56]: col1 col2 0 NaN NaN 1 NaN NaN 2 NaN NaN 3 4.0 13.0 4 5.0 14.0 In [57]: df1[~df1.isin(df2)].dropna() Out[57]: col1 col2 3 4.0 13.0 4 5.0 14.0 In [58]: df1[~df1.isin(df2)].dtypes Out[58]: col1 float64 col2 float64 dtype: object In [59]: df1.dtypes Out[59]: col1 int64 col2 int64 dtype: object In [60]: df2.dtypes Out[60]: col1 int64 col2 int64 dtype: object </code></pre>
<p>There is no float comparison happening here. <code>isin</code> is returning <code>NaN</code>'s for missing data, and since you are using <code>numpy</code>'s <code>int64</code>, the result is getting cast to <code>float64</code>.</p> <p>In 0.24, pandas added a <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/integer_na.html" rel="nofollow noreferrer">nullable integer dtype</a>, which you can use here. </p> <hr> <pre><code>df1 = df1.astype('Int64') df2 = df2.astype('Int64') df1[~df1.isin(df2)] </code></pre> <p></p> <pre><code> col1 col2 0 NaN NaN 1 NaN NaN 2 NaN NaN 3 4 13 4 5 14 </code></pre> <hr> <p>Just be aware that if you wanted to use numpy operations on the result, numpy would treat the above as an array with dtype <code>object</code>.</p>
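<p>As a side note, if the goal is only to keep the rows of <code>df1</code> that do not appear in <code>df2</code>, a merge with an indicator avoids introducing <code>NaN</code> (and therefore the float upcast) altogether. A small sketch, assuming both frames share the same columns:</p> <pre><code>out = (df1.merge(df2, how='left', indicator=True)
          .query("_merge == 'left_only'")
          .drop(columns='_merge'))
print(out.dtypes)  # col1 and col2 stay int64
</code></pre>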
python|pandas|numpy
1
2,577
58,878,953
Convert Mysql.connector dtypes objects to numeric/ string
<p>I have an SQL query with mysql.connector in python 3. I m converting the result of fetchall to a pandas Dataframe.</p> <pre><code>mycursor.execute(sql_query) m_table = pd.DataFrame(mycursor.fetchall()) m_table.columns = [i[0] for i in mycursor.description] </code></pre> <p>Getting dtypes gives me :</p> <pre><code>Out[185]: sales_forecast_id int64 year int64 products_id int64 test_string object reconduit int64 target_week_1 int64 target_week_2 int64 year_n_1 int64 two_week_before object first_week_before object second_week_before object two_week_before_n_1 object first_week_before_n_1 object second_week_before_n_1 object CIBLE_n_1 int64 dtype: object </code></pre> <p>Test_string is a fake column I added for testing and it contains <code>"test"</code> in all rows.</p> <p>Now this <code>test_string</code> column and the other from <code>two_week_before</code> to <code>second_week_before_n_1</code> appears as dtype object. So <code>test_string</code> is a string in database and the other are decimal. But with the dtype object I can't perform multiplication with another float typed column.</p> <p>Now, I actually have hundreds of this columns, and I would like to convert all the dtype object to float when its a decimal/int and to string when it's a string.</p> <p>How can I do it automatically. How to know if the object is a string or a decimal?</p> <p>Thanks.</p>
<p>This is an easy way to apply this conversion to all columns in case you <strong>are sure</strong> you need them <strong>all</strong> to be transformed into floats except the ones that can't (because they contain strings):</p> <pre><code>import numpy as np import pandas as pd data = {'a':[1,2,3,4],'b':['a','b','aa','abc'],'c':[100,13,14,'xD']} df = pd.DataFrame(data) df['a'] = df['a'].astype('object') print(df.dtypes) </code></pre> <p>Output (where column <code>a</code> is of type <code>object</code> when it should be <code>int</code> or <code>float</code>):</p> <pre><code>a object b object c object dtype: object </code></pre> <p>Applying the following:</p> <pre><code>for i in list(data): try: df[i] = df[i].astype('float') except ValueError: df[i] = df[i].astype('object') print(df.dtypes) </code></pre> <p>Output:</p> <pre><code>a float64 b object c object dtype: object </code></pre>
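<p>A more compact variant of the same idea is to let <code>pd.to_numeric</code> try each column and silently keep the ones it cannot parse (just a sketch of an alternative, not tested against the MySQL result set):</p> <pre><code>for col in df.columns:
    df[col] = pd.to_numeric(df[col], errors='ignore')

print(df.dtypes)
</code></pre>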
python|pandas|numpy
1
2,578
58,903,566
Grouping and adding values based on row string with pandas?
<p>I have the following pandas data set:</p> <pre><code>date, pair, value, fruit 2019-11-15 09:35:33,EUR,10,BANANA 2019-11-15 09:35:32,EUR,12,BANANA 2019-11-15 09:35:31,EUR,21,APPLE 2019-11-15 09:35:30,EUR,17,ORANGE 2019-11-15 09:35:28,EUR,19,BANANA 2019-11-14 09:58:05,EUR,37,APPLE 2019-11-14 09:23:42,EUR,41,ORANGE 2019-11-14 09:23:42,EUR,15,APPLE </code></pre> <p>How can I group and add the <code>value</code> field for the same fruit(s)?</p> <p>So I get,</p> <pre><code>[ ['BANANA', 'APPLE', 'ORANGE'], [41, 73, 58] ] </code></pre> <p><code>41</code> Being the sum of all <code>BANANA</code> values,</p> <p><code>73</code> Being the sum of all <code>APPLE</code> values,</p> <p><code>58</code> Being the sum of all <code>ORANGE</code> values.</p> <p>The intention is draw a bar chart.</p>
<p>I think this might help </p> <pre><code>a=your_df.groupby(["fruit"]).sum()["value"] </code></pre>
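<p>To expand that one-liner into the list shape and the bar chart asked for in the question (names here are just illustrative):</p> <pre><code>import matplotlib.pyplot as plt

totals = df.groupby('fruit')['value'].sum()

# the [[fruits], [sums]] structure from the question
result = [totals.index.tolist(), totals.tolist()]
print(result)

# bar chart
totals.plot(kind='bar')
plt.show()
</code></pre>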
python|pandas|numpy|data-science
0
2,579
70,330,526
Operations on specific elements of a dataframe in Python
<p>I'm trying to convert kilometer values in one column of a dataframe to mile values. I've tried various things and this is what I have now:</p> <pre><code>def km_dist(column, dist): length = len(column) for dist in zip(range(length), column): if (column == data[&quot;dist&quot;] and dist in data.loc[(data[&quot;dist&quot;] &gt; 25)]): return dist / 5820 else: return dist data = data.apply(lambda x: km_dist(data[&quot;dist&quot;], x), axis=1) </code></pre> <p>The dataset I'm working with looks something like this:</p> <pre><code> past_score dist income lab score gender race income_bucket plays_sports student_id lat long 0 8.091553 11.586920 67111.784934 0 7.384394 male H 3 0 1 0.0 0.0 1 8.091553 11.586920 67111.784934 0 7.384394 male H 3 0 1 0.0 0.0 2 7.924539 7858.126614 93442.563796 1 10.219626 F W 4 0 2 0.0 0.0 3 7.924539 7858.126614 93442.563796 1 10.219626 F W 4 0 2 0.0 0.0 4 7.726480 11.057883 96508.386987 0 8.544586 M W 4 0 3 0.0 0.0 </code></pre> <p>With my code above, I'm trying to loop through all the &quot;dist&quot; values and if those values are in the right column (&quot;data[&quot;dist&quot;]&quot;) and greater than 25, divide those values by 5820 (the number of feet in a kilometer). More generally, I'd like to find a way to operate on specific elements of dataframes. I'm sure this is at least a somewhat common question, I just haven't been able to find an answer for it. If someone could point me towards somewhere with an answer, I would be just as happy.</p>
<p>Instead of looping, filter the rows with a boolean mask and divide the <code>dist</code> column by <code>5820</code>:</p> <pre><code>data.loc[data[&quot;dist&quot;] &gt; 25, 'dist'] /= 5820 </code></pre> <p>This works the same as:</p> <pre><code>data.loc[data[&quot;dist&quot;] &gt; 25, 'dist'] = data.loc[data[&quot;dist&quot;] &gt; 25, 'dist'] / 5820 </code></pre> <hr /> <pre><code>data.loc[data[&quot;dist&quot;] &gt; 25, 'dist'] /= 5820 print (data) past_score dist income lab score gender race \ 0 8.091553 11.586920 67111.784934 0 7.384394 male H 1 8.091553 11.586920 67111.784934 0 7.384394 male H 2 7.924539 1.350194 93442.563796 1 10.219626 F W 3 7.924539 1.350194 93442.563796 1 10.219626 F W 4 7.726480 11.057883 96508.386987 0 8.544586 M W income_bucket plays_sports student_id lat long 0 3 0 1 0.0 0.0 1 3 0 1 0.0 0.0 2 4 0 2 0.0 0.0 3 4 0 2 0.0 0.0 4 4 0 3 0.0 0.0 </code></pre>
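<p>If you prefer a fully vectorised one-liner, <code>np.where</code> does the same thing (just an equivalent alternative to the <code>.loc</code> assignment above):</p> <pre><code>import numpy as np

data['dist'] = np.where(data['dist'] &gt; 25, data['dist'] / 5820, data['dist'])
</code></pre>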
python|pandas
1
2,580
70,356,417
Tensorflowjs - Reshape/slice 4d tensor into image
<p>I am trying to apply style transfer to a webcam capture. I am reading a frozen model I've previously trained in python and converted for TFjs. The output tensor's shape and rank is as follows: <a href="https://i.stack.imgur.com/KIB00.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KIB00.png" alt="enter image description here" /></a></p> <p>I am having issues in the last line of this function, when I try to apply tf.browser.toPixels</p> <pre><code> function predictWebcam() { tf.tidy(() =&gt; { loadmodel().then(model=&gt;{ //let tensor= model.predict(tf.expandDims(tf.browser.fromPixels(video))); let tensor= model.predict(tf.browser.fromPixels(video, 3).toFloat().div(tf.scalar(255)).expandDims()); console.log('shape', tensor.shape); console.log('rank', tensor.rank); tf.browser.toPixels(tensor, resultImage); }); }); } </code></pre> <p>I get this error. I cannot figure out how to reshape or modify the tensor to get an image out of it:</p> <p>Uncaught (in promise) Error: toPixels only supports rank 2 or 3 tensors, got rank 4. Maybe I have to replicate tensor_to_image function from python to javascript as in <a href="https://www.tensorflow.org/tutorials/generative/style_transfer" rel="nofollow noreferrer">the example in the website</a>.</p> <p>Thanks in advance!</p>
<p>Given that your tensor is <code>[1, 15, 20, 512]</code>,<br /> you can remove any dims with a value of 1 (the same dim you added by running <code>expandDims</code>) by running</p> <pre><code>const squeezed = tf.squeeze(tensor) </code></pre> <p>That will give you a <strong>shape</strong> of <code>[15, 20, 512]</code>.</p> <p>But that still doesn't make sense - what are the <code>width</code>, <code>height</code> and <code>channels</code> (e.g. rgb) here?</p> <p>I think the model result needs additional post-processing; it is not an image.</p>
tensorflow|deep-learning|tensorflow.js
1
2,581
70,156,578
Color Formatting from pandas to excel
<p>I have a pandas dataframe with values and a condition according to previous filtering. I would like to print my dataframe in an excel and color the cell according to the filtering result (if <em>passed</em>: <strong>green</strong> and if <em>not_passed</em>: <strong>red</strong>). Here is an example code and how I would like it to turn out.</p> <pre><code>filename='example' writer = pd.ExcelWriter(f&quot;{filename}.xlsx&quot;, engine=&quot;xlsxwriter&quot;) workbook = writer.book df = pd.DataFrame( [ [0,{'value': 3, 'filter': 'passed'}], [1,{'value': 4, 'filter': 'not_passed'}], [2,{'value': 2, 'filter': 'passed'}], ], columns=['col 1', 'col 2'] ) df['col 2'] = df['col 2'].apply(lambda x: x['value']) df.to_excel(writer, sheet_name=&quot;Sheet1&quot;) green = workbook.add_format({&quot;bg_color&quot;: &quot;#C6EFCE&quot;, &quot;font_color&quot;: &quot;#006100&quot;}) red = workbook.add_format({&quot;bg_color&quot;: &quot;#FFC7CE&quot;, &quot;font_color&quot;: &quot;#9C0006&quot;}) worksheet = writer.sheets[&quot;Sheet1&quot;] writer.save() </code></pre> <p><a href="https://i.stack.imgur.com/21kdk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/21kdk.png" alt="Example output" /></a></p>
<p>Use a for loop to check if the <code>filter value == 'passed'</code><br> If it is, you can apply the green format to this cell, and vice versa for red using worksheet.write(row_index,column_index,value,format).</p> <p><em>Note that Pandas data frames use a different indexing method than Excel. Notably, Pandas starts at index=0, and Excel starts at index=1.</em></p> <pre><code>filename='example' writer = pd.ExcelWriter(f&quot;{filename}.xlsx&quot;, engine=&quot;xlsxwriter&quot;) workbook = writer.book df = pd.DataFrame( [ [0,{'value': 3, 'filter': 'passed'}], [1,{'value': 4, 'filter': 'not_passed'}], [2,{'value': 2, 'filter': 'passed'}], ], columns=['col 1', 'col 2'] ) # df['col 2'] = df['col 2'].apply(lambda x: x['value']) df.to_excel(writer, sheet_name=&quot;Sheet1&quot;) green = workbook.add_format({&quot;bg_color&quot;: &quot;#C6EFCE&quot;, &quot;font_color&quot;: &quot;#006100&quot;}) red = workbook.add_format({&quot;bg_color&quot;: &quot;#FFC7CE&quot;, &quot;font_color&quot;: &quot;#9C0006&quot;}) worksheet = writer.sheets[&quot;Sheet1&quot;] col_ind = 2 for row_ind in range(len(df)): value = df.iloc[row_ind, 1] if value['filter'] == 'passed': worksheet.write(row_ind+1,col_ind, value['value'], green) else: worksheet.write(row_ind+1,col_ind, value['value'], red) writer.save() </code></pre>
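<p>As an alternative sketch (not part of the original answer, and assuming the dict column has already been split into plain <code>value</code>/<code>filter</code> columns), pandas' own <code>Styler</code> can write the colours without a manual loop; it needs the <code>openpyxl</code> engine:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'col 1': [0, 1, 2],
                   'value': [3, 4, 2],
                   'filter': ['passed', 'not_passed', 'passed']})

def colour_row(row):
    css = ('background-color: #C6EFCE; color: #006100'
           if row['filter'] == 'passed'
           else 'background-color: #FFC7CE; color: #9C0006')
    return [css] * len(row)

df.style.apply(colour_row, axis=1).to_excel('example_styled.xlsx', engine='openpyxl', index=False)
</code></pre>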
python|excel|pandas|dataframe
2
2,582
56,386,719
Keras Tensorflow fails to learn simple linear relationship
<p>I am fairly new to Tensorflow/Keras and am trying to set up an LSTM model. I have successfully run my code already, but my results have failed to give me meaningful results. I, therefore - as a test - let my LSTM network learn one of the features I am inputting. I am aware that the LSTM and relu use nonlinear relationships, however, I was still expecting the output to be somewhat similar to the input feature I was trying to learn which it is not at all. </p> <p>I am using a modified version from what I learned on <a href="https://keras.io/getting-started/sequential-model-guide/" rel="nofollow noreferrer">https://keras.io/getting-started/sequential-model-guide/</a> </p> <pre class="lang-py prettyprint-override"><code>feature_set = features.iloc[:-3,:].transpose() #23 features target_set = features.iloc[-4:,:].transpose().iloc[:,0] #picking the 23rd feature X_train,X_test,y_train,y_test = train_test_split(feature_set, target_set, test_size=0.2, shuffle=False, random_state=42) rnn_units = 256 batch_size = 1 features_dim = 23 output = 1 def build_model(rnn_units): model = tf.keras.Sequential([ tf.keras.layers.Dense(rnn_units, batch_input_shape=[batch_size, None, features_dim], activation='relu'), tf.keras.layers.Dropout(0.1), tf.keras.layers.CuDNNLSTM(rnn_units, return_sequences=True, stateful=True), tf.keras.layers.Dropout(0.1), tf.keras.layers.CuDNNLSTM(rnn_units, return_sequences=True, stateful=True), tf.keras.layers.Dense(output) ]) return model model = build_model(rnn_units=rnn_units) model.compile(optimizer = tf.train.AdamOptimizer(), loss = tf.keras.losses.mean_squared_error, metrics=['mse', 'mae', 'mape', 'cosine']) reshape_train = int(X_train.values.shape[0]/batch_size) reshape_test = int(X_test.values.shape[0]/batch_size) history = model.fit(X_train.values[:reshape_train*batch_size].reshape(reshape_train*batch_size, -1, features_dim), y_train.values[:reshape_train*batch_size].reshape(reshape_train*batch_size, -1, output), epochs=EPOCHS, batch_size=batch_size, validation_data=(X_test.values[:reshape_test*batch_size].reshape(reshape_test*batch_size, 1, features_dim), y_test.values[:reshape_test*batch_size].reshape(reshape_test*batch_size, 1, output)), callbacks=[checkpoint_callback,tensorboard]) </code></pre> <p>As you can see I am inputting a feature set of 23 values and am trying to learn the 23rd feature. 
I am using 256 nodes in every layer, with one Dense layout at the beginning and at the end and 2 LSTM layers followed by Dropout layers.</p> <p>I am using mean-square as it is supposed to be a regression on time series data.</p> <p>This is, for instance, one run of my training:</p> <pre class="lang-py prettyprint-override"><code>Epoch 5/5 10329/10329 [==============================] - 93s 9ms/sample - loss: 0.0182 - mean_squared_error: 0.0182 - mean_absolute_error: 0.0424 - mean_absolute_percentage_error: 94.4916 - cosine_proximity: -0.9032 - val_loss: 0.0193 - val_mean_squared_error: 0.0193 - val_mean_absolute_error: 0.0438 - val_mean_absolute_percentage_error: 58.2152 - val_cosine_proximity: -0.9443 </code></pre> <p>And when I run </p> <pre class="lang-py prettyprint-override"><code>result = model.predict(feature_set.values.reshape(-1, 1, features_dim)) feature_set.transpose().append(pd.DataFrame(result.reshape(-1), columns = ['Prediction 5min']).set_index(features.columns).transpose()).transpose() </code></pre> <p>I get for instance</p> <pre class="lang-py prettyprint-override"><code>2019-03-04 01:00:00 82.0105414589 0.0704929618 -0.1165011768 -0.3369084807 -1.8137642288 -0.2780955060 -4.3090711538 6.2721520391 9.5553857757 -1.2900340169 ... -29.8867675862 1.9178869544 -1.4765772054 1.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0080950060 -0.3594492457 0.0056902645 </code></pre> <p>where the last 2 values should be equal but they are</p> <pre class="lang-py prettyprint-override"><code>-0.3594492457 0.0056902645 </code></pre> <p>Any idea what I am doing wrong in my model? Can I use LSTM to learn such relationships?</p> <p>Thanks!</p>
<p>A few issues:</p> <p>Typically, LSTM layers go at the start, followed by a few dense layers. </p> <p>Also, the LSTM layer before the dense layer needs to have <code>return_sequences</code> set to False. </p> <p>However, I'm not sure these are the cause of this problem; I'm just pointing out the issues. I think it is more likely that the dataset is the problem - you should check whether the dataset actually contains the pattern you are trying to learn.</p>
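<p>For illustration only, a minimal sketch of the layer ordering described above (LSTM stack first, dense layers last, <code>return_sequences=False</code> before the dense part); the sizes are taken from the question, plain <code>LSTM</code> is used for simplicity, and the rest is an assumption rather than a verified fix:</p> <pre><code>import tensorflow as tf

rnn_units, features_dim, output = 256, 23, 1

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(rnn_units, return_sequences=True,
                         input_shape=(None, features_dim)),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.LSTM(rnn_units, return_sequences=False),
    tf.keras.layers.Dense(rnn_units, activation='relu'),
    tf.keras.layers.Dense(output)
])
model.compile(optimizer='adam', loss='mse')
</code></pre> <p>Note that with <code>return_sequences=False</code> the targets should be shaped <code>(samples, 1)</code> rather than <code>(samples, 1, 1)</code>.</p>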
tensorflow|machine-learning|keras|lstm
0
2,583
55,632,558
Number of days between two successive rows in pandas with timestamp ERROR: dtype('<m8[D]')
<p>i have a pandas dataframe like follows:</p> <pre><code>device_id date 101 2018-10-30 10:42:32 101 2018-12-20 14:14:14 102 2018-09-26 14:21:33 102 2018-10-24 09:12:35 102 2018-11-12 04:52:21 </code></pre> <p>My expected output is</p> <pre><code>device_id date diff 101 2018-10-30 10:42:32 0 101 2018-12-20 14:14:14 51 102 2018-09-26 14:21:33 0 102 2018-10-24 09:12:35 28 102 2018-11-12 04:52:21 19 </code></pre> <p>I have used the following code: </p> <pre><code>df['exdate_1'] = df['date'].dt.date df['exdate_1'] = df.groupby('device_id')['exdate_1'].apply(lambda x: x.sort_values()) df['diff'] = df.groupby('device_id')['exdate_1'].diff() / np.timedelta64(1, 'D') </code></pre> <p>but I am getting an error like the following</p> <pre><code>TypeError: ufunc true_divide cannot use operands with types dtype('float64') and dtype('&lt;m8[D]') </code></pre> <p>What is wrong in my code? Can I use any other approach as well? </p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.floor.html" rel="nofollow noreferrer"><code>Series.dt.floor</code></a> for datetimes without times, then <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>DataFrame.sort_values</code></a> by multiple columns and for convert to days use your solution or alternative with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.days.html" rel="nofollow noreferrer"><code>Series.dt.days</code></a>:</p> <pre><code>df['exdate_1'] = df['date'].dt.floor('d') df = df.sort_values(['device_id','exdate_1']) df['diff'] = df.groupby('device_id')['exdate_1'].diff().dt.days.fillna(0).astype(int) print (df) device_id date exdate_1 diff 0 101 2018-10-30 10:42:32 2018-10-30 0 1 101 2018-12-20 14:14:14 2018-12-20 51 2 102 2018-09-26 14:21:33 2018-09-26 0 3 102 2018-10-24 09:12:35 2018-10-24 28 4 102 2018-11-12 04:52:21 2018-11-12 19 </code></pre> <p>Reason why get error is after <code>df.date</code> are returned <code>python date</code> object, and pandas working with it poorly.</p>
python|pandas|pandas-groupby
2
2,584
55,664,514
Pandas fillna() not working on DataFrame slices
<p>Pandas <code>fillna</code> is not working on DataFrame slices, here is an example</p> <pre><code>df = pd.DataFrame([[np.nan, 2, np.nan, 0], [3, 4, np.nan, 1], [np.nan, np.nan, np.nan, 5], [np.nan, 3, np.nan, 4]], columns=list('ABCD')) df[["A", 'B']].fillna(0, inplace=True) </code></pre> <p>the <code>DataFrame</code> doesn't change</p> <pre><code> A B C D 0 NaN 2.0 NaN 0 1 3.0 4.0 NaN 1 2 NaN NaN NaN 5 3 NaN 3.0 NaN 4 </code></pre> <p>in contrast</p> <pre><code>df["A"].fillna(0, inplace=True) </code></pre> <p>and</p> <pre><code>df.fillna(0, inplace=True) </code></pre> <p>work fine. </p> <p>Is this a bug or does it work as intended? Thx in advance.</p> <p>P.S. <a href="https://stackoverflow.com/questions/38134012/pandas-dataframe-fillna-only-some-columns-in-place">this</a> question asks <strong>how</strong> to use <code>fillna</code> on a slice, as for my question, it concerns <strong>why</strong> the above does'n work. The answer is in @heena-bawa answers comment section.</p>
<p>If we look at the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>pandas documentation</code></a> it says you should use the following to <code>fillna</code> on slices:</p> <pre><code>values = {'A':0, 'B':0} df.fillna(value=values, inplace=True) print(df) A B C D 0 0.0 2.0 NaN 0 1 3.0 4.0 NaN 1 2 0.0 0.0 NaN 5 3 0.0 3.0 NaN 4 </code></pre>
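<p>Another common pattern that avoids calling <code>inplace</code> on a slice entirely is to assign the filled columns back:</p> <pre><code>df[['A', 'B']] = df[['A', 'B']].fillna(0)
</code></pre>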
python|pandas|dataframe|fillna
3
2,585
55,953,800
Change case for columns in list
<p>How do I change the case for data frame columns that are in a list? I know how to make all columns upper case but I don't know how to only make specific columns upper case. </p> <pre><code>d = {'name':['bob','john','sue'],'id':[545,689,143],'fte':[1,.5,.75]} df = pd.DataFrame(d) # list of columns I want to make upper case cols = ['id','fte'] </code></pre> <p>This doesn't do anything (no error and case isn't changed):</p> <pre><code>df[cols].rename(str.upper,axis=1,inplace=True) df name id fte 0 bob 545 1.00 1 john 689 0.50 2 sue 143 0.75 </code></pre>
<p>It won't work the way you're trying to do it, the reason being that indices <em>do not</em> support <strong>mutable operations</strong>. So one thing you could do is to use a list comprehension to generate a new list of column names an reassign it to <code>df.columns</code>:</p> <pre><code>df.columns = [i.upper() if i in cols else i for i in df.columns] print(df.columns) # Index(['name', 'ID', 'FTE'], dtype='object') </code></pre>
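<p>Equivalently, <code>DataFrame.rename</code> accepts a mapping, which avoids rebuilding the whole column index by hand:</p> <pre><code>df = df.rename(columns={c: c.upper() for c in cols})
print(df.columns)
# Index(['name', 'ID', 'FTE'], dtype='object')
</code></pre>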
python|pandas
5
2,586
64,964,813
replace 2 selected row values based on others
<p>I have a df that looks like this:</p> <pre><code>Id Class Label 0 APPS Item 1 MODEL Item 2 PRICE Money </code></pre> <p>I want to check all <code>Class</code>entries where the Label is <code>Item</code>. Among these classes, I want to replace all occurrences of <code>APPS</code> with another string <code>OTHERS</code>and want to replace the <code>Label</code> of such rows to another string eg <code>SOFTWARE</code> How can I achieve this? I was trying something like this:</p> <pre><code>def changeLabelClass(label): if (label == &quot;&quot;) mask = modifiedDf['Label'] == &quot;Item&quot; modifiedDf.loc[mask, 'Class'] = [changeLabelClass(x) for x in modifiedDf.loc[mask, 'Class']] </code></pre> <p>Outcome:</p> <pre><code>Id Class Label 0 OTHERS SOFTWARE 1 MODEL Item 2 PRICE Money </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a> with chain 2 conditions by <code>&amp;</code> for bitwise <code>AND</code> - here is assign list with 2 values because selected 2 columns <code>['Class', 'Label']</code>:</p> <pre><code>mask1 = df['Label'] == &quot;Item&quot; mask2 = df['Class'] == &quot;APPS&quot; df.loc[mask1 &amp; mask2, ['Class', 'Label']] = ['OTHERS','SOFTWARE'] print (df) Id Class Label 0 0 OTHERS SOFTWARE 1 1 MODEL Item 2 2 PRICE Money </code></pre>
python|python-3.x|pandas|dataframe|data-analysis
0
2,587
64,869,905
Serving tensorflow models on GCP?
<p>Recently I've been trying to host a custom image classification tensorflow saved model on GCP and use a REST API to send prediction requests. I've hosted this model on Google's <a href="https://cloud.google.com/ai-platform/prediction/docs/reference/rest/v1/projects/predict" rel="nofollow noreferrer">AI Platform API</a>.</p> <p>I'm trying to build an application on React Native. Essentially I take a picture from my phone and send this to my model using REST. Unfortunately after consulting this documentation it appears that I would need OAuth tokens for the prediction request to go through. I don't want this functionality. I don't want users needing to sign in to send prediction requests.</p> <p>I was wondering if there are ways I can host this tensorflow model and send <code>fetch()</code> requests from my React Native environment.</p> <p>If anyone has done this before, please let me know! I'd greatly appreciate all of the help.</p> <p>I'm willing to try different hosting platforms, but the tensorflow website had pointed me towards GCP.</p>
<p>Firstly, I don't recommend opening billable resources to the public like this, because you expose yourself to abuse and runaway consumption.</p> <p>But, if you really want to achieve this, you can grant <code>allUsers</code> access to your deployed models:</p> <pre><code>gcloud ai-platform models add-iam-policy-binding &lt;MY_MODEL_NAME&gt; \ --member=&quot;allUsers&quot; \ --role=&quot;roles/ml.modelUser&quot; </code></pre>
tensorflow|machine-learning|google-cloud-platform|google-ai-platform
1
2,588
64,850,973
remove points located within a specific area - python
<p>I'm trying to remove points that are located within a specific area. Using below, I'm hoping to remove points that are located within the blue box. Ideally, I'd map out a polygon that followed the contour of the circle more closely. This is just a rough description.</p> <p>I'm currently applying a crude subset to the y-coordinates:</p> <pre><code>df = df[df['A'] &gt; 0] df = df[df['C'] &gt; 0] </code></pre> <p>While this gets the majority of points, I'm hoping to improve the method.</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl df = pd.DataFrame(np.random.randint(-25,125,size=(500, 4)), columns=list('ABCD')) fig, ax = plt.subplots() ax.set_xlim(-50, 150) ax.set_ylim(-50, 150) x = ([1.25,0.5,-0.25,0.5,1.25,-10,-10,1.25]) y = ([75,62.5,50,37.5,25,25,75,75]) plt.plot(x,y) A = df['A'] B = df['B'] C = df['C'] D = df['D'] plt.scatter(df['A'], df['B'], color = 'purple', alpha = 0.2); plt.scatter(df['C'], df['D'], color = 'orange', alpha = 0.2); Oval_patch = mpl.patches.Ellipse((50,50), 100, 150, color = 'k', fill = False) ax.add_patch(Oval_patch) </code></pre>
<p>I'd suggest to store your polygon (the <code>&lt;Line2D object&gt;</code>) in a variable like this:</p> <pre><code>line = plt.plot(x,y) </code></pre> <p>Which enables you to utilise the <a href="https://matplotlib.org/3.3.2/api/_as_gen/matplotlib.lines.Line2D.html#matplotlib.lines.Line2D.get_path" rel="nofollow noreferrer">get_path() method</a> to get the underlying path object, which has a builtin <a href="https://matplotlib.org/3.3.2/api/path_api.html#matplotlib.path.Path.contains_points" rel="nofollow noreferrer">contains_points() method</a>.</p> <p>Next, include the following lines:</p> <pre><code>pts = df[['C','D']] mask = line[0].get_path().contains_points(pts) plt.scatter(df['C'][mask],df['D'][mask],color=&quot;red&quot;,zorder=10) </code></pre> <p>and you should be able to verify that the correct points are selected by the mask:</p> <p><a href="https://i.stack.imgur.com/Ec1eV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ec1eV.png" alt="masked points highlighted in red" /></a></p> <p>Now, to remove the points within your selected region, you can use <code>~mask</code> to negate the boolean mask, i.e. use:</p> <pre><code>pts = df[['A','B']] mask_ab = line[0].get_path().contains_points(pts) pts = df[['C','D']] mask_cd = line[0].get_path().contains_points(pts) plt.scatter(df['A'][~mask_ab], df['B'][~mask_ab], color = 'purple', alpha = 0.2) plt.scatter(df['C'][~mask_cd], df['D'][~mask_cd], color = 'orange', alpha = 0.2) </code></pre> <p>to get this result:</p> <p><a href="https://i.stack.imgur.com/XZQ0F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XZQ0F.png" alt="masked points removed" /></a></p>
python|pandas
1
2,589
40,064,587
Image display error after changing dtype of image matrix
<p>I'm using opencv + python to process fundus(retinal images). There is a problem that im facing while converting a float64 image to uint8 image.</p> <p><strong>Following is the python code:</strong></p> <pre><code>import cv2 import matplotlib.pyplot as plt import numpy as np from tkFileDialog import askopenfilename filename = askopenfilename() a = cv2.imread(filename) height, width, channel = a.shape b, Ago, Aro = cv2.split(a) mr = np.average(Aro) sr = np.std(Aro) Ar = Aro - np.mean(Aro) Ar = Ar - mr - sr Ag = Ago - np.mean(Ago) Ag = Ag - mr - sr #Values of elements in Ar # Ar = [[-179.17305527, -169.17305527, -176.17305527, ..., -177.17305527, -177.17305527, -177.17305527], # [-178.17305527, -169.17305527, -172.17305527, ..., -177.17305527, -177.17305527, -177.17305527], # [-179.17305527, -178.17305527, -179.17305527, ..., -177.17305527, -177.17305527, -177.17305527], # ..., # [-177.17305527, -177.17305527, -177.17305527, ..., -177.17305527, -177.17305527, -177.17305527], # [-177.17305527, -177.17305527, -177.17305527, ..., -177.17305527, -177.17305527, -177.17305527], # [-177.17305527, -177.17305527, -177.17305527, ..., -177.17305527, -177.17305527, -177.17305527]] Mr = np.mean(Ar) SDr = np.std(Ar) print "MR = ", Mr, "SDr = ", SDr Mg = np.mean(Ag) SDg = np.std(Ag) Thg = np.mean(Ag) + 2 * np.std(Ag) + 50 + 12 Thr = 50 - 12 - np.std(Ar) print "Thr = ", Thr Dd = np.zeros((height, width)) Dc = Dd for i in range(height): for j in range(width): if Ar[i][j] &gt; Thr: Dd[i][j] = 255 else: Dd[i][j] = 0 TDd = np.uint8(Dd) TDd2 = Dd for i in range(height): for j in range(width): if Ag[i][j] &gt; Thg: Dc[i][j] = 1 else: Dc[i][j] = 0 #CALCULATING RATIO ratio = 500.0 / Dd.shape[1] dim = (500, int(Dd.shape[0] * ratio)) # # #RESIZING TO-BE-DISPLAYED IMAGES resized_TDd = cv2.resize(TDd, dim, interpolation=cv2.INTER_AREA) resized_TDd2 = cv2.resize(TDd2, dim, interpolation=cv2.INTER_AREA) resized_original = cv2.resize(Aro, dim, interpolation=cv2.INTER_AREA) cv2.imshow('TDd', resized_TDd) cv2.imshow('TDd2', resized_TDd2) cv2.imshow('Aro', resized_original) cv2.waitKey(0) </code></pre> <p><code>Ar[][]</code> has -ve as well as +ve values and <code>Thr</code> has a -ve value. 
<strong><code>Dd</code> is the image which i want to display.</strong> The problem is that <code>TDd</code> displays a bizarre image <em>(255s and 0s are being assigned to appropriate pixels, i checked</em> but the image being displayed is weird and not similar to <code>TDd</code></p> <p><strong>Original image</strong></p> <p><a href="https://i.stack.imgur.com/vxtIU.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/vxtIU.jpg" alt="enter image description here"></a></p> <p><strong>Red channel image:</strong></p> <p><a href="https://i.stack.imgur.com/ZWqCL.png" rel="nofollow"><img src="https://i.stack.imgur.com/ZWqCL.png" alt="enter image description here"></a></p> <p><strong>TDd (uint8 of Dd):</strong></p> <p><a href="https://i.stack.imgur.com/HOCd9.png" rel="nofollow"><img src="https://i.stack.imgur.com/HOCd9.png" alt="enter image description here"></a></p> <p><strong>TDd2 (same as Dd)</strong></p> <p><a href="https://i.stack.imgur.com/iv7al.png" rel="nofollow"><img src="https://i.stack.imgur.com/iv7al.png" alt="enter image description here"></a></p> <p><strong>Dd2 (declared uint8 dtype while initializing)</strong></p> <p><a href="https://i.stack.imgur.com/ZKHwT.png" rel="nofollow"><img src="https://i.stack.imgur.com/ZKHwT.png" alt="enter image description here"></a></p> <p>Why are the <code>TDd</code> and <code>TDd2</code> images different? <strong>Since the difference between the gray values of the pixels (as far as i understand and know) in these 2 images is only 255, 0 <em>(in TDd)</em> and 255.0, 0.0 <em>(in TDd2)</em></strong>.</p> <p>It would be a great great help if someone could tell me.</p>
<p>Look, I executed your code and here are the <a href="http://postimg.org/gallery/1qi7dozn0" rel="nofollow">results</a>.</p> <p>They seem pretty normal to me... this is the exact <a href="https://pastebin.com/6jwgKi9r" rel="nofollow">code</a> I used.</p> <p>Ar is different from the others because when you <code>imshow()</code> it, it puts white where values are &gt; 0 and black otherwise. The other matrices, after your code, get white where values are &gt; <code>Thr</code>, which is less than 0, so more pixels get white, obviously.</p> <p><strong>update</strong></p> <p>You assigned Dd to Dc when you should've done Dc = np.zeros((height, width)).</p>
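<p>As a side note on the dtype question itself: <code>cv2.imshow</code> treats floating-point images as if their values were in the range [0, 1] (they are multiplied by 255 for display), while <code>uint8</code> images are shown as-is in [0, 255]. A tiny sketch illustrating the difference (assumed behaviour per the OpenCV docs, not taken from the original answer):</p> <pre><code>import cv2
import numpy as np

img_u8 = np.full((100, 100), 200, dtype=np.uint8)   # displayed as light grey
img_f64 = img_u8.astype(np.float64)                 # 200.0 * 255 saturates, displayed as white

cv2.imshow('uint8', img_u8)
cv2.imshow('float64', img_f64)
cv2.waitKey(0)
</code></pre>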
python|opencv|numpy
1
2,590
43,953,594
calculate row difference groupwise in pandas
<p>I need to calculate the difference between two rows groupwise using pandas.</p> <pre><code>| Group | Value | ID | ---------------------- | M1 | 10 | F1 | ---------------------- | M1 | 11 | F2 | ---------------------- | M1 | 12 | F3 | ---------------------- | M1 | 15 | F4 | ---------------------- </code></pre> <p>Example output:</p> <pre><code>---------------------- | M1 | F3 - F2 | 1 | ---------------------- | M1 | F4 - F1 | 5 | </code></pre> <p>To calculate the sum I would use pandas.groupby('Group').sum(), but how do you calculate the difference between rows where the row ordering is important?</p>
<p>I think you need a custom function with <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#flexible-apply" rel="nofollow noreferrer">apply</a> that returns a <code>DataFrame</code> for each group; <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.iat.html" rel="nofollow noreferrer"><code>iat</code></a> is used to select by position:</p> <pre><code>def f(x): #print (x) a = x['Value'].iat[2] - x['Value'].iat[1] b = x['Value'].iat[3] - x['Value'].iat[0] c = x['ID'].iat[2] + ' - ' + x['ID'].iat[1] d = x['ID'].iat[3] + ' - ' + x['ID'].iat[0] return pd.DataFrame({'Value': [a,b], 'ID':[c,d]}) df = df.groupby('Group').apply(f).reset_index(level=1, drop=True).reset_index() print (df) Group ID Value 0 M1 F3 - F2 1 1 M1 F4 - F1 5 </code></pre>
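<p>If the (Group, ID) pairs are unique, a pivot-based alternative avoids hard-coded positions (just a sketch of another approach, assuming every group contains the IDs F1-F4):</p> <pre><code>p = df.pivot(index='Group', columns='ID', values='Value')

out = pd.DataFrame({'F3 - F2': p['F3'] - p['F2'],
                    'F4 - F1': p['F4'] - p['F1']}).stack().reset_index()
out.columns = ['Group', 'ID', 'Value']
print(out)
</code></pre>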
python|pandas|numpy
1
2,591
40,847,809
Pandas aggregation subtraction based on column value
<p>Suppose I have DataFrame</p> <pre><code>'name' 'quantity' 'day' 'A' 1 'Monday' 'A' 10 'Sunday' 'A' 5 'Friday' 'B' 2 'Monday' 'B' 30 'Sunday' 'B' 5 'Thursday' </code></pre> <p>What I need to build is another dataframe where for each <em>name</em> I subtract the quantity of Monday from the quantity of Sunday. So, I guess I need a <code>groupBy</code> on the <em>name</em> and then an <code>agg</code> with a function, but I am not sure how to do the filter so that only those days are considered.</p> <p>Following the example, the end result I seek is</p> <pre><code>'name' 'sub_quantity' 'A' 9 'B' 28 </code></pre>
<p><strong><em>setup</em></strong> </p> <pre><code>import pandas as pd from io import StringIO txt = """name quantity day A 1 Monday A 10 Sunday A 5 Friday B 2 Monday B 30 Sunday B 5 Thursday""" df = pd.read_csv(StringIO(txt), delim_whitespace=True) </code></pre> <p><strong><em>option 1</em></strong><br> <code>unstack</code></p> <pre><code>d1 = df.set_index(['name', 'day']).quantity.unstack() d1.Sunday.sub(d1.Monday) name A 9.0 B 28.0 dtype: float64 </code></pre> <p><strong><em>option 2</em></strong><br> <code>query</code></p> <pre><code>s = df.set_index('name').query('day == "Sunday"').quantity m = df.set_index('name').query('day == "Monday"').quantity s - m name A 9 B 28 Name: quantity, dtype: int64 </code></pre> <p><strong><em>option 3</em></strong><br> <code>xs</code></p> <pre><code>d1 = df.set_index(['day', 'name']).quantity d1.xs('Sunday') - d1.xs('Monday') name A 9 B 28 Name: quantity, dtype: int64 </code></pre> <p><strong><em>option 4</em></strong><br> cute <code>apply</code></p> <pre><code>def obnoxious(x): s = x.day.eq('Sunday').idxmax() m = x.day.eq('Monday').idxmax() q = 'quantity' return x.get_value(s, q) - x.get_value(m, q) df.groupby('name').apply(obnoxious) name A 9 B 28 dtype: int64 </code></pre> <hr> <hr> <p><strong><em>timing</em></strong><br> example data<br> <a href="https://i.stack.imgur.com/7PNcd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7PNcd.png" alt="enter image description here"></a></p>
python|pandas|group-by
4
2,592
40,895,730
Python DataFrame from a list
<p>So, I have to create a dataframe. I do not mind my source to be a list of dicts or a dict.</p> <pre><code>List of Dict: [{'A': 'First', 'C': 300, 'B': 200}, {'A': 'Second', 'C': 310, 'B': 210}, {'A': 'Third', 'C': 330, 'B': 230}, {'A': 'Fourth', 'C': 340, 'B': 240}, {'A': 'Fifth', 'C': 350, 'B': 250}] </code></pre> <p>OR</p> <pre><code>{'First': {'C': 300, 'B': 200}, 'Second':{'C': 310, 'B': 210} }, 'Third': {'C': 330, 'B': 230}, 'Fourth': {'C': 340, 'B': 240}, 'Fifth': {'C': 350, 'B': 250} } </code></pre> <p>I want my dataframe like this</p> <pre><code> C B First 300 200 Second 310 210 Third 330 230 Fourth 340 240 Fifth 350 250 </code></pre> <p>Basically, suggesting that one of the columns become the index ... </p>
<p>Also you can use <code>pd.DataFrame.from_records()</code> where you can set a specific column to be index:</p> <pre><code>pd.DataFrame.from_records([{'A': 'First', 'C': 300, 'B': 200}, {'A': 'Second', 'C': 310, 'B': 210}, {'A': 'Third', 'C': 330, 'B': 230}, {'A': 'Fourth', 'C': 340, 'B': 240}, {'A': 'Fifth', 'C': 350, 'B': 250}], index = ['A']) </code></pre> <p><a href="https://i.stack.imgur.com/vJa2Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vJa2Z.png" alt="enter image description here"></a></p>
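<p>A small equivalent using the plain constructor plus <code>set_index</code>, in case you already have the list of dicts:</p> <pre><code>records = [{'A': 'First', 'C': 300, 'B': 200},
           {'A': 'Second', 'C': 310, 'B': 210},
           {'A': 'Third', 'C': 330, 'B': 230},
           {'A': 'Fourth', 'C': 340, 'B': 240},
           {'A': 'Fifth', 'C': 350, 'B': 250}]

df = pd.DataFrame(records).set_index('A')
print(df)
</code></pre>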
python|pandas|dataframe
3
2,593
53,957,213
Tensorflow feed_dict dimension mismatch with the neural network input and training input
<p>I have two classes of diseases <code>A</code> and <code>B</code>. My training data has <code>28</code> images including both classes. I have created resize function using opencv.</p> <pre><code>def resize_cv(x,width,height): new_image=cv.resize(x,(width,height)) return new_image </code></pre> <p><code>X</code> contains list of 28 images.</p> <pre><code>xx=[] for i in X: xx.append(resize_cv(i,196,196)) #resizing happens here print("__Resized the images__") def scaling (X): new=[] for i in X: for j in i: new.append(j/255) break return new def label_encode(y): from sklearn.preprocessing import LabelBinarizer ff=LabelBinarizer() return ff.fit_transform(y) X=scaling(xx) y=label_encode(y) </code></pre> <p>Now I split the data into training and testing and create our step size</p> <pre><code>X_train,X_test, y_train, y_test=split_data(X,y,0.2) #creating smaller batches step_size=7 steps = len(X_train) remaining = steps % step_size </code></pre> <p>I have created a neural network now when I feed into the network I am getting an dimensional error.</p> <pre><code>layer_conv1 = create_convolutional_layer(input=x,num_input_channels=num_channels,conv_filter_size=filter_size_conv1,num_filters=num_filters_conv1,name="conv1") layer_conv1_1 = create_convolutional_layer(input=layer_conv1,num_input_channels=num_filters_conv1,conv_filter_size=filter_size_conv1,num_filters=num_filters_conv1,name="conv2") layer_conv1_1_1 = create_convolutional_layer(input=layer_conv1_1,num_input_channels=num_filters_conv1,conv_filter_size=filter_size_conv1,num_filters=num_filters_conv1,name="conv3") max_pool_1=maxpool2d(layer_conv1_1_1,2,name="maxpool_1") drop_out_1=dropout(max_pool_1,name="dropout_1") flatten_layer=create_flatten_layer(drop_out_3) layer_fc2 = create_fc_layer(input=flatten_layer,num_inputs=fc_layer_size,num_outputs=num_classes,use_relu=True) y_pred = tf.nn.softmax(layer_fc2,name="y_pred") cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y,logits=y_pred)) #Defining objective train = tf.train.AdamOptimizer(learning_rate=0.00001).minimize(cost) print ("_____Neural Network Architecture Created Succefully_____") epochs=10 matches = tf.equal(tf.argmax(y_pred,axis=1),tf.argmax(y,axis=1)) acc = tf.reduce_mean(tf.cast(matches,tf.float32)) #Initializing weights init = tf.global_variables_initializer() with tf.Session() as sess: #writing output to the logs for tensorboard writer=tf.summary.FileWriter("./logs",sess.graph) sess.run(init) for i in range(epochs): #creating smaller batches for j in range(0,steps-remaining,step_size): sess.run([acc,train,cost],feed_dict={x:X_train[j:j+step_size],y:y_train[j:j+step_size]}) </code></pre> <p><strong><em>Error Trace:</em></strong></p> <pre><code>Traceback (most recent call last): File "/home/centura/gitlab/moles_model/moles_model/modelversion1.py", line 313, in &lt;module&gt; sess.run([acc,train,cost],feed_dict={x:X_train,y:y_train}) File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 900, in run run_metadata_ptr) File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1111, in _run str(subfeed_t.get_shape()))) ValueError: Cannot feed value of shape (22, 196, 3) for Tensor 'x:0', which has shape '(?, 196, 196, 3)' </code></pre> <p>I have checked images dimension in the <strong>X</strong> array. 
Each image maintain the dimension <strong>(196,196,3)</strong> but,</p> <p>when I checked the dimension of the image in <strong>X_train</strong> the dimension of each image is <strong>(196,3)</strong>. </p> <p><strong>I am not getting where the missing</strong> <strong>196</strong> <strong>goes</strong>.</p> <p>I am using <code>tensorflow-gpu=1.9.0, python 3.6 , pycharm IDE.</code></p>
<p>The answer was simple: the conversion from <strong>(196,196,3)</strong> to <strong>(196,3)</strong> happened because of the extra for loop in the scaling function.</p> <p>Instead of using this code </p> <pre><code>def scaling (X): new=[] for i in X: for j in i: new.append(j/255) break return new </code></pre> <p>I should have avoided the second loop, and the function then looks like this:</p> <pre><code>def scaling (X): new=[] for i in X: new.append(i/255) return new </code></pre>
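<p>For what it's worth, the same scaling can be done in one vectorised step with numpy (a sketch, assuming all resized images share the same shape):</p> <pre><code>import numpy as np

X = np.asarray(xx, dtype=np.float64) / 255.0   # shape (n_images, 196, 196, 3)
</code></pre>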
python-3.x|tensorflow|multidimensional-array
0
2,594
53,967,271
Detecting a list type in pandas
<p>Is there a way to see if a field is an array in <code>pandas</code>? For example:</p> <pre><code>&gt;&gt;&gt; data=[{'name':'tom','colors':[1,2,3]}] &gt;&gt;&gt; df = pd.DataFrame(data) colors name 0 [1, 2, 3] tom &gt;&gt;&gt; df['colors']['dtype'] Name: colors, dtype: object </code></pre> <p>Is there a way I can get the value <code>list</code>? Or do I need to do an <code>ast.literal_eval()</code> ? The below seems pretty crude:</p> <pre><code>&gt;&gt;&gt; type(ast.literal_eval(str(pd.DataFrame(data)['colors'][0]))) &lt;class 'list'&gt; </code></pre>
<p>If the data in the column is consistent, i.e. every entry is a list, then use:</p> <pre><code>type(df.loc[0,'colors']) list </code></pre>
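<p>To verify that assumption for the whole column rather than a single row, something like this works (a small addition, not in the original answer):</p> <pre><code>df['colors'].apply(lambda v: isinstance(v, list)).all()
# True
</code></pre>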
python|pandas
0
2,595
53,926,627
Does keras use gpu automatically?
<p>It seems like it uses gpu automatically, but I do not know why.</p> <p>First, I declared as below</p> <pre><code>tf_config = tf.ConfigProto( allow_soft_placement=True ) tf_config.gpu_options.allow_growth = True sess = tf.Session(config=tf_config) keras.backend.set_session(sess) </code></pre> <p>Then I defined some model as below</p> <pre><code>with K.tf.device('/gpu:0'): some keras model </code></pre> <p>This is obvious that it will use the gpu and I checked it uses the first gpu(with index 0) as I expected.</p> <p>But then, I removed the line</p> <pre><code>with K.tf.device('/gpu:0'): </code></pre> <p>and re-indented all the keras model. I ran the code, it still seems like using first gpu(with index 0).</p> <p>On my ubuntu I used nvidia-smi command to check the gpu memory usage, and I looked on the process manager on my windows. </p> <p>Both of them take the gpu memory and its usages. </p> <p>As far as I remember, tensorflow does not use gpu if I do not spare them to its model. But with Keras it seems like it uses gpu automatically ... is it because I ran the code</p> <pre><code>tf_config = tf.ConfigProto( allow_soft_placement=True ) tf_config.gpu_options.allow_growth = True sess = tf.Session(config=tf_config) keras.backend.set_session(sess) </code></pre> <p>or is there some other reason I am missing?</p>
<p>According to the <a href="https://www.tensorflow.org/guide/using_gpu" rel="noreferrer">documentation</a> TensorFlow will use GPU by default if it exist:</p> <blockquote> <p>If a TensorFlow operation has both CPU and GPU implementations, <strong>the GPU devices will be given priority</strong> when the operation is assigned to a device. For example, matmul has both CPU and GPU kernels. <strong>On a system with devices cpu:0 and gpu:0, gpu:0 will be selected to run</strong></p> </blockquote>
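<p>If you want to confirm which device each op actually lands on (with the TF 1.x API used in the question), enabling device-placement logging is a quick check:</p> <pre><code>import tensorflow as tf

tf_config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
sess = tf.Session(config=tf_config)   # the placement of every op is printed to the console
</code></pre>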
tensorflow|model|keras|gpu
9
2,596
66,158,638
First 'Group by' then plot/save as png from pandas
<p>first I need to filter data then plot each group separately and save files to directory</p> <pre><code>for id in df[&quot;set&quot;].unique(): df2= df.loc[df[&quot;set&quot;] == id] outpath = &quot;path/of/your/folder/&quot; sns.set_style(&quot;whitegrid&quot;, {'grid.linestyle': '-'}) plt.figure(figsize=(12,8)) ax1=sns.scatterplot(data=df2, x=&quot;x&quot;, y=&quot;y&quot;, hue=&quot;result&quot;,markers=['x'],s=1000) ax1.get_legend().remove() ax1.set_yticks((0, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5), minor=False) ax1.set_xticks([0, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5, 9.5, 10.5, 11.5, 12.6], minor=False) fig = ax1.get_figure() fig.savefig(path.join(outpath,&quot;id.png&quot;,dpi=300 ) </code></pre>
<p>This worked for me but it is very slow</p> <pre><code>groups = df.groupby(&quot;set&quot;) for name, group in groups: sns.set_style(&quot;whitegrid&quot;, {'grid.linestyle': '-'}) plt.figure(figsize=(12,8)) ax1=sns.scatterplot(data=group, x=&quot;x&quot;, y=&quot;y&quot;, hue=&quot;result&quot;,markers=['x'],s=1000) ax1.get_legend().remove() ax1.set_yticks((0, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5), minor=False) ax1.set_xticks([0, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5, 9.5, 10.5, 11.5, 12.6], minor=False) fig = ax1.get_figure() fig.savefig(&quot;directory/{0}.png&quot;.format(name), dpi=300) </code></pre>
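<p>One likely reason the loop feels slow (and gets steadily slower) is that every figure stays open in memory; closing each figure after saving usually helps noticeably. A small, hedged tweak to the loop above:</p> <pre><code>for name, group in groups:
    fig, ax1 = plt.subplots(figsize=(12, 8))
    sns.scatterplot(data=group, x='x', y='y', hue='result', markers=['x'], s=1000, ax=ax1)
    ax1.get_legend().remove()
    fig.savefig('directory/{0}.png'.format(name), dpi=300)
    plt.close(fig)   # free the figure so memory does not grow with each group
</code></pre>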
pandas|matplotlib|plot
0
2,597
66,101,687
Reformatting a numpy array
<p>I have come across some code (which may answer <a href="https://stackoverflow.com/questions/65936033/assigning-a-label-to-its-corresponding-grid-cell">this</a> question of mine). Here is the code (from Vivek Maskara's solution to my issue):</p> <pre><code>import cv2 as cv import numpy as np def read(image_path, label): image = cv.imread(image_path) image = cv.cvtColor(image, cv.COLOR_BGR2RGB) image_h, image_w = image.shape[0:2] image = cv.resize(image, (448, 448)) image = image / 255. label_matrix = np.zeros([7, 7, 30]) for l in label: l = l.split(',') l = np.array(l, dtype=np.int) xmin = l[0] ymin = l[1] xmax = l[2] ymax = l[3] cls = l[4] x = (xmin + xmax) / 2 / image_w y = (ymin + ymax) / 2 / image_h w = (xmax - xmin) / image_w h = (ymax - ymin) / image_h loc = [7 * x, 7 * y] loc_i = int(loc[1]) loc_j = int(loc[0]) y = loc[1] - loc_i x = loc[0] - loc_j if label_matrix[loc_i, loc_j, 24] == 0: label_matrix[loc_i, loc_j, cls] = 1 label_matrix[loc_i, loc_j, 20:24] = [x, y, w, h] label_matrix[loc_i, loc_j, 24] = 1 # response return image, label_matrix </code></pre> <p>Would it be possible for you to explain how this part of the code works and what it specifically does:</p> <pre><code>if label_matrix[loc_i, loc_j, 24] == 0: label_matrix[loc_i, loc_j, cls] = 1 label_matrix[loc_i, loc_j, 20:24] = [x, y, w, h] label_matrix[loc_i, loc_j, 24] = 1 # response </code></pre>
<p>I will first create and explain a simplified example, and then explain the part you pointed out.</p> <p>First, we create the ndarray named <code>label_matrix</code>:</p> <pre><code>import numpy as np label_matrix = np.ones([2, 3, 4]) print(label_matrix) </code></pre> <p>This code means that you will get an array containing 2 arrays, each of these 2 arrays will contain 3 arrays, and each of these 3 arrays will contain 4 elements. And because we used <code>np.ones</code>, all these elements will have a value of <code>1</code>. So, printing <code>label_matrix</code> will output this:</p> <pre><code>[[[1. 1. 1. 1.] [1. 1. 1. 1.] [1. 1. 1. 1.]] [[1. 1. 1. 1.] [1. 1. 1. 1.] [1. 1. 1. 1.]]] </code></pre> <p>Now, we will change the values of the first 4 elements of the first array contained by the first array of <code>label_matrix</code>.</p> <p>To access <em>the first array of <code>label_matrix</code></em>, we do: <code>label_matrix[0]</code></p> <p>To access <em>the first array contained by the first array of <code>label_matrix</code></em> we do: <code>label_matrix[0, 0]</code></p> <p>To access the first element of the first array contained by the first array of <code>label_matrix</code> we do: <code>label_matrix[0, 0, 0]</code></p> <p>To access the second element of the first array contained by the first array of <code>label_matrix</code> we do: <code>label_matrix[0, 0, 1]</code></p> <p>etc.</p> <p>So, now, we will change the values of the first 4 elements of the first array contained by the first array of <code>label_matrix</code>:</p> <pre><code>label_matrix[0, 0, 0] = 100 label_matrix[0, 0, 1] = 200 label_matrix[0, 0, 2] = 300 label_matrix[0, 0, 3] = 400 </code></pre> <p>Output of <code>label_matrix</code>:</p> <pre><code>[[[100. 200. 300. 400.] [ 1. 1. 1. 1.] [ 1. 1. 1. 1.]] [[ 1. 1. 1. 1.] [ 1. 1. 1. 1.] [ 1. 1. 1. 1.]]] </code></pre> <p>But we could have written it like this, instead of writing 4 lines of code:</p> <pre><code>label_matrix[0, 0, 0:4] = [100,200,300,400] </code></pre> <p>Writing <code>label_matrix[0, 0, 0:4]</code> means: in the first array contained by the first array of <code>label_matrix</code>, select the first 4 elements (from index 0 to 4, with 4 not included)</p> <p>So now you know the meaning of each line.</p> <p>I'll explain the part of the code you pointed out:</p> <p><code>if label_matrix[loc_i, loc_j, 24] == 0:</code>:</p> <p>Test if the element at index 24 (the 25th element) has value <code>0</code></p> <p>if yes, then:</p> <p><code>label_matrix[loc_i, loc_j, cls] = 1</code>:</p> <p>assign the value <code>1</code> to the element at index <code>cls</code>. (If the variable named <code>cls</code> has value <code>4</code>, it will assign the value <code>1</code> to the element at index 4 of the first array contained by the first array of <code>label_matrix</code>)</p> <p><code>label_matrix[loc_i, loc_j, 20:24] = [x, y, w, h]</code>:</p> <p>Say &quot;x==100&quot;, &quot;y==200&quot;, &quot;w==300&quot; and &quot;h==400&quot;. So, in the first array contained by the first array of <code>label_matrix</code>, assign value <code>100</code> to the element at index <code>20</code>, value <code>200</code> to the element at index <code>21</code>, <code>300</code> at index <code>22</code> and <code>400</code> to index <code>23</code></p> <p><code>label_matrix[loc_i, loc_j, 24] = 1</code>:</p> <p>in the first array contained by the first array of <code>label_matrix</code>, assign value <code>1</code> to the element at index <code>24</code></p>
python|python-3.x|numpy|numpy-ndarray
1
2,598
66,181,278
targeting jth-kth element in a matrix using python
<p><a href="https://i.stack.imgur.com/l092C.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l092C.jpg" alt="enter image description here" /></a></p> <p>I would like to implement a matrix that satisfies the conditions shown in the image:</p> <ol> <li>The matrix is an <code>m * n</code> matrix</li> <li>The <code>j, k</code>th element will be 1 if either the <code>k</code>th index is <code>j + 1</code> or <code>k</code>th index is 2 or the <code>j</code>th index is 2</li> </ol> <p>Here is my latest code:</p> <pre><code>self.ylmaxvect = [1, 2, 3, 4] self.ylmax = int(input('inputylmax')) self.An = sum(self.ylmaxvect) self.Am = sum(self.ylmaxvect) # -------------declare a zero matrix first------------------------ # self.matAL = np.zeros((self.An, self.Am)) for i in range(self.An - 1): if self.matAL[i][i + 1] == 0: if i == self.ylmax: self.matAL[i][i + 1] = 1 else: self.matAL[i][i + 1] = 0 </code></pre>
<p>You're overthinking this. Start with</p> <pre><code>A = np.zeros((m, n)) </code></pre> <p>The condition <code>k = j + 1</code> is just the first diagonal above the main one. You can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.fill_diagonal.html" rel="nofollow noreferrer"><code>np.fill_diagonal</code></a> for that:</p> <pre><code>np.fill_diagonal(A[:, 1:], 1) </code></pre> <p>This works because the slice <code>A[:, 1:]</code> is a view into the original matrix, and therefore any modifications to it are visible in the underlying matrix.</p> <p>The conditions <code>k = 2</code> and <code>j = 2</code> are just slices across the appropriate dimension:</p> <pre><code>A[:, 1] = 1 A[1, :] = 1 </code></pre> <p>I'm assuming that your matrix notation is one-based, while numpy is zero based. If not, change the ones in the index to two.</p> <p>You can visualize what <code>fill_diagonal</code> is doing by generating the <code>j</code> and <code>k</code> indices yourself. <code>k</code> goes <code>1, 2, 3, ...</code>. <code>j</code> goes <code>0, 1, 2, ...</code>. How far do they go? Well, the lesser of <code>m</code> and <code>n - 1</code>, since it depends on the shape:</p> <pre><code>j = np.arange(min(m, n - 1)) k = j + 1 A[j, k] = 1 </code></pre> <p>If you want to use loops, you can do this in <code>O(m + n)</code> rather than <code>O(m * n)</code> time. The key is that you know where the ones go: you don't need to check each element:</p> <pre><code>for j in range(m): A[j, 2] = A[j, j + 1] = 1 for k in range(n): A[2, k] = 1 </code></pre>
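<p>Putting the vectorised pieces together into a runnable snippet (sizes chosen arbitrarily for illustration):</p> <pre><code>import numpy as np

m, n = 5, 6
A = np.zeros((m, n))

np.fill_diagonal(A[:, 1:], 1)   # ones where k == j + 1
A[:, 1] = 1                     # ones in the fixed column
A[1, :] = 1                     # ones in the fixed row

print(A)
</code></pre>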
python|numpy|matrix
2
2,599
66,296,162
Numpy ravel takes too long after a slight change to an ndarray
<p>I am working with a flattened image (1920x1080x4), which I need to reshape (e.g. <code>arr.reshape((1920,1080,4))</code>), remove the last channel from (e.g. <code>arr[:,:,:3]</code>), convert from BGR to RGB (e.g. <code>arr[:,:,::-1]</code>) and finally flatten again (e.g. <code>arr.ravel()</code>). The problem is the ravel/flatten/reshape(-1) operation, which adds about 20 ms to the computation time.</p>
<p>To make it easier to debug, I assumed the incoming array to be a flattened 1920x1080x3 image, meaning that I only need to worry about the BGR to RGB conversion and flattening. However, when testing reshape+ravel, reshape+BGR2RGB and, finally, reshape+BGR2RGB+ravel, the results were 1ms, 1ms, 20ms respectively, which doesn't make any sense to me, since it's only some values changing position in memory. Is there any reason for the ravel to create a copy of the array? How can I reduce this time?</p>
<p><strong>Note:</strong> I also tested the in-place reshape method described in the notes of the <code>numpy.reshape</code> documentation, but, as specified there, an error was raised, meaning that the array needs to be copied first in order to reshape it.</p>
<p>Below is the code I used for testing:</p>
<pre><code>import numpy as np
from time import time

arr_original = np.ones((1920*1080*3), dtype=np.uint8)

arr = arr_original.copy()
s = time()
arr = arr.reshape(1920,1080,3)
arr = arr.ravel()
print(f&quot;Reshape + ravel: {round(1000*(time()-s),2)}ms&quot;)

arr = arr_original.copy()
s = time()
arr = arr.reshape(1920,1080,3)
arr = arr[:,:,::-1]
print(f&quot;Reshape + BGR2RGB: {round(1000*(time()-s),2)}ms&quot;)

arr = arr_original.copy()
s = time()
arr = arr.reshape(1920,1080,3)
arr = arr[:,:,::-1]
arr = arr.ravel()
print(f&quot;Reshape + BGR2RGB + ravel: {round(1000*(time()-s),2)}ms&quot;)
</code></pre>
<p>Output on my machine:</p>
<pre><code>Reshape + ravel: 0.01ms
Reshape + BGR2RGB: 0.01ms
Reshape + BGR2RGB + ravel: 20.54ms
</code></pre>
<p>This is because all your operations above produce views of the same data, but the last ravel is required to make a copy.</p>
<p>An array in numpy has an underlying memory buffer, plus a shape &amp; strides that determine where each element lies.</p>
<p>Reshaping a contiguous array may be performed by simply changing shape and strides, without modifying the data. The same is true of the slices here. But since your last array is not contiguous, when you use ravel it will make a copy of everything.</p>
<p>For instance, in a 3d array, accessing the element <code>arr[i,j,k]</code> means accessing the memory at <code>base + i * arr.strides[0] + j * arr.strides[1] + k * arr.strides[2]</code>. You can do many things with this (even broadcasting, if you use stride 0 in a given axis).</p>
<pre class="lang-py prettyprint-override"><code>arr_original = np.ones((1920*1080*4), dtype=np.uint8)
arr = arr_original
print(arr.shape, arr.strides)
arr = arr.reshape(1920,1080,4)
print(arr.shape, arr.strides)
arr = arr[:,:,:3]    # keeps the strides, only the length of the last axis is reduced
print(arr.shape, arr.strides)
arr = arr[:,:,::-1]  # the stride of the last axis becomes -1
print(arr.shape, arr.strides)
arr[0,0,:] = [3,4,5] # operations here are using the memory allocated
arr[0,1,:] = [6,7,8] # for arr_original
arr = arr.ravel()
arr[:] = 0           # this won't affect the original because the data was copied
print(arr_original[:8])
</code></pre>
<h2>Improving your solution</h2>
<p>This is the kind of situation where you have to experiment or dive into the library code. I prefer to test different ways of writing the code.</p>
<p>The original approach is the best one in general, but in this specific case what we have is unaligned memory, since you are writing uint8 elements with a stride of 3 bytes.</p>
<p>When judging performance it is important to be aware of what is reasonable to expect; in this case we can compare the format conversion with a pure copy:</p>
<pre><code>arr = arr_original.copy()
</code></pre>
<p>1.89 ms ± 43.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)</p>
<pre class="lang-py prettyprint-override"><code>arr = arr_original
arr = arr.reshape(1920,1080,4)
arr = arr[:,:,:3]
arr = arr[:,:,::-1]
arr[0,0,:] = [3,4,5]
arr[0,1,:] = [6,7,8]
arr = arr.ravel()
</code></pre>
<p>12.3 ms ± 101 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) (about 6x slower than a copy)</p>
<pre class="lang-py prettyprint-override"><code>arr = arr_original
arr = arr.reshape(1920,1080,4)
arr_aux = np.empty(arr.shape[:-1] + (3,), dtype=np.uint8)
arr_aux[:,:,0] = arr[:,:,2]
arr_aux[:,:,1] = arr[:,:,1]
arr_aux[:,:,2] = arr[:,:,0]
arr = arr_aux.ravel()
</code></pre>
<p>4.16 ms ± 25 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) (about 2x slower than a copy)</p>
<h2>Analysis</h2>
<p>In the first case the last axis also has a very small dimension, so this probably results in a very short inner loop. Let's see how this operation could be projected to C++:</p>
<pre class="lang-cpp prettyprint-override"><code>for(int i = 0; i &lt; height; ++i){
    for(int j = 0; j &lt; width; ++j){
        // this part would be the bottleneck
        for(int k = 0; k &lt; 3; ++k){
            dst[(width * i + j)*3 + k] = src[(width * i + j)*4 + k];
        }
    }
}
</code></pre>
<p>Of course numpy is doing more than this, and the indexes may be computed more efficiently by precomputing, outside the loop, the part that is independent of the loop variable.
The idea here is to be didactic.</p>
<p>Let's count the number of branches executed: each for loop performs N+1 branches for N iterations (the N branches that enter the loop body, plus the final jump that breaks out of it). So the above code runs <code>1 + height * (1 + 1 + width * (1 + 3)) ~ 4 * width * height</code> branches.</p>
<p>If we unroll the innermost loop as:</p>
<pre class="lang-cpp prettyprint-override"><code>for(int i = 0; i &lt; height; ++i){
    for(int j = 0; j &lt; width; ++j){
        // this part would be the bottleneck
        dst[(width * i + j)*3 + 0] = src[(width * i + j)*4 + 0];
        dst[(width * i + j)*3 + 1] = src[(width * i + j)*4 + 1];
        dst[(width * i + j)*3 + 2] = src[(width * i + j)*4 + 2];
    }
}
</code></pre>
<p>The number of branches becomes <code>1 + height * (1 + 1 + width) ~ height * width</code>, about 4 times fewer. We cannot do this in python because we don't have access to the inner loop. But with the second version we implement something like:</p>
<pre class="lang-cpp prettyprint-override"><code>for(int i = 0; i &lt; height; ++i){
    for(int j = 0; j &lt; width; ++j){
        // this part would be the bottleneck
        dst[(width * i + j)*3 + 0] = src[(width * i + j)*4 + 0];
    }
}
for(int i = 0; i &lt; height; ++i){
    for(int j = 0; j &lt; width; ++j){
        dst[(width * i + j)*3 + 1] = src[(width * i + j)*4 + 1];
    }
}
for(int i = 0; i &lt; height; ++i){
    for(int j = 0; j &lt; width; ++j){
        dst[(width * i + j)*3 + 2] = src[(width * i + j)*4 + 2];
    }
}
</code></pre>
<p>That still has fewer branches than the first version.</p>
<p>From the improvement observed, I imagine the inner loop in the first version must be calling a function like <a href="http://www.cplusplus.com/reference/cstring/memcpy/" rel="nofollow noreferrer">memcpy</a>, or something else with more overhead that tries to be faster for bigger slices (maybe checking memory alignment), and that this does not pay off here since we are copying single bytes with a stride of 3.</p>
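<p>As a quick way to see the hidden copy for yourself, you can check contiguity and memory sharing before and after the ravel. This is just a small sketch, assuming the same 1920x1080x4 BGRA layout as above:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

arr = np.ones(1920 * 1080 * 4, dtype=np.uint8)
view = arr.reshape(1920, 1080, 4)[:, :, 2::-1]  # drop alpha and reverse BGR to RGB in one slice

print(view.flags['C_CONTIGUOUS'])   # False, so ravel cannot return a view
flat = view.ravel()
print(np.shares_memory(arr, flat))  # False, a new buffer was allocated (the copy)
</code></pre>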
python|arrays|numpy|memory|flatten
2