Dataset schema (column: dtype, value range; string columns show length range):
  Unnamed: 0  int64   0 – 378k
  id          int64   49.9k – 73.8M
  title       string  length 15 – 150
  question    string  length 37 – 64.2k
  answer      string  length 37 – 44.1k
  tags        string  length 5 – 106
  score       int64   -10 – 5.87k
378,000
70,493,098
pandas-how can I replace rows in a dataframe
I am new to Python and am trying to replace rows. I have a dataframe such as:
    X    Y
    1    ...

First use `DataFrame.iloc` (http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html). Python counts from 0, so to select the second row use 1 and for the fifth use 4: `df.iloc[[1, 4]] = df.iloc...`
python|pandas|replace|rows
1
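A minimal runnable sketch of the `iloc` assignment the answer describes, on hypothetical data (the question's real dataframe is truncated above):

```python
import pandas as pd

df = pd.DataFrame({"X": [1, 2, 3, 4, 5], "Y": [10, 20, 30, 40, 50]})

# Positions are 0-based: 1 is the second row, 4 is the fifth.
df.iloc[[1, 4]] = [99, 88]  # both selected rows become X=99, Y=88
print(df)
```

The right-hand side broadcasts across the selected rows, so each of them gets the same X/Y pair.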
378,001
70,628,013
What do the coordinates of verts from Marching Cubes mean?
I have a 3D generated voxel model of a vehicle, and the coordinates of the voxels are in the vehicle reference frame. The origin is at the center of the floor. It looks like this:
    array([[-2.88783681, -0.79596956, 0.],
           [-2.8752784, -0.79596956, 0.],
           [-2.86271998, -0.79596956, 0.],
           ...,
           [...

verts are just points in space. Essentially each vert is a corner of some triangle (usually more than one). To know what the actual triangles are, look at `faces`, which will be something like `[(v1, v2, v3), (v1, v4, v5), ...]`. Each tuple in the list includes 3 indices ...
python|numpy|scikit-learn|marching-cubes
0
378,002
70,396,252
Python Pandas- How do I subset time intervals into smaller ones?
Let's imagine I have a timeseries dataframe of temperature sensor data in 30-minute intervals. How do I subset each 30-minute interval into smaller 5-minute intervals while accounting for the temperature drop between each interval? I imagine that doing something like this could work: ...

I would do it with a resample of the data frame to a finer time resolution ("6T" in this case, with T meaning minutes). This will create new rows for the missing time steps with NaN values; then you can fill those NaNs somehow. For what you describe, I think linear interpolation can be enough. Here you h...
python|pandas|datetime
1
378,003
70,655,084
Convert http text response to pandas dataframe
I want to convert the below text into a pandas dataframe. Is there a way I can use a pandas pre-built or in-built parser to convert it? I can write a custom parsing function, but I want to know if there is a pre-built and/or fast solution. In this example, the dataframe should result in two rows, one each of ...

You've listed everything you need as tags. Use `json.loads` to get a dict from the string:
    import json
    import pandas as pd
    d = json.loads('''{ "data": [ { "ID": "ABC", "Col1": "ABC_C1", "Col2": "ABC_C2...
python|json|pandas|parsing
0
378,004
70,425,398
numpy to spark error: TypeError: Can not infer schema for type: <class 'numpy.float64'>
While trying to convert a numpy array into a Spark DataFrame, I receive a `Can not infer schema for type: <class 'numpy.float64'>` error. The same thing happens with `numpy.int64` arrays. Example: `df = spark.createDataFrame(numpy.arange(10.))` ...

Or without using pandas: `df = spark.createDataFrame([(float(i),) for i in numpy.arange(10.)])`
pandas|numpy|apache-spark|pyspark
1
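The error comes from Spark not recognizing NumPy scalar types; the answer's fix converts each element to a native Python float before building rows. A sketch of just the conversion step (the `spark.createDataFrame` call is left commented out so the snippet runs without a Spark session):

```python
import numpy as np

arr = np.arange(10.0)              # dtype float64 -> elements are numpy.float64 scalars
rows = [(float(x),) for x in arr]  # native Python floats, one-column rows
# df = spark.createDataFrame(rows) # Spark could now infer the schema

print(all(type(r[0]) is float for r in rows))  # True
```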
378,005
70,525,249
Compare file name in a dataframe to file present in a directory and then fetch row value of a different column
I have a pandas dataframe which captures 2 columns: id and the corresponding filename. I want to run a loop and check whether each filename in this dataframe is present in a specific directory. If it is present, I want to fetch the id and filename. I am trying the following code: `x = df[[...`

`id = x["id"]` instantiates `id` with the whole column `x.id`, so the print statement will print the whole column every time it finds a matching file. Try `id = x.id[x.filename == filename]`
python|pandas
1
378,006
70,607,355
Sparse Categorical CrossEntropy causing NAN loss
So, I've been trying to implement a few custom losses, and thought I'd start off by implementing SCE loss without using the built-in TF object. Here's the function I wrote for it:
    def custom_loss(y_true, y_pred):
        print(y_true, y_pred)
        return tf.cast(tf.math.multiply(tf.experimental.numpy...

You can replicate the `SparseCategoricalCrossentropy()` loss function as follows:
    import tensorflow as tf

    def sparse_categorical_crossentropy(y_true, y_pred, clip=True):
        y_true = tf.convert_to_tensor(y_true, dtype=tf.int32)
        y_pred = tf.convert_t...
python|tensorflow|keras|loss-function
2
378,007
70,708,165
Apply condition on a column after groupby in pandas and then aggregate to get 2 max value
    data field bcorr
 0  A    cs1   0.8
 1  A    cs2   0.9
 2  A    cs3   0.7
 3  A    pq1   0.4
 4  A    pq2   0.6
 5  A    pq3   0.5
 6  B    cs1   0.8
 7  B    cs2   0.9
 8  B    cs3   0.7
 9  B    pq1   0.4
 10 B    pq2   0.6
 11 B    pq3   0.5
For every `A` and `B` in the `data` column, ...

First, extract the common part of each field (its leading letters), then sort values (highest values go to the bottom). Finally, group by the `data` column and the `field` series, then keep the last two values (the highest):
    field = df['field'].str.extract('([^\d]+)', expand=False)
    out = df.sort_values('bco...
python|pandas
0
378,008
70,514,667
Best way to remove specific words from column in pandas dataframe?
I'm working with a huge set of data that I can't work with in Excel, so I'm using pandas/Python, but I'm relatively new to it. I have a column of book titles that also include genres, both before and after the title. I only want the column to contain book titles, so what would be the easiest way to remove the genr...

This is an enhancement to @tdy's regex solution. The original regex `Family|Drama` will match the words "Family" and "Drama" anywhere in the string, so if a book title contains words from `genres`, those words would be removed as well. Suppose that the labels are separated by "...
python|pandas|string|dataframe
0
378,009
70,399,224
Changing column label Python plotly?
How do I change the column titles? The first column title should say "4-Year" and the second column title "2-Year". I tried using label={} but kept getting an error.
    df = pd.read_csv('college_data.csv')
    df1 = df[df.years > 2]
    df2 = df[df.years < 3]
    #CUNY College Table
    fig = g...

Change `values=list(df1[['college_name', 'college_name']]),` to `values=["4-year", "2-year"],` e.g.
    fig = go.Figure(data=[go.Table(
        header=dict(values=["4-year", "2-year"],
    ...
python|pandas|plotly-python|streamlit
1
378,010
70,653,419
Rename files in a folder using python
I have various doc and pdf files in my folder (almost 1000 of them). I want to rename all the files. My folder structure looks like:
    nikita
    ----------abc.doc
    ----------des.doc
    ----------jj1.pdf
I want each name to start with NC_. For example:
    nikita
    ---------...

The enumerate function accepts an optional argument to declare the index you want to start with; see https://docs.python.org/3/library/functions.html#enumerate for details. So if you added the argument like this: `enumerate(os.listdir(),...
python-3.x|pandas|dataframe
0
378,011
70,675,464
Creating categorical column based on multiple column values in groupby
    print(df.groupby(['Step1', 'Step2', 'Step3']).size().reset_index(name='Freq'))
       Step1  Step2  Step3  Freq
    0    6.0   17.6  28.60   135
    1    7.5  ...

As combinations of the three steps are unique, I used each combination as a dictionary key for the step type. Here I pre-defined the category values, but they can be auto-generated by scanning the df if needed.
    # df
       Step1  Step2  Step3
    0    6.0   17.6  28.60
    1    7.5   22.0  35.75
    2   10.5   30.8  ...
python|pandas
0
378,012
70,625,453
Replacing nan values in a Pandas data frame with lists
How do I replace NaN or empty strings (e.g. "") with zero wherever they occur in any column? The values in any column can be a combination of lists and scalar values, as follows:
    col1  col2  col3         col4
    nan   Jhon  [nan, 1, 2]  ['k', 'j']
    1     nan   [1, 1, 5]    3
    2     ""...

You have to handle the three cases (empty string, NaN, NaN inside a list) separately. For the NaN-in-list case you need to loop over each occurrence and replace the elements one by one. NB: `applymap` is slow, so if you know the columns to use in advance you can subset them. For the empt...
python|pandas
0
378,013
70,584,431
How to modify a dataset
I am working with a dataset like this, where the values of 'Country Name' are repeated several times, and 'Indicator Name' too (screenshot: https://i.stack.imgur.com/fPyT4.png). I want to create a...

You can use `pivot` as suggested by @Chris, but you can also try:
    out = df.set_index(['Country Name', 'Indicator Name']).unstack('Country Name').T \
            .rename_axis(index=['Year', 'Country'], columns=None).reset_index()
    print(out)
    # Output
       Year  Country  IndicatorName1  IndicatorName...
python|pandas|dataframe
0
378,014
70,542,409
How to resave a csv file using pandas in Python?
I have read a csv file using pandas, and I need to resave the csv file from code instead of opening the file and saving it manually. Is that possible?

There must be something I'm missing in the question. Why not simply:
    df = pd.read_csv('file.csv', ...)
    # any changes
    df.to_csv('file.csv')
?
python|pandas|csv
2
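A self-contained round-trip sketch of the read-then-resave pattern, written to a temporary file so it runs anywhere; `index=False` (an addition not in the answer) keeps pandas from writing the row index as an extra column:

```python
import os
import tempfile

import pandas as pd

path = os.path.join(tempfile.mkdtemp(), "file.csv")
pd.DataFrame({"a": [1, 2], "b": [3, 4]}).to_csv(path, index=False)

df = pd.read_csv(path)        # read ...
df["a"] = df["a"] * 10        # ... change something ...
df.to_csv(path, index=False)  # ... and resave in place
```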
378,015
70,561,095
Replace only replacing the 1st argument
I have the following code:
    df['Price'] = df['Price'].replace(regex={'$': 1, '$$': 2, '$$$': 3})
    df['Price'].fillna(0)
but even if a row has "$$" or "$$$", it is still replaced with 1.0. How can I make it replace $ with 1, $$ with 2, and $$$ w...

    df.Price.map({'$': 1, '$$': 2, '$$$': 3})
python|pandas
3
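The terse answer works because `Series.map` matches whole cell values, whereas `replace(regex=...)` treats `'$'` as a regex (where `$` is an end-of-string anchor, not a literal dollar sign). A runnable sketch on hypothetical data, chaining the `fillna(0)` from the question:

```python
import pandas as pd

df = pd.DataFrame({"Price": ["$", "$$$", "$$", None]})
df["Price"] = df["Price"].map({"$": 1, "$$": 2, "$$$": 3}).fillna(0).astype(int)
print(df["Price"].tolist())  # [1, 3, 2, 0]
```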
378,016
70,622,836
How to match string and arrange dataframe accordingly?
Given input df1 and df2. df1:
    Subcategory_Desc  Segment_Desc   Flow         Side   Row_no
    APPLE             APPLE LOOSE    Apple Kanzi  Front  Row 1
    APPLE             APPLE LOOSE    Apple Jazz   Front  Row 1
    CITRUS            ORANGES LOOSE  Oran...

So, given the following dataframes:
    import pandas as pd

    df1 = pd.DataFrame(
        {
            "Subcategory_Desc": {
                0: "APPLE",
                1: "APPLE",
                2: "CITRUS",
                3: "PEAR",
    ...
python|pandas|dataframe|string-matching|fuzzywuzzy
1
378,017
70,412,168
Data cleaning, dictionary, inside dictionary,inside lists in CSV
I'm a newbie learning data science. I've been trying to clean a data set, but I've hit some hurdles on the way. The first issue I had was to explode a dictionary inside a table into ... (see https://stackoverflow.com/questions/70404479/extract-data-in-a-column-from-a-csv-saved-as-a-dictionary-python-pandas)

You have the following dataframe given by your dictionary:
    data = {0: '{"id":1379875462,"name":"Batton Lash","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb...
python|pandas|dictionary|machine-learning|data-science
1
378,018
70,526,822
Display specific column through PANDAS
I have PortalMammals_species.csv, which contains the following columns:
    ['record_id', 'new_code', 'oldcode', 'scientificname', 'taxa', 'commonname', 'unknown', 'rodent', 'shrubland_affiliated']
I want to find out how many taxa are "Rodent" and display those records using pandas. ...

If you want a count of 'Rodent' only while still using value_counts(), you can try: `df['taxa'][df['taxa']=="Rodent"].value_counts()` Another option: `df['taxa'][df['taxa']=="Rodent"].count()` The PortalMammals_species.csv dataset I am s...
python|pandas|dataset
0
378,019
70,395,804
Keras: Loss for image rotation and translation (target registration error)?
My model returns 3 coordinates [x, y, angle]. I want TRE similarity between 2 images. My custom loss is:
    loss(y_true, y_pred):
        s = tfa.image.rotate(images=y_true[0], angles=y_pred[0][0])
        s = tfa.image.translate(images=s, translations=y_pred[0][1:])
        s = tf.reduce_sum(tf.sqrt(tf.square(s - y_true[1])))
    ...

I believe this will or will not work depending on the frequency distribution in your data. But in FFT space this might be easier.
python|tensorflow|rotation|loss|image-registration
0
378,020
70,473,295
python pandas how to read csv file by block
I'm trying to read a CSV file, block by block. The CSV looks like:
    No.,time,00:00:00,00:00:01,00:00:02,00:00:03,00:00:04,00:00:05,00:00:06,00:00:07,00:00:08,00:00:09,00:00:0A,...
    1,2021/09/12 02:16,235,610,345,997,446,130,129,94,555,274,4,
    2,2021/09/12 02:17,3...

Load your file with `pd.read_csv` and start a new block each time the value in the first column is `No.`. Use `groupby` to iterate over each block and create a new dataframe:
    data = pd.read_csv('data.csv', header=None)
    dfs = []
    for _, df in data.groupby(data[0].eq('No.')....
python|pandas|csv
3
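A runnable sketch of the block-splitting idea: a cumulative sum over rows whose first field is `No.` gives each block its own group id (the sample data here is made up, since the real file is truncated above):

```python
import io

import pandas as pd

raw = """No.,time,00:00:00,00:00:01
1,2021/09/12 02:16,235,610
2,2021/09/12 02:17,312,440
No.,time,00:00:00,00:00:01
1,2021/09/13 02:16,101,202
"""

data = pd.read_csv(io.StringIO(raw), header=None)
dfs = []
# cumsum turns the repeated 'No.' header rows into block ids 1, 2, ...
for _, block in data.groupby(data[0].eq("No.").cumsum()):
    header, body = block.iloc[0], block.iloc[1:]
    dfs.append(body.set_axis(header.tolist(), axis=1).reset_index(drop=True))

print(len(dfs))  # 2 blocks
```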
378,021
70,559,780
Pandas - return value of column
I have a df with categories and thresholds:
    cat  t1  t2  t3  t4
    a    2   4   6   8
    b    3   5   7   0
    c    0   0   1   0
My end goal is to return the column name given a category and a score. I can select a row using a cat variable: `df[df['cat'] == cat]` How do I now return...

You can compute the absolute difference to your value and get the index of the minimum with `idxmin`:
    value = 3
    cat = 'c'
    (df.set_index('cat')
       .loc[cat]
       .sub(value).abs()
       .idxmin()
    )
Output: 't3'
Ensuring the result is rounded down:
    value = ...
python|pandas
1
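A runnable version of the `idxmin` trick on the question's table: subtract the score, take absolute values, and the index of the smallest difference is the closest threshold column:

```python
import pandas as pd

df = pd.DataFrame({"cat": ["a", "b", "c"],
                   "t1": [2, 3, 0], "t2": [4, 5, 0],
                   "t3": [6, 7, 1], "t4": [8, 0, 0]})

value, cat = 3, "c"
col = df.set_index("cat").loc[cat].sub(value).abs().idxmin()
print(col)  # 't3' -- 1 is the value closest to 3 in row 'c'
```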
378,022
70,568,067
Calculate standard deviation for groups of values using Python
My data looks similar to this:
    index  name  number  difference
    0      AAA   10      0
    1      AAA   20      10
    2      BBB   1       0
    3      BBB   2       1
    4      CCC   5       0
    5      CCC   10      5
    6      CCC   10.5    0.5
I need to calculate the standard deviation for the difference column based on groups of na...

You can use `groupby(['name'])` on the full data frame first, and apply the agg only to the columns of interest:
    data = pd.DataFrame({'name': ['AAA','AAA','BBB','BBB','CCC','CCC','CCC'],
                         'number': [10, 20, 1, 2, 5, 10, 10.5],
                         'difference': [0, 10, 0, 1, 0, 5, 0.5]})...
python|pandas-groupby|aggregate|standard-deviation
2
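The answer's groupby aggregation spelled out end-to-end; note that pandas' `std` uses the sample formula (ddof=1) by default:

```python
import pandas as pd

data = pd.DataFrame({"name": ["AAA", "AAA", "BBB", "BBB", "CCC", "CCC", "CCC"],
                     "number": [10, 20, 1, 2, 5, 10, 10.5],
                     "difference": [0, 10, 0, 1, 0, 5, 0.5]})

out = data.groupby("name")["difference"].agg(["mean", "std"])
print(out)
```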
378,023
70,528,867
How to fix "pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available" when installing Tensorflow?
I was trying to install TensorFlow with Anaconda 3.9.9. I ran the command `pip install tensorflow` and there was an error saying: `WARNING: pip is configured with locations that require TL...`

Try upgrading pip first to fix this issue: `pip install --upgrade pip` Then please create a virtual environment to install TensorFlow in Anaconda. Follow the code below to install TensorFlow in the virtual environment: `conda...`
python|tensorflow
0
378,024
70,670,965
Python Pandas How to get rid of groupings with only 1 row?
In my dataset, I am trying to get the margin between two values. The code below runs perfectly if the fourth race is not included. After grouping on a column, sometimes there is only 1 value, and therefore no other value to compute a margin from. I want to ignore those groupings in that case. H...

How about returning NaN if `times` does not have enough elements:
    import numpy as np

    def winning_margin(times):
        if len(times) <= 1:       # New code
            return np.NaN        # New code
        times = list(times)
        winner = min(times)
        times.remove(winner)
        return min(times) - winner
    ...
python-3.x|pandas|dataframe
0
378,025
70,450,017
Perform unique row operation after a groupby
I have been stuck on a problem where I have done all the groupby operations and got the resultant dataframe shown below, but the problem came in the last step: calculating one additional column. Current dataframe:
    code  industry  category  count  duration
    2     ...

I can't think of a single operation, but the way via a dictionary should work. And, for the other answerers, here is the code to create the example dataframe:
    st_l = [[2, 'Retail', 'Mobile', 4, 7],
            [3, 'Retail', 'Tab', 2, 33],
            [3, 'Health', 'Mobile', 5, 103],
            [2, 'Food', 'TV', 1, ...
python-3.x|pandas|dataframe|pandas-groupby
0
378,026
70,514,647
Applying a condition for all similar values within a column in a Pandas dataframe
I have the following dataset in a pandas dataframe:
    Patient_ID  Image_Type
    P001        Paired
    P001        Paired
    P001        Paired
    P001        CBCT
    P002        CBCT
    P002        CBCT
    P002        CBCT
    P002        CBCT
    P002        CBCT
    P002        ...

IIUC:
    df[df["Image_Type"] == "CBCT"].groupby("Patient_ID").size() == df.groupby("Patient_ID").size()
    #Patient_ID
    #P001    False
    #P002     True
    #P003     True
    #dtype: bool
I'm using `df` as:
       Patient_ID  Image_Type
    0  ...
python|pandas
-1
378,027
70,477,609
Optimizing using only accuracy
As I understand it, we optimize our model by changing the weight parameters over the iterations. The aim is to minimize the loss and maximize the accuracy. I don't understand why we use loss as a parameter as well if we have accuracy as a parameter. Can we use only accuracy and drop loss from ou...

In short, perfecting a neural network is all about minimizing the difference between the intended result and the given result. The difference is known as the cost/loss. So the smaller the cost/loss, the closer the output is to the intended value, and the higher the accuracy. I suggest you watch 3Blue1Brown's video series on neural ...
python|tensorflow|keras|deep-learning|neural-network
0
378,028
70,604,882
Train a model with a task and test it with another task?
I have a data frame consisting of 3000 samples, n features, and two target columns, as follows:
    mydata:
    id, f1, f2, ..., fn, target1, target2
    01, 23, 32, ..., 44, 0, 1
    02, 10, 52, ..., 11, 1, 2
    03, 66, 15, ..., 65, 1, 0
    ...

This is not called multi-task learning but transfer learning. It would be multi-task learning if you had trained your model to predict both `target1` and `target2`. Yes, there are ways to handle this issue. The final layer of the model is just the classifier head that computes the fi...
python|dataframe|tensorflow|machine-learning|neural-network
0
378,029
70,470,991
how to create a stacked bar chart indicating time spent on nest per day
I have some data on an owl being present in its nest box. In a previous question you helped me visualize when the owl is in the box (image: https://i.stack.imgur.com/9L3JY.png). In addition, I created a plot of the hours per day spent in the box with the code below (probably this c...

The data seems to have been created manually, so I have changed the format of the data presented. The approach I took was to build both the time spent and the time not spent over a continuous index of 1-minute intervals, using the start and end times as the interval bounds and a flag of 1. Now, to create the non-stay time, I w...
python|pandas|time-series|stackedbarseries
0
378,030
70,437,442
pandas rolling on specific column
I'm trying something seemingly very simple: a rolling sum on a column of a dataframe. See the minimal example below:
    df = pd.DataFrame({"Col1": [10, 20, 15, 30, 45],
                       "Col2": [13, 23, 18, 33, 48],
                       "Col3": [17, 27, ...

The correct syntax is:
    df['sum2'] = df.rolling(window="3d", min_periods=2, on='dt')['Col1'].sum()
    print(df)
    # Output:
       Col1  Col2  Col3         dt  sum2
    0    10    13    17 2020-01-01   NaN
    1    20    23    27 2020-01-02  30.0
    2    15    18    22 2020-01-03  45.0
    3    30    33    37 2020-...
python|pandas
3
378,031
70,597,991
how add new column with column names based on conditioned values?
I have a table that contains active COVID cases per country over a period of time. The columns are country name and dates. I need to find the max value of active cases per country and the corresponding date of that max value. I have created a list of max values but can't manage to create a column with the cor...

In pandas we try not to use Python loops unless we REALLY need them. I suppose that your dataset looks something like this:
    df = pd.DataFrame({"Country": ["Poland", "Ukraine", "Czechia", "Russia"],
    ...
python|pandas|dataframe
1
378,032
70,499,932
Multiple plots from function Matplotlib
(Adjusted to suggestions) I already have a function that performs a plot:
    def plot_i(Y, ax=None):
        if ax == None:
            ax = plt.gca()
        fig = plt.figure()
        ax.plot(Y)
        plt.close(fig)
        return fig
And I wish to use this to plot in a grid for n arrays. Let's assume ...

If I understand correctly, you are looking for something like this:
    import numpy as np
    import matplotlib.pyplot as plt

    def plot_i(Y, ax=None):
        if ax == None:
            ax = plt.gca()
        ax.plot(Y)
        return

    def multi_plot(Y_arr, function, n_cols=2):
        n = Y...
python|numpy|matplotlib
0
378,033
70,428,401
Is it possible in numpy array to add rows with different length and then add elements to that rows in python?
- Python version: 3.7.11
- numpy version: 1.21.2
I want to have a numpy array, something like below:
    [
      ["Hi", "Anne"],
      ["How", "are", "you"],
      ["fine"]
    ]
But the process of creating this nu...

Numpy arrays are not optimised for inconsistent dimensions, so this is not good practice. You can only do it by making your elements objects, not strings. But as I said, numpy is not the way to go for this.
    a = numpy.array([["Hi", "Anne"], ["How", "are", ...
python|arrays|python-3.x|numpy
2
378,034
42,724,786
Why numpy argmax() not getting the index?
So I'm really new to data analysis and the numpy library, and am just playing around with the built-in functions. I have this at the top of my file: `import numpy as np`
    new_arr = np.arange(25)
    print new_arr.argmax()
which should print out the index of the maximum value, not ...

`np.arange` starts from zero (unless you give it a different start), and indexing is also 0-based. So in `np.arange(25)`, the element at index 0 is 0, the element at index 1 is 1, etc. Every element is the same number as its index, so the maximum value is 24 and its index is also 24.
python|python-2.7|numpy|data-science
0
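The answer's point, as a runnable check: in `np.arange(25)` every value equals its index, so `argmax` returning 24 is both the max value and its position; an unsorted array makes the distinction visible:

```python
import numpy as np

arr = np.arange(25)
print(arr.argmax())   # 24: index of the largest value (which happens to also be 24)

vals = np.array([3, 7, 2])
print(vals.argmax())  # 1: the largest value, 7, sits at index 1
```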
378,035
42,962,215
Python - how to correctly index numpy array with other numpy arrays, similarly to MATLAB
I'm trying to learn Python after years of using MATLAB, and this is something I'm really stuck with. I have an array, say 10 by 8. I want to find rows that have the value 3 in the first column and take columns "2:" of those rows. What I do is:
    newArray = oldArray[np.asarray(np.where(oldArray[:,0] == 3)), 2:...

Slice the first column and compare it against 3 to give us a mask for selecting rows. After selecting rows by indexing into the first axis/rows of the 2D input array, we need to select the columns (second axis of the array). In your MATLAB code you have `3:end`, ...
python|arrays|matlab|numpy|indexing
2
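A runnable sketch of the boolean-mask equivalent of the MATLAB pattern (the 3s are planted at hypothetical positions so the mask has something to find):

```python
import numpy as np

old = np.arange(80).reshape(10, 8).astype(float)
old[3, 0] = 3   # plant the value 3 in the first column of two rows
old[7, 0] = 3

new = old[old[:, 0] == 3, 2:]   # mask selects rows, then slice columns 2 onward
print(new.shape)  # (2, 6)
```

No `np.where`/`np.asarray` wrapping is needed: the boolean mask indexes the row axis directly.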
378,036
42,842,418
Using apply on pandas dataframe with strings without looping over series
I have a pandas DataFrame filled with strings. I would like to apply a string operation to all entries, for example `capitalize()`. I know that for a Series we can use `series.str.capitalize()`. I also know that I can loop over the columns of the DataFrame and do this for each of them. But...

Use `stack` + `unstack`. `stack` turns a dataframe with a single-level column index into a series. You can then perform your `str.capitalize()` and `unstack` to get back your original form: `df.stack().str.capitalize().unstack()`
string|python-3.x|pandas|dataframe|apply
1
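The one-liner from the answer in runnable form; `df.apply(lambda col: col.str.capitalize())` is an equivalent column-wise alternative:

```python
import pandas as pd

df = pd.DataFrame({"a": ["foo", "bar"], "b": ["baz", "qux"]})

# stack -> Series of all cells, vectorized str op, unstack -> original shape
out = df.stack().str.capitalize().unstack()
print(out)
```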
378,037
42,896,605
Tensorflow - About mnist.train.next_batch()
When I searched for mnist.train.next_batch() I found this: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/learn/python/learn/datasets/mnist.py ...

Re 1: when `shuffle=True`, the order of examples in the data is randomized. Re 2: yes, it should respect whatever order the examples have in the numpy arrays.
python|machine-learning|tensorflow|mnist
3
378,038
42,676,213
Can my numba code be faster than numpy
I am new to Numba and am trying to speed up some calculations that have proved too unwieldy for numpy. The example below compares a function containing a subset of my calculations using vectorized/numpy and Numba versions of the function, the latter of which was also tested as pure Python by commenting...

On my machine running Numba 0.31.0, the Numba version is 2x faster than the vectorized solution. When timing Numba functions, you need to run the function more than once, because the first run includes the time spent jitting the code plus the run time. Subsequent runs will not include the overhead of jitting the ...
python|numpy|numba
0
378,039
42,815,644
return streams for multiple securities in pandas
Suppose I have a table which looks like this:
          Ticker  Date        ClosingPrice
    0     A       01-02-2010  11.4
    1     A       01-03-2010  11.5
    ...
    1000  AAPL    01-02-2010  634
    1001  AAPL    01-02-2010  635
In other words, we have a sequence of timeseries spliced together, one per ...

Use `groupby`:
    df.set_index(['Ticker', 'Date']).ClosingPrice.groupby(level=0).pct_change()

    Ticker  Date
    A       01-02-2010         NaN
            01-03-2010    0.008772
    AAPL    01-02-2010         NaN
            01-02-2010    0.001577
    Name: ClosingPrice, dtype: float64
python|pandas|time-series
1
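The grouped `pct_change` from the answer in runnable form (the second AAPL row carries a duplicate date in the original table; it is changed to 01-03-2010 here purely so the toy data is well-formed):

```python
import pandas as pd

df = pd.DataFrame({"Ticker": ["A", "A", "AAPL", "AAPL"],
                   "Date": ["01-02-2010", "01-03-2010", "01-02-2010", "01-03-2010"],
                   "ClosingPrice": [11.4, 11.5, 634.0, 635.0]})

# level=0 groups by Ticker, so returns never bridge two securities
ret = df.set_index(["Ticker", "Date"]).ClosingPrice.groupby(level=0).pct_change()
print(ret)
```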
378,040
42,930,485
How can I slice 2D array in python without Numpy module?
1. What I want to do is slice a 2D array partially, without the numpy module, like the following example with numpy.
2. I also want to know the time complexity of slicing lists with Python's basic functions.
    import numpy as np
    A = np.array([[1,2,3,4,5,6,7,8] for i in range(8)])
    n = len(A[0])
    x = int(n/2...

For the first question:
    A = [[1,2,3,4,5,6,7,8] for i in range(8)]
    n = len(A[0])
    x = int(n/2)
    TEMP = [[None]*2 for i in range(2)]
    for w in range(2):
        for q in range(2):
            TEMP[w][q] = [item[q * x:(q * x) + x] for item in A[w * x:(w * x) + x]]
    for w in range(2):
        for q in range(2):
    ...
python|arrays|numpy
1
378,041
42,733,798
Countvectorizer having words not in data
I am new to sklearn and CountVectorizer. Some weird behaviour is happening to me. Initializing the count vectorizer:
    from sklearn.feature_extraction.text import CountVectorizer
    count_vect = CountVectorizer()
    document_mtrx = count_vect.fit_transform(df['description'])
    count_vect.vocab...

CountVectorizer (http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) has a parameter `lowercase` which defaults to `True`; most probably that's why you can't find those values. So try this: ...
pandas|scikit-learn
2
378,042
42,966,813
Pandas DataFrame to drop rows in the groupby
I have a DataFrame with three columns: `Date`, `Advertiser` and `ID`. I grouped the data first to see whether the volumes of some advertisers are too small (for example, when `count()` is less than 500). Then I want to drop those rows from the grouped table.
    df.groupby(['Date','Adve...

You have a Series object after the `groupby`, which can be filtered on value with a chained lambda filter:
    df.groupby(['Date','Advertiser']).ID.count()[lambda x: x >= 500]
    #Date     Advertiser
    #2016-01  A             50000
    #         C              4000
    #         D    ...
python|pandas|dataframe
11
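The chained-lambda filter from the answer, runnable on toy data (threshold lowered to 3 in place of the question's 500):

```python
import pandas as pd

df = pd.DataFrame({"Date": ["2016-01"] * 6,
                   "Advertiser": ["A", "A", "B", "B", "B", "C"],
                   "ID": range(6)})

counts = df.groupby(["Date", "Advertiser"]).ID.count()
big = counts[lambda x: x >= 3]   # keep only groups with enough rows
print(big)
```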
378,043
42,614,993
Back propagation algorithm gets stuck on training AND function
Here is an implementation of the AND function with a single neuron using tensorflow:
    def tf_sigmoid(x):
        return 1 / (1 + tf.exp(-x))

    data = [(0, 0), (0, 1), (1, 0), (1, 1)]
    labels = [0, 0, 0, 1]
    n_steps = 1000
    learning_rate = .1
    x = tf.placeholder(dtype=tf.float...

Aren't you missing a `mean(error)`? Your problem is the particular combination of the sigmoid, the cost function, and the optimizer. Don't feel bad; AFAIK this exact problem stalled the entire field for a few years. Sigmoid is flat when you're far from the middle, and you...
python|machine-learning|tensorflow|neural-network
2
378,044
42,993,439
How to Pivot a table in csv horizontally in Python using Pandas df?
I have data in this format:
    MonthYear  HPI     Div  State_fips
    1-1993     105.45  7    5
    2-1993     105.58  7    5
    3-1993     106.23  7    5
    4-1993     106.63  7    5
Required pivot table:
    Stafips  1-1993  2-1993  3-1993  4-1993
    5        105.45  105.58  106.23  106.63
(pretty new to pandas)

Use `unstack` (http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html) or `pivot` (http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html):
    df1 = df.s...
python|csv|pandas|dataframe|pivot
1
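The `pivot` version from the answer, runnable on the question's sample rows:

```python
import pandas as pd

df = pd.DataFrame({"MonthYear": ["1-1993", "2-1993", "3-1993", "4-1993"],
                   "HPI": [105.45, 105.58, 106.23, 106.63],
                   "Div": [7, 7, 7, 7],
                   "State_fips": [5, 5, 5, 5]})

# one row per State_fips, one column per MonthYear, HPI as the cell values
out = df.pivot(index="State_fips", columns="MonthYear", values="HPI")
print(out)
```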
378,045
42,729,468
Dimensionality Reduction in Python (defining the variance threshold)
Afternoon. I'm having some trouble with my script. Specifically, I'd like to keep the singular values and their corresponding eigenvectors when the sum of a subset of the eigenvalues is greater than 0.9 times the sum of all the eigenvalues. So far I've been able to use a for loop and append function that creates a list ...

I think your `singular_array`, with its "mixed" scalar/vector elements, is a bit more than `np.sum` can handle. I'm not 100% sure, but aren't the variances the squares of the singular values? In other words, shouldn't you be using your eigenvalues for the decision? Anyway, here...
arrays|numpy|iteration|linear-algebra|python-3.6
0
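A sketch of the variance-threshold decision the answer suggests, assuming (as the answer does) that variances are the *squared* singular values; the random data is just a stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 6))

# Singular values of the centered data matrix.
s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)

# Variance explained is proportional to the squared singular values.
var = s ** 2
ratio = np.cumsum(var) / var.sum()

# Smallest k whose leading components explain at least 90% of the variance.
k = int(np.searchsorted(ratio, 0.9) + 1)
print(k, ratio)
```

This avoids the explicit for loop and append entirely: the cumulative ratio is monotone, so `searchsorted` finds the cutoff directly.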
378,046
42,954,655
Python pandas read dataframe from custom file format
<p>Using Python 3 and pandas 0.19.2</p> <p>I have a log file formatted this way:</p> <pre><code>[Header1][Header2][Header3][HeaderN] [=======][=======][=======][=======] [Value1][Value2][Value3][ValueN] [AnotherValue1][ValuesCanBeEmpty][][] ... </code></pre> <p>...which is very much like a CSV excepted that each val...
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow noreferrer"><code>read_csv</code></a> with separator <code>][</code> which has to be escape by <code>\</code>. Then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.ht...
pandas|parsing|dataframe
1
378,047
42,913,564
Numpy C API - Using PyArray_Descr for array creation causes segfaults
<p>I'm trying to use the Numpy C API to create Numpy arrays in C++, wrapped in a utility class. Most things are working as expected, but whenever I try to create an array using one of the functions taking a <code>PyArray_Descr*</code>, the program instantly segfaults. What is the correct way to set up the <code>PyArray...
<p>Try using </p> <pre><code>descr = PyArray_DescrFromType(NPY_UINT16); </code></pre> <p>I've only recently been writing against the numpy C-API, but from what I gather the PyArray_Descr is basically the dtype from python-land. You shouldn't build these yourself; use the FromType macro if you can.</p>
python|c++|numpy
4
378,048
42,888,873
Frobenius normalization implementation in tensorflow
<p>I'm beginner in <strong>tensorflow</strong> and i want to apply <a href="http://mathworld.wolfram.com/FrobeniusNorm.html" rel="nofollow noreferrer">Frobenius normalization</a> on a tensor but when i searched i didn't find any function related to it in tensorflow and i couldn't implement it using tensorflow ops, i ca...
<pre><code>def frobenius_norm_tf(M): return tf.reduce_sum(M ** 2) ** 0.5 </code></pre>
tensorflow|normalization
2
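As a cross-check on the TensorFlow one-liner, the same reduction in plain NumPy (the Frobenius norm is just the square root of the sum of squared entries):

```python
import numpy as np

def frobenius_norm(M):
    # Square root of the sum of squared entries.
    return np.sqrt((M ** 2).sum())

M = np.array([[1.0, 2.0], [3.0, 4.0]])
print(frobenius_norm(M))   # sqrt(30)
print(np.linalg.norm(M))   # NumPy's built-in default gives the same value
```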
378,049
42,800,565
using or to return two columns. Pandas
<pre><code>def continent_af(): africa = df[df['cont'] == 'AF' or df['cont'] == 'af'] return africa print(continent_af()) </code></pre> <p>So the first half of the second line returned what I wanted, but when I put the or function in, i am getting an error, which reads </p> <blockquote> <p>the truth value of...
<p>Try:</p> <pre><code>df[(df['cont'] == 'AF') | (df['cont'] == 'af')] </code></pre>
python|pandas
0
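A runnable sketch of why the parentheses and `|` matter (Python's `or` asks for a single truth value, which a Series doesn't have); the sample frame is invented:

```python
import pandas as pd

df = pd.DataFrame({"cont": ["AF", "af", "EU", "AF"], "pop": [1, 2, 3, 4]})

# Parenthesised conditions combined with `|` (element-wise or):
africa = df[(df["cont"] == "AF") | (df["cont"] == "af")]

# A case-insensitive alternative that needs only one comparison:
africa2 = df[df["cont"].str.lower() == "af"]
print(africa)
```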
378,050
42,980,112
Trouble creating/manipulating Pandas DataFrame from given list of JSON records
<p>I have json records in the file json_data. I used <code>pd.DataFrame(json_data)</code> to make a new table, <code>pd_json_data</code>, using these records.</p> <p><a href="https://i.stack.imgur.com/HuoYq.png" rel="nofollow noreferrer">pandas table pd_json_data</a></p> <p>I want to manipulate <code>pd_json_data</co...
<h3>Updated Answer</h3> <p>Make fake data</p> <pre><code>df = pd.DataFrame({'number of checks': [5, 10, 300, 8], 'positive checks':[[1,3,10], [10,11], [9,200], [1,8,7]], 'url': ['a', 'b', 'c', 'd']}) </code></pre> <p>Output</p> <pre><code> number of checks positive checks u...
python|json|pandas
0
378,051
42,988,302
Pandas groupby results on the same plot
<p>I am dealing with the following data frame (only for illustration, actual df is quite large):</p> <pre><code> seq x1 y1 0 2 0.7725 0.2105 1 2 0.8098 0.3456 2 2 0.7457 0.5436 3 2 0.4168 0.7610 4 2 0.3181 0.8790 5 3 ...
<p>You need to init axis before plot like in this example</p> <pre><code>import pandas as pd import matplotlib.pylab as plt import numpy as np # random df df = pd.DataFrame(np.random.randint(0,10,size=(25, 3)), columns=['ProjID','Xcoord','Ycoord']) # plot groupby results on the same canvas fig, ax = plt.subplots(fi...
python|pandas|matplotlib
9
378,052
42,748,566
pandas diff() giving 0 value for first difference, I want the actual value instead
<p>I have df:</p> <pre><code>Hour Energy Wh 1 4 2 6 3 9 4 15 </code></pre> <p>I would like to add a column that shows the per hour difference. I am using this:</p> <pre><code>df['Energy Wh/h'] = df['Energy Wh'].diff().fillna(0) </code></pre> <p>df1:</p> <pre><cod...
<p>You can just <code>fillna()</code> with the original column, without using <code>np.where</code>:</p> <pre><code>&gt;&gt;&gt; df['Energy Wh/h'] = df['Energy Wh'].diff().fillna(df['Energy Wh']) &gt;&gt;&gt; df Energy Wh Energy Wh/h Hour 1 4 4.0 2 6 2.0 3 9...
python|pandas|numpy|dataframe
23
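The accepted one-liner, run end to end on the question's numbers:

```python
import pandas as pd

df = pd.DataFrame({"Energy Wh": [4, 6, 9, 15]},
                  index=pd.Index([1, 2, 3, 4], name="Hour"))

# diff() leaves NaN in the first row; filling from the original column
# keeps the first reading instead of forcing it to zero.
df["Energy Wh/h"] = df["Energy Wh"].diff().fillna(df["Energy Wh"])
print(df)
```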
378,053
43,024,835
How do I read a bytearray from a CSV file using pandas?
<p>I have a <code>csv</code> file which has a column full of <code>bytearrays</code>. It looks like this:</p> <pre><code>bytearray(b'\xf3\x90\x02\xff\xff\xff\xe0?') bytearray(b'\xf3\x90\x02\xff\xff\xff\xe0?') bytearray(b'\xf3\x00\x00\xff\xff\xff\xe0?') </code></pre> <p>and so on. I tried to read this <code>csv</code>...
<p>You can pass a converter function to <code>pandas.read_csv()</code> to turn each <code>bytearray</code> string into an actual <code>bytearray</code></p> <p><strong>Code:</strong></p> <pre><code>from ast import literal_eval def read_byte_arrays(bytearray_string): if bytearray_string.startswith('bytearray(') and \ ...
python|csv|pandas
2
378,054
42,983,698
Cant iterate through multiple pandas series
<p>So I was attempting to iterate through two series that I obtained from a Pandas DF, and I found that I could not iterate through them to return numbers less than 280.000. I also realized that I could not iterate over lists either. Is there any way I can iterate over multiple lists, series, etc? thanks. Example below...
<p>Currently, two_series is just a tuple with two elements, each of which is a Series. So when you loop through all the elements of two_series, i is the whole series, and you only loop twice. It doesn't make sense to ask if a Series is less than 280, so it throws an error.</p> <p>You could just concatenate the series,...
python|pandas
0
378,055
42,751,748
using python to project lat lon geometry to utm
<p>I have a dataframe with earthquake data called eq that has columns listing latitude and longitude. using geopandas I created a point column with the following:</p> <pre><code>from geopandas import GeoSeries, GeoDataFrame from shapely.geometry import Point s = GeoSeries([Point(x,y) for x, y in zip(df['longitude'], d...
<p>Latitude/longitude aren't really a projection, but sort of a default "unprojection". See <a href="http://www.georeference.org/doc/latitude_longitude_projection.htm" rel="nofollow noreferrer">this page for more details</a>, but it probably means your data uses <code>WGS84</code> or <code>epsg:4326</code>.</p> <p>Let...
python|gis|geopandas
4
378,056
42,889,621
Converting numpy array values into integers
<p>My values are currently showing as <code>1.00+e09</code> in an array (type float64). I would like them to show <code>1000000000</code> instead. Is this possible?</p>
<p>Make a sample array</p> <pre><code>In [206]: x=np.array([1e9, 2e10, 1e6]) In [207]: x Out[207]: array([ 1.00000000e+09, 2.00000000e+10, 1.00000000e+06]) </code></pre> <p>We can convert to ints - except notice that the largest one is too large the default int32</p> <pre><code>In [208]: x.astype(int) Out[208]:...
python|numpy
7
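A short sketch of the conversion, asking for a 64-bit integer explicitly since the platform-default int can be 32-bit and overflow on values like 2e10:

```python
import numpy as np

x = np.array([1e9, 2e10, 1e6])

# Request int64 explicitly rather than relying on the platform default.
y = x.astype(np.int64)
print(y)
```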
378,057
42,737,025
How to find minimum value every x values in an array?
<pre><code>path = ("C:/Users/Calum/AppData/Local/Programs/Python/Python35-32/Python Programs/PV Data/Monthly Data/brunel-11-2016.csv") with open (path) as f: readCSV = csv.reader((islice(f, 0, 8352)), delimiter = ';') irrad_bru1 = [] for row in readCSV: irrad1 = row[1] irrad_bru1.append(...
<p>You can use <code>np.minimum.reduceat</code>:</p> <pre><code>np.minimum.reduceat(a, np.arange(0, len(a), 200)) </code></pre>
python|numpy|minimum
0
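A small worked example of `np.minimum.reduceat` (block size 2 here so the result is easy to check by eye; the question would use 200):

```python
import numpy as np

a = np.array([5, 3, 8, 1, 9, 2, 7, 4])

block = 2
# reduceat applies the minimum between consecutive start offsets.
mins = np.minimum.reduceat(a, np.arange(0, len(a), block))
print(mins)   # [3 1 2 4]
```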
378,058
43,014,269
Add multiple rows for each datetime in pandas dataframe
<pre><code> name_col datetime 2017-03-22 0.2 </code></pre> <p>I want to add multiple rows till the present date (2017-03-25) so that resulting dataframe looks like:</p> <pre><code> name_col datetime 2017-03-22 0.2 2017-03-23 0.0 2017-03-24 0.0 2017-03-25 0...
<p>You can also use <code>.resample()</code> method:</p> <pre><code>In [98]: df Out[98]: name_col datetime 2017-03-22 0.2 In [99]: df.loc[pd.to_datetime(pd.datetime.now().date())] = 0 In [100]: df Out[100]: name_col datetime 2017-03-22 0.2 2017-03-25 0.0 In [101]: df.resamp...
python|pandas|datetime
4
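A modern sketch of the resample approach with the question's dates hard-coded (the original uses `pd.datetime`, which newer pandas removed):

```python
import pandas as pd

df = pd.DataFrame({"name_col": [0.2]},
                  index=pd.to_datetime(["2017-03-22"]))
df.index.name = "datetime"

# Append the end of the desired range, then fill every missing day with 0.
df.loc[pd.Timestamp("2017-03-25")] = 0
out = df.resample("D").asfreq().fillna(0)
print(out)
```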
378,059
42,811,697
Pandas DataFrame to csv: Specifying decimal separator for mixed type
<p>I've found a somewhat strange behaviour when I create a Pandas DataFrame from lists and convert it to csv with a specific decimal separator.</p> <p>This works as expected:</p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; a = pd.DataFrame([['a', 0.1], ['b', 0.2]]) &gt;&gt;&gt; a 0 1 0 a 0.1 1 b...
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.applymap.html" rel="nofollow noreferrer"><code>applymap</code></a> and cast the <code>float</code> typed cells to <code>str</code> by checking explicitly for their type. Then, replace the decimal dot<code>(.)</code> with the comma <c...
python|csv|pandas
2
378,060
42,977,488
DataFrame of soccer scores into a league table
<p>so I'm going to re-write this question having spent some time trying to crack it today, and I think I'm doing okay so far.</p> <p>I have a soccer results database with this as the head(3)</p> <pre><code> Date Season home visitor FT hgoal vgoal division tier totgoal goaldif result 1993-04...
<p>Consider using <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.core.groupby.DataFrameGroupBy.rank.html" rel="nofollow noreferrer">GroupBy.rank</a> or <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.Series.rank.html" rel="nofollow noreferrer">Series.rank</a>...
python|pandas|numpy
1
378,061
43,010,072
numpy.all axis parameter misbehavior?
<p>I have a following array.</p> <pre><code>a = np.array([[0, 5, 0, 5], [0, 9, 0, 9]]) &gt;&gt;&gt;a.shape Out[72]: (2, 4) &gt;&gt;&gt;np.all(a,axis=0) Out[69]: array([False, True, False, True], dtype=bool) &gt;&gt;&gt;np.all(a,axis=1) Out[70]: array([False, False], dtype=bool) </code></pre> <p>B...
<p><code>axis=0</code> means to AND the elements together <strong>along</strong> axis 0, so <code>a[0, 0]</code> gets ANDed with <code>a[1, 0]</code>, <code>a[0, 1]</code> gets ANDed with <code>a[1, 1]</code>, etc. The axis specified gets collapsed.</p> <p>You're probably thinking that it takes <code>np.all(a[0])</cod...
python|numpy|axis
3
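The answer's point, runnable: the named axis is the one that gets collapsed, not the one you index along.

```python
import numpy as np

a = np.array([[0, 5, 0, 5],
              [0, 9, 0, 9]])

# axis=0 collapses the rows: column j becomes a[0, j] AND a[1, j].
print(np.all(a, axis=0))   # [False  True False  True]

# axis=1 collapses the columns: row i becomes the AND over a[i, :].
print(np.all(a, axis=1))   # [False False]
```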
378,062
26,961,805
Accessing years within a dataframe in Pandas
<p>I have a dataframe wherein there is a column of datetimes:</p> <pre><code>rng = pd.date_range('1/1/2011', periods=4, freq='500D') print(rng) df = DataFrame(rng) </code></pre> <p>which looks like this:</p> <p><img src="https://i.stack.imgur.com/UlKfa.jpg" alt="dataframe"></p> <p>I would like to find the mean year...
<p>If you convert the column into a <a href="http://pandas.pydata.org/pandas-docs/version/0.15.0/generated/pandas.DatetimeIndex.html" rel="nofollow">DatetimeIndex</a>, then you can use its <code>year</code> attribute (which returns a NumPy array) and the array's <code>mean</code> method.</p> <pre><code>In [104]: pd.Da...
python|pandas
2
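A minimal sketch of the `year`-attribute route, with two explicit dates so the expected mean is obvious:

```python
import pandas as pd

idx = pd.DatetimeIndex(["2011-03-01", "2013-06-01"])

# .year yields an integer array; its mean is the mean year.
mean_year = idx.year.values.mean()
print(mean_year)   # 2012.0
```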
378,063
27,370,643
Pandas groupby monthly + prorate
<p>I have a MultiIndex series:</p> <pre><code>date xcs subdomain count 2012-04-05 111-11 zero 10 2012-04-11 222-22 m 25 2012-04-11 111-11 zero 30 </code></pre> <p>Basically the first 3 columns form a unique index. I need to group by year-month+xcs+subdomain, but...
<p>One way is to do it like this:</p> <p>Setup your dummy dataframe:</p> <pre><code>import pandas as pd data = """date xcs subdomain count 2012-04-05 111-11 zero 10 2012-04-11 222-22 m 25 2012-04-11 111-11 zero 30""" df = pd.read_csv(pd.io.common.StringIO(data), ...
pandas
1
378,064
27,132,757
Python pandas cumsum() reset after hitting max
<p>I have a pandas DataFrame with timedeltas as a cumulative sum of those deltas in a separate column expressed in milliseconds. An example is provided below:</p> <pre><code>Transaction_ID Time TimeDelta CumSum[ms] 1 00:00:04.500 00:00:00.000 000 2 00:00:04.600 00...
<p>Here's an example of how you might do this by iterating over each row in the dataframe. I created new data for the example for simplicity:</p> <pre><code>df = pd.DataFrame({'TimeDelta': np.random.normal( 900, 60, size=100)}) print df.head() TimeDelta 0 971.021295 1 734.359861 2 867.000397 3 992.166539 4 85...
python|pandas|timedelta|cumsum
11
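A plain-Python sketch of the resetting cumulative sum. The reset rule (back to 0 once the cap is reached, without carrying the overshoot) is an assumption; adjust it to the semantics you need.

```python
def capped_cumsum(values, cap):
    """Cumulative sum that resets to 0 once it reaches `cap` (hypothetical
    helper; the reset-to-zero rule is an assumption)."""
    total, out = 0.0, []
    for v in values:
        total += v
        if total >= cap:
            total = 0.0
        out.append(total)
    return out

print(capped_cumsum([900, 900, 900, 900], cap=2000))
```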
378,065
27,111,083
numpy slice to return last two dimensions
<p>Basically I'm looking for a function or syntax that will allow me to get the first 'slice' of the last two dimensions of a n dimensional numpy array with an arbitrary number of dimensions.</p> <p>I can do this but it's too ugly to live with, and what if someone sends a 6d array in? There must be a numpy function l...
<p>How about this,</p> <pre><code>def get_last2d(data): if data.ndim &lt;= 2: return data slc = [0] * (data.ndim - 2) slc += [slice(None), slice(None)] return data[tuple(slc)] </code></pre>
python|numpy|slice
2
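A tuple-based variant of the answer's helper; modern NumPy treats a list index as fancy indexing, so the slice sequence should be a tuple:

```python
import numpy as np

def get_last2d(data):
    # Build the index as a tuple: zeros for the leading axes,
    # full slices for the last two.
    slc = (0,) * (data.ndim - 2) + (slice(None), slice(None))
    return data[slc]

x = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)
print(get_last2d(x).shape)   # (4, 5)
```

For a 2-D input the leading part of the tuple is empty, so the array comes back whole and no special case is needed.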
378,066
14,918,357
replace integers in array Python
<p>They told me to post a new question to the <a href="https://stackoverflow.com/questions/14917975/accessing-elements-in-array-python">second part of the question</a>.</p> <p>Is there some way I can replace the first 8 integers in the multidimensional array with 8 integers of array that I created for example: </p> <...
<p>Try with this:</p> <pre><code>r[0, :8] = array </code></pre> <p>It looks like you can use reading the <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html" rel="nofollow">numpy docs on indexing</a>.</p>
python|numpy
5
378,067
14,907,364
pandas transform timeseries into multiple column DataFrame
<p>I have a timeseries of intraday day data looks like below</p> <pre><code>ts =pd.Series(np.random.randn(60),index=pd.date_range('1/1/2000',periods=60, freq='2h')) </code></pre> <p>I am hoping to transform the data into a DataFrame, with the columns as each date, and rows as the time in the date.</p> <p>I have trie...
<pre><code>import pandas as pd import numpy as np index = pd.date_range('1/1/2000', periods=60, freq='2h') ts = pd.Series(np.random.randn(60), index = index) key = lambda x: x.time() groups = ts.groupby(key) print pd.DataFrame({k:g for k,g in groups}).resample('D').T </code></pre> <p>out:</p> <pre><code> ...
python|pandas|dataframe|pandas-groupby
2
378,068
14,906,962
Python double free error for huge datasets
<p>I have a very simple script in Python, but for some reason I get the following error when running a large amount of data:</p> <pre><code>*** glibc detected *** python: double free or corruption (out): 0x00002af5a00cc010 *** </code></pre> <p>I am used to these errors coming up in C or C++, when one tries to free me...
<p>After discussions on the same issue on the Numpy Github page (<a href="https://github.com/numpy/numpy/issues/2995" rel="nofollow noreferrer">https://github.com/numpy/numpy/issues/2995</a>) it has been brought to my attention that Numpy/Scipy will not support such a large number of non-zeros in the resulting sparse m...
python|memory|numpy|scipy
6
378,069
14,920,903
Time difference in seconds from numpy.timedelta64
<p>How to get time difference in seconds from numpy.timedelta64 variable?</p> <pre><code>time1 = '2012-10-05 04:45:18' time2 = '2012-10-05 04:44:13' dt = np.datetime64(time1) - np.datetime64(time2) print dt 0:01:05 </code></pre> <p>I'd like to convert <code>dt</code> to number (int or float) representing time differ...
<p>To get number of seconds from <code>numpy.timedelta64()</code> object using <a href="https://docs.scipy.org/doc/numpy/reference/arrays.datetime.html" rel="noreferrer"><code>numpy</code> 1.7 experimental datetime API</a>:</p> <pre><code>seconds = dt / np.timedelta64(1, 's') </code></pre>
python|datetime|numpy
93
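The accepted division trick, run on the question's timestamps:

```python
import numpy as np

dt = np.datetime64("2012-10-05 04:45:18") - np.datetime64("2012-10-05 04:44:13")

# Dividing one timedelta by another yields a plain float.
seconds = dt / np.timedelta64(1, "s")
print(seconds)   # 65.0
```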
378,070
25,404,705
Calculating Percentile scores for each element with respect to its column
<p>So my NumPy array looks like this</p> <pre><code>npfinal = [[1, 3, 5, 0, 0, 0], [5, 2, 4, 0, 0, 0], [7, 7, 2, 0, 0, 0], . . . </code></pre> <p>Sample dataset I'm working with is 25k rows. </p> <p>The first 3 columns contain meaningful data, rest are placeholders for the percentiles.</p> ...
<p>I found a solution that I believe it works better when there are repeated values in the array:</p> <pre><code>import numpy as np from scipy import stats # some array with repeated values: M = np.array([[1, 7, 2], [5, 2, 2], [5, 7, 2]]) # calculate percentiles applying scipy rankdata to each column: percentile...
python|numpy|scipy
2
378,071
25,206,851
Aggregate over an index in pandas?
<p>How can I aggregate (sum) over an index which I intend to map to new values? Basically I have a <code>groupby</code> result by two variables where I want to groupby one variable into larger classes. The following code does this operation on <code>s</code> by mapping the first by-variable but seems too complicating:<...
<p>There is a <code>level=</code> option for <code>Series.sum()</code>, I guess you can use that and it will be a quite concise way to do it.</p> <pre><code>In [69]: s.index = pd.MultiIndex.from_tuples(map(lambda x: (mapping.get(x[0]), x[1]), s.index.values)) s.sum(level=(0,1)) Out[69]: 1 1 2 2 2 3 1 1 ...
pandas
3
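A note on the `level=` option: recent pandas removed `Series.sum(level=...)`, so the same aggregation is now spelled with `groupby` on the index levels. A small sketch with invented data:

```python
import pandas as pd

s = pd.Series([2, 3, 1, 4],
              index=pd.MultiIndex.from_tuples([(1, 1), (1, 1), (1, 2), (2, 1)]))

# Equivalent of the old s.sum(level=(0, 1)).
out = s.groupby(level=[0, 1]).sum()
print(out)
```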
378,072
25,119,536
Interpolating array columns with PiecewisePolynomial in scipy
<p>I'm trying to interpolate each column of a numpy array using scipy's <code>PiecewisePolynomial</code>. I know that this is possible for scipy's <code>interp1d</code> but for piecewise polynomial interpolation it does not seem to work the same way. I have the following code:</p> <pre><code>import numpy as np import ...
<p>The <code>scipy.interpolate.PiecewisePolynomial</code> inteprets the different columns of <code>y1</code> as the derivatives of the function to be interpolated, whereas <code>interp1d</code> interprets the columns as different functions.</p> <p>It may be that you do not actually want to use the <code>PiecewisePolyn...
python|arrays|numpy|scipy|interpolation
0
378,073
25,135,578
Python Pandas: drop a column from a multi-level column index?
<p>I have a multi level column table like this:</p> <pre><code> a ---+---+--- b | c | f --+---+---+--- 0 | 1 | 2 | 7 1 | 3 | 4 | 9 </code></pre> <p>How can I drop column "c" by name? to look like this:</p> <pre><code> a ---+--- b | f --+---+--- 0 | 1 | 7 1 | 3 | 9 </code></pre> <p>I tried this:<...
<p>Solved:</p> <pre><code>df.drop('c', axis=1, level=1) </code></pre>
pandas|dataframe|multiple-columns|multi-level
44
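The accepted call, shown end to end on a frame built to match the question's layout:

```python
import pandas as pd

cols = pd.MultiIndex.from_tuples([("a", "b"), ("a", "c"), ("a", "f")])
df = pd.DataFrame([[1, 2, 7], [3, 4, 9]], columns=cols)

# Drop column "c" by name in the second level of the column index.
out = df.drop("c", axis=1, level=1)
print(out)
```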
378,074
25,211,547
optimization of some numpy/scipy code
<p>I'm trying to optimize some python code, which uses <code>scipy.optimize.root</code> for rootfinding. </p> <p>cProfile tells me that most of the time the programm is evaluating the function called by <code>optimize.root</code>: e.g. for a total execution time of 80s, 58s are spend on <code>lineSphericalDist</code> ...
<p>If I understand well your <code>a0</code> and <code>t0</code> are not part of the optimization, you only optimize over <code>tt</code>. However, inside <code>lineSphericalDist</code>, you call self.fun(t0). You could precompute that quantity outside of lineSphericalDist, that would halve the number of calls to self....
optimization|numpy|scipy
2
378,075
25,385,374
Pandas aggregate -- how to retain all columns
<p>Example dataframe:</p> <pre><code>rand = np.random.RandomState(1) df = pd.DataFrame({'A': ['group1', 'group2', 'group3'] * 2, 'B': rand.rand(6), 'C': rand.rand(6), 'D': rand.rand(6)}) </code></pre> <p>print df</p> <pre><code> A B C D 0...
<p>Two stages: first find indices, then lookup all the rows.</p> <pre><code>idx = df.groupby('A').apply(lambda x: x['B'].argmax()) idx Out[362]: A group1 0 group2 1 group3 5 df.loc[idx] Out[364]: A B C D 0 group1 0.417022 0.186260 0.204452 1 group2 0.720324 0.345561...
python|pandas|aggregate
6
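The same two-stage idea with `idxmax` (the modern spelling of the answer's `argmax`), on invented data:

```python
import pandas as pd

df = pd.DataFrame({"A": ["g1", "g2", "g1", "g2"],
                   "B": [0.4, 0.7, 0.9, 0.1],
                   "C": [10, 20, 30, 40]})

# Index label of the maximal B within each group, then look up the
# full rows so every column is retained.
idx = df.groupby("A")["B"].idxmax()
out = df.loc[idx]
print(out)
```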
378,076
25,275,057
Pandas sort_index gives strange result after applying function to grouped DataFrame
<p>Basic setup:</p> <p>I have a <code>DataFrame</code> with a <code>MultiIndex</code> on both the rows and the columns. The second level of the column index has <code>float</code>s for values.</p> <p>I want to perform a <code>groupby</code> operation (grouping by the first level of the row index). The operation wil...
<p>The returned result looks as expected. You added columns. There was no guarantee that order imposed on those columns.</p> <p>You could just reimpose ordering:</p> <pre><code>result = result[sorted(result.columns)] </code></pre>
python|pandas
1
378,077
25,361,828
Plotting Pandas Time Data
<p>My data is a pandas dataframe called 'T':</p> <pre><code> A B C Date 2001-11-13 30.1 2 3 2007-02-23 12.0 1 7 </code></pre> <p>The result of T.index is </p> <pre><code>&lt;class 'pandas.tseries.index.DatetimeIndex'&gt; [2001-11-13, 2007-02-23] Length: 2, Freq: None, Timezone...
<p>Use the implemented pandas command:</p> <pre><code>In[211]: df2 Out[211]: A B C 1970-01-01 30.1 2 3 1980-01-01 12.0 1 7 In[212]: df2.plot() Out[212]: &lt;matplotlib.axes.AxesSubplot at 0x105224e0&gt; In[213]: plt.show() </code></pre> <p>You can access the axis using</p> <pre><code>ax = df2...
python|matplotlib|pandas
3
378,078
25,193,522
Renumbering a 1D mesh in Python
<p>First of all, I couldn't find the answer in other questions. </p> <p>I have a numpy array of integer, this is called ELEM, the array has three columns that indicate, element number, node 1 and node 2. This is one dimensional mesh. What I need to do is to renumber the nodes, I have the old and new node numbering tab...
<p>If you're doing a lot of these, it makes sense to encode the renumbering in a dictionary for fast lookup.</p> <pre><code>lookup_table = dict( zip( old_num, new_num ) ) # create your translation dict vect_lookup = np.vectorize( lookup_table.get ) # create a function to do the translation ELEM[:, 1:] = vect_lookup( E...
python|arrays|numpy
1
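The dictionary-lookup renumbering, run on a tiny invented mesh (element number in column 0, node numbers in columns 1-2):

```python
import numpy as np

ELEM = np.array([[1, 10, 20],
                 [2, 20, 30],
                 [3, 30, 10]])
old_num = np.array([10, 20, 30])
new_num = np.array([1, 2, 3])

lookup = dict(zip(old_num.tolist(), new_num.tolist()))
vect_lookup = np.vectorize(lookup.get)

# Renumber only the node columns; the element number stays put.
ELEM[:, 1:] = vect_lookup(ELEM[:, 1:])
print(ELEM)
```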
378,079
25,344,895
Ordered colored plot after clustering using python
<p>I have a 1D array called data=[5 1 100 102 3 4 999 1001 5 1 2 150 180 175 898 1012]. I am using python scipy.cluster.vq to find clusters within it. There are 3 clusters in the data. After clustering when I'm trying to plot the data, there is no order in it. </p> <p>It would be great if it's possible to plot the da...
<p>You can plot the clustered data points based on their distances from the cluster center and then write the index of each data point close to that in order to see how they scattered based on their clustering properties: </p> <pre><code>import numpy as np import matplotlib.pyplot as plt from scipy.cluster.vq import k...
python|python-2.7|numpy|scipy|data-mining
1
378,080
25,368,750
Using Numpy Array to Create Unique Array
<p>Can you create a numpy array with all unique values in it?</p> <pre><code>myArray = numpy.random.random_integers(0,100,2500) myArray.shape = (50,50) </code></pre> <p>So here I have a given random 50x50 numpy array, but I could have non-unique values. Is there a way to ensure every value is unique?</p> <p>Thank y...
<p>The most convenient way to get a unique random sample from a set is probably <a href="http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.random.choice.html" rel="noreferrer"><code>np.random.choice</code></a> with <code>replace=False</code>.</p> <p>For example:</p> <pre><code>import numpy as np # creat...
python|arrays|numpy|unique
11
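A runnable sketch, with one caveat the question glosses over: 2500 distinct values need a population of at least 2500, so the 0-100 range cannot fill a unique 50x50 grid; sample from a larger range instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample 2500 distinct integers from 0..9999 and shape them as 50x50.
vals = rng.choice(10000, size=2500, replace=False).reshape(50, 50)
print(np.unique(vals).size)   # 2500
```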
378,081
25,146,277
Pandas - Delete Rows with only NaN values
<p>I have a DataFrame containing many NaN values. <strong>I want to delete rows that contain too many NaN values; specifically: 7 or more.</strong></p> <p>I tried using the <em>dropna</em> function several ways but it seems clear that it greedily deletes columns or rows that contain <em>any</em> NaN values. </p> <p>T...
<p>Basically the way to do this is to determine the number of columns, set the minimum number of non-NaN values, and drop the rows that don't meet this criterion (note that <code>thresh</code> counts non-NaN values per row, so it must be computed from the number of columns, not rows):</p> <pre><code>df.dropna(thresh=len(df.columns) - 6) </code></pre> <p>See the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html...
python|pandas|dataframe|rows|nan
14
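A self-contained check of the threshold logic: `thresh` counts non-NaN values per row, so dropping rows with 7 or more NaNs means keeping rows with at least `ncols - 6` non-NaN values.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.ones((3, 10)))
df.iloc[0, :7] = np.nan   # 7 NaNs -> should be dropped
df.iloc[1, :2] = np.nan   # 2 NaNs -> should survive

out = df.dropna(thresh=len(df.columns) - 6)
print(len(out))   # 2
```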
378,082
25,171,420
Speed up Pandas filtering
<p>I have a 37456153 rows x 3 columns Pandas dataframe consisting of the following columns: <code>[Timestamp, Span, Elevation]</code>. Each <code>Timestamp</code> value has approximately 62000 rows of <code>Span</code> and <code>Elevation</code> data, which looks like (when filtered for <code>Timestamp = 17210</code>, ...
<p>You can vectorize the whole thing using <code>np.searchsorted</code>. I am not much of a pandas user, but something like this should work, and runs reasonably fast on my system. Using chrisb's dummy data:</p> <pre><code>In [8]: %%timeit ...: mesh = np.linspace(start, end, num=(end/delta + 1)) ...: midpoints =...
python|performance|optimization|numpy|pandas
6
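A stripped-down sketch of the `np.searchsorted` idea: each value is assigned to a bin of a sorted edge array in O(log n), which is what makes the vectorized version fast (the sample values are invented):

```python
import numpy as np

span = np.array([0.5, 1.6, 2.4, 3.5, 4.1])
edges = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])

# Bin index for each Span value against the sorted mesh edges.
bins = np.searchsorted(edges, span)
print(bins)   # [1 2 3 4 5]
```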
378,083
30,592,868
Updating rows of the same index
<p>Given a DataFrame <code>df</code>:</p> <pre><code> yellowCard secondYellow redCard match_id player_id 1431183600x96x30 76921 X NaN NaN 76921 NaN X X 1431192600x162x32 71174 ...
<p>It looks like your df is multi-indexed on <code>match_id</code> and <code>player_id</code> so I would perform a <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html#pandas.DataFrame.groupby" rel="nofollow"><code>groupby</code></a> on the <code>match_id</code> and fill the <cod...
python|pandas
2
378,084
30,354,425
Applying DataFrame with logic expression to another DataFrame (need Pandas wizards)
<p>I have a DataFrame <code>conditions</code> with a set of conditions that are used like an expression:</p> <pre><code> indicator logic value Discount 'ADR Premium' '&lt;' -0.5 Premium 'ADR Premium' '&gt;' 0.5 </code></pre> <p>Now I have a dataframe <code>indicators</code> with...
<p>It's hard to tell from the question but I think you just want to classify each entry in <code>indicators</code> with according to some set of conditions for that column. First I would initialise signals:</p> <pre><code>signals = pd.Series(index=indicators.index) </code></pre> <p>This will be a series of nans. For ...
python|pandas|expression|signal-processing
1
378,085
30,568,701
distinct contiguous blocks in pandas dataframe
<p>I have a pandas dataframe looking like this:</p> <pre><code> x1=[np.nan, 'a','a','a', np.nan,np.nan,'b','b','c',np.nan,'b','b', np.nan] ty1 = pd.DataFrame({'name':x1}) </code></pre> <p>Do you know how I can get a list of tuples containing the start and end indices of distinct contiguous blocks? For example for th...
<p>You can use <code>shift</code> and <code>cumsum</code> to create 'id's for each contiguous block: </p> <pre><code>In [5]: blocks = (ty1 != ty1.shift()).cumsum() In [6]: blocks Out[6]: name 0 1 1 2 2 2 3 2 4 3 5 4 6 5 7 5 8 6 9 7 10 8 11 8 12 9 </cod...
python|pandas
11
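Building on the shift/cumsum ids, a sketch that goes all the way to the requested (start, end) tuples, keeping only the non-NaN runs (NaN != NaN, so every NaN breaks a run by itself):

```python
import numpy as np
import pandas as pd

s = pd.Series([np.nan, "a", "a", np.nan, "b", "b", "c"], name="name")

# A new block id starts wherever the value changes.
blocks = (s != s.shift()).cumsum()

# (start, end) index pairs for the non-NaN runs only.
mask = s.notna()
spans = [(int(g.index.min()), int(g.index.max()))
         for _, g in s[mask].groupby(blocks[mask])]
print(spans)   # [(1, 2), (4, 5), (6, 6)]
```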
378,086
30,643,436
copying a 24x24 image into a 28x28 array of zeros
<p>Hi I want to copy a random portion of a 28x28 matrix and then use the resulting 24x24 matrix to be inserted into a 28x28 matrix image = image.reshape(28, 28)</p> <pre><code> getx = random.randint(0,4) gety = random.randint(0,4) # get a 24 x 24 tile from a random location in img blank_image ...
<p>If you are getting an error, it might be because your np array dimensions are different. If your image is an RGB image, then your blank image should be defined as:</p> <pre><code>blank_image = np.zeros((28, 28, 3), np.uint8) </code></pre>
python|opencv|numpy
0
378,087
30,371,646
join two dataframe together
<pre><code> total_purchase_amt 2013-07-01 22533121 2014-08-29 214114844 2014-08-30 183547267 2014-08-31 205369438 total_purchase_amt 2014-08-31 2.016808e+08 2014-09-01 2.481354e+08 2014-09-02 2.626838e+08 2014-09-03 2.4972...
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.combine_first.html#pandas.DataFrame.combine_first" rel="nofollow"><code>combine_first</code></a> with the other df combining with the first df; this will preserve the values in your other df and add the missing values from the first ...
python|pandas
0
378,088
30,427,374
Pandas MAX formula across different grouped rows
<p>I have dataframe that looks like this:</p> <pre><code>Auction_id bid_price min_bid rank 123 5 3 1 123 4 3 2 124 3 2 1 124 1 2 2 </code></pre> <p>I'd like to create another column that returns MAX(rank 1 min_bid, rank...
<p>Here's an approach that does some reshaping with pivot()</p> <pre><code> Auction_id bid_price min_bid rank 0 123 5 3 1 1 123 4 3 2 2 124 3 2 1 3 124 1 2 2 </...
pandas
2
378,089
30,535,516
How to check for real equality (of numpy arrays) in python?
<p>I have some function in python returning a numpy.array:</p> <pre><code>matrix = np.array([0.,0.,0.,0.,0.,0.,1.,1.,1.,0.], [0.,0.,0.,1.,1.,0.,0.,1.,0.,0.]) def some_function: rows1, cols1 = numpy.nonzero(matrix) cols2 = numpy.array([6,7,8,3,4,7]) rows2 = numpy.array([0,0,0,1,1,1]) print...
<p>You could check <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow">where</a> the arrays are not equal:</p> <pre><code>print(np.where(rows1 != rows2)) </code></pre> <p>But what you are doing is unclear; first, there is no <code>nonzeros</code> function in numpy, only a <a href...
python|c++|numpy|ipopt
1
378,090
30,515,204
time-series analysis in python
<p>I'm new to Python and I'm trying to analyze a time series. I have a Series indexed with dates, and I would like to split my time series to see e.g. how many $t$ appeared between 16 and 17, how many between 17 and 18, and so on.</p> <p>How can I do that for minutes, days, weeks, months? Basically I would like to zoo...
<p>Check out <a href="http://pandas.pydata.org/" rel="nofollow">Pandas</a>. Pandas provides data structures and data analysis tools for time series and will provide exactly the kind of functionality you are looking for. Look into this page of documentation which focuses on time series: <a href="http://pandas.pydata.org...
python|pandas|time-series
4
378,091
30,605,261
Referencing numpy array locations within if statements
<p>I have the following section of Python:</p> <pre><code>for j in range(0,T): for x in xrange(len(index)): for y in xrange(x+1,len(index)): if index(y) == index(x): continue </code></pre> <p>For which I have been attempting to translate successfully from a MATLAB equivalent. In m...
<p>Looks like <code>index</code> is an array of some sort, but when you do <code>index(y)</code> and <code>index(x)</code>, Python thinks you're trying to call a function <code>index()</code> using <code>x</code> and <code>y</code> as parameters, respectively.</p> <p>If you're trying to simply access the elements, use...
python|arrays|matlab|numpy|indexing
2
378,092
26,790,476
playing videos from file in anaconda
<p>This is my first time asking so this is a rather basic question. I'm trying to play saved videos using Anaconda on Windows, but for some reason nothing is playing. The intent is to play the current file, and then progress up to visual tracking in real time. Here is my code:</p> <pre><code>import numpy as np import ...
<p>The main problem is that <strong>y0u 4r3 n0t c0d1ng s4f3ly</strong>: you should always test the return of functions or the validity of the parameters returned by these calls.</p> <p><strong>These are the most common reasons why <code>VideoCapture()</code> fails</strong>:</p> <ul> <li>It was unable to find the file...
python|opencv|numpy|video-capture|anaconda
1
378,093
26,685,600
Pandas subtract 2 rows from same dataframe
<p>How do I subtract one row from another in the following dataframe (df):</p> <pre><code>RECL_LCC 1 2 3 RECL_LCC 35.107655 36.015210 28.877135 RECL_PI 36.961519 43.499506 19.538975 </code></pre> <p>I want to do something like:</p> <pre><code>df['Difference'] = df['RECL_LCC']-df['RE...
<p>You can select rows by index value using <code>df.loc</code>:</p> <pre><code>In [98]: df.loc['Diff'] = df.loc['RECL_LCC'] - df.loc['RECL_PI']

In [99]: df
Out[99]:
RECL_LCC          1          2          3
RECL_LCC  35.107655  36.015210  28.877135
RECL_PI   36.961519  43.499506  19.538975
Diff      -1.853864  -7.4...
python|pandas|subtraction
9
378,094
26,639,569
Slicing multiple dimensions in a ndarray
<p>How to slice <code>ndarray</code> by multiple dimensions in one line? Check the last line in the following snippet. This seems so basic, yet it gives a surprise... but why?</p> <pre><code>import numpy as np

# create 4 x 3 array
x = np.random.rand(4, 3)

# create row and column filters
rows = np.array([True, False, ...
<p>Since <code>rows</code> and <code>cols</code> are boolean arrays, when you do:</p> <pre><code>x[rows, cols] </code></pre> <p>it is like:</p> <pre><code>x[np.where(rows)[0], np.where(cols)[0]] </code></pre> <p>which is:</p> <pre><code>x[[0, 2], [0, 2]] </code></pre> <p>taking the values at positions <code>(0, 0...
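<p>To illustrate the difference, and the usual fix with <code>np.ix_</code> (using a small deterministic array instead of the random one):</p>

```python
import numpy as np

x = np.arange(12, dtype=float).reshape(4, 3)  # hypothetical data
rows = np.array([True, False, True, False])
cols = np.array([True, False, True])

# element-wise pairing: equivalent to x[[0, 2], [0, 2]]
paired = x[rows, cols]          # array([0., 8.])

# open mesh: every selected row crossed with every selected column
sub = x[np.ix_(rows, cols)]     # 2 x 2 subarray
print(sub)
```

<p><code>np.ix_</code> converts the two boolean filters into broadcastable index arrays, so you get the full 2&nbsp;x&nbsp;2 submatrix rather than two "diagonal" elements.</p>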
python|arrays|numpy|slice
4
378,095
26,555,774
Efficient way to clean a csv?
<p>I am parsing and modifying large files (about a gig per month) which contain a record of every interaction. These files are sent to me by our client, so I am stuck with what they contain. I am using pandas to clean them up a bit, add some information, etc.</p> <p>I keep running into issues where, out of 1 million+ ...
<p>Filter out the error rows with a boolean mask: keep only the rows whose <code>TransID</code> starts with <code>'H'</code>, built with the <a href="http://pandas.pydata.org/pandas-docs/stable/basics.html#vectorized-string-methods" rel="nofollow">vectorized string methods</a>.</p> <pre><code>good_rows_mask = df.TransID.str[0] == 'H'
df = df[good_rows_mask]
</code></pre>
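<p>An end-to-end sketch with a made-up sample (the column names and the "starts with H" rule are assumptions based on the snippet above):</p>

```python
import pandas as pd
from io import StringIO

# made-up sample: the "9.0" row stands in for a spillover error row
raw = StringIO("TransID,Amount\nHJLA-0001,10\n9.0,0\nHJLA-0002,20\n")
df = pd.read_csv(raw, dtype={"TransID": str})

# keep only rows whose TransID starts with 'H'
good_rows_mask = df["TransID"].str[0] == "H"
df = df[good_rows_mask]
print(len(df))  # 2
```

<p>Forcing <code>dtype={"TransID": str}</code> on read matters here: otherwise a column of mixed numbers and strings can end up with a dtype that breaks the <code>.str</code> accessor.</p>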
pandas
0
378,096
26,689,614
Recursively calling functions within functions in Python (trying to replicate MATLAB behaviour)
<p>In MATLAB this function (by Hao Zhang) calls itself</p> <pre><code>function r=rotmat2expmap(R)
% Software provided by Hao Zhang
% http://www.cs.berkeley.edu/~nhz/software/rotations
r=quat2expmap(rotmat2quat(R));
</code></pre> <p>as an argument to the function </p> <pre><code>function [r]=quat2expmap(q)
% Softw...
<p>It seems that you have misread the original function definition. It doesn't recursively call itself, it calls instead <code>rotmat2quat</code> (not <code>rotmat2expmap</code>). You presumably need to implement <code>rotmat2quat</code> (see e.g., <a href="https://github.com/gwtaylor/imCRBM/blob/master/Motion/rotmat2q...
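<p>The Python port is then a straight composition, not recursion. The helpers below are hypothetical stand-ins, not Hao Zhang's actual code: <code>rotmat2quat</code> uses the simple trace method, which assumes <code>trace(R) &gt; -1</code> (rotation angle below 180 degrees).</p>

```python
import numpy as np

def rotmat2quat(R):
    # trace-method conversion to a unit quaternion [w, x, y, z];
    # valid only while trace(R) > -1 (angle < 180 degrees)
    w = np.sqrt(1.0 + np.trace(R)) / 2.0
    x = (R[2, 1] - R[1, 2]) / (4.0 * w)
    y = (R[0, 2] - R[2, 0]) / (4.0 * w)
    z = (R[1, 0] - R[0, 1]) / (4.0 * w)
    return np.array([w, x, y, z])

def quat2expmap(q):
    # quaternion -> exponential map (rotation axis scaled by angle)
    w, v = q[0], q[1:]
    theta = 2.0 * np.arccos(np.clip(w, -1.0, 1.0))
    norm = np.linalg.norm(v)
    return np.zeros(3) if norm < 1e-12 else theta * v / norm

def rotmat2expmap(R):
    # same composition as the MATLAB version: no recursion involved
    return quat2expmap(rotmat2quat(R))
```

<p>Production code would want the branch-per-largest-element quaternion conversion instead, which stays numerically stable near 180 degrees.</p>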
python|matlab|numpy|recursion
2
378,097
26,856,793
Pandas read excel with Chinese filename
<p>I am trying to load as a pandas dataframe a file that has Chinese characters in its name.</p> <p>I've tried:</p> <pre><code>df=pd.read_excel("url/某物2008.xls")
</code></pre> <p>and</p> <pre><code>import sys

df=pd.read_excel("url/某物2008.xls", encoding=sys.getfilesystemencoding())
</code></pre> <p>But the response...
<pre><code>import sys

df=pd.read_excel(u"url/某物2008.xls", encoding=sys.getfilesystemencoding())
</code></pre> <p>may work, but you may also have to declare the source-file encoding (e.g. <code># -*- coding: utf-8 -*-</code>) at the top of the file so the <code>u"..."</code> literal is decoded correctly.</p>
python|unicode|pandas|character-encoding
3
378,098
26,874,857
Pandas TimeSeries With duration of event
<p>I've been googling for this for a while and haven't found a proper solution. I have a time series with a couple of million rows that has a rather odd structure:</p> <pre><code>VisitorID  Time              VisitDuration
1          01.01.2014 00:01  80 seconds
2          01.01.2014 00:03  37 seconds
</code></pre> <p>I ...
<p>Create some dummy data first:</p> <pre><code>import numpy as np
import pandas as pd

start = pd.Timestamp("2014-11-01")
end = pd.Timestamp("2014-11-02")
N = 100000
t = np.random.randint(start.value, end.value, N)
t -= t % 1000000000
start = pd.to_datetime(np.array(t, dtype="datetime64[ns]"))
duration = pd.to_timede...
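<p>One common way to finish the job, assuming the goal is to count concurrent visitors over time, is to turn every visit into a +1 event at its start and a -1 event at its end, then take a cumulative sum. A tiny sketch with two visits:</p>

```python
import pandas as pd

# tiny stand-in for the real visits table
visits = pd.DataFrame({
    "start": pd.to_datetime(["2014-11-01 00:00:00", "2014-11-01 00:00:30"]),
    "duration": pd.to_timedelta(["80s", "37s"]),
})
visits["end"] = visits["start"] + visits["duration"]

# +1 when a visit starts, -1 when it ends; cumsum = visitors on site
events = pd.concat([
    pd.Series(1, index=visits["start"]),
    pd.Series(-1, index=visits["end"]),
]).sort_index()
concurrent = events.cumsum()
print(concurrent.max())  # 2
```

<p>This avoids materializing one row per second per visit, which matters with a couple of million visits; <code>concurrent</code> can then be resampled to any fixed frequency.</p>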
python|pandas|time-series|sampling
9
378,099
26,678,132
Python numpy subtraction no negative numbers (4-6 gives 254)
<p>I wish to subtract two <strong>grayscale</strong> human-face images from each other to see the difference, but I encounter a problem: subtracting e.g. [4] - [6] gives [254] instead of [-2] (or the difference [2]).</p> <pre><code>print(type(face))   #&lt;type 'numpy.ndarray'&gt;
print(face.shape)   #(270, 270)
print(type(nface))  #&...
<p>It sounds like the <code>dtype</code> of the array is <code>uint8</code>. All values are interpreted as integers in the range 0-255, so arithmetic wraps around modulo 256: -2 wraps to 256 - 2 = 254, hence the subtraction results in 254.</p> <p>You need to recast the arrays to a <code>dtype</code> which supports negative integers, e.g. <code>...
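<p>A small demonstration of both the wraparound and the fix (1x1 arrays standing in for the real images):</p>

```python
import numpy as np

face  = np.array([[4]], dtype=np.uint8)   # stand-ins for the real images
nface = np.array([[6]], dtype=np.uint8)

wrapped = face - nface                    # uint8 wraps around: 4 - 6 -> 254

# cast to a signed type first to get the true (possibly negative) result
diff = face.astype(np.int16) - nface.astype(np.int16)   # -> -2
abs_diff = np.abs(diff)                                  # -> 2
```

<p><code>int16</code> is enough here since the true differences lie in [-255, 255]; the absolute difference can be converted back to <code>uint8</code> for display.</p>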
python|numpy|grayscale|subtraction|array-difference
19