Columns: Unnamed: 0 (int64, 0 to 378k), id (int64, 49.9k to 73.8M), title (string, 15 to 150 chars), question (string, 37 to 64.2k chars), answer (string, 37 to 44.1k chars), tags (string, 5 to 106 chars), score (int64, -10 to 5.87k)
9,000
35,591,081
FailedPreconditionError while trying to use RMSPropOptimizer on tensorflow
<p>I am trying to use the RMSPropOptimizer for minimizing loss. Here's the part of the code that is relevant:</p> <pre><code>import tensorflow as tf #build large convnet... #... opt = tf.train.RMSPropOptimizer(learning_rate=0.0025, decay=0.95) #do stuff to get targets and loss... #... grads_and_vars = opt.compute_gradients(loss) capped_grads_and_vars = [(tf.clip_by_value(g, -1, 1), v) for g, v in grads_and_vars] opt_op = self.opt.apply_gradients(capped_grads_and_vars) sess = tf.Session() sess.run(tf.initialize_all_variables()) while(1): sess.run(opt_op) </code></pre> <p>Problem is as soon as I run this I get the following error:</p> <pre><code>W tensorflow/core/common_runtime/executor.cc:1091] 0x10a0bba40 Compute status: Failed precondition: Attempting to use uninitialized value train/output/bias/RMSProp [[Node: RMSProp/update_train/output/bias/ApplyRMSProp = ApplyRMSProp[T=DT_FLOAT, use_locking=false, _device="/job:localhost/replica:0/task:0/cpu:0"](train/output/bias, train/output/bias/RMSProp, train/output/bias/RMSProp_1, RMSProp/learning_rate, RMSProp/decay, RMSProp/momentum, RMSProp/epsilon, clip_by_value_9)]] [[Node: _send_MergeSummary/MergeSummary_0 = _Send[T=DT_STRING, client_terminated=true, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=-6901001318975381332, tensor_name="MergeSummary/MergeSummary:0", _device="/job:localhost/replica:0/task:0/cpu:0"](MergeSummary/MergeSummary)]] Traceback (most recent call last): File "dqn.py", line 213, in &lt;module&gt; result = sess.run(opt_op) File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 385, in run results = self._do_run(target_list, unique_fetch_targets, feed_dict_string) File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 461, in _do_run e.code) tensorflow.python.framework.errors.FailedPreconditionError: Attempting to use uninitialized value train/output/bias/RMSProp [[Node: RMSProp/update_train/output/bias/ApplyRMSProp = ApplyRMSProp[T=DT_FLOAT, use_locking=false, _device="/job:localhost/replica:0/task:0/cpu:0"](train/output/bias, train/output/bias/RMSProp, train/output/bias/RMSProp_1, RMSProp/learning_rate, RMSProp/decay, RMSProp/momentum, RMSProp/epsilon, clip_by_value_9)]] Caused by op u'RMSProp/update_train/output/bias/ApplyRMSProp', defined at: File "dqn.py", line 159, in qLearnMinibatch opt_op = self.opt.apply_gradients(capped_grads_and_vars) File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 288, in apply_gradients update_ops.append(self._apply_dense(grad, var)) File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/training/rmsprop.py", line 103, in _apply_dense grad, use_locking=self._use_locking).op File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/training/gen_training_ops.py", line 171, in apply_rms_prop grad=grad, use_locking=use_locking, name=name) File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 659, in apply_op op_def=op_def) File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1904, in create_op original_op=self._default_original_op, op_def=op_def) File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1083, in __init__ self._traceback = _extract_stack() </code></pre> <p>Note that I don't get this error 
if I am using the usual GradientDescentOptimizer. I am initializing my variables as you can see above, but I don't know what 'train/output/bias/RMSProp' is because I don't create any such variable. I only have 'train/output/bias/', which does get initialized above.</p> <p>Thanks!</p>
<p>So for people from the future running into similar trouble, I found this post helpful: <a href="https://stackoverflow.com/questions/33788989/tensorflow-using-adam-optimizer">Tensorflow: Using Adam optimizer</a></p> <p>Basically, I was running </p> <pre><code>sess.run(tf.initialize_all_variables()) </code></pre> <p>before I had defined my loss minimization op</p> <pre><code>loss = tf.square(targets) #create the gradient descent op grads_and_vars = opt.compute_gradients(loss) capped_grads_and_vars = [(tf.clip_by_value(g, -self.clip_delta, self.clip_delta), v) for g, v in grads_and_vars] #gradient capping self.opt_op = self.opt.apply_gradients(capped_grads_and_vars) </code></pre> <p>This needs to be done <em>before</em> running the initialization op!</p>
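<p>To make the ordering explicit, here is a minimal sketch of the corrected flow, using the same TF 0.x-style API as in the question; the model and loss are placeholders standing in for the code from the question, not a working network:</p> <pre><code>import tensorflow as tf

opt = tf.train.RMSPropOptimizer(learning_rate=0.0025, decay=0.95)

# ... build the convnet and its loss here, as in the question ...

# 1. create the optimizer's update op first
grads_and_vars = opt.compute_gradients(loss)
capped_grads_and_vars = [(tf.clip_by_value(g, -1, 1), v) for g, v in grads_and_vars]
opt_op = opt.apply_gradients(capped_grads_and_vars)  # this creates the RMSProp slot variables

# 2. only now run initialization, so the slot variables are included
sess = tf.Session()
sess.run(tf.initialize_all_variables())

# 3. train
sess.run(opt_op)
</code></pre>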
tensorflow|gradient-descent
3
9,001
35,686,774
Python - Statistical distribution
<p>I'm quite new to python world. Also, I'm not a statistician. I'm in the need to implementing mathematical models developed by mathematicians in a computer science programming language. I've chosen python after some research. I'm comfortable with programming as such (PHP/HTML/javascript).</p> <p>I have a column of values that I've extracted from a MySQL database &amp; in need to calculate the below - </p> <pre><code>1) Normal distribution of it. (I don't have the sigma &amp; mu values. These need to be calculated too apparently). 2) Mixture of normal distribution 3) Estimate density of normal distribution 4) Calculate 'Z' score </code></pre> <p>The array of values looks similar to the one below ( I've populated sample data)-</p> <pre><code>d1 = [3,3,3,3,3,3,3,9,12,6,3,3,3,3,9,21,3,12,3,6,3,30,12,6,3,3,24,30,3,3,3] mu1, std1 = norm.fit(d1) </code></pre> <p>The normal distribution, I understand could be calculated as below -</p> <pre><code>import numpy as np from scipy.stats import norm mu, std = norm.fit(data) </code></pre> <p>Could I please get some pointers on how to get started with (2),(3) &amp; (4) in this please? I'm continuing to look up online as I look forward to hear from experts. </p> <p>If the question doesn't fully make sense, please do let me know what aspect is missing so that I'll try &amp; get information around that.</p> <p>I'd very much appreciate any help here please.</p>
<p>Some parts of your question are unclear. It might help to give the context of what you're trying to achieve, rather than what are the specific steps you're taking. </p> <p>1) + 3) In a Normal distribution - fitting the distribution, and estimating the mean and standard deviation - are basically the same thing. The mean and standard deviation <em>completely determine</em> the distribution. </p> <pre><code>mu, std = norm.fit(data) </code></pre> <p>is tantamount to saying "find the mean and standard deviation which best fit the distribution".</p> <p>4) Calculating the Z score - you'll have to explain what you're trying to do. This <a href="https://en.wikipedia.org/wiki/Standard_score" rel="nofollow">usually means</a> how much above (or below) the mean a data point is, in units of standard deviation. Is this what you need here? If so, then it is simply</p> <pre><code>(np.array(data) - mu) / std </code></pre> <p>2) Mixture of normal distribution - this is completely unclear. It usually means that the distribution is actually generated by more than a single Normal distribution. What do you mean by this?</p>
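<p>For point 2, if what is meant is fitting a mixture of normal distributions to the data, one common option is scikit-learn's GaussianMixture. A minimal sketch, assuming scikit-learn is available and that a two-component mixture is wanted (the number of components is a modelling choice, not something dictated by the data here):</p> <pre><code>import numpy as np
from sklearn.mixture import GaussianMixture

d1 = [3,3,3,3,3,3,3,9,12,6,3,3,3,3,9,21,3,12,3,6,3,30,12,6,3,3,24,30,3,3,3]
X = np.array(d1, dtype=float).reshape(-1, 1)  # GaussianMixture expects a 2D array

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gmm.means_.ravel())        # estimated component means
print(gmm.covariances_.ravel())  # estimated component variances
print(gmm.weights_)              # mixing proportions
</code></pre>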
python|numpy|scipy|statistics|mixture-model
1
9,002
35,561,949
Pandas: how to test that top-n-dataframe really results from original dataframe
<p>I have a DataFrame, foo:</p> <pre><code> A B C D E 0 50 46 18 65 55 1 48 56 98 71 96 2 99 48 36 79 70 3 15 24 25 67 34 4 77 67 98 22 78 </code></pre> <p>and another Dataframe, bar, which contains the greatest 2 values of each row of foo. All other values have been replaced with zeros, to create sparsity:</p> <pre><code> A B C D E 0 0 0 0 65 55 1 0 0 98 0 96 2 99 0 0 79 0 3 0 0 0 67 34 4 0 0 98 0 78 </code></pre> <p>How can I test that every row in bar really contains the desired values?</p> <p><strong>One more thing:</strong> The solution should work with large DateFrames i.e. 20000 X 20000.</p>
<p>Obviously you can do that with looping and efficient sorting, but maybe a better way would be:</p> <pre><code>import numpy as np

n = foo.shape[0]

# Test 1: wherever bar is non-zero it still holds the original foo value,
# i.e. exactly 2 entries per row are unchanged
diff = foo - bar
test1 = ((diff == 0).sum(axis=1) == 2).sum() == n

# Test 2: bar has exactly 3 zeros on each row (5 columns, top 2 kept)
test2 = ((bar == 0).sum(axis=1) == 3).sum() == n

# Test 3: the 2 numbers that bar keeps are really the row maxima
bar2 = bar.replace(0, np.nan)
# the largest discarded value must not exceed the smallest kept value
row_ok = diff.max(axis=1) &lt;= bar2.min(axis=1)
test3 = row_ok.sum() == n
</code></pre> <p>I think this covers all cases, but haven't tested it all...</p>
python-3.x|pandas|top-n
0
9,003
35,658,085
Data analysis on .json file in Python
<p>I have to analyse a .json file which looks like this:</p> <pre><code>{"columns":["id","timestamp","offset_freq","reprate_freq"], "index":[0,1,2,3,4,5,6,7 ... "data":[[526144,1451900097533,20000000.495000001,250000093.9642499983],[... } </code></pre> <p>it has over 600000 indexes so I dont want to show you the whole file.</p> <p>I have found out that I can read the file simply by:</p> <pre><code>df = pd.read_json(file_name_or_Python_string, orient='split') </code></pre> <p>But what do I do now if I want to do some math on one of the columns? For example devide all the offset_freq by 2?</p>
<p>to divide 'offset_freq' by 2:</p> <pre><code>df['offset_freq']/2 </code></pre> <p>I suggest you read 10 minutes to pandas for a quick pandas primer</p> <p><a href="http://pandas.pydata.org/pandas-docs/stable/10min.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/10min.html</a></p>
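<p>Note that the expression above only returns a new Series; if the intention is to keep the halved values, assign the result back (the new column name below is just an example):</p> <pre><code>df['offset_freq'] = df['offset_freq'] / 2
# or, to keep the original column untouched:
# df['offset_freq_halved'] = df['offset_freq'] / 2
</code></pre>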
python|json|pandas
1
9,004
35,669,105
comparing column values based on other column values in pandas
<p>I have a dataframe:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame([['M',2014,'Seth',5], ['M',2014,'Spencer',5], ['M',2014,'Tyce',5], ['F',2014,'Seth',25], ['F',2014,'Spencer',23]],columns =['sex','year','name','number']) print df </code></pre> <p>I would like to find the most gender ambiguous name for 2014. I have tried many ways but haven't had any luck yet.</p>
<p>Not sure what you mean by 'most gender ambiguous', but you can start from this</p> <pre><code>&gt;&gt;&gt; dfy = (df.year == 2014)
&gt;&gt;&gt; dfF = df[(df.sex == 'F') &amp; dfy][['name', 'number']]
&gt;&gt;&gt; dfM = df[(df.sex == 'M') &amp; dfy][['name', 'number']]
&gt;&gt;&gt; pd.merge(dfF, dfM, on=['name'])
      name  number_x  number_y
0     Seth        25         5
1  Spencer        23         5
</code></pre> <p>If you want just the name with the highest total number then:</p> <pre><code>&gt;&gt;&gt; dfT = pd.merge(dfF, dfM, on=['name'])
&gt;&gt;&gt; dfT
      name  number_x  number_y
0     Seth        25         5
1  Spencer        23         5
&gt;&gt;&gt; dfT['total'] = dfT['number_x'] + dfT['number_y']
&gt;&gt;&gt; dfT.sort_values('total', ascending=False).head(1)
   name  number_x  number_y  total
0  Seth        25         5     30
</code></pre>
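<p>If 'most ambiguous' is taken to mean the name whose male/female split is closest to 50/50 (one common reading, though the question does not define it), a small follow-up on the merged frame could look like this:</p> <pre><code>dfT = pd.merge(dfF, dfM, on=['name'], suffixes=('_F', '_M'))
dfT['total'] = dfT['number_F'] + dfT['number_M']
# fraction of the less common sex: 0.5 means perfectly ambiguous, near 0 means one-sided
dfT['ambiguity'] = dfT[['number_F', 'number_M']].min(axis=1) / dfT['total']
most_ambiguous = dfT.sort_values('ambiguity', ascending=False).head(1)
</code></pre>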
python|numpy|pandas
0
9,005
35,494,917
how do I add a 'RowNumber' field to a structured numpy array?
<p>I am using genfromtxt to load large csv files into structured arrays. I need to sort the data (using multiple fields), do some work and then restore the data to the original ordering. My plan is to add another field to the data and put the row number into this field before the first sort is applied. It can then be used to revert the order at the end. I thought there might be an elegant way of adding this field of record numbers but after hours of trying and searching for ideas I have nothing particularly slick.</p> <pre><code>import numpy import numpy.lib.recfunctions as rfn def main(): csvDataFile = 'C:\\File1.csv' csvData = numpy.genfromtxt(csvDataFile, delimiter=',',names = True, dtype='f8') rowNums = numpy.zeros(len(csvData),dtype=[('RowID','f8')]) #populate and add column for RowID for i in range (0, len(csvData)): rowNums['RowID'][i]=i csvDataWithID = rfn.merge_arrays((csvData, rowNums), asrecarray=True, flatten=True) </code></pre> <p>The recfunctions.merge_arrays in particular is very slow and adding the row numbers one by one seems so old school. Your ideas would be gratefully received.</p>
<pre><code>rowNums = np.zeros(len(csvData), dtype=[('RowID','f8')])
rowNums['RowID'] = np.arange(len(csvData))
</code></pre> <p>The above saves approx half a second per file with the csv files I am using. Very good so far.</p> <p>However the key thing was how to efficiently obtain a record of the sort order. This is most elegantly solved using:</p> <pre><code>sortorder = np.argsort(csvData, order=['col_1','col_2','col_3','col_4','col_5'])
</code></pre> <p>giving an array that lists the order of items in <code>csvData</code> when sorted by cols 1 through 5. This negates the need to make, populate and merge a <code>RowID</code> column, saving me around 15s per csv file (over 6hrs across my entire dataset).</p> <p>Thank you very much @hpaulj</p>
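<p>To come back to the original ordering afterwards (which was the point of the RowID idea), the permutation can be inverted with another <code>argsort</code>. A minimal sketch, with 'col_1' etc. standing in for the real field names:</p> <pre><code>import numpy as np

sortorder = np.argsort(csvData, order=['col_1', 'col_2', 'col_3', 'col_4', 'col_5'])
sorted_data = csvData[sortorder]   # work on the sorted copy

# ... process sorted_data ...

inverse = np.argsort(sortorder)    # inverse permutation
restored = sorted_data[inverse]    # back in the original row order
</code></pre>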
python|arrays|numpy
0
9,006
11,513,731
List differences between numbers
<p>I have seconds input ordered data from smallest to largest for both times.</p> <pre><code> start_time[s] = [10, 20, 30, 40, 50, 61, 79, 80] end_time[s] = [8, 9, 15, 31, 41, 60] </code></pre> <p>The lists are not the same sizes as they are generated log file timestamp data</p> <p>I want to get output for the positive difference between end_time and the minimum of the start_time </p> <p>The code I have is as follows:</p> <pre><code> for item1 in end_time: for item2 in start_time: if (item1 &gt; item2): new_item = item1 - item2 new_list.append(new_item) </code></pre> <blockquote> <blockquote> <blockquote> <p>[5, 21, 11, 1, 31, 21, 11, 1, 50, 40, 30, 20, 10] </p> </blockquote> </blockquote> </blockquote> <p>The ideal output will be generated as follows: </p> <blockquote> <blockquote> <blockquote> <p>[5, 11, 11, 20]</p> </blockquote> </blockquote> </blockquote> <p>5...this is by taking the end_time of 15 - the start_time of 10, why? Its the first end_time > start_time (8,9 are also end_times less than 10)</p> <p>11...this is by taking the next end_time of 31 (i don't want to use 15 as i will be double counting) and then subract the next start_time of 20 to give 11.</p> <p>11...this is by taking the following end_time of 41 and subtracting the start_time of 30 to give 11.</p> <p>20...this will be the last entry, it takes the 60 from the end_time and uses 40 from the start_time to give a difference of 20.</p>
<p>This is how I did it:</p> <pre><code>start_time = [10, 20, 30, 40, 50, 61, 79, 80]
end_time = [8, 9, 15, 31, 41, 60]

time_difference = [(min(start_time) - et) for et in end_time if min(start_time) &gt; et]
# [2, 1]
</code></pre> <p>Your question may be a bit ambiguous. I assume you strictly want positive time differences.</p> <p>What is the output that you would like?</p>
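<p>Reading the example in the question literally (each end time is paired with the next unused start time that it exceeds, so nothing is double counted), a small two-pointer sketch reproduces the sample output. This is one interpretation of the pairing rule, not the only possible one:</p> <pre><code>start_time = [10, 20, 30, 40, 50, 61, 79, 80]
end_time = [8, 9, 15, 31, 41, 60]

differences = []
i = 0  # index of the next unused start time
for et in end_time:
    if i &lt; len(start_time) and et &gt; start_time[i]:
        differences.append(et - start_time[i])
        i += 1  # this start time is now used up

print(differences)  # [5, 11, 11, 20]
</code></pre>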
python|numpy|python-2.7
0
9,007
51,093,886
How to output a range of numbers from a column within a dataframe?
<p>I want to make a loop that will pull a number or range within a dataframe and stop analyzing the string after the word has been found.</p> <p>For example:</p> <pre><code> df['size']=['sz 10-13 of jordan 12', 'size 10 adidas', 'size 11 nike air forece 1', 'sz 6-7 jordan 6sz', ‘brand new Sz 11 jordan 5’] </code></pre> <p>I need a function similar to this:</p> <pre><code>def assignSize(row): sizeList =[] for word in sizeList: if word == 'sz' or word == 'size': #i do not know what to place here </code></pre> <p>But I would like my output to be:</p> <pre><code>df['size'] =['10-13','10','11','6-7'] </code></pre> <p>Basically I want the script to stop reading the string after finding the first number or first range of numbers. So of there is another 'sz' that follows after the initial size or sz, it should not read it.</p>
<p>Why not just this?:</p> <pre><code>df['size'] = df['size'].apply(lambda x: x.split()[1]) print(df['size']) </code></pre> <p>Output:</p> <pre><code>0 10-13 1 10 2 11 3 6-7 Name: size, dtype: object </code></pre> <p><strong><em>Edit</em></strong>:</p> <p>Try this:</p> <pre><code>import re df['size']=['sz 10-13 of jordan 12', 'size 10 adidas', 'brand new Sz 13 jordan 5', 'sz 6-7 jordan 6sz'] df['size'] = df['size'].apply(lambda x: '-'.join(re.findall(r'\d+', ' '.join(x.split()[:-1])))) print(df['size']) </code></pre> <p>Output:</p> <pre><code>0 10-13 1 10 2 13 3 6-7 Name: size, dtype: object </code></pre>
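<p>If the size does not always sit at a fixed word position, an alternative is to anchor a regex on the keyword and stop at the first number or range found. This is only a sketch and assumes the size (or range like 10-13) always directly follows 'sz'/'size' in any capitalisation:</p> <pre><code>import re

def extract_size(text):
    # grab the first number (optionally a range like 10-13) right after 'sz' or 'size'
    m = re.search(r'\b(?:sz|size)\s*(\d+(?:-\d+)?)', text, flags=re.IGNORECASE)
    return m.group(1) if m else None

df['size'] = df['size'].apply(extract_size)
</code></pre>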
python|pandas|dataframe
1
9,008
50,781,772
How to get the maximum value and the position from each row of a matrix
<p>Assume that I have a matrix like this <code>A=np.array([[1,2,3],[5,2,6],[7,1,5]])</code></p> <p>Then, I want to select the biggest value and the position from each row.</p> <p>The result should be Value=[3,6,7], Position=[2,2,0].</p> <p>In Matlab, the code <code>[Value,Position]=max(A);</code> can calculate the correct answer </p> <p>But I need to change it into Python code.</p> <p>I have ever tried the code like that</p> <pre><code>Value=np.max(A, axis=1) Position=np.where(A==np.max(A,axis=1)) Result: Value=array([3, 6, 7]) Position=(array([], dtype=int32), array([], dtype=int32)) </code></pre> <p>The biggest value is correct, but the position is wrong.</p>
<p>Start with <code>argmax</code>, and use the result to index into your array.</p> <pre><code>idx = A.argmax(axis=1)
val = A[np.arange(len(A)), idx]
</code></pre> <pre><code>idx
array([2, 2, 0])
val
array([3, 6, 7])
</code></pre>
python|numpy
1
9,009
51,076,475
How to move to audio file with matching text
<p>I have an audio file which is converted into text by google speech API. I want a new feature like while clicking on the text at the same time audio timing should move to match a place in an audio file?</p> <p>Please refer this(<a href="http://www.ted.com/talks/reed_hastings_how_netflix_changed_entertainment_and_where_it_s_headed/transcript#t-128497" rel="nofollow noreferrer">http://www.ted.com/talks/reed_hastings_how_netflix_changed_entertainment_and_where_it_s_headed/transcript#t-128497</a>)</p>
<p>Time offset values can be included in your speech recognition results <a href="https://cloud.google.com/speech-to-text/docs/basics#time-offsets" rel="nofollow noreferrer">[1]</a>. By setting the <code>enable_word_time_offsets</code> parameter to <code>True</code> in your request configuration, timestamps will be included for the first alternative provided in the recognition response <a href="https://cloud.google.com/speech-to-text/docs/async-time-offsets#speech-async-recognize-gcs-python" rel="nofollow noreferrer">[2]</a>.</p>
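<p>A minimal sketch of what that configuration could look like with the google-cloud-speech client library; the class and field names follow the linked docs, but the bucket URI, audio encoding and exact API surface are assumptions and may differ between library versions:</p> <pre><code>from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    language_code="en-US",
    enable_word_time_offsets=True,  # ask for per-word timestamps
)
audio = speech.RecognitionAudio(uri="gs://your-bucket/your-audio.wav")

response = client.recognize(config=config, audio=audio)
for result in response.results:
    for word in result.alternatives[0].words:
        # each word comes with a start and end offset into the audio
        print(word.word, word.start_time, word.end_time)
</code></pre>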
python|tensorflow|machine-learning|nlp|google-cloud-platform
1
9,010
33,307,342
Dataframe.isin() giving this error: The truth value of a DataFrame is ambiguous
<p>Can you help with this error: what am I doing wrong with the df.isin function?</p> <pre><code>cursor = con.cursor() cursor.execute("""SELECT distinct date FROM raw_finmis_online_activation_temp""") existing_dates = [x[0] for x in cursor.fetchall()] if df[df['date'].isin(existing_dates)]: print "Yes it's in there" else: print "N" </code></pre> <p>It's giving me this error:</p> <blockquote> <p>ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p> </blockquote>
<p><code>df[df['date'].isin(existing_dates)]</code> returns a dataframe. Unlike normal sequences, DataFrames inherit their truthiness from <code>numpy.ndarray</code>, which doesn't allow a truth check on it (unless it has length 1 -- which is weird).</p> <p>The solution depends on what you want out of that expression ... e.g. if you want to check if there is at least one element:</p> <pre><code>len(df[df['date'].isin(existing_dates)]) </code></pre> <p>or if you want to check if all the elements are "truthy":</p> <pre><code>df[df['date'].isin(existing_dates)].all() </code></pre>
python|python-3.x|pandas
6
9,011
33,472,157
Efficiently Find Partial String Match --> Values Starting From List of Values in 5 GB file with Python
<p>I have a 5GB file of businesses and I'm trying to extract all the businesses that whose business type codes (SNACODE) start with the SNACODE corresponding to grocery stores. For example, SNACODEs for some businesses could be 42443013, 44511003, 44419041, 44512001, 44522004 and I want all businesses whose codes start with my list of grocery SNACODES codes = [4451,4452,447,772,45299,45291,45212]. In this case, I'd get the rows for 44511003, 44512001, and 44522004</p> <p>Based on what I googled, the most efficient way to read in the file seemed to be one row at a time (if not the SQL route). I then used a for loop and checked if my SNACODE column started with any of my codes (which probably was a bad idea but the only way I could get to work).</p> <p>I have no idea how many rows are in the file, but there are 84 columns. My computer was running for so long that I asked a friend who said it should only take 10-20 min to complete this task. My friend edited the code but I think he misunderstood what I was trying to do because his result returns nothing.</p> <p>I am now trying to find a more efficient method than re-doing my 9.5 hours and having my laptop run for an unknown amount of time. The closest thing I've been able to find is <a href="https://stackoverflow.com/questions/4839597/most-efficient-way-to-find-partial-string-matches-in-large-file-of-strings-pyth">most efficient way to find partial string matches in large file of strings (python)</a>, but it doesn't seem like what I was looking for.</p> <h2>Questions:</h2> <p>What's the best way to do this? How long should this take?<br /> Is there any way that I can start where I stopped? (I have no idea how many rows of my 5gb file I read, but I have the last saved line of data--is there a fast/easy way to find the line corresponding to a unique ID in the file without having to read each line?)</p> <h3>This is what I tried -- in 9.5 hours it outputted a 72MB file (200k+ rows) of grocery stores</h3> <pre><code> codes = [4451,4452,447,772,45299,45291,45212] #codes for grocery stores for df in pd.read_csv('infogroup_bus_2010.csv',sep=',', chunksize=1): data = np.asarray(df) data = pd.DataFrame(data, columns = headers) for code in codes: if np.char.startswith(str(data[&quot;SNACODE&quot;][0]), str(code)): with open(&quot;grocery.csv&quot;, &quot;a&quot;) as myfile: data.to_csv(myfile, header = False) print code break #break code for loop if match grocery.to_csv(&quot;grocery.csv&quot;, sep = '\t') </code></pre> <p>This is what my friend edited it to. I'm pretty sure the <code>x = df[df.SNACODE.isin(codes)]</code> is only matching perfect matches, and thus returning nothing.</p> <pre><code> codes = [4451,4452,447,772,45299,45291,45212] matched = [] for df in pd.read_csv('infogroup_bus_2010.csv',sep=',', chunksize=1024*1024, dtype = str, low_memory=False): x = df[df.SNACODE.isin(codes)] if len(x): matched.append(x) print &quot;Processed chunk and found {} matches&quot;.format(len(x)) output = pd.concat(matched, axis=0) output.to_csv(&quot;grocery.csv&quot;, index = False) </code></pre> <p>Thanks!</p>
<p>To increase speed you could pre-build a single regexp matching the lines you need, then read the raw file lines (no csv parsing) and check them with the regexp...</p> <pre><code>import re

codes = [4451, 4452, 447, 772, 45299, 45291, 45212]
col_num = 4  # column index of SNACODE (0-based), i.e. number of columns to skip
expr = re.compile("[^,]*," * col_num + "(?:" + "|".join(map(str, codes)) + ").*")

for L in open('infogroup_bus_2010.csv'):
    if expr.match(L):
        print L
</code></pre> <p>Note that this is just a simple sketch as no escaping is considered... if the SNACODE column is not the first one and preceding fields may contain a comma, you need a more sophisticated regexp like:</p> <pre><code>... '([^"][^,]*,|"([^"]|"")*",)' * col_num + ... </code></pre> <p>that ignores commas inside double-quotes</p>
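<p>If staying inside pandas is preferred, the chunked approach from the question can be kept and the exact-match <code>isin</code> replaced by a prefix match. A sketch, with the column and file names taken from the question and the chunk size just an example:</p> <pre><code>import pandas as pd

codes = [4451, 4452, 447, 772, 45299, 45291, 45212]
pattern = '|'.join(str(c) for c in codes)  # "4451|4452|447|..."

matched = []
for chunk in pd.read_csv('infogroup_bus_2010.csv', sep=',', chunksize=10**6, dtype=str):
    # str.match anchors at the start of the string, i.e. "SNACODE starts with any code"
    hits = chunk[chunk['SNACODE'].str.match(pattern, na=False)]
    if len(hits):
        matched.append(hits)

pd.concat(matched, ignore_index=True).to_csv('grocery.csv', index=False)
</code></pre>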
python|pandas|bigdata|string-matching|data-extraction
2
9,012
9,071,084
Polar contour plot in matplotlib - best (modern) way to do it?
<p><strong>Update:</strong> I've done a full write-up of the way I found to do this on my blog at <a href="http://blog.rtwilson.com/producing-polar-contour-plots-with-matplotlib/" rel="noreferrer">http://blog.rtwilson.com/producing-polar-contour-plots-with-matplotlib/</a> - you may want to check there first.</p> <p>I'm trying to plot a polar contour plot in matplotlib. I've found various resources on the internet, (a) I can't seem to get my code to work and (b) many of the resources appear rather old, and I'm wondering if there is a better way now. For example, <a href="http://www.mail-archive.com/matplotlib-users@lists.sourceforge.net/msg01953.html" rel="noreferrer">http://www.mail-archive.com/matplotlib-users@lists.sourceforge.net/msg01953.html</a> suggests that something may be done to improve things soon, and that was in 2006!</p> <p>I'd love to be able to plot proper polar contour plots - like pcolor lets you do for its type of plot (see commented out section below), but I can't seem to find any way to do that, so I'm converting to cartesian co-ordinates first.</p> <p>Anyway, I have the code that follows:</p> <pre><code>from pylab import * import numpy as np azimuths = np.arange(0, 360, 10) zeniths = np.arange(0, 70, 10) values = [] for azimuth in azimuths: for zenith in zeniths: print "%i %i" % (azimuth, zenith) # Run some sort of model and get some output # We'll just use rand for this example values.append(rand()) theta = np.radians(azimuths) values = np.array(values) values = values.reshape(len(zeniths), len(azimuths)) # This (from http://old.nabble.com/2D-polar-surface-plot-td28896848.html) # works fine ############## # Create a polar axes # ax = subplot(111, projection='polar') # pcolor plot onto it # c = ax.pcolor(theta, zeniths, values) # show() r, t = np.meshgrid(zeniths, azimuths) x = r*np.cos(t) y = r*np.sin(t) contour(x, y, values) </code></pre> <p>When I run that I get an error <code>TypeError: Inputs x and y must be 1D or 2D.</code>. I'm not sure why I get this, as both x and y are 2D. Am I doing something wrong?</p> <p>Also, it seems rather clunky to be putting my values returned from my model into a list and then reshaping it. Is there a better way to do this?</p>
<p>You should just be able to use <code>ax.contour</code> or <code>ax.contourf</code> with polar plots just as you normally would... You have a few bugs in your code, though. You convert things to radians, but then use the values in degrees when you plot. Also, you're passing in <code>r, theta</code> to contour when it expects <code>theta, r</code>.</p> <p>As a quick example:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt #-- Generate Data ----------------------------------------- # Using linspace so that the endpoint of 360 is included... azimuths = np.radians(np.linspace(0, 360, 20)) zeniths = np.arange(0, 70, 10) r, theta = np.meshgrid(zeniths, azimuths) values = np.random.random((azimuths.size, zeniths.size)) #-- Plot... ------------------------------------------------ fig, ax = plt.subplots(subplot_kw=dict(projection='polar')) ax.contourf(theta, r, values) plt.show() </code></pre> <p><img src="https://i.stack.imgur.com/8a21z.png" alt="enter image description here"></p>
python|numpy|matplotlib
26
9,013
66,381,877
How to append a column to a dataframe with values based on condition
<p>I have the following dataframe:</p> <p>Country is actually the index:</p> <pre><code> 2014 2015 PopEst Country China 8.230121e+12 8.797999e+12 1.367645e+09 United States 1.615662e+13 1.654857e+13 3.176154e+08 Japan 5.642884e+12 5.669563e+12 1.274094e+08 United Kingdom 2.605643e+12 2.666333e+12 6.387097e+07 Russian Federation 1.678709e+12 1.616149e+12 1.435000e+08 Canada 1.773486e+12 1.792609e+12 3.523986e+07 Germany 3.624386e+12 3.685556e+12 8.036970e+07 India 2.200617e+12 2.367206e+12 1.276731e+09 France 2.729632e+12 2.761185e+12 6.383735e+07 South Korea 1.234340e+12 1.266580e+12 4.980543e+07 Italy 2.033868e+12 2.049316e+12 5.990826e+07 Spain 1.375605e+12 1.419821e+12 4.644340e+07 Iran 4.639027e+11 NaN 7.707563e+07 Australia 1.272520e+12 1.301251e+12 2.331602e+07 Brazil 2.412231e+12 2.319423e+12 2.059153e+08 </code></pre> <p>And I have the following dict:</p> <pre><code>ContinentDict = {'China':'Asia', 'United States':'North America', 'Japan':'Asia', 'United Kingdom':'Europe', 'Russian Federation':'Europe', 'Canada':'North America', 'Germany':'Europe', 'India':'Asia', 'France':'Europe', 'South Korea':'Asia', 'Italy':'Europe', 'Spain':'Europe', 'Iran':'Asia', 'Australia':'Australia', 'Brazil':'South America'} </code></pre> <p>I need to append a column showing the Continent Name for each country.</p> <p>how can I do this?</p>
<p>Use:</p> <pre><code>df['Continent'] = df.index.map(ContinentDict) </code></pre>
python|pandas
1
9,014
66,356,000
Generate a new categorical variable using count()
<p>I have a dataframe similar to this one:</p> <pre><code>col_a col_b A 1 B 6 B 3 C 2 C 3 D 6 E 7 F 8 E 8 </code></pre> <p>I want to create a column c and do a cumulative count of everything larger than 1, 2, 3, 4, 5, 6, 7, and 8.</p> <p>for example, for row 1, the number is 2, so I count total number larger or equal to 2 in col b for row 2, the number is 6, so I count total number larger than or equal to 6 in col b</p> <pre><code>the returned col should be something like this col_c 8 (total count of col_b value that is larger or equal to 1) 5 (total count of col_b value that is larger or equal to 6) 6 (total count of col_b value that is larger or equal to 3) 7 (total count of col_b value that is larger or equal to 2) </code></pre> <p>My code looks like this :</p> <pre><code>df.loc[df['col_b'] &gt;= 1, 'group'] = df[df['col_b'] &gt;=8].count() df.loc[df['col_b'] &gt;= 2, 'group'] = df[df['col_b'] &gt;=8].count() df.loc[df['col_b'] &gt;= 3, 'group'] = df[df['col_b'] &gt;=8].count() df.loc[df['col_b'] &gt;= 4, 'group'] = df[df['col_b'] &gt;=8].count() df.loc[df['col_b'] &gt;= 5, 'group'] = df[df['col_b'] &gt;=8].count() </code></pre> <p>Is there anyway to make this easier? Also, my return is NA, instead of an actual count?</p>
<pre><code>df['col_c'] = df['col_b'].apply(lambda x: sum(i &gt;= x for i in df['col_b'].tolist())) </code></pre> <p>Output:</p> <pre><code>| | col_a | col_b | col_c | |---:|:--------|--------:|--------:| | 0 | A | 1 | 9 | | 1 | B | 6 | 5 | | 2 | B | 3 | 7 | | 3 | C | 2 | 8 | | 4 | C | 3 | 7 | | 5 | D | 6 | 5 | | 6 | E | 7 | 3 | | 7 | F | 8 | 2 | | 8 | E | 8 | 2 | </code></pre>
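<p>For larger frames the row-by-row <code>apply</code> above becomes slow, since it rescans the whole column for every row. The same counts can be obtained in one vectorised call with <code>rank</code>, because a descending rank with <code>method='max'</code> is exactly the number of values greater than or equal to each entry:</p> <pre><code>df['col_c'] = df['col_b'].rank(method='max', ascending=False).astype(int)
</code></pre>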
python|pandas
1
9,015
66,535,562
Apply custom function to pandas DataFrame returns 'DataFrame' object is not callable
<p>Hej,</p> <p>My first post here. I was trying to find similar problem in here but without a success. Here it goes. I have a few separate pandas DataFrames where at least one column contains dictionary i.e.</p> <pre><code>fiscalYear | prodID | position 2020 | 123 | {'description': 'Customer Operations', 'code': '51254185'} 2020 | 456 | {'description': 'Support', 'code': '50544654'} ... </code></pre> <p>I can transform dictionary column to two (or more) columns with this:</p> <pre><code>position_df['position'] = main_df['position'].apply(lambda x: dict(eval(x))) position_df = position_df['position'].apply(pd.Series) position_df.rename(columns={'des': 'position_name', 'code':'positionID'},inplace=True) result = pd.concat([main_df, position_df], axis=1, join=&quot;inner&quot;) </code></pre> <p>so I get</p> <pre><code>fiscalYear| prodID| position | position_name | posID 2020 | 123 | {'des': 'Customer Operations', 'code': '51254185'} | 'Customer Operations' | 51254185 2020 | 456 | {'des': 'Support', 'code': '50544654'} | 'Support', | 50544654 ... </code></pre> <p>I created custom function and only change input but I got <em>TypeError: 'DataFrame' object is not callable</em></p> <p>Here is my function and the call</p> <pre><code> def dictionary_to_columns(dic_column,rename_col,df): temp_df = pd.DataFrame() # todo: # of items in the dictionary in dic_column temp_df['temporary'] = df[dic_column].apply(lambda x: dict(eval(x))) temp_df = temp_df['temporary'].apply(pd.Series) temp_df.rename(columns=rename_col,inplace=True) result = pd.concat([df, temp_df], axis=1, join=&quot;inner&quot;) return result main_df['position'] = main_df['position'].apply(dictionary_to_columns('position',{'des': 'name', 'code':'ID'},main_df)) </code></pre> <p>I think that I get the error on the return statement. I printed top 5 rows before return and it looked okay. Any suggestions?</p>
<p>Unpacking a dict into columns is as simple as returning a series from <code>apply</code>:</p> <pre><code>df fiscalYear prodID position 0 2020 123 {'description': 'Customer Operations', 'code':... 1 2020 456 {'description': 'Support', 'code': '50544654'} df[['name', 'ID']] = df.apply(lambda row: pd.Series(row['position']), axis=1) df fiscalYear prodID position name ID 0 2020 123 {'description': 'Customer Operations', 'code':... Customer Operations 51254185 1 2020 456 {'description': 'Support', 'code': '50544654'} Support 50544654 </code></pre>
python|pandas|dataframe|dictionary|typeerror
0
9,016
66,630,517
Sort Pandas DataFrame using multi-criteria weighting
<p><strong>Overview</strong></p> <p>For the following Pandas DataFrame I can sort the data using <code>sort_values</code> (I used &quot;Clust Length&quot; here) but this will not allow me to sort using a multi-criteria decision-making approach. I tried adding a list to <code>sort_values</code> with <code>by=[col1,col2]</code> etc but one column is ultimately sorting the DataFrame unless the first column has a duplicate value.</p> <p><strong>Desired output</strong></p> <p>For all columns a higher value is preferred. Is there a simple way to sort based on an even mix of all the columns? Could the data be normalised and weighted?</p> <p>For example,</p> <ul> <li>row 1 <code>Clust Max Date</code> value of <code>2020-11-30</code> &gt; row 0 <code>2018-09-29</code></li> <li>row 1 <code>Clust Bounce</code> value of <code>404.40</code> &gt; row 0 <code>268.30</code></li> </ul> <p>but row 0 is first because of the current sorting method. Ideally row 1 would be higher than row 0 as per this truncated df showing only two rows as an example.</p> <p>(Short example showing sorted rows using desired approach)</p> <p><a href="https://i.stack.imgur.com/tJkJLl.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tJkJLl.jpg" alt="enter image description here" /></a></p> <p>(Existing DataFrame for reference.) <a href="https://i.stack.imgur.com/RNKfgl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RNKfgl.png" alt="enter image description here" /></a></p> <p><strong>Data to recreate the DataFrame is below</strong></p> <pre><code>df.to_dict() {'Clust Length': {0: 9, 1: 8, 2: 8, 3: 7, 4: 6, 5: 6, 6: 5, 7: 2, 8: 5, 9: 1}, 'Clust Bounce Median': {0: 268.2999999999991, 1: 404.4000000000003, 2: 174.15000000000015, 3: 221.79999999999978, 4: 148.59999999999985, 5: 191.39999999999935, 6: 35.49999999999942, 7: 59.19999999999925, 8: 242.9999999999999, 9: 84.59999999999911}, 'Clust Min Date': {0: '2008-12-31 00:00:00+00:00', 1: '2008-09-29 23:00:00+00:00', 2: '2008-03-30 23:00:00+00:00', 3: '2007-09-29 23:00:00+00:00', 4: '2010-02-28 00:00:00+00:00', 5: '2004-08-30 23:00:00+00:00', 6: '2003-10-31 00:00:00+00:00', 7: '2016-02-29 00:00:00+00:00', 8: '2015-02-28 00:00:00+00:00', 9: '2010-02-28 00:00:00+00:00'}, 'Clust Max Date': {0: '2018-09-29 23:00:00+00:00', 1: '2020-11-30 00:00:00+00:00', 2: '2012-11-30 00:00:00+00:00', 3: '2013-04-29 23:00:00+00:00', 4: '2012-10-31 00:00:00+00:00', 5: '2019-10-31 00:00:00+00:00', 6: '2020-08-30 23:00:00+00:00', 7: '2020-10-31 00:00:00+00:00', 8: '2018-05-30 23:00:00+00:00', 9: '2013-11-30 00:00:00+00:00'}} </code></pre>
<p>You can rank each column separately, sum ranks row-wise and order descending:</p> <pre class="lang-py prettyprint-override"><code>df['order'] = df.rank().sum(axis=1) df.sort_values('order', inplace=True, ascending=False) </code></pre>
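<p>If an even mix is not quite what is wanted, the same idea extends to explicit weights: normalise each column to percentile ranks (0 to 1, higher is better) and take a weighted sum. The weights below are only an example and would need to be chosen for the data:</p> <pre><code># make sure the date columns are real datetimes so they rank correctly
df['Clust Min Date'] = pd.to_datetime(df['Clust Min Date'])
df['Clust Max Date'] = pd.to_datetime(df['Clust Max Date'])

cols = ['Clust Length', 'Clust Bounce Median', 'Clust Min Date', 'Clust Max Date']
weights = pd.Series({'Clust Length': 0.4, 'Clust Bounce Median': 0.3,
                     'Clust Min Date': 0.1, 'Clust Max Date': 0.2})

# pct=True gives percentile ranks, so the columns are on a comparable scale
df['order'] = df[cols].rank(pct=True).mul(weights).sum(axis=1)
df = df.sort_values('order', ascending=False)
</code></pre>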
python|pandas
1
9,017
16,405,153
Offset a mask within a larger array
<p>I've got two numpy arrays: data, and a mask. The mask and the data are not the same size, so I imagine them like a canvas and a stamp. How can I stamp my canvas at different locations?</p> <pre><code>import numpy as np import matplotlib.pyplot as plt # Make a canvas canvas = np.zeros( 2500 ).reshape( 50, 50 ) # Make a "stamp" r = 10 xx, yy = np.mgrid[ :r * 2, :r * 2 ] stamp = ((xx - r) ** 2 + (yy - r) ** 2) &lt; r**2 # Draw on the canvas canvas[stamp] = 10 # Display the drawing plt.imshow(canvas) plt.show() </code></pre> <p>I get this: <img src="https://i.stack.imgur.com/9mtCE.png" alt="what I can do"></p> <p>How can I stamp at a different location to get something like this? <img src="https://i.stack.imgur.com/Afz7Y.png" alt="what I&#39;d like to do"></p>
<p>Take a view of the rectangle (the same size as the stamp) from the canvas, then assign through the stamp mask. Since slicing returns a view, the assignment writes back into the canvas:</p> <pre><code># Draw on the canvas at (x_offset, y_offset)
canvas[x_offset : x_offset + stamp.shape[0], y_offset : y_offset + stamp.shape[1]][stamp] = 10
</code></pre>
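<p>Put together with the setup from the question (the offsets below are arbitrary example values, and the stamp is assumed to fit fully inside the canvas at that position):</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

canvas = np.zeros((50, 50))

r = 10
xx, yy = np.mgrid[:r * 2, :r * 2]
stamp = ((xx - r) ** 2 + (yy - r) ** 2) &lt; r ** 2

x_offset, y_offset = 25, 5  # where to place the stamp's top-left corner
canvas[x_offset : x_offset + stamp.shape[0],
       y_offset : y_offset + stamp.shape[1]][stamp] = 10

plt.imshow(canvas)
plt.show()
</code></pre>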
numpy|matplotlib
2
9,018
57,578,705
highlight all rows that have nan values in panda frame
<p>I’ve been trying to highlight all rows that have nan values with the code below:</p> <pre><code>Pd.style.applymap(lambda x: ['background-color: light green' if Pd.isnull() else '']) </code></pre> <p>But this seems doesn’t work. Is there a way to do it?</p>
<p>The Styler object has a <code>highlight_null</code> function:</p> <pre><code>df.style.highlight_null('lightgreen') </code></pre> <p><a href="https://i.stack.imgur.com/9Rhne.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9Rhne.png" alt="Styling with highlight_null"></a></p> <p>If you want to change anything other than the background, use <code>applymap</code>:</p> <pre><code>df.style.applymap(lambda cell: 'color: red' if np.isnan(cell) else '') </code></pre> <p><a href="https://i.stack.imgur.com/eY6OA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eY6OA.png" alt="Styling with applymap"></a></p> <p>If you want to highlight an entire row if any of its columns is null:</p> <pre><code>df.style.apply(lambda row: np.repeat('color: lightgray' if row.isnull().any() else '', row.shape[0]), axis=1) </code></pre> <p><a href="https://i.stack.imgur.com/dHfds.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dHfds.png" alt="Highlight entire row"></a></p>
python|pandas|dataframe
1
9,019
57,475,889
Could I use BERT to Cluster phrases with pre-trained model
<p>I found it was a failure that I had used Gensim with GoogleNews pre-trained model to cluster phrases like:</p> <ul> <li>knitting</li> <li>knit loom</li> <li>loom knitting</li> <li>weaving loom</li> <li>rainbow loom</li> <li>home decoration accessories</li> <li>loom knit/knitting loom</li> <li>...</li> </ul> <p>I am advised that <a href="https://stackoverflow.com/questions/57426745/how-to-cluster-words-and-phrases-with-pre-trained-model-on-gensim">GoogleNews model does't have the phrases in it</a>. The phrases I have are a little specific to GoogleNews model while I don't have corpus to train a new model. I have only the phrases. And now I am considering to turn to BERT. But could BERT do that as I expected as above? Thank you.</p>
<p>You can feed a phrase into the pretrained BERT model and get an embedding, i.e. a fixed-dimension vector. So BERT can embed your phrases in a space. Then you can use a clustering algorithm (such as k-means) to cluster the phrases. The phrases do not need to occur in the training corpus of BERT, as long as the words they consist of are in the vocabulary. You will have to try to see if the embeddings give you relevant results.</p>
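<p>A sketch of that pipeline using the sentence-transformers package (a BERT-based wrapper); the package choice, model name and number of clusters here are assumptions, not requirements:</p> <pre><code>from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

phrases = ["knitting", "knit loom", "loom knitting", "weaving loom",
           "rainbow loom", "home decoration accessories", "loom knit/knitting loom"]

# embed each phrase into a fixed-size vector
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(phrases)

# cluster the embeddings
labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(embeddings)
for phrase, label in zip(phrases, labels):
    print(label, phrase)
</code></pre>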
tensorflow|nlp|pytorch|gensim|word2vec
0
9,020
57,646,377
assigning data to existing column in a for loop
<p>I want to assign data to a column of the data frame using for loop and a function but I got the common warning: </p> <blockquote> <p>"SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame"</p> </blockquote> <p>I have a data frame and three columns of date(year, month and day) and now I want a new column which converts those three columns to one.</p> <p>I am using a for loop to assign the new data to the new column. I tried to use <code>copy()</code> and <code>deepcopy()</code> as you can see below but it does not work.</p> <pre><code> for i in range(100008): df.new_col[i]=convert(df.year[i],df.mounth[i],df.day[i]) </code></pre> <p>what I tried instead of second line: </p> <pre><code> df.new_col[i].copy()=convert(df.year[i],df.mounth[i],df.day[i]) deepcopy(df.new_col[i]) =convert(df.year[i],df.mounth[i],df.day[i]) </code></pre> <p>I expected my code to assign the values to the column and it does(as I interrupted the kernel and called the df) but it takes many hours to do this. How can I fix the problem?</p>
<p>Instead of a <code>for</code> loop, use the pandas <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer">apply</a> method, which is much faster.</p> <p>If I understand correctly your code, you want something like this:</p> <pre><code>df['new_col'] = df.apply(convert2, axis=1) </code></pre> <p>where <code>convert2</code> is defined like:</p> <pre><code>def convert2(x): return convert(x['year'], x['month'], x['day']) </code></pre> <p>This is because, when passing a function to <code>apply</code>, the function must take as its argument a row or a column (a row in this case, since <code>axis=1</code>) of the dataframe.</p> <p>Alternatively, instead of defining <code>convert2</code>, you can use a <code>lambda</code> function:</p> <pre><code>df['new_col'] = df.apply(lambda x : convert(x['year'], x['month'], x['day']), axis=1) </code></pre>
python|pandas|dataframe
2
9,021
57,661,987
Minimum difference of Numpy arrays
<p>I have two 3-dimensional Numpy arrays of the same size. Their entries are similar, but not quite the same. I would like to shift one array in all three space dimensions, so that the difference between both arrays is minimal. </p> <p>I tried to write a function with arguments - list of lengths I like to shift the array, - array 1, - array 2. But I do not know how I can minimize this function, I tried using scipy.optimize.minimize, but failed:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from scipy.optimize import minimize def array_diff(shift, array1, array2): roll = np.roll(np.roll(np.roll(array2, shift[0], axis=0), shift[1], axis=1), shift[2], axis=2) diff = np.abs(np.subtract(array1, roll)) diffs = np.sum(diff) return diffs def opt_diff(func, array1, array2): opt = minimize(func, x0=np.zeros(3), args=(array1, array2)) return opt min_diff = opt_diff(array_diff, array1, array2) </code></pre> <p>This gives an error message regarding roll = np.roll(...) It says "slice indices must be integers or have an <strong>index</strong> method". I guess, that I am using the minimize function nor correctly, but have no idea, how to fix it.</p> <p>My goal is to minimize the function img_diff and get the minimum sum of all entries of the difference array. As a result I would like to have the three parameters shift[0], shift[1] and shift[2] for shift in y-, x-, and z-direction.</p> <p>Thank you for all your help.</p>
<blockquote> <p>This gives an error message regarding roll = np.roll(...) It says "slice indices must be integers or have an index method".</p> </blockquote> <p>np.roll requires an integer for the <code>shift</code> parameter. np.zeros creates an array of floats. Specify an integer type for <code>x0</code>:</p> <pre><code>x0=np.zeros(3,dtype=np.int32) </code></pre> <hr> <pre><code>x0=np.zeros(3) x0 Out[3]: array([ 0., 0., 0.]) x0[0] Out[4]: 0.0 x0=np.zeros(3,dtype=np.int32) x0[0] Out[6]: 0 </code></pre> <hr> <p>scipy.optimize.minimize will try to adjust <code>x0</code> by <em>fractions</em> so maybe just add a statement to <code>array_diff</code>:</p> <pre><code>def array_diff(shift, array1, array2): shift = shift.astype(np.int32) ... </code></pre>
python|numpy-ndarray|scipy-optimize
0
9,022
43,616,882
pandas 3D plot for multiple dataframes
<p>My goal is to plot something similar to the top graph of the following <a href="http://matplotlib.org/examples/mplot3d/wire3d_zero_stride.html" rel="nofollow noreferrer">link</a>.</p> <p>I have several txt files, every one of them corresponding to a different sample. Currently, I have my data loaded as pandas dataframes (although I'm not sure if it could be easier if I had loaded as numpy arrays):</p> <pre><code>sample4.head() Out[61]: 20 40 60 80 100 x 1.10 1.09734 1.25772 1.41810 1.57847 1.73885 1.11 1.06237 1.21307 1.36378 1.51448 1.66518 1.12 1.02176 1.16346 1.30516 1.44686 1.58856 1.13 0.97769 1.11097 1.24426 1.37754 1.51083 1.14 0.93162 1.05702 1.18241 1.30781 1.43321 test5.head() Out[62]: 20 40 60 80 100 x 1.10 1.12427 1.31545 1.50663 1.69781 1.88899 1.11 1.06327 1.24045 1.41763 1.59482 1.77200 1.12 0.99875 1.16302 1.32730 1.49158 1.65585 1.13 0.93276 1.08509 1.23742 1.38975 1.54208 1.14 0.86668 1.00792 1.14916 1.29040 1.43164 test6.head() Out[63]: 20 40 60 80 100 x 1.10 1.08463 1.30038 1.51612 1.73187 1.94761 1.11 0.99905 1.19626 1.39346 1.59067 1.78788 1.12 0.91255 1.09283 1.27310 1.45337 1.63365 1.13 0.82706 0.99181 1.15656 1.32131 1.48605 1.14 0.74381 0.89429 1.04477 1.19525 1.34572 </code></pre> <p>As it can be seen, all samples share one column. The following approach works for a single sample, giving a simple 2D plot:</p> <pre><code>sample4.plot() </code></pre> <p>But my idea is to plot all dataframes I have along the y axis, meaning that the y axis should be each of the individual samples I have, in a 3d graph like the example above, but I don't know how to "stack" dataframes and plot them using a third axis.</p> <p>Any help would be appreciated.</p> <p>Thanks in advance.</p>
<p>Here's one approach, using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.melt.html" rel="nofollow noreferrer"><code>melt</code></a> and <a href="http://matplotlib.org/mpl_toolkits/mplot3d/api.html?highlight=axes3d#module-mpl_toolkits.mplot3d.axes3d" rel="nofollow noreferrer"><code>Axes3D</code></a>. </p> <p>First, generate the sample data provided by OP:</p> <pre><code>import pandas as pd from matplotlib import pyplot as plt from mpl_toolkits.mplot3d import Axes3D sample4_z = [1.09734, 1.25772, 1.4181 , 1.57847, 1.73885, 1.06237, 1.21307, 1.36378, 1.51448, 1.66518, 1.02176, 1.16346, 1.30516, 1.44686, 1.58856, 0.97769, 1.11097, 1.24426, 1.37754, 1.51083, 0.93162, 1.05702, 1.18241, 1.30781, 1.43321] test5_z = [1.12427, 1.31545, 1.50663, 1.69781, 1.88899, 1.06327, 1.24045, 1.41763, 1.59482, 1.772 , 0.99875, 1.16302, 1.3273 , 1.49158, 1.65585, 0.93276, 1.08509, 1.23742, 1.38975, 1.54208, 0.86668, 1.00792, 1.14916, 1.2904 , 1.43164] test6_z = [1.08463, 1.30038, 1.51612, 1.73187, 1.94761, 0.99905, 1.19626, 1.39346, 1.59067, 1.78788, 0.91255, 1.09283, 1.2731 , 1.45337, 1.63365, 0.82706, 0.99181, 1.15656, 1.32131, 1.48605, 0.74381, 0.89429, 1.04477, 1.19525, 1.34572] def make_df(data): x = [1.1, 1.11, 1.12, 1.13, 1.14] y = [20, 40, 60, 80, 100] z = np.array(data).reshape((len(x),len(y))) return pd.DataFrame(z, index=x, columns=y).reset_index().rename(columns={'index':'x'}) sample4 = make_df(sample4_z) test5 = make_df(test5_z) test6 = make_df(test6_z) </code></pre> <p>Now plot all three data frames on one 3D grid:</p> <pre><code># signal to pyplot that we want 3d plots fig, ax = plt.subplots(1, 1, figsize=(10, 10), subplot_kw={'projection': '3d'}) # convenience wrapper for plotting function def plot_3d(df): ax.plot(df.x, df.y.astype(float), df.z) # dims must be floats # reshape with melt(), then plot plot_3d(pd.melt(sample4, id_vars='x', var_name='y', value_name='z')) plot_3d(pd.melt(test5, id_vars='x', var_name='y', value_name='z')) plot_3d(pd.melt(test6, id_vars='x', var_name='y', value_name='z')) # label axes ax.set_xlabel('x', fontsize=20) ax.set_ylabel('y', fontsize=20) ax.set_zlabel('z', fontsize=20) # optional view configurations ax.elev = 10 ax.axim = 20 </code></pre> <p><a href="https://i.stack.imgur.com/FRBzf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FRBzf.png" alt="3d plot"></a></p> <p><strong>UPDATE</strong> re: y-axis as categorical<br> With only two continuous-valued axes, it's generally not necessary (nor recommended) to invoke a 3D plotting surface (see, for example, <a href="https://stackoverflow.com/a/24531294/2799941">this similar discussion</a>). It's clearer to encode the categorical variable as a labeled dimension. </p> <p>This case is additionally complicated by the sample group levels, which represent a fourth dimension. I'd suggest considering a panel of plots, with y-axis categories encoded as legends. Like this:</p> <pre><code>datasets = ['sample4','test5','test6'] line_types = ['-.','--','-'] fix, axes = plt.subplots(1,3, figsize=(14,5)) for i, data in enumerate([sample4, test5, test6]): data.set_index('x').plot(style=line_types[i], ax=axes[i], sharey=True, xticks=data.x, title=datasets[i]) </code></pre> <p><a href="https://i.stack.imgur.com/bxHYu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bxHYu.png" alt="panel plots"></a></p> <p>Still, if you really want to keep things in 3D, a scatter plot with the correct view rotation will give you the effect you're looking for. 
This also circumvents the problem of the y-axis being read as a metric variable, rather than an ordinal one. </p> <pre><code># scatter plot with categorical y-axis def plot_3d(df, color): ax.scatter(df.x, df.y, df.z, c=color) # dims must be floats # reshape with melt(), then plot plot_3d(pd.melt(sample4, id_vars='x', var_name='y', value_name='z'), 'red') plot_3d(pd.melt(test5, id_vars='x', var_name='y', value_name='z'), 'blue') plot_3d(pd.melt(test6, id_vars='x', var_name='y', value_name='z'), 'green') # label axes ax.set_xlabel('x', fontsize=20) ax.set_ylabel('y', fontsize=20) ax.set_zlabel('z', fontsize=20) # optional view configurations ax.elev = 10 ax.azim = 280 </code></pre> <p><a href="https://i.stack.imgur.com/3DSGB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3DSGB.png" alt="3d scatter plot"></a></p> <p>Note: It's possible to use the <a href="http://matplotlib.org/mpl_toolkits/mplot3d/api.html?highlight=axes3d#mpl_toolkits.mplot3d.axes3d.Axes3D.bar3d" rel="nofollow noreferrer"><code>bar3d</code></a> class to treat one or more dimensions as categorical, but its cascading approach to multiple points with the same category value may not get you what you're looking for.</p>
python|pandas|matplotlib|plot|mplot3d
3
9,023
43,626,484
How to append and set value in one command using Python?
<p>I have the following dataframe (df):</p> <pre><code> SERV_OR_IOR_ID IMP_START_TIME IMP_CLR_TIME IMP_START_TIME_BIN IMP_CLR_TIME_BIN 0 -1447310116 23:59:00 00:11:00 47 0 1 1673545041 00:00:00 00:01:00 0 0 2 -743717696 23:59:00 00:00:00 47 0 3 58641876 04:01:00 09:02:00 8 18 </code></pre> <p>I want to duplicate the rows for which <code>IMP_START_TIME_BIN</code> is less than <code>IMP_CLR_TIME_BIN</code> as many times as the absolute difference of <code>IMP_START_TIME_BIN</code> and <code>IMP_CLR_TIME_BIN</code> and then append (at the end of the data frame) or preferable append below that row while incrementing the value of <code>IMP_START_TIME_BIN</code>.</p> <p>For example, for row 3, the difference is 10 and thus I should append 10 rows in the data frame incrementing the value in the <code>IMP_START_TIME_BIN</code> from 8(excluding) to 18(including).</p> <p>The result should look like this:</p> <pre><code> SERV_OR_IOR_ID IMP_START_TIME IMP_CLR_TIME IMP_START_TIME_BIN IMP_CLR_TIME_BIN 0 -1447310116 23:59:00 00:11:00 47 0 1 1673545041 00:00:00 00:01:00 0 0 2 -743717696 23:59:00 00:00:00 47 0 3 58641876 04:01:00 09:02:00 8 18 4 58641876 04:01:00 09:02:00 9 18 ... ... ... ... ... ... 13 58641876 04:01:00 09:02:00 18 18 </code></pre> <p>For this I tried to do the following but it didn't work :</p> <p><code>for i in range(len(df)): if df.ix[i,3] &lt; df.ix[i,4]: for j in range(df.ix[i,3]+1, df.ix[i,4]+1): df = df.append((df.set_value(i,'IMP_START_TIME_BIN',j))*abs(df.ix[i,3] - df.ix[i,4]))</code> </p> <p>How can I do it ?</p>
<p>You can use this solution, only necessary index values has to be unique:</p> <pre><code>#first filter only values for repeating l = df['IMP_CLR_TIME_BIN'] - df['IMP_START_TIME_BIN'] l = l[l &gt; 0] print (l) 3 10 dtype: int64 #repeat rows by repeating index values df1 = df.loc[np.repeat(l.index.values,l.values)].copy() #add counter to column IMP_START_TIME_BIN #better explanation http://stackoverflow.com/a/43518733/2901002 a = pd.Series(df1.index == df1.index.to_series().shift()) b = a.cumsum() a = b.sub(b.mask(a).ffill().fillna(0).astype(int)).add(1) df1['IMP_START_TIME_BIN'] = df1['IMP_START_TIME_BIN'] + a.values #append to original df, if necessary sort df = df.append(df1, ignore_index=True).sort_values('SERV_OR_IOR_ID') </code></pre> <pre><code>print (df) SERV_OR_IOR_ID IMP_START_TIME IMP_CLR_TIME IMP_START_TIME_BIN \ 0 -1447310116 23:59:00 00:11:00 47 1 1673545041 00:00:00 00:01:00 0 2 -743717696 23:59:00 00:00:00 47 3 58641876 04:01:00 09:02:00 8 4 58641876 04:01:00 09:02:00 9 5 58641876 04:01:00 09:02:00 10 6 58641876 04:01:00 09:02:00 11 7 58641876 04:01:00 09:02:00 12 8 58641876 04:01:00 09:02:00 13 9 58641876 04:01:00 09:02:00 14 10 58641876 04:01:00 09:02:00 15 11 58641876 04:01:00 09:02:00 16 12 58641876 04:01:00 09:02:00 17 13 58641876 04:01:00 09:02:00 18 IMP_CLR_TIME_BIN 0 0 1 0 2 0 3 18 4 18 5 18 6 18 7 18 8 18 9 18 10 18 11 18 12 18 13 18 </code></pre>
python|pandas|dataframe|append|setvalue
1
9,024
43,647,483
Pandas plotting two columns with series defined by value in third column
<p>Hi I have a pandas dataframe that looks like</p> <pre><code>deflector wFlow aContent DO Difference 64 3 127.5 10 2.007395 65 3 127.5 3 1.163951 66 3 127.5 5 1.451337 67 3 127.5 7 1.535639 68 3 24.0 10 1.046328 69 3 24.0 3 0.854763 70 3 24.0 5 0.766780 71 3 24.0 7 0.905270 72 3 56.0 10 1.274954 73 3 56.0 3 1.298657 74 3 56.0 5 1.049621 75 3 56.0 7 1.004255 76 3 88.0 10 1.194174 77 3 88.0 3 1.056968 78 3 88.0 5 1.066173 79 3 88.0 7 1.097231 </code></pre> <p>I would like to plot the aContent column vs the DO Difference column with each line defined by the wFlow column (x = aContent, y = DO Difference, 4 different lines, one for each wFlow.</p> <p>Thanks!</p>
<p>You can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html#pandas-dataframe-pivot" rel="noreferrer"><code>pivot</code></a> the data and use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html#pandas-dataframe-plot" rel="noreferrer"><code>pandas.dataframe.plot</code></a>:</p> <pre><code>df.pivot(index='aContent',columns='wFlow',values='DO Difference').plot() </code></pre> <p><a href="https://i.stack.imgur.com/WBjrP.png" rel="noreferrer"><img src="https://i.stack.imgur.com/WBjrP.png" alt="enter image description here"></a></p>
python|pandas|matplotlib|plot
6
9,025
43,859,710
Filter out extra headers in middle of table
<p>I am attempting to import a very large data file. It is a text file structured like</p> <pre><code>***** Information about Data *********** Information about data Information about Data Information about Data Information about Data Col1 Col2 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 ...(10k+ lines) 1.0 1.0 1.0 1.0 ***** Information about Data *********** Information about data Information about Data Information about Data Information about Data Col1 Col2 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 ...(10k+ lines) 1.0 1.0 1.0 1.0 </code></pre> <p>and repeats some arbitrary number of times. The number of lines between headers varies and the total file is >1 million lines.</p> <p>Is there a method of stripping this header without looking line-by-line? I have written a line-by-line search, but that is too slow to be practical.</p> <p>The header is varies slightly each time it is displayed.</p>
<p>Assuming your file is named <code>test.txt</code></p> <ul> <li>read in entire file as a string</li> <li><p><code>split</code> on <code>'\n*'</code></p> <pre><code> new line \ 1.0 1.0 ***** Information about Data *********** \ followed by astricks </code></pre></li> <li><p><code>rsplit</code> results by <code>'\n\n'</code> and take last</p> <pre><code> first new line \ Information about Data \ second new line Col1 Col2 1.0 1.0 1.0 1.0 1.0 1.0 </code></pre></li> <li><code>read_csv</code></li> <li><code>pd.concat</code></li> </ul> <hr> <pre><code>from io import StringIO import pandas as pd def rtxt(txt): return pd.read_csv(StringIO(txt), delim_whitespace=True) fname = 'test.txt' pd.concat( [rtxt(st.rsplit('\n\n', 1)[-1]) for st in open(fname).read().split('\n*')], ignore_index=True ) Col1 Col2 0 1.0 1.0 1 1.0 1.0 2 1.0 1.0 3 1.0 1.0 4 1.0 1.0 5 1.0 1.0 6 1.0 1.0 7 1.0 1.0 8 1.0 1.0 9 1.0 1.0 10 1.0 1.0 11 1.0 1.0 </code></pre>
python-2.7|pandas|numpy
0
9,026
72,998,005
How to create lists based on index or the A column in a dataframe?
<p>I have this dataframe:</p> <pre><code> A B C 1 14 100 1 15 101 1 16 102 2 17 103 2 18 104 3 19 105 3 20 106 ... ... ... n </code></pre> <p>and I would like this output for any number up to n for the whole dataframe:</p> <pre><code>l1 = [14, 15, 16] (the 1 in A column) l2 = [17,18] (the 2 in A column) l3 = [19,20] (the 3 in A column) </code></pre> <p>Could you please help me? Thanks!</p>
<p>Try:</p> <pre><code>l1, l2, l3 = df.groupby('A')['B'].apply(list).values </code></pre> <p>Unpacking will only work if there are exactly three unique values in df['A']. If that isn't guaranteed, skip the unpacking and index the resulting Series instead:</p> <pre><code>&gt;&gt;&gt; df.groupby('A')['B'].apply(list) A 1 [14, 15, 16] 2 [17, 18] 3 [19, 20] </code></pre>
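<p>A hedged sketch: when the number of groups in column A is not fixed, collecting the lists into a dict avoids the unpacking problem entirely:</p> <pre><code>lists = df.groupby('A')['B'].apply(list).to_dict()
# lists -&gt; {1: [14, 15, 16], 2: [17, 18], 3: [19, 20]}
l1 = lists[1]
</code></pre>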
python|pandas|list|dataframe
5
9,027
72,902,211
fill new Columns with Open price at certain times
<p>Hoping someone can help me here... I'm absolutely lost!</p> <p>Here is my dataframe: a 1 minute export of the DAX fom Metatrader5. I am wanting to add 2 new columns. &quot;Hourly Open&quot; and &quot;Market Open&quot;.</p> <pre><code> Open High Low Close Vol Date_Time 2022-07-04 00:01:00 12869.2 12873.5 12867.5 12869.1 63 2022-07-04 00:02:00 12868.3 12868.3 12854.9 12854.9 68 2022-07-04 00:03:00 12855.8 12861.1 12854.4 12860.8 69 2022-07-04 00:04:00 12861.1 12861.7 12854.0 12854.0 73 2022-07-04 00:05:00 12854.1 12857.3 12849.8 12849.9 58 </code></pre> <ol> <li><p>for the Hourly Open column I would like to find the Open price at the start of each hour. I would then like to ffill() that value into subsequent columns until a new hour opens at a new price....</p> </li> <li><p>for the market Open column I would like to find the open price for a specific time: ie 08:00. Once again Id like to ffill() that until the next day at 08:00 provides a new value....</p> </li> </ol> <p>I'm not understanding why I am having so much trouble with Pandas etc It just seems like every attempt I make at this is incorrect, plus I just haven't got the experience in Pandas to know where I'm going wrong. Every tutorial I do is helpful to a point but I'm not sure what exactly I'm seaching for....</p> <p>if someone could please help me with some pointers I'd be grateful...</p> <p>=============================</p> <p>Thanks for the question. Hopefully this clarifies things...</p> <ol> <li><p>Good Point - I'll have to drop the entries before the 08:00 time.</p> </li> <li><p>Here is what I'd like to end up with... Saved as an image</p> </li> </ol> <p><a href="https://i.stack.imgur.com/9ul04.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9ul04.png" alt="final dataframe" /></a></p> <p>Hopefully that makes sense....I need to be able to refer to both the Hourly oopen and the Session / Market Open later...</p> <p>many thanks!</p>
<p>I can only do it on ideal data. Namely, on the opening date (9:30 for me, you apparently have it at 8:00, this will need to be corrected) there should be a candle, if it is not there, then there will be empty values. The same applies to the hour, if there is no candle, for example, exactly at 13:00, then there will also be gaps in the nan value. <strong>Perhaps someone will tell you how to set a range in offset in order to take the nearest available data, the same applies to the hourly frequency.</strong></p> <p>Now for the code: the column Date_Time is converted to datetime format and made into an index. Next, a column is created with a frequency per hour Hourly Open. Then, a series is additionally created with a frequency per day, with the start time of the day at 9:30. And with this data, the Hourly Open column is filled in and the Market Open is filled in. At the end, the empty data is filled with past values.</p> <pre><code>import pandas as pd pd.set_option('display.max_rows', None)#these lines are needed to display all rows and columns pd.set_option('display.max_columns', None)#you can remove them or comment them out df = pd.read_csv('dax.csv', sep=',', header=0) df['Date_Time'] = pd.to_datetime(df['Date_Time'], errors='raise') df = df.set_index('Date_Time') df['Hourly Open'] = df['Close'].resample(&quot;H&quot;).asfreq() df1 = df['Close'].resample('1D', offset='9h30min').asfreq().dropna() df.loc[df1.index, 'Market Open'] = df1.values df.loc[df1.index, 'Hourly Open'] = df1.values df[['Hourly Open', 'Market Open']] = df[['Hourly Open', 'Market Open']].fillna(method=&quot;ffill&quot;) print(df[['Close', 'Hourly Open', 'Market Open']]) </code></pre> <p>Again, this will only work if the data frequency is uniform.</p>
python|pandas|dataframe
0
9,028
73,128,929
Slice pandas dataframe column by start and end values
<p>For example, I have a dataframe that looks like this:</p> <pre><code>0 -- end 1 QQQQ 2 GEO 3 DEF 4 ABC 5 -- start 6 -- end 7 apple 8. -- start </code></pre> <p>Is it possible to dynamically slice the column by the '-- end' &amp; '-- start'. Meaning, I want to work with the data between the -- start and -- end independently.</p> <pre><code>start_end = df[df.col.str.contains('-- end')+1:df.col.str.contains('-- start')] </code></pre> <p>To no avail, maybe this isn't even possible in pandas but would love input.</p> <p>Thank you all.</p>
<p>You could try as follows:</p> <pre><code>data = {'column': {0: '-- end', 1: 'QQQQ', 2: 'GEO', 3: 'DEF', 4: 'ABC', 5: '-- start', 6: '-- end', 7: 'apple', 8: '-- start'}} df = pd.DataFrame(data) exclude_lst = ['-- start','-- end'] # get False for members of exclude_lst, True for the rest bools = ~df.column.isin(['-- start','-- end']) # get sequences: [1, 2, 2, 2, 2, 3, 3, 4, 5] sequences = (bools != bools.shift()).cumsum() # keep only sequences where bools == True (so, only 2 and 4) groups = df[bools].groupby([sequences]) # now you can loop through each slice, and perform some operation on them for gr in groups: print(gr) # or put them in a list and go from there: gr_lst = list(groups) print(gr_lst[0]) (2, column 1 QQQQ 2 GEO 3 DEF 4 ABC) # so, we end up with tuples. Here gr_lst[0][0] == 2, a ref to first slice as [2, 2, 2, 2] # use gr_lst[i][1] to access an actual slice, e.g.: print(gr_lst[1][1]) column 7 apple </code></pre>
python|pandas|dataframe|group-by
0
9,029
70,653,975
Python - Iterate through multiple dataframes and append data to a new dataframe
<p>I have 3 pandas dataframes. I would like to append one row from each of them, in each iteration, to an existing dataframe.</p> <p>Example shown below:</p> <pre><code>DF1 = col1 col2 col3 a a a d d d g g g </code></pre> <pre><code>DF2= col1 col2 col3 b b b e e e h h h </code></pre> <pre><code>DF3= col1 col2 col3 c c c f f f i i i </code></pre> <pre><code>clean_DF = col1 col2 col3 a a a b b b c c c d d d e e e f f f g g g h h h i i i </code></pre> <p>Dummy code:</p> <pre><code>for i,j in df1.itterows(): for a,b in df2.itterows(): for c,d in df2.itterrows(): clean_df.append(i,j,a,b,c,d) </code></pre> <p>Please could someone point me in the right direction?</p>
<p>Concatenate them, using the <code>keys</code> argument to associate an index with rows from each original dataframe, then swap the index levels and sort the dataframe by this index.</p> <pre><code>df1 = pd.DataFrame([[&quot;a&quot;, &quot;a&quot;, &quot;a&quot;], [&quot;d&quot;, &quot;d&quot;, &quot;d&quot;], [&quot;g&quot;, &quot;g&quot;, &quot;g&quot;]], columns=[&quot;col1&quot;, &quot;col2&quot;, &quot;col3&quot;]) df2 = pd.DataFrame([[&quot;b&quot;, &quot;b&quot;, &quot;b&quot;], [&quot;e&quot;, &quot;e&quot;, &quot;e&quot;], [&quot;h&quot;, &quot;h&quot;, &quot;h&quot;]], columns=[&quot;col1&quot;, &quot;col2&quot;, &quot;col3&quot;]) df3 = pd.DataFrame([[&quot;c&quot;, &quot;c&quot;, &quot;c&quot;], [&quot;f&quot;, &quot;f&quot;, &quot;f&quot;], [&quot;i&quot;, &quot;i&quot;, &quot;i&quot;]], columns=[&quot;col1&quot;, &quot;col2&quot;, &quot;col3&quot;]) clean_df = pd.concat([df1, df2, df3], keys=range(3)).swaplevel().sort_index() </code></pre> <p>This assumes that each dataframe currently has a single index and is sorted by that index. If you have dataframes that may not be sorted by index, and you want to preserve their current sort orders, then you could reset their indices before concatenating them.</p> <pre><code>dfs = [df.reset_index() for df in [df1, df2, df3]] clean_df = pd.concat(dfs, keys=range(len(dfs))).swaplevel().sort_index() </code></pre>
python|pandas|for-loop
1
9,030
70,526,857
Concat multiple CSV's with the same column name
<p>Im having trouble with concatting these pandas dataframes as I keep getting a error saying <code>pandas.errors.InvalidIndexError: Reindexing only valid with uniquely valued Index objects</code> I am also trying to make my code less clunky and run smoother. I was also wondering if there was a way to get multiple pages on one csv using python. Any help would be great.</p> <pre><code>import requests from bs4 import BeautifulSoup import pandas as pd headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36'} URL = &quot;https://www.collincad.org/propertysearch?situs_street=Willowgate&amp;situs_street_suffix&quot; \ &quot;=&amp;isd%5B%5D=any&amp;city%5B%5D=any&amp;prop_type%5B%5D=R&amp;prop_type%5B%5D=P&amp;prop_type%5B%5D=MH&amp;active%5B%5D=1&amp;year=2021&amp;sort=G&amp;page_number=1&quot; t = URL + &quot;&amp;page_number=&quot; URL2 = t + &quot;2&quot; URL3 = t + &quot;3&quot; s = requests.Session() data = [] page = s.get(URL,headers=headers) page2 = s.get(URL2, headers=headers) page3 = s.get(URL3, headers=headers) soup = BeautifulSoup(page.content, &quot;lxml&quot;) soup2 = BeautifulSoup(page2.content, &quot;lxml&quot;) soup3 = BeautifulSoup(page3.content, &quot;lxml&quot;) for row in soup.select('#propertysearchresults tr'): data.append([c.get_text(' ',strip=True) for c in row.select('td')]) for row in soup2.select('#propertysearchresults tr'): data.append([c.get_text(' ',strip=True) for c in row.select('td')]) for row in soup3.select('#propertysearchresults tr'): data.append([c.get_text(' ',strip=True) for c in row.select('td')]) df1 = pd.DataFrame(data[1:], columns=data[0]) df2 = pd.DataFrame(data[2:], columns=data[1]) df3 = pd.DataFrame(data[3:], columns=data[2]) final = pd.concat([df1, df2, df3], axis=0) final.to_csv('Street.csv', encoding='utf-8') </code></pre>
<h3>What happens?</h3> <p>As mentioned @Zach Young <code>data</code> is already holding all the rows you like to convert into <strong>one</strong> dataframe. So it is not an issue of <code>pandas</code> it is more an issue on how collecting the information.</p> <h3>How to fix?</h3> <p>An approach based on the code in your question is selecting the table data more specific - Note the <code>tbody</code> in the selection, this will exclude the headers:</p> <pre><code>for row in soup.select('#propertysearchresults tbody tr'): data.append([c.get_text(' ',strip=True) for c in row.select('td')]) </code></pre> <p>While creating your dataframe you can set the column headers additionally:</p> <pre><code>pd.DataFrame(data, columns=[c.get_text(' ',strip=True) for c in soup.select('#propertysearchresults thead td')]) </code></pre> <h3>Example</h3> <p>This will show how to iterate the different pages of website containing your tables:</p> <pre><code>import requests from bs4 import BeautifulSoup import pandas as pd headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36'} URL = &quot;https://www.collincad.org/propertysearch?situs_street=Willowgate&amp;situs_street_suffix&quot; \ &quot;=&amp;isd%5B%5D=any&amp;city%5B%5D=any&amp;prop_type%5B%5D=R&amp;prop_type%5B%5D=P&amp;prop_type%5B%5D=MH&amp;active%5B%5D=1&amp;year=2021&amp;sort=G&amp;page_number=1&quot; s = requests.Session() data = [] while True: page = s.get(URL,headers=headers) soup = BeautifulSoup(page.content, &quot;lxml&quot;) for row in soup.select('#propertysearchresults tbody tr'): data.append([c.get_text(' ',strip=True) for c in row.select('td')]) if (a := soup.select_one('#page_selector strong + a')): URL = &quot;https://www.collincad.org&quot;+a['href'] else: break pd.DataFrame(data, columns=[c.get_text(' ',strip=True) for c in soup.select('#propertysearchresults thead td')]) </code></pre> <h3>Output</h3> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: left;">Property ID ↓ Geographic ID ↓</th> <th style="text-align: left;">Owner Name</th> <th style="text-align: left;">Property Address</th> <th style="text-align: left;">Legal Description</th> <th style="text-align: left;">2021 Market Value</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">1</td> <td style="text-align: left;">2709013 R-10644-00H-0010-1</td> <td style="text-align: left;">PARTHASARATHY SURESH &amp; ANITHA HARIKRISHNAN</td> <td style="text-align: left;">12209 Willowgate Dr Frisco, TX 75035</td> <td style="text-align: left;">Ridgeview At Panther Creek Phase 2, Blk H, Lot 1</td> <td style="text-align: left;">$513,019</td> </tr> <tr> <td style="text-align: right;">...</td> <td style="text-align: left;">...</td> <td style="text-align: left;">...</td> <td style="text-align: left;">...</td> <td style="text-align: left;">...</td> <td style="text-align: left;">...</td> </tr> <tr> <td style="text-align: right;">61</td> <td style="text-align: left;">2129238 R-4734-00C-0110-1</td> <td style="text-align: left;">HEPFER ARRON</td> <td style="text-align: left;">990 Willowgate Dr Prosper, TX 75078</td> <td style="text-align: left;">Willow Ridge Phase One, Blk C, Lot 11</td> <td style="text-align: left;">$509,795</td> </tr> </tbody> </table> </div>
python-3.x|pandas|dataframe|concatenation|export-to-csv
2
9,031
70,612,009
Group Multiple columns while performing multiple aggregations in pandas
<p>I would like to group by multiple columns and perform several different aggregations. Grouping by type and date and taking average of en, en2, stat1 and stat2.</p> <p><strong>Data</strong></p> <pre><code>type en en2 date stat1 stat2 aa 40 80 1/1/2021 1 1 aa 20 20 1/1/2021 2 1 aa 10 10 1/1/2021 3 5 bb 10 10 1/1/2021 3 9 bb 50 5 1/1/2021 5 1 aa 90 5 1/7/2021 5 2 aa 100 10 1/7/2021 1 5 bb 80 10 1/7/2021 5 2 </code></pre> <p><strong>Desired</strong></p> <pre><code>type en en2 date stat1 stat2 aa 23 36 1/1/2021 2 3 bb 30 7.5 1/1/2021 4 5 aa 95 7.5 1/7/2021 3 3.5 bb 80 10 1/7/2021 5 2 </code></pre> <p><strong>Doing</strong></p> <pre><code>grouped = final.groupby(['date'],['type']) \ .agg({'en':'mean', 'en2':'mean','stat1':'mean','stat2':'mean'}) </code></pre> <p>I am getting a typeError. - Unhashable list I am researching. Any suggestion is appreciated.</p>
<pre><code>grouped = final[['date', 'type', 'en', 'en2','stat1','stat2']].groupby(['date', 'type'], as_index=False, dropna=False).mean() </code></pre>
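<p>For reference, a hedged sketch of the grouping the question seems to be after: pass a single list of keys to <code>groupby</code> and aggregate the value columns with <code>'mean'</code> (column names taken from the question):</p> <pre><code>grouped = final.groupby(['type', 'date'], as_index=False).agg(
    {'en': 'mean', 'en2': 'mean', 'stat1': 'mean', 'stat2': 'mean'})
</code></pre>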
python|pandas|numpy
1
9,032
43,006,542
How does a numpy function handle a logical if operator for the axis argument?
<p>I stumbled onto this on accident, but can't make sense of what is going on. I am doing a K-means clustering assignment with images and trying to vectorize the code to make it run as fast as possible. I came up with the following code:</p> <pre><code>image_values =np.array( [[[ 0.36302522 0.51708686 0.20952381] [ 0.46330538 0.69915968 0.2140056 ] [ 0.7904762 0.93837535 0.27002802] [ 0.78375351 0.89187676 0.24201682] [ 0.57871151 0.79775912 0.24593839] [ 0.2896359 0.39103645 0.64481789] [ 0.23809525 0.30924368 0.64257705]] [[ 0.36302522 0.51708686 0.20952381] [ 0.46330538 0.69915968 0.2140056 ] [ 0.7904762 0.93837535 0.27002802] [ 0.78375351 0.89187676 0.24201682] [ 0.57871151 0.79775912 0.24593839] [ 0.2896359 0.39103645 0.64481789] [ 0.23809525 0.30924368 0.64257705]] [[ 0.36302522 0.51708686 0.20952381] [ 0.46330538 0.69915968 0.2140056 ] [ 0.7904762 0.93837535 0.27002802] [ 0.78375351 0.89187676 0.24201682] [ 0.57871151 0.79775912 0.24593839] [ 0.2896359 0.39103645 0.64481789] [ 0.23809525 0.30924368 0.64257705]]]) means = np.array([[0.909,0.839,0.6509],[0.813,0.808,0.694],[0.331,0.407,0.597]]) #random centroids err = 1 while err &gt; .01: J = [np.sum((image_values-avg)**2, axis = 2) for avg in means] K = np.argmin(J, axis = 0) old_means = means means = np.array([np.mean(image_values[K==i], axis ==True) for i in range(len(means))]) print means err = abs(sum(old_means)-sum(means)) print err </code></pre> <p>In each new means calculation, I used my K array to select which pixel values should be included in each mean calculation but I couldn't get the axis to agree. I actually made a typo where instead of axis=3, I typed axis==3 and it worked! I tried a bunch of different numbers, and found out that it doesn't matter what the number is, the result is the same. I tried a bunch of numbers and Booleans with the equal operator they didn't work. I've gone through the documentation, but I couldn't figure it out. </p> <p>What does numpy do when it gets a logical if in the axis argument of one of its array functions?</p> <p>Thanks!</p>
<p>I am not entirely sure I fully understood what you're trying to do. Here's what I assume; You have one single image with RGB values and you would like to cluster the pixels within this image. Each centroid will thus define one value for each color channel respectively. I assume that each row in your <code>means</code> matrix is one centroid with the columns being the RGB values.</p> <p>In your approach, I think you might have a mistake in the way you are subtracting the centroids. You will need to create a distance matrix for each centroid (at the moment your not subtracting each color channel correctly).</p> <p>Here's one proposition. Please note that with given example data you will run into a <code>NaN</code> error since not all centroids have pixels that are closest to them. You also might need to adjust the stopping criterion to your needs.</p> <pre><code>err = 1 while err &gt; 0.1: # There are three centroids. We would like to compute the # distance for each pixel to each centroid. Here, the image # is thus replicated three times. dist = np.tile(image_values, (3,1,1,1)) # The 2D matrix needs to be reshaped to fit the dimensions of # the dist matrix. With the new shape, the matrix can directly # be subtracted. means2 = means.reshape(3,3,1,1) # Subtract each respective RGB value of the centroid for # each "replica" of the image J = np.power(dist - means2, 2) # Sum the r,g,b channels together to get the total distance for a pixel J = J.sum(axis=1) # Check for which cluster the pixel is closest K = np.argmin(J, axis=0) # I couldn't think of a better way than this loop newMeans = np.zeros((3,3)) for i in range(means.shape[0]): # do each centroid # In axis 1 there are pixels which we would like to # average for each color channel (axis 0 are the RGB channels) newMeans[i,:] = image_values[:,K==i].mean(axis=1) err = np.power(means - newMeans, 2).sum() means = newMeans </code></pre>
python|arrays|numpy
1
9,033
26,954,327
One liner for matrix-wise operations in python numpy (coming from MATLAB environment)
<p>Each column of a matrix should sum to 1. In MATLAB I would write for a matrix <code>mat</code> </p> <pre><code>&gt; mat = rand(5) mat = 0.2017 0.3976 0.0318 0.2750 0.2225 0.0242 0.1222 0.1369 0.2883 0.3395 0.0390 0.4260 0.2395 0.1462 0.2816 0.0351 0.1851 0.2292 0.2386 0.3376 0.1624 0.0157 0.2125 0.2813 0.2388 &gt; mat = mat ./ ( ones(5,1) * sum(mat) ) mat = 0.4363 0.3467 0.0374 0.2237 0.1567 0.0522 0.1066 0.1610 0.2345 0.2391 0.0844 0.3715 0.2819 0.1189 0.1983 0.0760 0.1614 0.2697 0.1941 0.2377 0.3511 0.0137 0.2500 0.2288 0.1682 </code></pre> <p>so that</p> <pre><code>&gt; sum(mat) ans = 1.0000 1.0000 1.0000 1.0000 1.0000 </code></pre> <p>I hope this is an appropriate question for this site. Thanks.</p>
<p>This operation can be written very concisely in numpy:</p> <pre><code>import numpy as np mat = np.random.rand(5, 5) mat /= mat.sum(0) mat.sum(0) # will be array([ 1., 1., 1., 1., 1.]) </code></pre>
python|matlab|python-2.7|numpy|matrix
2
9,034
25,332,065
numpy meshgrid of dynamic shape
<p>I am trying to use numpy meshgrid to generate some arrays. So, I have a nd array. Let us call it data and it can have an arbitrary shape and I am trying to generate some indices array as follows:</p> <pre><code>shape = data.shape x = np.meshgrid[1,x-1 for x in shape] </code></pre> <p>I know the syntax looks crazy but sometimes I try things like these in python and it works! Anyway, is there a way to do this dynamic meshgrid in python? This comes back with invalid syntax error:</p> <pre><code> x = np.meshgrid[1,x-1 for x in shape] ^ SyntaxError: invalid syntax </code></pre> <p>EDIT:</p> <p>I would like basically to create an array of indices. For example, I can do the following when the index always begins with 0</p> <pre><code>import numpy as np array = np.random.rand(5, 5, 5) shape = array.shape indices = np.indices(x-1 for x in shape) </code></pre> <p>This creates an ndarray with indices starting from 0 to (n-1) along each of the axes of my input array. Now, I wanted to have the indexing begin from 1 and could not find a good way to do this.</p> <p>EDIT:</p> <p>For example, a call for an array with shape (4, 5, 6) could be something like:</p> <pre><code> x = np.meshgrid(np.arange(1,4), np.arange(1,5), np.arange(1, 6)) </code></pre>
<p>Going off your last example, you can do something like this:</p> <pre><code>x = np.meshgrid(*[np.arange(1, x) for x in shape]) </code></pre> <p>You need to explicitly create a list of the values you want to pass to <code>meshgrid</code>. If you want each one to start at 1, you need to put the 1 in each call to <code>arange</code>. You can't do something like <code>[1, arange(x)]</code> and have it "distribute" the 1 through all the calls.</p> <p>Then the <code>*</code> there expands the list into separate arguments. (See <a href="https://stackoverflow.com/questions/36901/what-does-double-star-and-star-do-for-python-parameters">here</a> for info.)</p>
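<p>An alternative sketch: <code>np.indices</code> builds the whole index grid directly, and adding 1 shifts every axis to start at 1 (note it uses matrix-style 'ij' ordering, whereas <code>np.meshgrid</code> defaults to 'xy' ordering):</p> <pre><code>import numpy as np

shape = (4, 5, 6)
idx = np.indices(tuple(s - 1 for s in shape)) + 1  # values run 1..s-1 along each axis
print(idx.shape)  # (3, 3, 4, 5) -- one index array per axis
</code></pre>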
python|numpy
9
9,035
39,414,370
Is there a way to prevent pandas to_json from adding \?
<p>I am trying to send a pandas dataframe to_json and I am having some issues with the date. I am getting an addtional \ so that my records look like <code>Updated:09\/06\/2016 03:09:44</code>. Is it possible to not have this additional \ added? I am assuming that it is an escape character of some sort but I haven't been able to find any additional information regarding this.</p> <p>I have been adjusting the various parameters but I havent had any luck <code>df[0:10].to_json('splunkJsonFormat.txt', orient='records', date_format='ISO8601')</code></p> <p>Sample Data:</p> <pre><code>b_Updated, Updated:09/06/2016 03:09:44, Updated:06/29/2016 08:16:52, Updated:09/07/2016 07:54:37, </code></pre>
<p>The JSON output you obtained is indeed correct and is the right behavior.</p> <p>Allowing <code>\/</code> helps when embedding JSON in a <code>&lt;script&gt;</code> tag, which doesn't allow <code>&lt;/</code> inside strings. Hence, in JSON <code>/</code> and <code>\/</code> are equivalent.</p> <p>One workaround would be to separate the date from the string and convert it to a format more suitable where the datetime format is more evident.</p> <pre><code>df['b_Updated'] = df['b_Updated'].str.split(':', 1) \ .apply(lambda x: x[0] + ':' + str(pd.to_datetime(x[1]))) df.to_json(orient='records', date_format='iso') [{"b_Updated":"Updated:2016-09-06 03:09:44"}, {"b_Updated":"Updated:2016-06-29 08:16:52"}, {"b_Updated":"Updated:2016-09-07 07:54:37"}] </code></pre>
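<p>A hedged alternative, if changing the column is undesirable: leave the data untouched and simply undo the (legal, but unwanted) escaping in the JSON text after the fact:</p> <pre><code>json_str = df[0:10].to_json(orient='records', date_format='iso')
with open('splunkJsonFormat.txt', 'w') as f:
    f.write(json_str.replace('\\/', '/'))  # '\/' and '/' are equivalent in JSON
</code></pre>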
python|json|pandas|to-json
5
9,036
39,262,088
How to fetch a batch of data using tf.train.batch() according to some conditions?
<p>I have a function 'read_and_decode_Train' which read and decode one single image and label from a TFRecords dataset. Then I use the tf.train.batch() function to serialized a BATCH_SIZE of images and labels to the images_batch and labels_batch. The code is as bellows:</p> <pre><code>image, label = read_and_decode_Train(tfRecordsName) images_batch, labels_batch = tf.train.batch([image, label], batch_size=BATCH_SIZE, num_threads=8, capacity=2000) </code></pre> <p>Now, I want to divide the TFRecords dataset into three subsets of training, validation and testing datasets according to some conditions, for example, if I have a csv file which rows are corresponding to the images and labels of the TFRecords dataset, then I divide the dataset according to the csv file. I modify my program to add a condition, which is as bellowing:</p> <pre><code>COUNT_TRAIN = -1 def read_and_decode_Train(filename, csvLines, valNo, testNo): '''read and decode one single image and label from the TFRecords dataset. ''' global COUNT_TRAIN filename_queue = tf.train.string_input_producer([filename], num_epochs=None) reader = tf.TFRecordReader() _, serialized_example = reader.read(filename_queue) while True: features = tf.parse_single_example( serialized_example, features={ 'label': tf.FixedLenFeature([], tf.int64), 'img_raw': tf.FixedLenFeature([], tf.string) }) image = tf.decode_raw(features['img_raw'], tf.float32) image = tf.reshape(image, [64, 64, 1]) label = features['label'] COUNT_TRAIN += 1 if csvLines[COUNT_TRAIN][1] != valNo and csvLines[COUNT_TRAIN][1] != testNo: break return image, label image, label = read_and_decode_Train(tfRecordsName, csvLines, valNo, testNo) images_batch, labels_batch = tf.train.batch([image, label], batch_size=BATCH_SIZE, num_threads=8, capacity=2000) </code></pre> <p>However, the tf.train.batch() function seems to read the data consequently as before. So, in my situation, how to fetch data from TFRecords according to some conditions instead of reading data consequently? Thank you for your kind suggestion and advice.</p>
<p>It might be easier for you to instead split your data into 3 separate <code>TFRecords</code> files; one for training, one for validation, and one for testing.</p>
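<p>A rough sketch of that idea, assuming a TF 1.x-style API and a hypothetical input file name; the 80/10/10 routing rule below is a placeholder for your own csv-based condition:</p> <pre><code>import tensorflow as tf

# open one writer per split
writers = {name: tf.python_io.TFRecordWriter('%s.tfrecords' % name)
           for name in ('train', 'val', 'test')}
for i, record in enumerate(tf.python_io.tf_record_iterator('all_data.tfrecords')):
    split = 'train' if i % 10 &lt; 8 else ('val' if i % 10 == 8 else 'test')
    writers[split].write(record)  # records are already-serialized bytes
for w in writers.values():
    w.close()
</code></pre>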
tensorflow
0
9,037
39,200,444
deduplicate records in multiple CSV files with varying columns
<p>I have multiple CSV files in a directory. Some contain more columns (which would be OK to drop).</p> <p>Is there an elegant way to deduplicate records between these CSV files and reduce columns to a common set of columns?</p> <p>Currently, I will use python / pandas to accomplish this. I will load all the files into a single data frame, note in an additional column where the records originated from (filename), drop the additional columns and finally have pandas deduplicate via <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.duplicated.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.duplicated.html</a> In the last step, I write the deduplicated files back to disk based on the filename-identifier column.</p> <pre><code># ASSUMPTION: files are in order, first file defines minimum common columns path = '.' files_in_dir = [f for f in os.listdir(path)if f.endswith('csv')] isFirst = True for filenames in fs.find('*.csv', path): df = pd.read_csv(filenames, error_bad_lines=False) df['origin'] = fs.add_suffix(filenames, '_deduplicated') if (isFirst): isFirst = False bigDf = df else: bigDf = pd.concat(bigDf, df, axis=0, join='inner') cols_for_dup = [col for col in bigDf.columns if col not in ['origin']] bigDf.duplicated(subset=cols_for_dup).sum() bigDf.duplicated().sum() bigDf_withoutNA = bigDf.drop_duplicates(keep='first', subset= cols_for_dup) grouped = bigDf_withoutNA.groupby('origin') for name, group in grouped: #filename = 'path' + name group.to_csv(path_or_buf= name, sep=';', decimal=',') </code></pre> <p>Is there a simpler approach to this?</p>
<p>I do not known how to make it simpler. I have an equal script done for some data of mine. It just runs twice, first to determine the min / max cols in all documents and finally to rewrite the csv files in an new folder, to keep the original data.</p> <p>I am just using the csv lib from python. <a href="https://docs.python.org/2/library/csv.html" rel="nofollow">https://docs.python.org/2/library/csv.html</a></p> <p>There are no checks in this script, as it's just a quick and dirty script.</p> <p>The deduplication is not done. It just cuts all data to the same length, but you can replace the last line with your deduplication code.</p> <pre><code>import os import csv mincols = 0xffffffff maxcols = 0 srcdir = '/tmp/csv/' dstdir = '/tmp/csv2/' for dirName, subdirList, fileList in os.walk(srcdir): for fname in fileList: if fname[-4:].lower() == '.csv': with open(os.path.join(dirName, fname)) as csvfile: reader = csv.reader(csvfile, delimiter=',', quotechar='"') for row in reader: if mincols &gt; len(row): mincols = len(row) if maxcols &lt; len(row): maxcols = len(row) print(mincols, maxcols) for dirName, subdirList, fileList in os.walk(srcdir): for fname in fileList: if fname[-4:].lower() == '.csv': fullpath = os.path.join(dirName, fname) newfile = os.path.join(dstdir, fullpath[len(srcdir):]) if not os.path.exists(os.path.dirname(newfile)): os.makedirs(os.path.dirname(newfile)) with open(fullpath) as csvfile: reader = csv.reader(csvfile, delimiter=',', quotechar='"') with open(newfile, 'w') as dstfile: writer = csv.writer(dstfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL) for row in reader: #You can deduplicate here writer.writerow(row[:mincols]) </code></pre>
python|csv|pandas|duplicates
1
9,038
39,147,492
Annotate seaborn Factorplot
<p>I would like to visualize 2 boolean informations stored as columns in one seaborn FactorPlot.</p> <p>Here is my df :</p> <p><a href="https://i.stack.imgur.com/KTG5j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KTG5j.png" alt="enter image description here"></a></p> <p>I would like to visualize both of <code>actual_group</code> and <code>adviced_group</code> in the same FactorPlot.</p> <p>For now I am only able to plot the <code>adviced_groups</code> using the <code>hue</code> parameter :</p> <p><a href="https://i.stack.imgur.com/5k3kb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5k3kb.png" alt="enter image description here"></a></p> <p>with the code below :</p> <pre><code> _ = sns.factorplot(x='groups', y='nb_opportunities', hue='adviced_groups', size=6, kind='bar', data=df) </code></pre> <p>I tried to use the <code>ax.annotate()</code> from matplotlib without any success, because - for what I understood - Axes are not handled by the <code>sns.FactorPlot()</code> method.</p> <p>It could be an annotation, colorize one of the rectangle's edge or anything that could help visualize the actual group.</p> <p>The result could be for instance something like this : </p> <p><a href="https://i.stack.imgur.com/UZdSv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UZdSv.png" alt="enter image description here"></a></p>
<p>You could make use of <a href="http://matplotlib.org/users/annotations_intro.html" rel="noreferrer"><code>plt.annotate</code></a> method provided by <code>matplotlib</code> to make annotations for the <code>factorplot</code> as shown:</p> <p><strong>Setup:</strong> </p> <pre><code>df = pd.DataFrame({'groups':['A', 'B', 'C', 'D'], 'nb_opportunities':[674, 140, 114, 99], 'actual_group':[False, False, True, False], 'adviced_group':[False, True, True, True]}) print (df) actual_group adviced_group groups nb_opportunities 0 False False A 674 1 False True B 140 2 True True C 114 3 False True D 99 </code></pre> <p><strong>Data Operations:</strong></p> <p>Choosing the subset of <code>df</code> where the values of <code>actual_group</code> are True. The <code>index</code> value and the <code>nb_opportunities</code> value become the arguments for x and y that become the location of the annotation.</p> <pre><code>actual_group = df.loc[df['actual_group']==True] x = actual_group.index.tolist()[0] y = actual_group['nb_opportunities'].values[0] </code></pre> <p><strong>Plotting:</strong> </p> <pre><code>sns.factorplot(x="groups", y="nb_opportunities", hue="adviced_group", kind='bar', data=df, size=4, aspect=2) </code></pre> <p>Adding some padding to the location of the annotation as well as the location of text to account for the width of the bars being plotted.</p> <pre><code>plt.annotate('actual group', xy=(x+0.2,y), xytext=(x+0.3, 300), arrowprops=dict(facecolor='black', shrink=0.05, headwidth=20, width=7)) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/GZHtJ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/GZHtJ.png" alt="Image"></a></p>
python|pandas|matplotlib|seaborn
9
9,039
39,220,914
tensorflow.train.string_producer return nothing
<p>I'm trying to train on my own image data using the cifar-10 CNN model; the debug information is below. <a href="https://i.stack.imgur.com/OcuAE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OcuAE.png" alt="tensorflow-string-producer"></a></p> <p>The debug location is:</p> <pre><code># Create a queue that produces the filenames to read. filename_queue = tf.train.string_input_producer(filenames) # Read examples from files in the filename queue. read_input = read_cifar10(filename_queue) reshaped_image = tf.cast(read_input.uint8image, tf.float32)</code></pre> <p>So my question is: why is there nothing in filename_queue? Thanks!</p>
<p>When debugging the cifar-10 source code, I found there is nothing in its queue either. It seems that the queue's contents are filled in later.</p>
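<p>A likely related point: in the TF 1.x queue-based pipeline, <code>tf.train.string_input_producer</code> only enqueues filenames once the queue runners are started in a session, so the queue looks empty while you are still debugging graph construction. A hedged sketch:</p> <pre><code>import tensorflow as tf

sess = tf.Session()
sess.run(tf.global_variables_initializer())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
# ... run training ops here; the filename queue is now being filled ...
coord.request_stop()
coord.join(threads)
</code></pre>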
string|input|tensorflow|producer
0
9,040
29,102,173
Array indexing inconsistent between script and console Python 2.7?
<p>I am setting up a 3D box, 2x2x3 in this example. I have used a list comprehension to set up the coordinates of the centroids of each 1x1x1 cell in the box (I am using Python 2.7). I then convert that list structure into a numpy array and want to randomly half fill the box.</p> <pre><code>import numpy as np tube = [] xy = 2 for z in range(3): tube.append([(x+0.5, y+0.5, z+0.5, 0) for x in range(xy) for y in range(xy)]) centroid = np.array(tube) exitFlag = False for numParticle in range(6): while exitFlag == False: zCoord = np.random.randint(3, size=1) xyCoord = np.random.randint((xy*xy), size=1) print zCoord, xyCoord if centroid[zCoord][xyCoord][3] == 0.0: centroid[zCoord][xyCoord][3] = 1.0 exitFlag = True print centroid </code></pre> <p>I am happy with the shape of the array but I get the following error,</p> <pre><code> if centroid[zCoord][xyCoord][3] == 0.0: IndexError: index 3 is out of bounds for axis 0 with size 1 </code></pre> <p>Now if I query the exact same entry in the console I get no index error.</p> <p>For example if the randint calls return 0 and 3 respectively, I get the above error. </p> <p>Yet when I check enter <code>centroid[0][3][3]</code> into the console I get <code>0.0</code>, which is what I expect.</p> <p>Why is the script swapping my indices?</p>
<p><code>centroid[zCoord][xyCoord][3]</code> is not a good way to index an array. Sometimes it works, but it isn't the intended mechanism.</p> <p>One set of brackets gives more control:</p> <pre><code> if centroid[zCoord, xyCoord, 3] == 0: centroid[zCoord, xyCoord, 3] = 1 exitFlag=True </code></pre> <p>I added </p> <pre><code> print centroid[zCoord].shape print centroid[zCoord][xyCoord] </code></pre> <p>and got <code>(1, 4, 4)</code> followed by the error message. <code>ZCoord</code> as you generate it isn't a scalar, but an array of shape <code>(1,)</code>. So indexing with it leaves that initial <code>1</code> dimension. The 2nd bracket then tries to index on that dimension, rather than the 2nd that you expect. The reason your console tests work is that you using scalars, not arrays.</p>
python|arrays|list|python-2.7|numpy
2
9,041
29,243,323
How do I write a python pandas.DataFrame to a file using aligned space characters?
<p>I want to store a pandas.DataFrame to a text file that has the columns aligned using whitespace characters. If this is my sample DataFrame:</p> <pre><code>In [1]: import numpy as np In [2]: import pandas as pd In [3]: df = pd.DataFrame(np.linspace(0,1,9).reshape(3,3)) In [4]: df Out[4]: 0 1 2 0 0.000 0.125 0.250 1 0.375 0.500 0.625 2 0.750 0.875 1.000 [3 rows x 3 columns] </code></pre> <p>I want to do something like this: </p> <pre><code>In [5]: df.to_csv('test.txt', sep='?') </code></pre> <p>to get this:</p> <pre><code>In [6]: more test.txt 0 1 2 0 0.0 0.125 0.25 1 0.375 0.5 0.625 2 0.75 0.875 1.0 </code></pre> <p>What separator should I use? I want to know if there is a way to do this without using the <code>\t</code> character. It looks nice </p> <pre><code> 0 1 2 0 0.0 0.125 0.25 1 0.375 0.5 0.625 2 0.75 0.875 1.0 </code></pre> <p>but then my text files have tab characters which create other problems.</p> <p>If I use <code>sep=' '</code> I get this which is obviously wrong.</p> <pre><code> 0 1 2 0 0.0 0.125 0.25 1 0.375 0.5 0.625 2 0.75 0.875 1.0 </code></pre> <p>I know pandas can read in files like this so I figure there is a way to write out files like this.</p>
<p>How about this</p> <pre><code>import numpy as np import pandas as pd import csv df = pd.DataFrame(np.linspace(0,1,9).reshape(3,3)) df.to_csv('test.txt', float_format='%10.3f', sep=" ", quoting=csv.QUOTE_NONE, escapechar=" ") </code></pre> <p>It produces:</p> <pre><code> 0 1 2 0 0.000 0.125 0.250 1 0.375 0.500 0.625 2 0.750 0.875 1.000 </code></pre> <p>Number of spaces can be ofc customized by the number of digits of the 'longest' number.</p>
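<p>A hedged alternative: <code>DataFrame.to_string</code> already pads columns with spaces, so you can write its output directly:</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame(np.linspace(0, 1, 9).reshape(3, 3))
with open('test.txt', 'w') as f:
    f.write(df.to_string(float_format=lambda x: '%10.3f' % x))
</code></pre>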
python|pandas
4
9,042
33,959,124
Plot pandas dataframe with subplots (subplots=True): Place legend and use tight layout
<p>I really like pandas to handle and analyze big datasets. So far, I have mostly used matplotlib for plotting but now want to use pandas own plot functionalities (based on matplotlib) since it needs less code and seems to be sufficient for me in most cases. Especially the subplots to have a guick glance at big dataframes like in the following example..</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt # Generate random data df = pd.DataFrame(np.random.randn(96,12), columns=['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L']) # Plotting df.plot(kind='line', subplots=True, grid=True, title=&quot;Sample Data (Unit)&quot;, layout=(4, 3), sharex=True, sharey=False, legend=True, style=['r', 'r', 'r', 'g', 'g', 'g', 'b', 'b', 'b', 'r', 'r', 'r'], xticks=np.arange(0, len(df), 16)) </code></pre> <p><a href="https://i.stack.imgur.com/B4gBn.png" rel="noreferrer"><img src="https://i.stack.imgur.com/B4gBn.png" alt="enter image description here" /></a></p> <p>..which brings me to my questions:</p> <p>1.) How can I place all legends in the subplots at the same place (e. g. centered, outside, topright)?</p> <p>2.) Can I somehow use matplotlibs &quot;Tight Layout&quot; (<a href="http://matplotlib.org/users/tight_layout_guide.html" rel="noreferrer">http://matplotlib.org/users/tight_layout_guide.html</a>) for the plot?</p> <p>Thanks in advance!</p>
<ol> <li><p>You can have all the legends in the same place, but you would have to create them in a separate step.</p> <pre><code># Plotting df.plot(kind='line', subplots=True, grid=True, title=&quot;Sample Data (Unit)&quot;, layout=(4, 3), sharex=True, sharey=False, legend=False, style=['r', 'r', 'r', 'g', 'g', 'g', 'b', 'b', 'b', 'r', 'r', 'r'], xticks=np.arange(0, len(df), 16)) for ax in plt.gcf().axes: ax.legend(loc=1) </code></pre> </li> <li><p>Sure. just use <code>plt.tight_layout()</code> before you <code>show</code> or <code>savefig</code>. Compare the two examples below created with and without <code>tight_layout</code>.</p> </li> </ol> <p>Without <code>tight_layout()</code>:</p> <p><a href="https://i.stack.imgur.com/RcCX4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/RcCX4.png" alt="enter image description here" /></a></p> <p>With <code>tight_layout()</code>:</p> <p><a href="https://i.stack.imgur.com/IDRb1.png" rel="noreferrer"><img src="https://i.stack.imgur.com/IDRb1.png" alt="enter image description here" /></a></p>
python|pandas|matplotlib
24
9,043
22,519,885
Unexpected behaviour of np.sum which acts as np.ravel
<p>I have a list of list of numbers and I want to sum all the numbers (regardless the list of lists). This should be a piece of cake for np.sum In fact if we have</p> <pre><code>a=[[1,2],[3,4]] np.sum(a) </code></pre> <p>returns 10</p> <p>By the way if we have</p> <pre><code>a=[[1,2],[3,4,5]] np.sum(a) </code></pre> <p>returns</p> <pre><code>[1,2,3,4,5] </code></pre> <p>It seems quite weird to me...</p>
<p>So I would hazard to guess that the answer here is pretty simple.</p> <p><code>np.sum</code> will evaluate the two lists and realise that it can't store their values in a normal array. It will therefore make an object array:</p> <pre><code>In [99]: x = [[1,2],[3,4,5]] In [100]: np.array(x) Out[100]: array([[1, 2], [3, 4, 5]], dtype=object) </code></pre> <p>When it comes to sum the elements of the array it will use the objects <code>__add__</code> operator.</p> <p>The addition of the two objects is:</p> <pre><code>In [103]: [1,2] + [3,4,5] Out[103]: [1, 2, 3, 4, 5] </code></pre> <p>Therefore:</p> <pre><code>In [104]: np.sum([[1,2],[3,4,5]]) Out[104]: [1, 2, 3, 4, 5] </code></pre>
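<p>A small sketch: for ragged (unequal-length) lists, summing in plain Python or flattening first sidesteps the object-array addition behaviour:</p> <pre><code>import numpy as np

a = [[1, 2], [3, 4, 5]]
total = sum(sum(row) for row in a)    # 15
total_np = np.sum(np.concatenate(a))  # 15 as well
</code></pre>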
python|numpy|sum
4
9,044
22,565,463
How do I 'force' python to use a specific version of a module?
<p>I'm new to python so I apologize if this has been answered elsewhere with tags I haven't thought of.</p> <p>I'm trying to update numpy from the 1.6 version I have now to 1.8. I've installed numpy in my python site-packages when I call numpy it calls the old 1.6 version. I've tried looking for the root to numpy 1.6 so I can remove it but that leads to :-</p> <pre><code>import numpy print numpy.__version__ print numpy.__file__ &gt;&gt;&gt; 1.6.2 V:\Brian.140\Python.2.7.3\lib\site-packages\numpy\__init__.pyc </code></pre> <p>I've added the folder containing the module to the system path using:-</p> <pre><code>sys.path.append('C:/Python27/Lib/site-packages') </code></pre> <p>and I know this works as I can call other modules in this location with no errors, for example:-</p> <pre><code>import wx import Bio </code></pre> <p>and</p> <pre><code>import nose </code></pre> <p>produce no errors. Why is this happening and how can I tell python which version of numpy to use?</p>
<p>You can also insert the directory at the beginning of the path, so you won't need to remove the old one:</p> <pre><code>sys.path.insert(1, 'C:/Python27/Lib/site-packages') </code></pre> <p>That won't work if you've already imported your module. You can either import it after the sys.path.insert call, or use importlib.reload(<em>module_name</em>).</p>
python|python-2.7|numpy|module|updating
4
9,045
13,735,096
python vs octave random generator
<p>More specifically, numpy:</p> <pre><code>In [24]: a=np.random.RandomState(4) In [25]: a.rand() Out[25]: 0.9670298390136767 In [26]: a.get_state() Out[26]: ('MT19937', array([1248735455, ..., 1532921051], dtype=uint32), 2,0,0.0) </code></pre> <p>octave:</p> <pre><code>octave:17&gt; rand('state',4) octave:18&gt; rand() ans = 0.23605 octave:19&gt; rand('seed',4) octave:20&gt; rand() ans = 0.12852 </code></pre> <p>Octave claims to perform the same algorithm (Mersenne Twister with a period of 2^{19937-1})</p> <p>Anybody know why the difference?</p>
<p>Unfortunately the MT19937 generator in Octave does not allow you to initialise it using a single 32-bit integer as <code>np.random.RandomState(4)</code> does. If you use <code>rand("seed",4)</code> this actually switches to an earlier version of the PRNG used previously in Octave, which PRNG is not MT19937 at all, but rather the Fortran <code>RANDLIB</code>.</p> <p>It is possible to get the same numbers in NumPy and Octave, but you have to hack around the random seed generation algorithm in Octave and write your own function to construct the state vector out of the initial 32-bit integer seed. I am not an Octave guru, but with several Internet searches on bit manipulation functions and integer classes in Octave/Matlab I was able to write the following crude script to implement the seeding:</p> <pre><code>function state = mtstate(seed) state = uint32(zeros(625,1)); state(1) = uint32(seed); for i=1:623, tmp = uint64(1812433253)*uint64(bitxor(state(i),bitshift(state(i),-30)))+i; state(i+1) = uint32(bitand(tmp,uint64(intmax('uint32')))); end state(625) = 1; </code></pre> <p>Use it like this:</p> <pre><code>octave:9&gt; rand('state',mtstate(4)); octave:10&gt; rand(1,5) ans = 0.96703 0.54723 0.97268 0.71482 0.69773 </code></pre> <p>Just for comparison with NumPy:</p> <pre><code>&gt;&gt;&gt; a = numpy.random.RandomState(4) &gt;&gt;&gt; a.rand(5) array([ 0.96702984, 0.54723225, 0.97268436, 0.71481599, 0.69772882]) </code></pre> <p>The numbers (or at least the first five of them) match.</p> <p>Note that the default random number generator in Python, provided by the <code>random</code> module, is also MT19937, but it uses a different seeding algorithm, so <code>random.seed(4)</code> produces a completely different state vector and hence the PRN sequence is then different.</p>
python|random|numpy|octave
10
9,046
62,201,219
Unable to slice year from date column using negative indexing with pandas
<p>I have a simple data set, where we have a Dates column from which I want to extract the year. I am using the negative index to get the year </p> <p>d0['Year'] = d0['Dates'].apply(lambda x: x[-1:-5]) </p> <p>This normally works, however, not on this. A blank column is created. I sampled the column for some of the data and saw no odd characters present. I have tried the following variations</p> <p>d0['Year'] = d0['Dates'].apply(lambda x: str(x)[-1:-5]) # column is created and it is blank. </p> <p>d0['Year'] = d0.Dates.str.extract('\d{4}') # gives an error "ValueError: pattern contains no capture groups"</p> <p>d0['Year'] = d0['Dates'].apply(lambda x: str(x).replace('[^a-zA-Z0-9_-]','a')[-1:-5]) # same - gives a blank column</p> <p>Really not sure what other options I have and where is the issue. What possibly can be the issue?</p> <p>Below is a sample dump of the data I have</p> <p>Outbreak,Dates,Region,Tornadoes,Fatalities,Notes 2000 Southwest Georgia tornado outbreak,"February 13–14, 2000",Georgia,17,18,"Produced a series of strong and deadly tornadoes that struck areas in and around Camilla, Meigs, and Omega, Georgia. Weaker tornadoes impacted other states." 2000 Fort Worth tornado,"March 28, 2000",U.S. South,10,2,"Small outbreak produced an F3 that hit downtown Fort Worth, Texas, severely damaging skyscrapers and killing two. Another F3 caused major damage in Arlington and Grand Prairie." 2000 Easter Sunday tornado outbreak,"April 23, 2000","Oklahoma, Texas, Louisiana, Arkansas",33,0, "2000 Brady, Nebraska tornado","May 17, 2000",Nebraska,1,0,"Highly photographed F3 passed near Brady, Nebraska." 2000 Granite Falls tornado,"July 25, 2000","Granite Falls, Minnesota",1,1,"F4 struck Granite Falls, causing major damage and killing one person."</p>
<p>To extract the year from the "Dates" column as an <strong>object</strong> (string) type, use</p> <pre><code>da['Year'] = da['Dates'].apply(lambda x: x[-4:]) </code></pre> <p>If you want to use it as an <strong>int</strong>, you can then convert the result of the step above:</p> <pre><code>da['Year'] = pd.to_numeric(da['Year']) </code></pre>
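<p>A hedged alternative that is more robust to varying date formats (frame and column names taken from the question):</p> <pre><code>import pandas as pd

d0['Year'] = pd.to_datetime(d0['Dates'], errors='coerce').dt.year
</code></pre>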
pandas|datetime|slice
0
9,047
62,292,872
AttributeError: 'str' object has no attribute 'weekday'
<p>I tried running linear regressions in Jupyter and it is throwing me a strange "AttributeError: 'str' object has no attribute 'weekday'" error. Any ideas?</p> <pre><code>for df in [lr_train, lr_test]: df['day_of_week'] = df.index.weekday df['is_weekend'] = df.index.map(lambda x: 1 if x.weekday() &gt; 4 else 0) df['hour_of_day'] = df.index.hour df['time_since_jan'] = df.index.map(lambda x: time_since_start(x)) </code></pre>
<p>This means that the object you're attempting to get the <code>.weekday</code> attribute from does not have that attribute. Given the code, it appears that the values in <code>df.index</code> are strings rather than <code>datetime.datetime()</code> objects, so calling <code>x.weekday()</code> on them fails.</p> <p>If you want to see why this is happening, try the following code in a Python terminal.</p> <pre class="lang-py prettyprint-override"><code>from datetime import datetime current = datetime.now() current_string = current.isoformat() print(current.weekday) print(current_string) print(current_string.weekday) </code></pre> <p>The last line will throw the same exception you're seeing.</p>
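<p>A hedged sketch of a fix: convert the index to datetimes before the loop (frame names taken from the question):</p> <pre><code>import pandas as pd

for df in [lr_train, lr_test]:
    df.index = pd.to_datetime(df.index)
    df['day_of_week'] = df.index.weekday
    df['is_weekend'] = (df.index.weekday &gt; 4).astype(int)
    df['hour_of_day'] = df.index.hour
</code></pre>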
python|pandas|linear-regression
-1
9,048
62,379,297
Pandas: Most efficient way of change a value in one of the column based on 2 other columns
<p>What will be the best way to perform operations on a column in Pandas DF based on 2 other columns. One of the columns has the value, while other column has the name of column to fill in data.</p> <pre><code>value B C1 C2 C3 C4 C5 1 C2 0 0 0 0 0 5 C3 0 0 0 0 0 3 C5 0 0 0 0 0 </code></pre> <p>Column <code>value</code> has the value and column <code>B</code> has the details of column to fill in. Hence, the result should look like:</p> <pre><code>value B C1 C2 C3 C4 C5 1 C2 0 1 0 0 0 5 C3 0 0 5 0 0 3 C5 0 0 0 0 3 </code></pre> <p>Any comments on the most efficient way of doing this, or is apply my best friend here?</p>
<p>We do <code>pivot</code> then <code>update</code> </p> <pre><code>df.update(df.pivot(columns='B',values='value')) df value B C1 C2 C3 C4 C5 0 1 C2 0 1.0 0.0 0 0.0 1 5 C3 0 0.0 5.0 0 0.0 2 3 C5 0 0.0 0.0 0 3.0 </code></pre>
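<p>Another possible sketch, assuming (as in the example) that the target columns start out as zeros: build a one-hot mask from B and scale it by value:</p> <pre><code>upd = pd.get_dummies(df['B']).mul(df['value'], axis=0)
df[upd.columns] = upd
</code></pre>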
python|pandas
7
9,049
62,429,594
Pandas: If column value is empty then insert value from another column in the same row
<p>I have the following scenario where I need to fill my empty column value with another column value.</p> <p>my.csv</p> <pre><code>country newCountry France Argentina Uruguay Germany Ireland </code></pre> <p>desired output:</p> <pre><code>country newCountry France Argentina Uruguay Uruguay Germany Ireland </code></pre> <p>my code:</p> <pre><code> df.loc[df['newCountry'] == '', 'newCountry'] = df['country'] </code></pre> <p>it doesn't throw any error, but after running this the row value remains empty instead of show Uruguay in the newCountry column.</p> <p>Could someone help with this please?</p>
<p>If possible mutiple spaces use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.strip.html" rel="nofollow noreferrer"><code>Series.str.strip</code></a>:</p> <pre><code>df.loc[df['newCountry'].str.strip() == '', 'newCountry'] = df['country'] </code></pre> <p>Or if there are missing values instead empty space use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.fillna.html" rel="nofollow noreferrer"><code>Series.fillna</code></a>:</p> <pre><code>df['newCountry'] = df['newCountry'].fillna(df['country']) </code></pre>
python|pandas|csv
2
9,050
62,213,117
Pandas CSV delimiter, special character and insert inside a row question
<p>Sorry for asking 2 times but my teacher said I can use pandas now. Finish output</p> <p><a href="https://github.com/emanuelOchoa/csv" rel="nofollow noreferrer">files</a></p> <p><a href="https://i.stack.imgur.com/ID7cK.png" rel="nofollow noreferrer">What they want (pic of the solution)</a></p> <p><a href="https://i.stack.imgur.com/AkxPA.png" rel="nofollow noreferrer">What I have</a></p> <pre><code>import pandas as pd df = pd.read_csv('C:/Programacion/1.csv',sep=";", encoding='latin1') new_column = pd.DataFrame({'Movimientos': ['S', 'E', 'S']}) df = df.merge(new_column, left_index = True, right_index = True) df.to_csv('C:/Programacion/aaaaaa.csv', index = False) </code></pre> <ul> <li>I have problems with the special characters like º</li> <li>Some columns and rows are together, but I have the delimiter ';' so I don't know why this is happening</li> <li>I'm trying to insert the values 10:00 and 10:15 inside &quot;Horas&quot; every day, like the pic of the solution</li> </ul>
<p>At least one of the files you uploaded to github has an encoding issue, but nevertheless - here's a way to read it: </p> <pre><code>import requests import pandas as pd from io import StringIO url = "https://raw.githubusercontent.com/emanuelOchoa/csv/master/source.csv" res = requests.get(url) content = res.content.decode('utf-8','ignore') pd.read_csv(StringIO(content), sep = ";") </code></pre> <p>The result: </p> <p><a href="https://i.stack.imgur.com/ZA2ue.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZA2ue.png" alt="enter image description here"></a></p>
python|pandas|csv
0
9,051
51,321,354
Find the maximum of count in a grouped dataframe
<p>I have a dataframe that consists of football data with columns such as player name, club, nationality and rating. I have applied the <code>groupby</code> function to group the data by club and nationality and have calculated the count, min, max and mean. <a href="https://i.stack.imgur.com/frp31.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/frp31.png" alt="five column table"></a></p> <p>Now, I need to display the clubs and nation with the maximum count for that club. For example, Hoffenheim has 10 German nationals and that is the maximum for the club. How can I do that?</p>
<p>I think need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> by first level of <code>MultiIndex</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.idxmax.html" rel="nofollow noreferrer"><code>idxmax</code></a> and then select rows by <code>loc</code>:</p> <pre><code>df = pd.DataFrame({'club':list('AABBCC'), 'min':[4,5,4,5,5,4], 'mean':[7,8,9,4,2,3], 'max':[1,3,5,7,1,0], 'count':[5,3,6,9,2,4], 'nationality':list('aaabbb')}).set_index(['club','nationality']) print (df) min mean max count club nationality A a 4 7 1 5 a 5 8 3 3 B a 4 9 5 6 b 5 4 7 9 C b 5 2 1 2 b 4 3 0 4 df = df.loc[df.groupby(level=1)['count'].idxmax()] print (df) min mean max count club nationality B a 4 9 5 6 b 5 4 7 9 </code></pre> <p><strong>Detail</strong>:</p> <pre><code>print (df.groupby(level=1)['count'].idxmax()) nationality a (B, a) b (B, b) Name: count, dtype: object </code></pre>
python|pandas|data-science
1
9,052
48,428,193
shifting pandas series for only some entries
<p>I've got a dataframe that has a <code>Time</code> Series (made up of strings) with some missing information:</p> <pre><code># Generate a toy dataframe: import pandas as pd data = {'Time': ['0'+str(i)+':15:45' for i in range(10)]} data['Time'][4] = 'unknown' data['Time'][8] = 'unknown' df = pd.DataFrame(data) # df Time 0 00:15:45 1 01:15:45 2 02:15:45 3 03:15:45 4 unknown 5 05:15:45 6 06:15:45 7 07:15:45 8 unknown 9 09:15:45 </code></pre> <p>I would like the <code>unknown</code> entries to match the entry above, resulting in this dataframe:</p> <pre><code># desired_df Time 0 00:15:45 1 01:15:45 2 02:15:45 3 03:15:45 4 03:15:45 5 05:15:45 6 06:15:45 7 07:15:45 8 07:15:45 9 09:15:45 </code></pre> <p>What is the best way to achieve this?</p>
<p>If you're intent on working with time series data, I would recommend converting the column to datetimes and then forward filling the blanks:</p> <pre><code>import pandas as pd data = {'Time': ['0'+str(i)+':15:45' for i in range(10)]} data['Time'][4] = 'unknown' data['Time'][8] = 'unknown' df = pd.DataFrame(data) df.Time = pd.to_datetime(df.Time, errors = 'coerce') df = df.fillna(method='ffill') </code></pre> <p>However, if you are getting this data from a <code>csv</code> file or anything else where you use a <code>pandas.read_*</code> function, you should use the <code>na_values</code> argument in those functions to specify <code>unknown</code> as a NA value:</p> <pre><code>df = pd.read_csv('example.csv', na_values = 'unknown') df = df.fillna(method='ffill') </code></pre> <p>You can also pass a list instead of the string, and the words passed are added to the already existing list of NA values.</p> <p>However, if you want to keep the column a string, I would recommend just doing a find and replace:</p> <pre><code>import numpy as np df.Time = np.where(df.Time == 'unknown', df.Time.shift(), df.Time) </code></pre>
python|pandas|dataframe
1
9,053
48,283,609
how to use pyknackhq python library for getting whole objects/tables from my knack builder
<p>I am trying to connect <a href="https://builder.knack.com/" rel="nofollow noreferrer">knack</a> online database with my python data handling scripts in order to renew objects/tables directly into my knack app builder. I discovered <a href="https://pypi.python.org/pypi/pyknackhq" rel="nofollow noreferrer">pyknackhq</a> Python API for KnackHQ can fetch objects and return json objects for the object's records. So far so good.</p> <p>However, following the documentation (<a href="http://www.wbh-doc.com.s3.amazonaws.com/pyknackhq/quick%20start.html" rel="nofollow noreferrer">http://www.wbh-doc.com.s3.amazonaws.com/pyknackhq/quick%20start.html</a>) I have tried to fetch all rows (records in knack) for my object-table (having in total 344 records). My code was:</p> <pre><code>i =0 for rec in undec_obj.find(): print(rec) i=i+1 print(i) &gt;&gt; 25 </code></pre> <p>All first 25 records were returned indeed, however the rest until the 344-th were never returned. The documentation of pyknackhq library is relatively small so I couldn't find a way around my problem there. Is there a solution to get all my records/rows? (I have also changed the specification in knack to have all my records appear in the same page - page 1). </p> <p>The ultimate goal is to take all records and make them a pandas dataframe. thank you!</p>
<p>I haven't worked with that library, but I've written another python Knack API wrapper that should help: <a href="https://github.com/cityofaustin/knackpy" rel="nofollow noreferrer">https://github.com/cityofaustin/knackpy</a></p> <p>The docs should get you where you want to go. Here's an example:</p> <pre><code>&gt;&gt;&gt; from knackpy import Knack # download data from knack object # will fetch records in chunks of 1000 until all records have been downloaded # optionally pass a rows_per_page and/or page_limit parameter to limit record count &gt;&gt;&gt; kn = Knack( obj='object_3', app_id='someappid', api_key='topsecretapikey', page_limit=10, # not needed; this is the default rows_per_page=1000 # not needed; this is the default ) &gt;&gt;&gt; for row in kn.data: print(row) {'store_id': 30424, 'inspection_date': 1479448800000, 'id': '58598262bcb3437b51194040'},... </code></pre> <p>Hope that helps. Open a GitHub issue if you have any questions using the package.</p>
python|pandas
1
9,054
48,262,754
Numpy argsort, sorting on first number
<p>I've got the following code:</p> <pre><code>import numpy as np a = np.array([["value1", "value2", 3, "value4", "value5"], ["value1", "value2", -10, "value4", "value5"], ["value1", "value2", 31, "value4", "value5"], ["value1", "value2", 5, "value4", "value5"], ["value1", "value2", 3, "value4", "value5"]]) print("Default") print(a) a = a[a[:, 2].argsort()] print() print("Sorted:") print(a) </code></pre> <p>This results in the following output:</p> <pre><code>Sorted: [['value1' 'value2' '-10' 'value4' 'value5'] ['value1' 'value2' '3' 'value4' 'value5'] ['value1' 'value2' '3' 'value4' 'value5'] ['value1' 'value2' '31' 'value4' 'value5'] ['value1' 'value2' '5' 'value4' 'value5']] </code></pre> <p>But what I'm looking for is for the function to output this:</p> <pre><code>Sorted: [['value1' 'value2' '-10' 'value4' 'value5'] ['value1' 'value2' '3' 'value4' 'value5'] ['value1' 'value2' '3' 'value4' 'value5'] ['value1' 'value2' '5' 'value4' 'value5'] ['value1' 'value2' '31' 'value4' 'value5']] </code></pre> <p>When I change the 31 value to 51, it outputs correctly. So numpy is basically sorting on the first character of the number. But I can't find how to make it sort on the entire number.</p>
<p>As mentioned in the comments, the values you are sorting are strings. Change</p> <pre><code>a = a[a[:, 2].argsort()] </code></pre> <p>to</p> <pre><code>a = a[a[:, 2].astype(int).argsort()] </code></pre> <p>so that the values in that column are compared as integers.</p>
python|numpy
4
9,055
48,866,886
Joining tables in python with different lengths based on a key field
<p>So I want to make a join on two tables with a key field which both tables contain, so I can make a side by side comparison. </p> <p>Table A has 1164 rows and table B has 74 rows. And the common field in Table A is called EmployeeID and the 'same' field in Table B is called UserID. </p> <p><a href="https://i.stack.imgur.com/hGchO.png" rel="nofollow noreferrer">Table A</a></p> <p><a href="https://i.stack.imgur.com/59jht.png" rel="nofollow noreferrer">Table B</a></p> <p>I want to have the output in 3 forms: </p> <ol> <li>Table 1 with the records where the key field values were only found in TableA. (UNMATCHED LEFT)</li> <li>Table 2 with the matching records (so the key field value was found in Table A and B. (INNER JOIN)</li> <li>Table 3 with the records that were only in Table B. (UNMATCHED RIGHT)</li> </ol> <p>What is the best way to tackle this problem?</p> <p>When I used this code:</p> <pre><code>data_left_join = pd.merge(table_a, table_b, how='left') </code></pre> <p>I got 48268 rows as result. </p> <p>All the articles I could find were in SQL or R. </p> <p>I managed to import the tables and make some modifications to the tables. But I got stuck here. </p> <p>Thank in advance. </p>
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>merge()</code></a> with the <code>left_on</code> and <code>right_on</code> arguments to specify your key field, then check for NaN values to find which rows are not in a table:</p> <pre><code>import pandas as pd # Create dataframes to test with table_a = pd.DataFrame({ "value": [1, 2, 3, 4, 5], "employee_id": [100, 200, 300, 400, 500] }) print "Table A:\n", table_a table_b = pd.DataFrame({ "value": [1, 2, 3, 4, 5], "user_id": [100, 200, 300, 1000, 2000], "age": [40, 50, 60, 70, 80] }) print "\nTable B:\n", table_b # Merge table A (left) on employee_id, and table B (right) on user_id merged = table_a.merge(table_b, left_on="employee_id", right_on="user_id", how="outer", suffixes=("_tableA", "_tableB")) print "\nMerged:\n", merged # Table A-columns with NaNs are not present in table B only_in_table_a = merged.loc[merged.value_tableB.isnull()] print "\nOnly in table A:\n", only_in_table_a # Table B-columns with NaNs are not present in table A only_in_table_b = merged.loc[merged.value_tableA.isnull()] print "\nOnly in table B:\n", only_in_table_b # Rows with no NaNs are in both tables in_both = merged.dropna(subset=["employee_id", "user_id"]) print "\nIn both:\n", in_both </code></pre> <p>Which yields:</p> <pre><code>Table A: employee_id value 0 100 1 1 200 2 2 300 3 3 400 4 4 500 5 Table B: age user_id value 0 40 100 1 1 50 200 2 2 60 300 3 3 70 1000 4 4 80 2000 5 Merged: employee_id value_tableA age user_id value_tableB 0 100.0 1.0 40.0 100.0 1.0 1 200.0 2.0 50.0 200.0 2.0 2 300.0 3.0 60.0 300.0 3.0 3 400.0 4.0 NaN NaN NaN 4 500.0 5.0 NaN NaN NaN 5 NaN NaN 70.0 1000.0 4.0 6 NaN NaN 80.0 2000.0 5.0 Only in table A: employee_id value_tableA age user_id value_tableB 3 400.0 4.0 NaN NaN NaN 4 500.0 5.0 NaN NaN NaN Only in table B: employee_id value_tableA age user_id value_tableB 5 NaN NaN 70.0 1000.0 4.0 6 NaN NaN 80.0 2000.0 5.0 In both: employee_id value_tableA age user_id value_tableB 0 100.0 1.0 40.0 100.0 1.0 1 200.0 2.0 50.0 200.0 2.0 2 300.0 3.0 60.0 300.0 3.0 </code></pre>
python|pandas|join|inner-join
0
9,056
48,471,794
Pip3 installing of Tensorflow issue
<p>I'm trying to install python and tensorflow to learn as part of a class I am taking, and am having some issues installing tensorflow. I keep getting the same set of errors:</p> <pre><code>C:\Users\X\AppData\Local\Programs\Python\Python36-32\python.exe: can't find '__main__' module in 'C:\\' </code></pre> <p>or</p> <pre><code>Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow </code></pre> <p>I've tried uninstalling and reinstalling python while adding it to my path, and that solved me not being stuck with syntax errors (most of the time), but I haven't been able to make any progress. Any advice on moving forward would be appreciated.</p> <p>To be clear, I am typing the following command into my command prompt:</p> <pre><code>pip3 install --upgrade tensorflow </code></pre> <p>As per <a href="https://www.tensorflow.org/install/install_windows" rel="nofollow noreferrer">the instructions on the TensorFlow website</a> - I've been finding the "C:\>" raises a syntax error however.</p>
<p>Your pip3 seems to be broken, because there clearly is a tensorflow distribution on PyPI for Python 3.6: <a href="https://pypi.python.org/pypi/tensorflow/1.5.0" rel="nofollow noreferrer">https://pypi.python.org/pypi/tensorflow/1.5.0</a>. Also, I just created a new virtual env for Python 3.6 and I am able to <code>pip install</code> it.</p> <p>You can also download "tensorflow-1.5.0-cp36-cp36m-win_amd64.whl" manually (assuming you are on Windows) and run <code>pip install tensorflow-1.5.0-cp36-cp36m-win_amd64.whl</code>.</p>
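<p>One more thing worth ruling out (an assumption on my part, based on the <code>Python36-32</code> folder shown in your error message): the official tensorflow wheels on PyPI are built for 64-bit Python only, so a 32-bit interpreter will also report "No matching distribution found". A quick check of which build you are running:</p> <pre><code>import struct
import sys

print(sys.version)               # the build string mentions "32 bit" on a 32-bit Python
print(struct.calcsize('P') * 8)  # prints 32 or 64
</code></pre> <p>If it prints 32, installing a 64-bit Python 3.6 should let the normal <code>pip3 install --upgrade tensorflow</code> succeed.</p>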
python|tensorflow
0
9,057
71,083,776
Finding the difference in value counts by keys in two Dictionaries
<p>I have two sample python dictionaries that counts how many times each key appears in a DataFrame.</p> <pre><code>dict1 = { 2000 : 2, 3000 : 3, 4000 : 4, 5000 : 6, 6000 : 8 } </code></pre> <pre><code>dict2 = { 4000 : 4, 3000 : 3, 2000 : 4, 6000 : 10, 5000 : 4 } </code></pre> <p>I would like to output the following where there is a difference.</p> <pre><code>diff = { 2000 : 2 5000 : 2 6000 : 2 } </code></pre> <p>I would appreciate any help as I am not familiar with iterating though dictionaries. Even if the output shows me at which key there is a difference in values, it would work for me. I did the following but it does not produce any output.</p> <pre><code>for (k,v), (k2,v2) in zip(dict1.items(), dict2.items()): if k == k2: if v == v2: pass else: print('value is different at k') </code></pre>
<p>The way you're doing it doesn't work because the two dicts don't store their keys in the same order, so <code>zip</code> pairs up mismatched keys and <code>k==k2</code> is hardly ever True (and for the one pair where it is, the values happen to be equal, so nothing is printed).</p> <p>You could use a dict comprehension where you traverse <code>dict1</code> and subtract the value in <code>dict2</code> with the matching key:</p> <pre><code>diff = {k: abs(v - dict2[k]) for k, v in dict1.items()} </code></pre> <p>Output:</p> <pre><code>{2000: 2, 3000: 0, 4000: 0, 5000: 2, 6000: 2} </code></pre> <p>If you have Python &gt;=3.8, and you want only key-value pairs where value &gt; 0, then you could also use the walrus operator:</p> <pre><code>diff = {k: di for k, v in dict1.items() if (di := abs(v - dict2[k])) &gt; 0} </code></pre> <p>Output:</p> <pre><code>{2000: 2, 5000: 2, 6000: 2} </code></pre> <p>Since you tagged it as <code>pandas</code>, you can also do a similar job in pandas as well.</p> <p>First, we need to convert the dicts to DataFrame objects, then <code>join</code> them. Since <code>join</code> joins by index by default and the indexes are the keys in the dicts, you get a nice DataFrame where you can directly find the difference row-wise. Then use the <code>diff</code> method on axis + <code>abs</code> to find the differences.</p> <pre><code>df1 = pd.DataFrame.from_dict(dict1, orient='index') df2 = pd.DataFrame.from_dict(dict2, orient='index') out = df1.join(df2, lsuffix='_x', rsuffix='').diff(axis=1).abs().dropna(axis=1)['0'] </code></pre> <p>Also, instead of creating two DataFrames and <code>join</code>ing them, we could also build a single DataFrame by passing a list of the dicts, and use similar methods to get the desired outcome:</p> <pre><code>out = pd.DataFrame.from_dict([dict1, dict2]).diff().dropna().abs().loc[1] </code></pre> <p>Output:</p> <pre><code>2000 2 3000 0 4000 0 5000 2 6000 2 Name: 0, dtype: int64 </code></pre>
python|python-3.x|pandas|dataframe|dictionary
3
9,058
70,755,998
DataFrame .isin for integers
<p>I've created a function - set of conditions, which returns 1 / 0, if the condition is fulfilled or not.</p> <pre><code>avg_ActivityScore = company['ActivityScore'].median() min_EmployeeLowerBound = 10 list_LegalFormIDs = [112, 121, 301, 118, 141, 703, 111, 705, 921, 117, 361, 391, 711] min_CompaniesCount = 10 def flag_company(df): if (df['ActivityScore'] &gt;= avg_ActivityScore): return 1 elif (df['EmployeeLowerBound'] &gt;= min_EmployeeLowerBound): return 1 elif (df['LegalFormID'].isin(list_LegalFormIDs)): return 1 else: return 0 </code></pre> <p>Then I'm applying the function on the DataFrame as follows:</p> <pre><code>df['Flag'] = df.apply(flag_company, axis = 1) </code></pre> <p>However, it returns an error message - int' object has no attribute 'isin'. Any ideas what could I change to keep the functionality, please?</p> <p>If I use the below code, it works without any issues:</p> <pre><code>df.loc[df['LegalFormID'].isin(list_LegalFormIDs)] </code></pre> <p>Many thanks!</p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>DataFrame.apply</code></a> with <code>axis=1</code> passes one row at a time, so <code>df['LegalFormID']</code> is a scalar inside the function and <code>Series</code> methods such as <code>isin</code> cannot be used; test membership with the <code>in</code> operator instead:</p> <pre><code>def flag_company(df): print (df['ActivityScore']) if (df['ActivityScore'] &gt;= avg_ActivityScore): return 1 elif (df['EmployeeLowerBound'] &gt;= min_EmployeeLowerBound): return 1 #check scalar by in elif (df['LegalFormID'] in list_LegalFormIDs): return 1 else: return 0 </code></pre> <hr /> <p>A vectorized solution working with whole <code>Series</code> is:</p> <pre><code>m1 = df['ActivityScore'] &gt;= avg_ActivityScore m2 = df['EmployeeLowerBound'] &gt;= min_EmployeeLowerBound m3 = df['LegalFormID'].isin(list_LegalFormIDs) df['Flag'] = (m1 | m2 | m3).astype(int) </code></pre>
python|pandas|dataframe
0
9,059
70,746,158
How to assign graph label for graph in pytorch geometric?
<p><strong>Question:</strong> How can we assign a graph-level label to a graph made in PyTorch geometric?</p> <p><strong>Example</strong>: Let us say we create an undirected graph in PyTorch geometric and now we want to label that graph according to its class (can use a numerical value). How could we now assign a class label for the whole graph, such that it can be used for graph classification tasks? Furthermore, how could we collect a bunch of graphs with labels to form our dataset?</p> <p><strong>Code</strong>: (to be run in Google Colab)</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt import networkx as nx import torch from torch.nn import Linear import torch.nn.functional as F torch.__version__ # install pytorch geometric !pip install torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric -f https://data.pyg.org/whl/torch-1.10.0+cpu.html from torch_geometric.nn import GCNConv from torch_geometric.utils.convert import to_networkx, from_networkx # Make the networkx graph G = nx.Graph() # Add some cars G.add_nodes_from([ ('Ford', {'y': 0, 'Name': 'Ford'}), ('Lexus', {'y': 1, 'Name': 'Lexus'}), ('Peugot', {'y': 2, 'Name': 'Peugot'}), ('Mitsubushi', {'y': 3, 'Name': 'Mitsubishi'}), ('Mazda', {'y': 4, 'Name': 'Mazda'}), ]) # Relabel the nodes remapping = {x[0]: i for i, x in enumerate(G.nodes(data = True))} G = nx.relabel_nodes(G, remapping, copy=True) # Add some edges --&gt; A = [(0, 1, 0, 1, 1), (1, 0, 1, 1, 0), (0, 1, 0, 0, 1), (1, 1, 0, 0, 0), (1, 0, 1, 0, 0)] as the adjacency matrix G.add_edges_from([ (0, 1), (0, 3), (0, 4), (1, 2), (1, 3), (2, 1), (2, 4), (3, 0), (3, 1), (4, 0), (4, 2) ]) # Convert the graph into PyTorch geometric pyg_graph = from_networkx(G) </code></pre> <p>Now how could we give this graph a label = 0 (for class e.g. cars)? Then if we did that for lots of graphs, how could we bunch them together to form a dataset?</p> <p>Thanks</p>
<p>The <code>pyg_graph</code> object has type <code>torch_geometric.data.Data</code>. Inspecting the <a href="https://pytorch-geometric.readthedocs.io/en/latest/_modules/torch_geometric/data/data.html#Data" rel="nofollow noreferrer">source code of <code>Data</code> class</a>, you can see that it defines the dunder methods <code>__setattr__</code> and <code>__setitem__</code>.</p> <p>Thanks to <code>__setattr__</code>, you can assign the label with the line</p> <pre class="lang-py prettyprint-override"><code>pyg_graph.label = 0 </code></pre> <p>or you can instead use <code>__setitem__</code> doing</p> <pre class="lang-py prettyprint-override"><code>pyg_graph[&quot;label&quot;] = 0 </code></pre> <p>The two notations perform the same action internally, so they can be used interchangeably.</p> <p>To create a batch of graphs and labels, you can simply do</p> <pre class="lang-py prettyprint-override"><code>batch = torch_geometric.data.Batch.from_data_list([pyg_graph, pyg_graph]) &gt;&gt;&gt; batch.label tensor([0, 0]) </code></pre> <p>and PyG takes care of the batching of all attributes automatically.</p>
python|pytorch
0
9,060
51,740,976
Data not appearing on a python plot
<p>I have a dataframe with date as index, floats as columns, filled with mostly NaN and a few floats.</p> <p>I am plotting this dataframe using :</p> <pre><code>fig, ax = plt.subplots() plot(df2[11][2:], linestyle='dashed',linewidth=2,label='xx') ax.set(xlabel='xx', ylabel='xx', title='xx') ax.grid() ax.legend() </code></pre> <p>The plot window open but with no data appearing. But if I use markers instead of line, the data point will appears.</p> <p>What should I correct to plot my graphs as lines?</p> <p>edit Thanks, it worked like this :</p> <pre><code>s1 = np.isfinite(df2[11][2:]) fig, ax = plt.subplots() plot(df2.index[2:][s1],df2[11][2:].values[s1], linestyle='-',linewidth=2,label='xx') ax.set(xlabel='xx', ylabel='xx',title='xx') ax.grid() ax.legend() </code></pre>
<p>Try</p> <pre><code>import matplotlib.pyplot as plt fig = plt.figure() plt.plot(df2[11][2:], linestyle='dashed', linewidth=2, label='xx') plt.xlabel('xx') plt.ylabel('xx') plt.title('xx') plt.grid() plt.legend() plt.show() </code></pre>
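<p>If the column is mostly NaN (as the question suggests), note that matplotlib does not draw line segments across missing values, so isolated points surrounded by NaN only become visible with markers. A small sketch that drops the NaN entries before plotting, assuming the gaps really are NaN values:</p> <pre><code>s = df2[11][2:].dropna()
fig, ax = plt.subplots()
ax.plot(s.index, s.values, linestyle='dashed', linewidth=2, label='xx')
</code></pre>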
python|pandas|matplotlib
1
9,061
51,699,523
Error: "CUDNN STATUS NOT INITIALIZED" in keras-based convolutional network
<p>I'm trying to create a convolutional network using keras. However, I'm getting the following error:</p> <blockquote> <p>2018-08-05 21:10:44.670676: E T:\src\github\tensorflow\tensorflow\stream_executor\cuda\cuda_dnn.cc:332] could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED 2018-08-05 21:10:44.670843: E T:\src\github\tensorflow\tensorflow\stream_executor\cuda\cuda_dnn.cc:336] error retrieving driver version: Unimplemented: kernel reported driver version not implemented on Windows</p> </blockquote> <p>I haven't installed cudnn seperately, only installed tensorflow-gpu through pip (not using the url). A seperate program that doesn't use a convolutional network works fine. My code:</p> <pre><code> from __future__ import print_function import tensorflow as tf from tensorflow import keras from tensorflow.keras.datasets import mnist from tensorflow.keras.layers import Dense, Flatten from tensorflow.keras.layers import Conv2D, MaxPooling2D from tensorflow.keras.models import Sequential import matplotlib.pylab as plt import numpy as np batch_size = 64 num_classes = 10 epochs = 10 # input image dimensions img_x, img_y = 32, 32 # Load cifar data from file # define standard sizing values image_height = 32 image_width = 32 color_channels = 3 model_name = "cifar" def unpickle(file): import pickle with open(file, 'rb') as fo: dict = pickle.load(fo, encoding='bytes') return dict # Set the path as a mutable variable and initiate our training stuff cifar_path = 'cifar-10-batches-py/' x_train = np.array([]) y_train = np.array([]) # Load all the data batches. for i in range(1, 3): data_batch = unpickle(cifar_path + 'data_batch_' + str(i)) x_train = np.append(x_train, data_batch[b'data']) y_train = np.append(y_train, data_batch[b'labels']) # Load the eval batch. eval_batch = unpickle(cifar_path + 'test_batch') x_test = eval_batch[b'data'] y_test = eval_batch[b'labels'] # Load the english category names. 
category_names_bytes = unpickle(cifar_path + 'batches.meta')[b'label_names'] category_names = list(map(lambda x: x.decode("utf-8"), category_names_bytes)) def process_data(data): float_data = np.array(data, dtype=float) / 255.0 reshaped_data = np.reshape(float_data, (-1, color_channels, image_height, image_width)) # The incorrect image transposed_data = np.transpose(reshaped_data, [0, 2, 3, 1]) return transposed_data # redefine the data with it in its processed form x_train = process_data(x_train) x_test = process_data(x_test) # reshape the data into a 4D tensor - (sample_number, x_img_size, y_img_size, num_channels) x_train = x_train.reshape(x_train.shape[0], img_x, img_y, 3) x_test = x_test.reshape(x_test.shape[0], img_x, img_y, 3) input_shape = (img_x, img_y, 3) # convert the data to the right type x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train /= 255 x_test /= 255 print('x_train shape:', x_train.shape) print(x_train.shape[0], 'train samples') print(x_test.shape[0], 'test samples') # convert class vectors to binary class matrices - this is for use in the # categorical_crossentropy loss below y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) model = Sequential() model.add(Conv2D(32, kernel_size=(5, 5), strides=(1, 1), activation='relu', input_shape=input_shape)) model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2))) model.add(Conv2D(64, (5, 5), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(1000, activation='relu')) model.add(Dense(num_classes, activation='softmax')) model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy']) class AccuracyHistory(keras.callbacks.Callback): def on_train_begin(self, logs={}): self.acc = [] def on_epoch_end(self, batch, logs={}): self.acc.append(logs.get('acc')) history = AccuracyHistory() model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test), callbacks=[history]) score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) plt.plot(range(1, 11), history.acc) plt.xlabel('Epochs') plt.ylabel('Accuracy') plt.show() </code></pre>
<p>You need to install cuDNN and include it in your environment variables (on Windows, the folder containing it has to be on <code>PATH</code>) in order to run tensorflow-gpu.</p>
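<p>For the TensorFlow 1.x GPU builds this means downloading the cuDNN release that matches your CUDA toolkit from NVIDIA and either copying its <code>bin</code>/<code>include</code>/<code>lib</code> files into the CUDA installation folder or adding the folder containing the DLL to <code>PATH</code>. A quick way to check from Python whether the DLL can actually be found (the exact file name, <code>cudnn64_7.dll</code> here, depends on the cuDNN major version you installed):</p> <pre><code>import ctypes

# raises OSError if cuDNN is not on the Windows DLL search path
ctypes.WinDLL('cudnn64_7.dll')
</code></pre>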
python|tensorflow|machine-learning|keras|cudnn
1
9,062
51,865,465
Error in importing NiftyNet with Tensorflow 1.9
<p>I installed the package <a href="http://niftynet.io/" rel="nofollow noreferrer"><code>NiftyNet 0.3.0</code></a> with <code>Python 2.7.5</code> on <a href="https://www.centos.org/" rel="nofollow noreferrer"><code>CentOS</code></a> Linux 7.5. <code>Tensorflow 1.9</code> was installed a priori. When I import <code>NiftyNet</code>, I got the following error message.</p> <pre><code>$ python Python 2.7.5 (default) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)] on linux2 &gt;&gt; import niftynet INFO:tensorflow:TensorFlow version 1.9.0 Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/usr/lib64/python2.7/site-packages/niftynet/__init__.py", line 47, in &lt;module&gt; set_logger() File "/usr/lib64/python2.7/site-packages/niftynet/io/misc_io.py", line 633, in set_logger tf.logging._logger.handlers = [] AttributeError: 'module' object has no attribute '_logger' </code></pre> <p>A similar problem was reported as <a href="https://github.com/NifTK/NiftyNet/issues/96" rel="nofollow noreferrer">an issue of its GitHub repository</a>, which states that <code>NiftyNet</code> might not be supported by the recent versions of <code>Tensorflow</code> (>=1.8). </p> <p>Unfortunately, it is not allowed to downgrade <code>Tensorflow</code> to the version 1.7 in the Linux server as a non-administrator. Could anyone suggest any tip to solve this incompatibility of <code>NiftyNet</code> with <code>Tensorflow 1.9</code>? If possible, I am willing to revise its source codes which were released in <a href="https://github.com/NifTK/NiftyNet" rel="nofollow noreferrer">GitHub repository</a>. Thank you for your help in advance.</p>
<p>The latest dev branch supports TF 1.9; you can follow these steps to install it: <a href="https://github.com/NifTK/NiftyNet/wiki/NiftyNet-FAQ" rel="nofollow noreferrer">https://github.com/NifTK/NiftyNet/wiki/NiftyNet-FAQ</a></p>
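<p>A minimal sketch of a source install into a user-writable environment (the branch name <code>dev</code> is an assumption; check the FAQ and the repository for the exact branch that carries the TF 1.9 fixes):</p> <pre><code>git clone https://github.com/NifTK/NiftyNet.git
cd NiftyNet
git checkout dev              # branch name assumed; see the FAQ for the exact branch
pip install --user .
</code></pre> <p>Installing with <code>--user</code> (or inside a virtualenv) avoids needing administrator rights on the server.</p>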
python-2.7|tensorflow|niftynet
0
9,063
51,806,361
How to sort a csv file without headers using python?
<p>How can I sort a csv file which is without header using python pandas? NOTE: The csv file is without headers.</p> <p>My File:</p> <pre><code>1,a123,adam,student 2,b345,becky,student 3,c678,charles,teacher 1,d987,dickson,teacher 2,e654,evanston,teacher </code></pre> <p>Expected output:</p> <pre><code>1,a123,adam,student 1,d987,dickson,teacher 2,b345,becky,student 2,e654,evanston,teacher 3,c678,charles,teacher </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow noreferrer"><code>read_csv</code></a> with parameter <code>names</code> for new columns names of <code>Dataframe</code> and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>DataFrame.sort_values</code></a>:</p> <pre><code>import pandas as pd temp=u"""1,a123,adam,student 2,b345,becky,student 3,c678,charles,teacher 1,d987,dickson,teacher 2,e654,evanston,teacher""" #after testing replace 'pd.compat.StringIO(temp)' to 'filename.csv' df = pd.read_csv(pd.compat.StringIO(temp), names=['a','b','c','d']) print (df) a b c d 0 1 a123 adam student 1 2 b345 becky student 2 3 c678 charles teacher 3 1 d987 dickson teacher 4 2 e654 evanston teacher df = df.sort_values('a') print (df) a b c d 0 1 a123 adam student 3 1 d987 dickson teacher 1 2 b345 becky student 4 2 e654 evanston teacher 2 3 c678 charles teacher </code></pre> <p>Or use <code>header=None</code> for default columns names - <code>RangeIndex</code>:</p> <pre><code>df = pd.read_csv(pd.compat.StringIO(temp), header=None) print (df) 0 1 2 3 0 1 a123 adam student 1 2 b345 becky student 2 3 c678 charles teacher 3 1 d987 dickson teacher 4 2 e654 evanston teacher df = df.sort_values(0) print (df) 0 1 2 3 0 1 a123 adam student 3 1 d987 dickson teacher 1 2 b345 becky student 4 2 e654 evanston teacher 2 3 c678 charles teacher </code></pre>
python|python-2.7|pandas|pandas-groupby
2
9,064
64,205,923
Use another df to replace column values
<p>Hello, I have two dataframes such as</p> <p><strong>df1</strong></p> <pre><code>Ancient New Seq1.1 Seq1.1_A Seq2 Se2.4_3 </code></pre> <p>and another</p> <p><strong>df2</strong></p> <pre><code>COL1 COL2 A Seq1.1 B Plants C YP_OODDD D Seq2 </code></pre> <p>and I would like to replace the <code>COL2</code> values that match the <code>df1.Ancient</code> column with their corresponding <code>df1.New</code> values</p> <p>and get</p> <pre><code>COL1 COL2 A Seq1.1_A B Plants C YP_OODDD D Se2.4_3 </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.replace.html" rel="nofollow noreferrer"><code>Series.replace</code></a> with a <code>Series</code> created by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a> and selecting the column <code>New</code>:</p> <pre><code>df2['COL2'] = df2['COL2'].replace(df1.set_index('Ancient')['New']) print (df2) COL1 COL2 0 A Seq1.1_A 1 B Plants 2 C YP_OODDD 3 D Se2.4_3 </code></pre>
python|pandas|replace
2
9,065
64,202,899
Question about adding a layer after loading pre-trained weights
<p>I have a question about creating a deep neural network with partially loading weights.</p> <p>Suppose I construct a model as follows (assume a sequence of layers are specified for the model):</p> <pre><code>model = models.Model(inputs, x, name=model_name) </code></pre> <p>And then, I load the weights for the model.</p> <pre><code>model.load_weights(weights) </code></pre> <p>What I want to do next is add additional layers to the deep network model that I have just created, initializing the corresponding weights to random values.</p> <p>I am not sure what is a proper way to do this, so could you help me with this?</p>
<p>Say you have a model loaded with pretrained weights:</p> <pre><code>model.load_weights(weights) # Set trainable to False to keep the previous weights frozen for layer in model.layers: layer.trainable = False </code></pre> <p>Below are some examples, but make sure you consult <a href="https://www.tensorflow.org/api_docs/python/tf/keras/initializers" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/initializers</a> for all types of initializers.</p> <h3>Sequential model way</h3> <pre><code>initializer_1 = tf.keras.initializers.HeNormal() model.add(Dense(512, activation='relu', kernel_initializer=initializer_1)) initializer_2 = tf.keras.initializers.GlorotUniform() model.add(Dense(256, activation='relu', kernel_initializer=initializer_2)) </code></pre> <h3>Functional model way</h3> <pre><code>initializer_1 = tf.keras.initializers.HeNormal() dense_1 = Dense(512, activation='relu', kernel_initializer=initializer_1)(model.output) initializer_2 = tf.keras.initializers.GlorotUniform() dense_2 = Dense(256, activation='relu', kernel_initializer=initializer_2)(dense_1) </code></pre> <p>By default, the newly added layers are trainable, so you do not need to worry about setting the <code>trainable</code> property.</p>
tensorflow|keras|deep-learning|tensorflow2.0
0
9,066
47,924,400
Python Pandas: Assign Last Value of DataFrame Group to All Entries of That Group
<p>In Python Pandas, I have a DataFrame. I group this DataFrame by a column and want to assign the last value of a column to all rows of another column.</p> <p>I know that I am able to select the last row of the group by this command:</p> <pre><code>import pandas as pd df = pd.DataFrame({'a': (1,1,2,3,3), 'b':(20,21,30,40,41)}) print(df) print("-") result = df.groupby('a').nth(-1) print(result) </code></pre> <p>Result:</p> <pre><code> a b 0 1 20 1 1 21 2 2 30 3 3 40 4 3 41 - b a 1 21 2 30 3 41 </code></pre> <p>How would it be possible to assign the result of this operation back to the original dataframe so that I have something like:</p> <pre><code> a b b_new 0 1 20 21 1 1 21 21 2 2 30 30 3 3 40 41 4 3 41 41 </code></pre>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html" rel="noreferrer"><code>transform</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.last.html" rel="noreferrer"><code>last</code></a>:</p> <pre><code>df['b_new'] = df.groupby('a')['b'].transform('last') </code></pre> <p>Alternative:</p> <pre><code>df['b_new'] = df.groupby('a')['b'].transform(lambda x: x.iat[-1]) print(df) a b b_new 0 1 20 21 1 1 21 21 2 2 30 30 3 3 40 41 4 3 41 41 </code></pre> <p>Solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.nth.html" rel="noreferrer"><code>nth</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html" rel="noreferrer"><code>join</code></a>:</p> <pre><code>df = df.join(df.groupby('a')['b'].nth(-1).rename('b_new'), 'a') print(df) a b b_new 0 1 20 21 1 1 21 21 2 2 30 30 3 3 40 41 4 3 41 41 </code></pre> <p><strong>Timings</strong>:</p> <pre><code>N = 10000 df = pd.DataFrame({'a':np.random.randint(1000,size=N), 'b':np.random.randint(10000,size=N)}) #print (df) def f(df): return df.join(df.groupby('a')['b'].nth(-1).rename('b_new'), 'a') #cᴏʟᴅsᴘᴇᴇᴅ1 In [211]: %timeit df['b_new'] = df.a.map(df.groupby('a').b.nth(-1)) 100 loops, best of 3: 3.57 ms per loop #cᴏʟᴅsᴘᴇᴇᴅ2 In [212]: %timeit df['b_new'] = df.a.replace(df.groupby('a').b.nth(-1)) 10 loops, best of 3: 71.3 ms per loop #jezrael1 In [213]: %timeit df['b_new'] = df.groupby('a')['b'].transform('last') 1000 loops, best of 3: 1.82 ms per loop #jezrael2 In [214]: %timeit df['b_new'] = df.groupby('a')['b'].transform(lambda x: x.iat[-1]) 10 loops, best of 3: 178 ms per loop #jezrael3 In [219]: %timeit f(df) 100 loops, best of 3: 3.63 ms per loop </code></pre> <p><strong>Caveat</strong></p> <p>The results do not address performance given the number of groups, which will affect timings a lot for some of these solutions.</p>
python|pandas|dataframe|group-by|pandas-groupby
21
9,067
49,063,601
pandas column split by code and concat these data
<p>I'm a pandas newbie.</p> <p>For example, I have a dataframe as below:</p> <pre><code>code time open high low close 1 2 1 1 1 1 2 1 1 1 1 1 2 2 1 1 1 1 </code></pre> <p>and</p> <ol> <li>I want to split the columns by <code>code</code></li> <li>I want to concat the split data on an index of <code>time</code> and fill NaN</li> </ol> <p>like below:</p> <pre><code> "1" "2" time(index) open high low close open high low close 1 NaN NaN NaN NaN 1 1 1 1 2 1 1 1 1 1 1 1 1 </code></pre> <p>Is there any way to do this using pandas?</p>
<p>Use:</p> <ul> <li>first reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> + <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html" rel="nofollow noreferrer"><code>pivot</code></a></li> <li>swap levels in <code>MultiIndex</code> in columns by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.swaplevel.html" rel="nofollow noreferrer"><code>swaplevel</code></a></li> <li>last sort columns by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_index.html" rel="nofollow noreferrer"><code>sort_index</code></a></li> </ul> <hr> <pre><code>df = df.set_index(['time', 'code']).unstack().swaplevel(0,1,1).sort_index(1) </code></pre> <p>Alternative:</p> <pre><code>df = df.pivot('time', 'code').swaplevel(0,1,1).sort_index(1) </code></pre> <hr> <pre><code>print (df) code 1 2 close high low open close high low open time 1 NaN NaN NaN NaN 1.0 1.0 1.0 1.0 2 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 </code></pre>
pandas|financial
1
9,068
49,286,381
What is the best way to modify (e.g., perform math functions) a column in a Dask DataFrame?
<p>I'm a veteran of Pandas DataFrame objects, but I'm struggling to find a clean, convenient method for altering the values in a Dask DataFrame column. For a specific example, I'm trying to multiply positive values in a numpy.float column by -1, thereby making them negative. Here is my current method (I'm trying to change the last column in the DataFrame):</p> <pre><code>cols = df.columns df[[cols[-1]]] = df[[cols[-1]]]*-1 </code></pre> <p>This seems to work only if the column has a string header, otherwise it adds another column using the index number as a string-type column name for a new column. Is there something akin to the Pandas method of, say, <code>df.iloc[-1,:] = df.iloc[-1,:]*-1</code> that I can use with a Dask dataframe?</p> <p>Edit: I'm also trying to implement: <code>df = df.applymap(lambda x: x*-1)</code>. This, of course, applies the function to the entire dataframe, but is there a way to apply a function over just one column? Thank you.</p>
<h3>first question</h3> <p>If something works for string columns and not for numeric-named columns then that is probably a bug. I recommend raising an issue at <a href="https://github.com/dask/dask/issues/new" rel="nofollow noreferrer">https://github.com/dask/dask/issues/new</a></p> <h3>second question</h3> <blockquote> <p>but is there a way to apply a function over just one column? </p> </blockquote> <p>You can't apply a single Python function over a dask dataframe that is stored in many pieces directly, however methods like <code>.map_partitions</code> or <code>.reduction</code> may help you to achieve the same result with some cleverness.</p> <p><em>in the future we recommend asking separate questions separately on stack overflow</em></p>
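<p>For instance, a small sketch of flipping the sign of the positive values in the last column (one possible approach, not necessarily the fastest; inside <code>map_partitions</code> each chunk is a plain pandas <code>Series</code>, so the usual pandas methods are available there):</p> <pre><code>import dask.dataframe as dd
import pandas as pd

pdf = pd.DataFrame({'a': [1.0, 2.0, 3.0], 'b': [4.0, -5.0, 6.0]})
ddf = dd.from_pandas(pdf, npartitions=2)

last_col = ddf.columns[-1]
# each partition is handled by plain pandas: multiply only the positive values by -1
ddf[last_col] = ddf[last_col].map_partitions(lambda s: s.mask(s &gt; 0, s * -1))

print(ddf.compute())
</code></pre>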
python|pandas|dataframe|dask
0
9,069
49,067,073
how to append two or more dataframes in pandas and do some analysis
<p>I have 3 df's:</p> <pre><code>df1=pd.DataFrame({"Name":["one","two","three"],"value":[4,5,6]}) df2=pd.DataFrame({"Name":["four","one","three"],"value":[8,6,2]}) df3=pd.DataFrame({"Name":["one","four","six"],"value":[1,1,1]}) </code></pre> <p>I can append them one by one, but I want to append all three data frames at once and do some analysis.</p> <p>I am trying to count how many dataframes each name appears in, divided by the total number of dataframes: <code>name present in dataframes/total dataframes</code></p> <p>My desired output is:</p> <pre><code> Name value Count one 11 1 two 5 0.333 three 8 0.666 four 9 0.666 six 1 0.333 </code></pre> <p>Please help, thanks in advance!</p>
<p>Use:</p> <ul> <li>first <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a></li> <li>aggregate by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.agg.html" rel="nofollow noreferrer"><code>agg</code></a></li> <li>divide column</li> </ul> <hr> <pre><code>dfs = [df1, df2, df3] df = pd.concat(dfs) df1 = df.groupby('Name')['value'].agg([('value', 'sum'), ('Count', 'size')]).reset_index() df1['Count'] /= len(dfs) </code></pre> <p>Similar solution:</p> <pre><code>df1 = (pd.concat(dfs) .groupby('Name')['value'] .agg([('value', 'sum'), ('Count', 'size')]) .assign(Count = lambda x: x.Count /len(dfs)) .reset_index()) print (df1) Name value Count 0 four 9 0.666667 1 one 11 1.000000 2 six 1 0.333333 3 three 8 0.666667 4 two 5 0.333333 </code></pre>
python|pandas|dataframe|data-analysis
1
9,070
49,114,732
Python numpy floating point array precision
<p>I am trying to solve for SVM optimisation problem using Pegasos mini-batch algorithm (as in Fig 2) from this link: <a href="http://www.cs.huji.ac.il/~shais/papers/ShalevSiSrCo10.pdf" rel="nofollow noreferrer">http://www.cs.huji.ac.il/~shais/papers/ShalevSiSrCo10.pdf</a></p> <pre><code>#X: m*n matrix with m examples and n features per example (m=4000 and n=784 in my case), Y: m length vector containing 1 or -1 for each example, l: lambda as given in algorithm (l=1 in my code), itr: number of iterations, k: size of batch (100) in my case def pegasos(X,Y,l,n,m,itr,k): w = np.zeros((1,n),dtype=np.float32) print m, n diff = 0.0 for t in range(1,itr+1): A = random.sample(range(1,m),k) total = np.zeros((1,n),dtype=np.float32) eta = 1/(l*t) for i in A: x = X[i] y = Y[i] p = y*(np.dot(w,x.T)) if p &lt; 1: p1 = y*x total = np.add(total,p1) #update rule w = np.add((w*(1-(1/t))) , (eta*total*(1/k))) return w </code></pre> <p>My dataset is in such a way that when my variable <em>total</em> is computed, I get mostly 0s but there are a few values in the order of 10^(-1) to 10^(-5). As soon as total is multiplied by (eta/k) at the update rule, all the values become 0. and hence at every iteration the w I obtain is 0. which should not be the case. I have tried ways to increase the precision of my floats but they don't seem to work at all. When I use basic Pegasos algorithm (as given in Fig 1 in the above link), I don't face any problem, thus my dataset is not utterly weird. Any help regarding this issue would be highly appreciated :)</p>
<p>If you need precision, you should use <code>np.float64</code> (the normal double-precision floating point type).</p> <p>If you are using Python 2, you are using integer division in <code>(1/t)</code>, <code>(1/k)</code>, and <code>(1/l)</code>. Write it as <code>1.0/</code> to force a floating point division.</p>
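<p>For example, a sketch of the affected lines with both changes applied (variable names are the ones from your <code>pegasos</code> function):</p> <pre><code>w = np.zeros((1, n), dtype=np.float64)      # double precision instead of float32
total = np.zeros((1, n), dtype=np.float64)

eta = 1.0 / (l * t)                          # float division even on Python 2
w = np.add(w * (1.0 - 1.0 / t), eta * total * (1.0 / k))
</code></pre>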
python|numpy|svm|precision
0
9,071
58,617,465
How to choose which nan in column header
<p>I would like the first NaN column to be named School and the other NaN column to be named Grade. </p> <p>Original Column Header: </p> <pre><code>Name Address nan Class Subject nan </code></pre> <p>This is the outcome I want: </p> <pre><code>Name Address School Class Subject Grade </code></pre> <hr> <p>When I tried this code:</p> <pre><code>df = df.rename(columns={np.nan:'School'}) </code></pre> <p>This is the result I got with my code: </p> <pre><code>Name Address School Class Subject School </code></pre>
<pre><code>df.columns = ["Name", "Address", "School", "Class", "Subject", "Grade"] </code></pre>
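<p>If you would rather not hard-code every name (for example when only the NaN columns need new names), a small sketch that fills just the NaN positions, assuming the first NaN should become <code>School</code> and the second <code>Grade</code>:</p> <pre><code>import pandas as pd

new_names = iter(['School', 'Grade'])
# keep existing names, replace each NaN header with the next new name in order
df.columns = [next(new_names) if pd.isna(col) else col for col in df.columns]
</code></pre>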
python|pandas|dataframe
1
9,072
58,811,041
sklearn normalize() produces every value as 1
<p>I'm trying to normalize a single feature to [0, 1], but the result I'm getting back is all float values of 1 and is clearly wrong. </p> <pre><code>import pandas as pd import numpy as np from sklearn.preprocessing import normalize test = pd.DataFrame(data=[7, 6, 5, 2, 9, 9, 7, 8, 6, 5], columns=['data']) normalize(test['data'].values.reshape(-1, 1)) </code></pre> <p>This produces the following output:</p> <pre><code>array([[1.], [1.], [1.], [1.], [1.], [1.], [1.], [1.], [1.], [1.]]) </code></pre> <p>I thought this might be an int to float datatype issue so I tried casting to float first, <code>normalize(test['data'].astype(float).values.reshape(-1, 1))</code>, but this gives the same result. What am I missing?</p>
<p>This is because the default <code>axis</code> is 1.</p> <p>Set <code>axis = 0</code>:</p> <pre><code>normalize(test['data'].values.reshape(-1, 1), axis=0) </code></pre> <p>Output:</p> <pre><code>array([[0.32998316], [0.28284271], [0.23570226], [0.0942809 ], [0.42426407], [0.42426407], [0.32998316], [0.37712362], [0.28284271], [0.23570226]]) </code></pre>
python|pandas|scikit-learn|normalization
5
9,073
58,963,513
TypeError: Cannot convert provided value to EagerTensor. Provided value: 0.0 Requested dtype: int64
<p>I am trying to train the transformer model available from the tensorflow official models. I am able to train in cpu without any error but when I try gpu I get the following error:</p> <pre><code>models/official/transformer/v2/transformer.py:143 call * encoder_outputs = self.encode(inputs, attention_bias, training) models/official/transformer/v2/transformer.py:166 encode embedded_inputs = self.embedding_softmax_layer(inputs) TypeError: Cannot convert provided value to EagerTensor. Provided value: 0.0 Requested dtype: int64 </code></pre> <p>I tried tf.cast but it doesn't seem to help. </p>
<p>I had the same error with another Keras function. One of my parameters was a float by mistake.</p>
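<p>In the transformer case the traceback points at the embedding layer receiving the inputs, so one thing to check (an assumption based on the error message rather than on the official models code) is whether the id tensors are still float-typed when they reach the model; casting them to an integer dtype before the embedding lookup resolves that particular mismatch:</p> <pre><code>import tensorflow as tf

ids = tf.constant([[0.0, 5.0, 7.0]])  # float ids reproduce the int64 dtype error
ids = tf.cast(ids, tf.int64)          # cast before feeding them to the embedding layer
</code></pre>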
tensorflow|transformer-model
0
9,074
70,138,882
How to assign a unique id for a sequence of repeated column value in pandas dataframe?
<p>I have a dataframe call it dfA,</p> <pre><code>ID Chronological Label 1 1 None 2 0 ONPEAPLFPH 3 0 JFECJGSQNS 4 1 None 5 1 None 6 0 MGMELTIVTJ 7 1 None 8 1 None 9 1 None </code></pre> <p>I want to assign a <code>unique_id</code> to the column <code>Chronological</code> such that each consequent repeated values has a &quot;common&quot; <code>unique_id</code>. That is I want the following desired output,</p> <pre><code>ID Chronological Label unique_id 1 1 None 1 2 0 ONPEAPLFPH 2 3 0 JFECJGSQNS 3 4 1 None 4 5 1 None 4 6 0 MGMELTIVTJ 5 7 1 None 6 8 1 None 6 9 1 None 6 </code></pre> <p>I tried using a non-vectorized solution using for-loop but it is really slow,</p> <pre><code>starting_index = 0 unique_id = 1 dfs = [] for cL in dfA['Label'].unique(): if cL != &quot;None&quot;: current_index = dfA[dfA['Label']==cL].index.values[0] sliced_df = dfA.iloc[starting_index:current_index+1, :] sliced_df_ = sliced_df.copy() if len(sliced_df_)&gt;=1: sliced_df_['unique_id'] = unique_id starting_index = current_index unique_id += 1 dfs.append(sliced_df_) df_concat = pd.concat(dfs, axis=0) </code></pre> <p>Is there a more efficient way to solve it?</p>
<p>Try this:</p> <pre><code>df['unique_id'] = (df['Chronological'].eq(0) | (df['Chronological'] != df['Chronological'].shift()) ).cumsum() </code></pre> <p>Output:</p> <pre><code> ID Chronological Label unique_id 0 1 1 None 1 1 2 0 ONPEAPLFPH 2 2 3 0 JFECJGSQNS 3 3 4 1 None 4 4 5 1 None 4 5 6 0 MGMELTIVTJ 5 6 7 1 None 6 7 8 1 None 6 8 9 1 None 6 </code></pre>
python|pandas|dataframe
2
9,075
70,280,939
IndexError: index 0 is out of bounds for axis 0 with size 0? see detail in output1111
<pre><code>#count the number of fake and real videos def number_of_real_and_fake_videos(data_list): header_list = [&quot;file&quot;,&quot;label&quot;] lab = pd.read_csv('/content/drive/My Drive/Gobal_metadata.csv',names=header_list) fake = 0 real = 0 for i in data_list: temp_video = i.split('/')[-1] label = lab.iloc[(labels.loc[labels[&quot;file&quot;] == temp_video].index.values[0]),1] if(label == 'FAKE'): fake+=1 if(label == 'REAL'): real+=1 return real,fake </code></pre> <pre><code># load the labels and video in data loader import random import pandas as pd from sklearn.model_selection import train_test_split header_list = [&quot;file&quot;,&quot;label&quot;] labels = pd.read_csv('/content/drive/My Drive/Gobal_metadata.csv',names=header_list) #print(labels) train_videos = video_files[:int(0.8*len(video_files))] valid_videos = video_files[int(0.8*len(video_files)):] print(&quot;train : &quot; , len(train_videos)) print(&quot;test : &quot; , len(valid_videos)) # train_videos,valid_videos = train_test_split(data,test_size = 0.2) # print(train_videos) print(&quot;TRAIN: &quot;, &quot;Real:&quot;,number_of_real_and_fake_videos(train_videos)[0],&quot; Fake:&quot;,number_of_real_and_fake_videos(train_videos)[1]) print(&quot;TEST: &quot;, &quot;Real:&quot;,number_of_real_and_fake_videos(valid_videos)[0],&quot; Fake:&quot;,number_of_real_and_fake_videos(valid_videos)[1]) im_size = 112 mean = [0.485, 0.456, 0.406] std = [0.229, 0.224, 0.225] train_transforms = transforms.Compose([ transforms.ToPILImage(), transforms.Resize((im_size,im_size)), transforms.ToTensor(), transforms.Normalize(mean,std)]) test_transforms = transforms.Compose([ transforms.ToPILImage(), transforms.Resize((im_size,im_size)), transforms.ToTensor(), transforms.Normalize(mean,std)]) train_data = video_dataset(train_videos,labels,sequence_length = 10,transform = train_transforms) #print(train_data) val_data = video_dataset(valid_videos,labels,sequence_length = 10,transform = train_transforms) train_loader = DataLoader(train_data,batch_size = 4,shuffle = True,num_workers = 2) valid_loader = DataLoader(val_data,batch_size = 4,shuffle = True,num_workers = 2) image,label = train_data[0] im_plot(image[0,:,:,:]) </code></pre> <p>Output:</p> <pre><code>train : 8720 test : 2180 --------------------------------------------------------------------------- IndexError Traceback (most recent call last) &lt;ipython-input-32-7ad703495b44&gt; in &lt;module&gt;() 14 # print(train_videos) 15 ---&gt; 16 print(&quot;TRAIN: &quot;, &quot;Real:&quot;,number_of_real_and_fake_videos(train_videos)[0],&quot; Fake:&quot;,number_of_real_and_fake_videos(train_videos)[1]) 17 print(&quot;TEST: &quot;, &quot;Real:&quot;,number_of_real_and_fake_videos(valid_videos)[0],&quot; Fake:&quot;,number_of_real_and_fake_videos(valid_videos)[1]) 18 &lt;ipython-input-29-8723d4941fd5&gt; in number_of_real_and_fake_videos(data_list) 7 for i in data_list: 8 temp_video = i.split('/')[-1] ----&gt; 9 label = lab.iloc[(labels.loc[labels[&quot;file&quot;] == temp_video].index.values[0]),1] 10 if(label == 'FAKE'): 11 fake+=1 IndexError: index 0 is out of bounds for axis 0 with size 0 </code></pre>
<p>These 2 lines of code are accessing a list index that may not exist:</p> <pre><code>print(&quot;TRAIN: &quot;, &quot;Real:&quot;,number_of_real_and_fake_videos(train_videos)[0],&quot; Fake:&quot;,number_of_real_and_fake_videos(train_videos)[1]) print(&quot;TEST: &quot;, &quot;Real:&quot;,number_of_real_and_fake_videos(valid_videos)[0],&quot; Fake:&quot;,number_of_real_and_fake_videos(valid_videos)[1]) </code></pre> <p>Maybe try a safer alternative:</p> <pre><code>if len(number_of_real_and_fake_videos(train_videos)) &gt; 1: print(&quot;TRAIN: &quot;, &quot;Real:&quot;,number_of_real_and_fake_videos(train_videos)[0],&quot; Fake:&quot;,number_of_real_and_fake_videos(train_videos)[1]) </code></pre> <p>Same for the other one:</p> <pre><code>if len(number_of_real_and_fake_videos(valid_videos)) &gt; 1: print(&quot;TEST: &quot;, &quot;Real:&quot;,number_of_real_and_fake_videos(valid_videos)[0],&quot; Fake:&quot;,number_of_real_and_fake_videos(valid_videos)[1]) </code></pre> <p>Regarding why it happens, we would need the data, etc. But this is a good starting point to find out what's causing the issue; try printing the data, etc.</p>
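<p>Judging from the traceback, the empty result most likely comes from the lookup inside the function: <code>labels.loc[labels[&quot;file&quot;] == temp_video]</code> returns an empty frame whenever a video file name is missing from <code>Gobal_metadata.csv</code>, so taking <code>[0]</code> fails. A sketch of a guard inside the counting loop (skipping unmatched files and printing them is an assumption about how you want to handle them):</p> <pre><code>for i in data_list:
    temp_video = i.split('/')[-1]
    matches = lab.loc[lab['file'] == temp_video, 'label']
    if matches.empty:
        print('no label found for', temp_video)  # this file is not in the metadata csv
        continue
    label = matches.iloc[0]
    if label == 'FAKE':
        fake += 1
    if label == 'REAL':
        real += 1
</code></pre>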
python|pandas|numpy|index-error
0
9,076
70,167,433
Calculate % change in flat tables
<p>From df1 I would like to calculate the percentage change, which should give df2. Would you please assist me?</p> <p>df1</p> <pre><code>lst=[['01012021','A',10],['01012021','B',20],['01012021','A',12],['01012021','B',23]] df1=pd.DataFrame(lst,columns=['Date','FN','AuM']) </code></pre> <p>df2</p> <pre><code>lst=[['01012021','A',10,''],['01012021','B',20,''],['01012021','A',12,0.2],['01012021','B',23,0.15]] df2=pd.DataFrame(lst,columns=['Date','FN','AuM','%_delta']) </code></pre> <p>Thank you</p>
<p>Use <code>groupby</code> and <code>pct_change</code>:</p> <pre><code>df1['%_delta'] = df1.groupby('FN')['AuM'].pct_change() print(df1) # Output: Date FN AuM %_delta 0 01012021 A 10 NaN 1 01012021 B 20 NaN 2 01012021 A 12 0.20 3 01012021 B 23 0.15 </code></pre>
pandas|flat
2
9,077
56,343,679
Append dataframe with column names alone to another dataframe with data
<p>I have 2 dataframes as below.</p> <p><strong>Dataframe 1 (with only column names and no data):</strong></p> <pre><code>Name Age Gender </code></pre> <p>0 rows * 3 columns</p> <p><strong>Dataframe 2 (has data with over 1000 rows):</strong></p> <pre><code>level_1 level_2 level_3 AAA 26 M BBB 19 F CCC 24 F </code></pre> <p>1000 rows * 3 columns</p> <p>I have to append both the above dataframes.</p> <p><strong>Expected Output</strong></p> <p><strong>Dataframe 1</strong></p> <pre><code>Name Age Gender AAA 26 M BBB 19 F CCC 24 F </code></pre> <p><strong>What i tried so far:</strong></p> <pre><code>dataframe_1 = dataframe_1.append(dataframe_2,ignore_index = True) </code></pre> <p><strong>which gave me the below output:</strong></p> <pre><code>Name Age Gender level_1 level_2 level_3 NaN NaN NaN AAA 26 M NaN NaN NaN BBB 19 F NaN NaN NaN CCC 24 F </code></pre> <p>1000 rows * 6 columns</p>
<p>You need the same column names for correct alignment of the columns between both DataFrames, so set the column names from the other DataFrame:</p> <pre><code>dataframe_2.columns = dataframe_1.columns dataframe_1 = dataframe_1.append(dataframe_2,ignore_index = True) </code></pre> <p>Another solution:</p> <pre><code>dataframe_1 = pd.concat([dataframe_1, dataframe_2],ignore_index = True) </code></pre> <hr> <pre><code>print (dataframe_1) Name Age Gender 0 AAA 26 M 1 BBB 19 F 2 CCC 24 F </code></pre>
python|pandas|dataframe
5
9,078
56,107,675
Second Largest Row of Multiple Pandas Columns
<p>I have a Pandas dataframe and would like to take the minimum of multiple 6 columns by row for example in the below table I would like to put in the below 6 rows and get the row min:</p> <pre><code>+-col1-col2-col3-col4-col5-col6-Min-+ | 1 2 3 4 5 6 2 | | 6 5 4 3 2 2 3 | | 7 8 9 10 11 12 8 | | 90 80 70 60 70 80 70 | </code></pre> <p>The code I have currently put together is below:</p> <pre><code>a1_raw_data['Best6Sec'] = a1_raw_data.iloc[:, [21, 23, 25, 27, 29, 31]].apply(lambda row: row.nlargest(2).values[-1], axis=1) </code></pre> <p>It is trying to take the minimum by row of columns 21, 23, 25, 27, 29 and 31. It does this by taking the nlargest rows and taking the last value in each. But I get an error message saying:</p> <pre><code>IndexError: ('index -1 is out of bounds for axis 0 with size 0', 'occurred at index 0') </code></pre> <p>Thanks</p>
<p>If there are at least 2 unique values per row, first remove missing values with <code>dropna</code>, get the unique values, sort them, and select the second value by indexing:</p> <pre><code>df = a1_raw_data.iloc[:, [21, 23, 25, 27, 29, 31]] a1_raw_data['Min'] = df.apply(lambda row: np.sort(np.unique(row.dropna()))[1], axis=1) print (a1_raw_data) col1 col2 col3 col4 col5 col6 Min 0 1 2 3 4 NaN 6 2.0 1 2 2 2 3 2.0 2 3.0 2 7 8 9 10 11.0 12 8.0 3 90 80 70 60 70.0 80 70.0 </code></pre> <p>If it is possible for a row to contain only one unique value, you get an error like:</p> <blockquote> <p>IndexError: ('index 1 is out of bounds for axis 0 with size 1', 'occurred at index 1')</p> </blockquote> <p>The solution is to filter out those rows and apply the function only to the remaining ones:</p> <pre><code>mask = df.nunique(axis=1) != 1 f = lambda row: np.sort(np.unique(row.dropna()))[1] a1_raw_data.loc[mask, 'Min'] = df[mask].apply(f, axis=1) print (a1_raw_data) col1 col2 col3 col4 col5 col6 Min 0 1 2 3 4 NaN 6 2.0 1 2 2 2 2 2.0 2 NaN 2 7 8 9 10 11.0 12 8.0 3 90 80 70 60 70.0 80 70.0 </code></pre>
python|python-3.x|pandas
1
9,079
56,410,093
Using pandas to combine csv by comparing a key ("id")
<p>I have two CSVs with basically the same content, but spelling mistakes are removed from one, <code>fileA.csv</code>, and <code>fileB.csv</code> gets updated (as in new rows are added) from upstream (a limesurvey installation). How do I "combine" these two files using Pandas by checking the "id" column?</p> <p>I have tried to iterate over both files using Python <code>csv</code> module, but it didn't ended successfully. I managed to combine the two CSVs using the code below, but it just added the same columns ending with "_x" and "_y" ...</p> <pre><code>import pandas as pd fileA = pd.read_csv("new_data.csv_corrected",sep=";") fileB = pd.read_csv("new_data.csv",sep=";") merged = pd.merge(fileB, fileA, on='id') print(merged.to_csv()) </code></pre>
<p>I'm presuming spelling mistakes being removed from <code>fileA.csv</code> means you want to keep row in <code>fileA.csv</code>, but add any rows in <code>fileB.csv</code> that do not exist in <code>fileA.csv</code>.</p> <p>As a general rule you should read in your DataFrames so the <em>index</em> is set to your primary key. Having done that, I think the simple way to do what you want is <code>combine_first()</code>:</p> <hr> <p><strong>Example:</strong></p> <pre><code>&gt; cat FileA.csv id,0, 1, 2, 3, 4 A,1.000,1.000,1.000,1.000,1.000 B,1.000,1.000,1.000,1.000,1.000 C,1.000,1.000,1.000,1.000,1.000 D,1.000,1.000,1.000,1.000,1.000 &gt; cat FileB.csv id,0, 1, 2, 3, 4 A,0.000,0.000,0.000,0.000,0.000 B,0.000,0.000,0.000,0.000,0.000 E,0.000,0.000,0.000,0.000,0.000 F,0.000,0.000,0.000,0.000,0.000 &gt; dfA = pd.read_csv('FileA.csv', header=0, index_col='id') &gt; dfB = pd.read_csv('FileB.csv', header=0, index_col='id') &gt; dfA.combine_first(dfB) </code></pre> <p><strong>Gives:</strong></p> <pre><code> 0 1 2 3 4 id A +1.000000 +1.000000 +1.000000 +1.000000 +1.000000 B +1.000000 +1.000000 +1.000000 +1.000000 +1.000000 C +1.000000 +1.000000 +1.000000 +1.000000 +1.000000 D +1.000000 +1.000000 +1.000000 +1.000000 +1.000000 E +0.000000 +0.000000 +0.000000 +0.000000 +0.000000 F +0.000000 +0.000000 +0.000000 +0.000000 +0.000000 </code></pre> <p>There is also <code>DataFrame.update()</code> but annoyingly, its behavior is inconsistent with <code>dict.update()</code>, as won't add new "keys" (index items).</p>
python|pandas
1
9,080
56,010,938
Is there a specific order one should choose when chaining conditions in querying?
<p>Imagine we have this dataframe as an example:</p> <pre><code>df = pd.DataFrame([purchase_1, purchase_2, purchase_3], index=['Store 1', 'Store 1', 'Store 2']) </code></pre> <p>If I want to know the names of people who spent more than 3 (euros), what is the difference between these two approaches:</p> <pre><code>#approach 1: df[df['Cost']&gt;3]['Name'] #approach 2: df['Name'][df['Cost']&gt;3] </code></pre> <p>Is there any difference at all or any recommended approach in these cases?</p>
<p>Do neither of these. It's <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy" rel="nofollow noreferrer">chained indexing</a>, and can come back to hurt you unexpectedly.</p> <p>Instead, it's safer to provide both axis labels at once:</p> <pre><code>df.loc[df['Cost'] &gt; 3, 'Name'] </code></pre> <p>This lets you treat <code>df</code> as a single entity rather than getting an intermediate object before doing the second filtering/indexing.</p>
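<p>The difference matters most once you assign through the result. A small sketch of the failure mode, with column names taken from the question:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'Name': ['a', 'b'], 'Cost': [2, 5]})

# chained indexing: the assignment may land on a temporary copy and be silently lost
df[df['Cost'] &gt; 3]['Name'] = 'expensive'      # raises SettingWithCopyWarning

# a single .loc call is guaranteed to modify df itself
df.loc[df['Cost'] &gt; 3, 'Name'] = 'expensive'
</code></pre>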
python|pandas
3
9,081
55,751,180
Why does memory usage in Pandas report the same number for integers as for object dtype?
<p>I'm trying to understand the difference in memory usage between integers and string (objects) dtypes in Pandas.</p> <pre><code>import pandas as pd df_int = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD'), dtype=int) </code></pre> <p>As expected, this takes around 3.2 KB of memory as each column is a 64-bit integer</p> <pre><code>In [38]: df_int.info() &lt;class 'pandas.core.frame.DataFrame'&gt; RangeIndex: 100 entries, 0 to 99 Data columns (total 4 columns): A 100 non-null int64 B 100 non-null int64 C 100 non-null int64 D 100 non-null int64 dtypes: int64(4) memory usage: 3.2 KB </code></pre> <p>However, when I try to initialize it as a string, it is telling me that it has roughly the same memory usage</p> <pre><code>import pandas as pd df_str = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD'), dtype=str) In [40]: df_str.info() &lt;class 'pandas.core.frame.DataFrame'&gt; RangeIndex: 100 entries, 0 to 99 Data columns (total 4 columns): A 100 non-null object B 100 non-null object C 100 non-null object D 100 non-null object dtypes: object(4) memory usage: 3.2+ KB </code></pre> <p>When I use <code>sys.getsizeof</code>, the difference is clear. For the dataframe containing only 64-bit integers, the size is roughly 3.3 KB (including the dataframe overhead of 24 bytes)</p> <pre><code>In [44]: sys.getsizeof(df_int) Out[44]: 3304 </code></pre> <p>For the dataframe initialized with integers converted to strings, it is nearly 24 KB</p> <pre><code>In [42]: sys.getsizeof(df_str) Out[42]: 23984 </code></pre> <p>Why does memory usage in Pandas report the same number for integers as for strings (object dtype)?</p>
<p>Following the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.info.html" rel="nofollow noreferrer">docs</a>, use <code>'deep'</code> to get the actual value (otherwise it's an estimate)</p> <pre><code>df_str.info(memory_usage='deep') #&lt;class 'pandas.core.frame.DataFrame'&gt; #RangeIndex: 100 entries, 0 to 99 #Data columns (total 4 columns): #A 100 non-null object #B 100 non-null object #C 100 non-null object #D 100 non-null object #dtypes: object(4) #memory usage: 23.3 KB </code></pre> <blockquote> <p>A value of ‘deep’ is equivalent to “True with deep introspection”. Memory usage is shown in human-readable units (base-2 representation). Without deep introspection a memory estimation is made based in column dtype and number of rows assuming values consume the same memory amount for corresponding dtypes. With deep memory introspection, a real memory usage calculation is performed at the cost of computational resources.</p> </blockquote>
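<p>If you want the numbers programmatically rather than via <code>info()</code>, the per-column counterpart takes a <code>deep=True</code> flag (a small sketch using the frames defined in the question):</p> <pre><code>df_int.memory_usage(deep=True)  # bytes per column, including the index
df_str.memory_usage(deep=True)  # noticeably larger for the object columns
</code></pre>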
python|pandas
3
9,082
64,659,330
Getting the frequency over columns of each item
<p>I have been trying to get the frequency of each ID per day over a period of time. I have the following dataframe:</p> <pre><code>data1 = pd.DataFrame({
    'Date_Time': [
        '2010-01-01', '2010-01-01', '2010-04-02',
        '2010-04-01', '2011-01-01', '2011-01-01',
        '2013-01-01', '2014-01-01', '2014-01-01',
        '2015-01-01', '2016-01-01', '2011-01-01'],
    'ID': [1, 1, 1, 1, 2, 2, 3, 4, 4, 5, 6, 6]
})
</code></pre> <p>So I would like to get the frequency of each ID per day, given that there are many days in which the same ID exists. I tried the following approach, which worked partly, and I am still struggling to get it right. Here is the code I have used:</p> <pre><code>for dt in set(data1['Date_Time']):
    for id in df['ID']:
        length = len(data1[data1['Date_Time']==dt])
        data1.loc[data1['Date_Time']==dt, 'new'] = length
</code></pre> <p>The final result should look something like this:</p> <p><img src="https://i.stack.imgur.com/CNGGj.png" alt="Here are the assumed results" /></p>
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>df.groupby()</code></a> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.transform.html" rel="nofollow noreferrer"><code>transform</code></a>:</p> <pre><code>In [94]: data1['ID_freq_per_day'] = data1.groupby(['Date_Time', 'ID'])['ID'].transform('size') In [95]: data1 Out[95]: Date_Time ID ID_freq_per_day 0 2010-01-01 1 2 1 2010-01-01 1 2 2 2010-04-02 1 1 3 2010-04-01 1 1 4 2011-01-01 2 2 5 2011-01-01 2 2 6 2013-01-01 3 1 7 2014-01-01 4 2 8 2014-01-01 4 2 9 2015-01-01 5 1 10 2016-01-01 6 1 11 2011-01-01 6 1 </code></pre>
python|pandas|dataframe
0
9,083
64,965,027
How to get vendors count from dataframe in pandas?
<p>I have this dataframe:</p> <pre><code>area vendors
electronic city zomato
electronic city zomato
electronic city swiggy
Anekal, Electronic City swiggy
Konappana Agrahara, Doddathoguru, Electronic City zomato
electronic city swiggy
electronic city swiggy
electronic city swiggy
</code></pre> <p>I need the count of vendors for a particular area. For example, area=electronic city has two vendors, zomato and swiggy, so I should get vendorslist=1~2, where vendorlist is the variable where we have to store the results. I also need a column of vendorNames, which should look like this:</p> <pre><code>area vendorNames
electronic city swiggy,zomato
Anekal, Electronic City swiggy
Konappana Agrahara, Doddathoguru, Electronic City zomato
</code></pre>
<p>To get the number of unique vendors by area you can simply use</p> <pre><code>In [10]: df.groupby(['area'])['vendors'].nunique().rename('vendorlist') Out[10]: area Anekal, Electronic City 1 Konappana Agrahara, Doddathoguru, Electronic City 1 electronic city zomato 2 Name: vendorlist, dtype: int64 </code></pre> <p>Is this what you need or what do you mean with <code>vendorlist</code> and <code>1~2</code>?</p>
python|pandas|dataframe
0
9,084
64,993,857
How to set pandas column names from a dictionary of lists?
<p>I have a dict like:</p> <pre><code>actions = {0: [0, 1, 2, 4], 1: [0, 1, 8, 5, 2, 4], 2: [0, 1, 2, 5, 6]}
</code></pre> <p>And I would like to create a dataframe whose column names are the dict values, like this:</p> <pre><code>state_actions = pd.DataFrame()

Empty DataFrame
Columns: [[0, 1, 2, 4], [0, 1, 8, 5, 2, 4], [0, 1, 2, 5, 6]]
Index: []
</code></pre> <p>The idea is to have the dict values as the column names.</p>
<pre><code>pd.DataFrame(columns=actions.values()) Empty DataFrame Columns: [[0, 1, 2, 4], [0, 1, 8, 5, 2, 4], [0, 1, 2, 5, 6]] Index: [] </code></pre>
python|pandas
1
9,085
64,774,306
How to remove double quote from a csv file before reading it?
<p>I am getting the following error:</p> <blockquote> <p>pandas.errors.ParserError: '|' expected after '&quot;'</p> </blockquote> <p>The reason is because the first line has <code>'&quot;'</code> that shouldn't be there:</p> <pre><code>&quot;Name|Kind|Color|Price </code></pre> <p>I tried the following:</p> <pre><code>`pd.read_csv(filename, sep='|', usecols=fields, engine='python')` </code></pre> <p>Which produces the above error.</p> <pre><code>pd.read_csv(filename, sep='|', usecols=fields, engine='python', quotechar='&quot;', error_bad_lines=False) </code></pre> <p>This doesn't work because it drops the whole line which I need because it's column headers.</p> <p>Is there a way to fix this without rewriting the file? Maybe read it into a string and remove <code>'&quot;',</code> but then how do I read that string with the following?</p> <pre><code>pd.read_csv(filename, sep='|', usecols=fields, engine='python') </code></pre>
<p>I am not totally sure about your problem, but given a csv file like:</p> <pre><code>&quot;Name|Kind|Color|Price
alex|robot|braun|100$
</code></pre> <p>then the following code will strip any leading double quote (<code>&quot;</code>) if present:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd
import re

pd.DataFrame([
    re.match(r'&quot;*(?P&lt;line&gt;.*)', line)
    .group(&quot;line&quot;)
    .split(&quot;|&quot;)
    for line in open(&quot;tmp.csv&quot;).readlines()
])
#
#       0      1      2      3
# 0  Name   Kind  Color  Price
# 1  alex  robot  braun   100$
</code></pre>
python|pandas
0
9,086
64,843,410
Elements from list are overwritten in python
<p>I've been struggling for some days trying to figure out why the items in my Python list are overwritten. Basically I have to implement a function that rotates a matrix 8 times.</p> <pre><code>def rotate_ring(matrix, offset):
    dim = len(matrix[0])
    print(dim)
    last_element = matrix[offset][offset]
    for j in range(1 + offset, dim - offset):
        matrix[offset][j-1] = matrix[offset][j]
    matrix[offset][dim-1-offset] = matrix[1+offset][dim-1-offset]
    for i in range(1 + offset, dim - offset):
        matrix[i-1][dim-1-offset] = matrix[i][dim-1-offset]
    matrix[dim-1-offset][dim-1-offset] = matrix[dim-1-offset][dim-2-offset]
    for j in range(1+offset, dim-offset):
        matrix[dim-1-offset][dim-j] = matrix[dim-1-offset][dim-j-1]
    matrix[dim-1-offset][offset] = matrix[dim-2-offset][offset]
    for i in range(1+offset, dim-offset):
        matrix[dim-i][offset] = matrix[dim-i-1][offset]
    matrix[1+offset][offset] = last_element
    return matrix

def rotate_matrix(matrix):
    dim = len(matrix[0])
    for offset in range(0, int(dim/2)):
        matrix = rotate_ring(matrix, offset)
    return matrix
</code></pre> <p>The functions above rotate the matrix, and they work correctly because I checked them. After these functions were implemented, I implemented another function</p> <pre><code>def compass_filter(kernel):
    #result = np.zeros(gray.shape, dtype=np.float)
    #result = np.zeros(gray.shape, dtype=np.float)
    results = []
    results.append(kernel)
    for i in range(0,7):
        kernel = rotate_matrix(kernel) #rotate the kernel
        print(kernel)
        results.append(kernel) #appending to results
    return results
</code></pre> <p>that iterates 8 times (because I always have 8 kernels) and appends the kernels to a list. The problem I have encountered is that the new rotated kernel is printed:</p> <p><a href="https://i.stack.imgur.com/fiqRl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fiqRl.png" alt="enter image description here" /></a></p> <p>but when I print the final list, all I see is the last kernel printed 8 times. Does anybody know what the problem is? I have also tried creating another list inside the for loop just for the new element and then appending it to the list outside the loop. Thank you! <a href="https://i.stack.imgur.com/BlO3n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BlO3n.png" alt="enter image description here" /></a></p>
<p>Please change the <code>compass_filter</code> function as follows. The root cause is that <code>rotate_matrix</code> modifies the kernel in place and returns the very same object, so your list ends up holding 8 references to one and the same kernel; appending a copy on every iteration fixes it:</p> <pre><code>def compass_filter(kernel):
    results = []
    results.append(np.array(kernel))      # snapshot of the original kernel
    for i in range(7):
        # rotate_matrix mutates `kernel` in place and returns it;
        # np.array(...) makes an independent copy before appending
        kernel_ = np.array(rotate_matrix(kernel))
        print(kernel_)
        results.append(kernel_)
    return results
</code></pre>
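<p>Equivalently, and this is just a sketch of the same idea without NumPy, appending a deep copy of the nested list breaks the aliasing as well:</p> <pre><code>import copy

def compass_filter(kernel):
    results = [copy.deepcopy(kernel)]           # snapshot of the original
    for _ in range(7):
        kernel = rotate_matrix(kernel)          # rotates the kernel in place
        results.append(copy.deepcopy(kernel))   # snapshot of this rotation
    return results
</code></pre>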
python|numpy|opencv
1
9,087
64,940,159
Write rows from Postgres Table into a CSV file using Pandas
<p><strong>I want to write rows to the given NAS (mnt) folder from a postgres table, but only after performing some data hygiene checks</strong></p> <p><em>Below is the actual code which is working, but it only selects columns from the table and saves the data into a .txt file</em></p> <pre class="lang-py prettyprint-override"><code>import os
import csv
import psycopg2
import time
import pandas as pd

# File path and name.
filePath = '/mnt/nasdatavant/datavant/covid_implementation/postgres_files/'
timestr = time.strftime(&quot;%Y-%m-%d-%-H%M%S&quot;)
fileName = 'covid-' + timestr + '.txt'

# Database connection variable.
connect = None

# Check if the file path exists.
if os.path.exists(filePath):
    connect = psycopg2.connect(&lt;connection_details_here&gt;)
    with connect.cursor() as cursor:
        sqlSelect = &quot;select patnt_last_nm as cust_last_nm, \
        patnt_frst_nm as cust_frst_nm, \
        date(patnt_brth_dt) as cust_brth_dt, \
        patnt_gendr_cd as cust_gendr_cd, \
        patnt_postl_cd as cust_postl_cd, \
        indiv_entpr_id from datavant_o.covid_patnt_registry&quot;
        cursor.execute(sqlSelect)
        results = cursor.fetchall()
        print(results)
        headers = [i[0] for i in cursor.description]
        csvFile = csv.writer(open(filePath + fileName, 'w', newline=''),
                             delimiter='|',
                             lineterminator='\r\n',
                             escapechar='\\')
        csvFile.writerow(headers)
        csvFile.writerows(results)
    connect.commit()
    connect.close()
else:
    print(&quot;File path does not exist.&quot;)
</code></pre> <p>Note: when I say <code>print(results)</code> it yields a list of tuples, e.g.:</p> <pre><code>[('TOBY', 'MICHEAL', datetime.date(1986, 8, 23), 'M', '06472', '872956'), ('CARLIE', 'NAHLA', datetime.date(1979, 9, 29),..etc]
</code></pre> <p>So to get a dataframe I wrote <code>df = pd.DataFrame(results)</code>.</p> <p><strong>What I actually want now is to add data hygiene checks like below before writing into a .txt file</strong></p> <p>What I tried:</p> <pre class="lang-py prettyprint-override"><code>csvFile = csv.writer(open(filePath + fileName, 'w', newline=''),
                     delimiter='|',
                     lineterminator='\r\n',
                     escapechar='\\')
df = pd.DataFrame(results)
print(df)
df = df.dropna(axis=0)
df = df.loc[
    (df[0].astype('str').str.len()&gt;1) &amp;
    (df[1].astype('str').str.len()&gt;1) &amp;
    (df[4].astype('str').str.len()&gt;4) &amp;
    (df[4].astype('str').str.len()&lt;8)]
csvFile.writerow(headers)
csvFile.writerows(df)
</code></pre> <p>The error I got:</p> <pre><code>error: csvFile.writerows(df)
_csv.Error: iterable expected, not int
</code></pre> <p>Final expected output (in the .txt file):</p> <pre><code>cust_last_nm|cust_frst_nm|cust_brth_dt|cust_gendr_cd|cust_postl_cd|indiv_entpr_id
TOBY|MICHEAL|1986-08-23|M|06472|872956
CARLIE|NAHLA|1979-09-29|F|06757|499666
…etc
</code></pre> <p>I need some help to solve this scenario (new to pandas). Thanks ahead.</p>
<pre><code>import os import psycopg2 import time import pandas as pd # File path and name. filePath = '/mnt/nasdatavant/datavant/covid_implementation/postgres_files/' timestr = time.strftime(&quot;%Y-%m-%d-%-H%M%S&quot;) fileName = 'covid-' + timestr + '.csv' # Check if the file path exists. if os.path.exists(filePath): connect = psycopg2.connect(&quot;dbname=postgres user=postgres&quot;) with connect.cursor() as cursor: # create test table cursor.execute(&quot;create table if not exists test_table (test_column int)&quot;) cursor.execute(&quot;insert into test_table (test_column) values (10);&quot;) # query and import as Dataframe df = pd.read_sql(&quot;select * from test_table&quot;, connect) # add here your cleaning operations df = df.dropna(axis=0) # export df.to_csv(os.path.join(filePath, fileName), index=False) connect.commit() connect.close() else: print(&quot;File path does not exist.&quot;) </code></pre>
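<p>If you want to keep the original query and the pipe-delimited <code>.txt</code> layout from the question, here is a sketch of the same idea with the hygiene checks applied to the named columns from the SELECT (adjust column names and thresholds as needed):</p> <pre><code>df = pd.read_sql(sqlSelect, connect)

# data hygiene checks on named columns instead of positional ones
df = df.dropna(axis=0)
df = df.loc[
    (df['cust_last_nm'].astype(str).str.len() &gt; 1) &amp;
    (df['cust_frst_nm'].astype(str).str.len() &gt; 1) &amp;
    (df['cust_postl_cd'].astype(str).str.len() &gt; 4) &amp;
    (df['cust_postl_cd'].astype(str).str.len() &lt; 8)
]

# pipe-delimited file, one header row, no index column
df.to_csv(os.path.join(filePath, fileName), sep='|', index=False)
</code></pre>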
python|pandas|postgresql
1
9,088
40,286,298
How to access all flags and get their values using loop in Tensorflow?
<p>I want to write all flags and its values in external file(like txt). How can I get automatically all the contents inside <code>tf.flag</code>? is there any built-in function? or is there easy way e.g. by using loop?</p> <p>for example,</p> <pre><code>tf.flags.DEFINE_string("device","/gpu:0", "select device") tf.flags.DEFINE_integer("rnn_size","64", "number of units") </code></pre> <p>I want to get</p> <pre><code>device /gpu:0 rnn_size 64 </code></pre>
<p>For TensorFlow 1.5 you can use <code>tf.app.flags.FLAGS.flag_values_dict()</code>; they have changed the flags library one more time.</p>
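<p>A small sketch (assuming TF 1.5+ and that the flags have already been parsed, e.g. inside <code>main()</code>) of dumping every flag name and value to a text file:</p> <pre><code>flags_dict = tf.app.flags.FLAGS.flag_values_dict()

with open('flags.txt', 'w') as f:
    for name, value in flags_dict.items():
        f.write('{} {}\n'.format(name, value))
</code></pre>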
python|tensorflow
27
9,089
44,167,418
Expand category in a column to column name in pandas
<p>I'm trying to expand (not sure if it is the right word) some categorical data into columns using pandas.</p> <p>Let's say I have the following data frame:</p> <pre><code>df = pandas.DataFrame({'name': ['john', 'john', 'louis', 'louis'], 'day':['a', 'b', 'a', 'b'], 'oranges':[10, 23, 15, 5], 'apple': [5, 4, 1, 3]}) </code></pre> <p>Which produces this table:</p> <pre><code> apple day name oranges 0 5 a john 10 1 4 b john 23 2 1 a louis 15 3 3 b louis 5 </code></pre> <p>I would like to use some pandas method to produce a table like this:</p> <pre><code> apple_a apple_b name oranges_a oranges_b 0 5 4 john 10 23 1 1 3 louis 15 5 </code></pre> <p>So far I've tried:</p> <pre><code>df.pivot('name', columns='day') apple oranges day a b a b name john 5 4 10 23 louis 1 3 15 5 </code></pre> <p>My question is: how can I split my data and create more columns based on a categorical information using Pandas?</p> <p>Thanks in advance,</p> <p>Rhenan</p>
<p>You have already got the desired output; you just need to flatten the MultiIndex column names:</p> <pre><code>df = df.pivot('name', columns='day')
df.columns = ['_'.join(col).strip() for col in df.columns.values]
df = df.reset_index()

    name  apple_a  apple_b  oranges_a  oranges_b
0   john        5        4         10         23
1  louis        1        3         15          5
</code></pre>
python|pandas
3
9,090
44,338,623
Horizontal stacked bar chart in python giving multiple charts in Jupyter Notebook
<p>I am trying to make a stacked horizontal bar chart with a specified size, title, and legend location in Jupyter Notebooks. When I use other Stack Overflow solutions I get several graphs printed out instead of just one. Here's a simplified example:</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt a = [3,5,4,2,1] b = [3,4,5,2,1] c = [3,5,4,6,1] df = pd.DataFrame({'a' : a,'b' : b, 'c' : c}) df.plot.barh(stacked=True); fig, ax = plt.subplots() fig.set_size_inches(6,6) ax.set_title("My ax title") #plt.title("My plt title") # This seems to be identical to ax.set_title # Which is prefered? ax.legend(loc='upper left') plt.show() </code></pre> <p>This code gives me the following two plots. The plot is what I'm looking for but my size and legend location are ignored and the title was put on a second graph that I don't want.</p> <p><a href="https://i.stack.imgur.com/GdNBa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GdNBa.png" alt="enter image description here"></a><a href="https://i.stack.imgur.com/FLCnh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FLCnh.png" alt="enter image description here"></a></p> <p>Note: I'm using plot.barh from pandas because I got it to work but I would be just as happy to do it directly from matplotlib.</p>
<p>You can assign the return object from the <code>plot</code> to the variable <code>ax</code>. Then do what you wanted.</p> <pre><code>a = [3,5,4,2,1] b = [3,4,5,2,1] c = [3,5,4,6,1] df = pd.DataFrame({'a' : a,'b' : b, 'c' : c}) ax = df.plot.barh(stacked=True); ax.figure.set_size_inches(6,6) ax.set_title("My ax title") ax.legend(loc='upper left') </code></pre> <p><a href="https://i.stack.imgur.com/wcfNE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wcfNE.png" alt="enter image description here"></a></p> <hr> <p>Or alternatively, you could have referenced your created <code>ax</code> in the <code>plot</code> call.</p> <pre><code>a = [3,5,4,2,1] b = [3,4,5,2,1] c = [3,5,4,6,1] df = pd.DataFrame({'a' : a,'b' : b, 'c' : c}) fig, ax = plt.subplots() fig.set_size_inches(6,6) df.plot.barh(stacked=True, ax=ax); ax.set_title("My ax title") ax.legend(loc='upper left') </code></pre> <p><a href="https://i.stack.imgur.com/wcfNE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wcfNE.png" alt="enter image description here"></a></p>
python|pandas|matplotlib|jupyter-notebook
4
9,091
69,438,751
Apply Custom Function and write to CSV every N rows
<p>I have a large dataframe and am trying to apply a custom function to one of the columns. However, as the function is a GET request to a website, it is rather slow, and the apply function breaks after an hour or so.</p> <p>As such, my current thinking is to break the dataframe up into subsamples of N rows each, apply the custom function, and appending the results to a csv. I'd like to know the most ideal way to perform this, especially on the portion of iterating N rows and saving to csv every N rows.</p> <p>Thanks in advance!</p>
<p>You can try to read and process only a chunk of the data at a time, for example with <code>read_csv</code> (note that the <code>chunksize</code> argument is supported by <code>pd.read_csv</code>, not by <code>pd.read_excel</code>):</p> <pre><code>for chunk in pd.read_csv(your_file, chunksize=number_of_rows_at_a_time):
    chunk # chunk is a sub dataframe containing chunksize rows
</code></pre> <p>That said, the best option is still to find a way to avoid <code>apply</code> altogether, as it is rather slow for big dataframes; if a vectorised alternative exists, prefer it over chunking.</p>
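<p>Putting it together with the append-to-CSV part of the question, a minimal sketch (the input file name, the chunk size and <code>my_slow_request</code> are placeholders for your own data and function):</p> <pre><code>import os
import pandas as pd

def my_slow_request(value):
    # placeholder for the GET-request-based function from the question
    return value

out_path = 'results.csv'
for chunk in pd.read_csv('input.csv', chunksize=1000):
    chunk['result'] = chunk['some_column'].apply(my_slow_request)
    # write the header only for the first chunk, then keep appending
    chunk.to_csv(out_path, mode='a', header=not os.path.exists(out_path), index=False)
</code></pre> <p>This way, if the job dies after an hour, everything processed so far is already on disk and you can resume from the last chunk written.</p>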
python|pandas
0
9,092
69,638,483
Why do Keras Tuners' Fixed hyperparameters produce different results from static values?
<p>My hypertuning results are quite different even though my model should effectively be using the same parameters, depending on whether I use <code>hp.Fixed(key, value)</code> or just <code>value</code> (where <code>value</code> is say, an Int). I've verified that repeated runs of each test produce the same results following the <a href="https://keras.io/getting_started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development" rel="nofollow noreferrer">instructions for reproducibility</a> as well as setting the seed for all applicable layers/initializers/etc. even though the instructions stated they weren't necessary.</p> <h2><strong>Using <code>hp.Fixed(key, value)</code></strong></h2> <p><a href="https://i.stack.imgur.com/XEKmX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XEKmX.png" alt="Using hp.Fixed(key, value)" /></a></p> <h2>Using <code>value</code></h2> <p><a href="https://i.stack.imgur.com/xXP33.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xXP33.png" alt="Using value" /></a></p> <p>Looking at the table of all hyperparameters, it appears that <code>hp.Fixed</code> isn't even doing anything at all. All hyperparameters are being tested.</p> <p>EDIT: My custom hypermodel's hyperparameters regardless of state are being ignored by the HyperbandTuner.</p> <hr /> <p>Here is the offending code:</p> <pre><code>class MyModel(kt.HyperModel): def __init__(self, **config): self.config = config self.seed = config.get('seed') def build_model(self): model = Sequential(name=self.name) model.add(LSTM(self.units, name='LSTM')) model.add(Dense(1, name='Output', kernel_initializer=GlorotUniform(seed=self.seed))) model.compile(loss='mean_squared_error', metrics='mean_squared_error', sample_weight_mode='temporal') return model # If the user has supplied the parameter manually, use hp.Fixed() # Otherwise, use the provided hyperparameter (default) def _param(self, key, default=None): value = self.config.get(key) if value is not None: return self.hp.Fixed(key, value) else: return default def build(self, hp): self.hp = hp self.units = self._param('units', hp.Int('units', 1, 200, step=5)) return self.build_model() </code></pre>
<p>Ok so after doing some more digging I discovered (specifically through how <a href="https://reposhub.com/python/deep-learning/keras-team-keras-tuner.html#:%7E:text=hp%20%3D%20HyperParameters()%0A%23%20This%20will%20override%20the%20%60learning_rate%60%20parameter%20with%20your%0A%23%20own%20selection%20of%20choices%0Ahp.Choice(%27learning_rate%27%2C%20values%3D%5B1e-2%2C%201e-3%2C%201e-4%5D)" rel="nofollow noreferrer">this tutorial</a> declared the hyperparameters) that when you write <code>hp.[Fixed|Choice|etc.]</code>, you are immediately declaring the presence of those hyperparameters in the search space, regardless of where that code appears.</p> <p>Think of the hyperparameter declaration as an eigenclass method, not as a regular Python object that the HyperTuner picks up on from within the model.</p> <p>Essentially, each of the Fixed/Choice/etc. hyperparameter methods simultaneously sets a global hyperparameter somewhere in the background of the HyperTuner class while also returning a regular variable (Int/Float/String/Range/List/etc.) so that you can still build your model without error before the HyperTuner eventually overwrites it during the search phase.</p> <p>I was confused by this because typically the <code>hp</code> shows up as an argument in the <code>build_model()</code> method or <code>kt.HyperModel</code> class wherein the calls are assigned to local variables which are then passed to the model declaration.</p> <hr /> <p>Here's the fix for the offending code:</p> <pre><code>class MyModel(kt.HyperModel): def __init__(self, **config): self.config = config self.seed = config.get('seed') def build_model(self): model = Sequential(name=self.name) model.add(LSTM(self.units, name='LSTM')) model.add(Dense(1, name='Output', kernel_initializer=GlorotUniform(seed=self.seed))) model.compile(loss='mean_squared_error', metrics='mean_squared_error', sample_weight_mode='temporal') return model # If the user has supplied the parameter manually, use hp.Fixed() # Otherwise, use the provided hyperparameter (default) def _param(self, key, default=None): value = self.config.get(key) if value is not None: return self.hp.Fixed(key, value) else: return default() def build(self, hp): self.hp = hp self.units = self._param('units', lambda: hp.Int('units', 1, 200, step=5)) return self.build_model() </code></pre>
tensorflow|keras|hyperparameters
1
9,093
41,203,959
Conditionally format Python pandas cell
<p>I am trying to color, highlight, or change the font of cells in a Python pandas DataFrame based on the value of the cell, e.g. if the cells in each row are bigger than the cell in the first column of that row, then highlight the cell as red (or any other color), otherwise leave it as it is.</p> <p>I wrote a for loop here:</p> <pre><code>for index in range(0, df.shape[0]):
    for column in range(1, df.shape[1]): # from 1 not from 0 because I only need
                                         # to compare the 2nd to the last cell of each row with the 1st cell in the row
        if df.iloc[index][column] - df_BDE_n_months_avg_std_pct.iloc[index][0] &gt; 0:
            then "PLEASE PUT YOUR HELP HERE, I NEED A PIECE OF CODE THAT CAN HIGHLIGHT THE CELL"
        else:
            "DO NOTHING"
</code></pre> <p>So far I haven't found a way to do it. Any help will be great.</p>
<p>From <a href="http://pandas.pydata.org/pandas-docs/stable/style.html" rel="noreferrer">the style docs:</a></p> <blockquote> <p>You can apply conditional formatting, the visual styling of a DataFrame depending on the data within, by using the DataFrame.style property.</p> </blockquote> <pre><code>import pandas as pd df = pd.DataFrame([[2,3,1], [3,2,2], [2,4,4]], columns=list("ABC")) df.style.apply(lambda x: ["background: red" if v &gt; x.iloc[0] else "" for v in x], axis = 1) </code></pre> <p><a href="https://i.stack.imgur.com/Y1k2G.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Y1k2G.png" alt="enter image description here"></a></p> <hr> <p><em>Edit</em>: to format specific cells, you can add condition checkers to check the name of element with <code>Series.iteritems()</code> or check the index with <code>enumerate()</code>, e.g. if you want to format starting from column 3, you can use enumerate and check the index:</p> <pre><code>df = pd.DataFrame([[2,3,-3], [3,2,7], [2,4,4]], columns=list("ABC")) df.style.apply(lambda x: ["background-color: #ff33aa" if (i &gt;= 2 and (v &gt; x.iloc[0] + x.iloc[1] or v &lt; x.iloc[0] - x.iloc[1])) else "" for i, v in enumerate(x)], axis = 1) </code></pre> <p><a href="https://i.stack.imgur.com/wkftq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/wkftq.png" alt="enter image description here"></a></p>
python|pandas|dataframe|conditional-formatting|pandas-styles
38
9,094
54,134,593
Finding max value by row and displaying its column name correspondingly
<p>I have the following array, which consists of 4 columns and 2 rows. I wish to find the maximum value in the second row and return the corresponding value from the first row. In other words, my output should be 521 (100 has the value 99 as well, but I need to return the first one).</p> <p>I've tried this (student_ids is the first row, and grades is the lower one, the average of some grades array); the following function returned the value 624.</p> <pre><code>def find_student_with_max_avg(grades, student_ids):
    return np.max(np.vstack((student_ids, np.mean(grades, axis=0))))
</code></pre> <p>array: [[521 597 624 100] [ 99 73 97 99]]</p> <p>Keep in mind that the solution should be simple, around one line, as we are not allowed to use loops and it should rely on basic numpy methods. No imports other than numpy as well.</p>
<p>Input:</p> <pre><code>array : [[521 597 624 100] [ 99 73 97 99]]
</code></pre> <p>First, find the index of the max in the second row like this:</p> <pre><code>idx = np.argmax(arr[1])
</code></pre> <p>Then, extract the element at that index from the first row:</p> <pre><code>print(arr[0][idx])
</code></pre> <p>Output:</p> <pre><code>521
</code></pre>
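<p>If it has to fit in a single expression (as the question asks), the same idea can be written as a one-liner, sketched here for a 2-row array <code>arr</code>:</p> <pre><code>arr[0][np.argmax(arr[1])]  # 521 for the example array
</code></pre>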
python|arrays|numpy
0
9,095
54,093,248
iOS Firebase ML Kit Simple Audio Recognition "Failed to create a TFLite interpreter for the given model"
<p>I have been trying to implement the <a href="https://www.tensorflow.org/tutorials/sequences/audio_recognition" rel="nofollow noreferrer">Simple Audio Recognition</a> Tensorflow sample in iOS using the <a href="https://firebase.google.com/docs/ml-kit/ios/use-custom-models?authuser=0" rel="nofollow noreferrer">Firebase's ML kit</a>. I have successfully trained the model and converted it into a TFlite file. The model takes the Audio(wav) file path as input([String]) and gives the predictions as output(float32). My iOS code is fairly simple</p> <pre><code>func initMLModel(){ /*Initializing local TFLite model*/ guard let modelPath = Bundle.main.path(forResource: "converted_model", ofType: "tflite") else { return } let myLocalModel = LocalModelSource.init(modelName: "My", path: modelPath) let registrationSuccessful = ModelManager.modelManager().register(myLocalModel) let options = ModelOptions(cloudModelName: nil, localModelName: "My") let interpreter = ModelInterpreter.modelInterpreter(options: options) let ioOptions = ModelInputOutputOptions() do { try ioOptions.setInputFormat(index: 0, type: .unknown, dimensions: []) /*input is string path. Since string is not defined, setting it as unknown.*/ try ioOptions.setOutputFormat(index: 0, type: .float32, dimensions: [1,38]) /* output is 1 of 38 labelled classes*/ } catch let error as NSError { print("Failed to set IO \(error.debugDescription)") } let inputs = ModelInputs() var audioData = Data() let audiopath = Bundle.main.path(forResource: "audio", ofType: "wav") do { audioData = try Data.init(contentsOf: URL.init(fileURLWithPath: audiopath!)) //try inputs.addInput(audioData) /*If the input type is direct audio data*/ try inputs.addInput([audiopath]) } catch let error as NSError { print("Cannot get audio file data \(error.debugDescription)") return } interpreter.run(inputs: inputs, options: ioOptions) { (outputs, error) in if error != nil { print("Error running the model \(error.debugDescription)") return } do { let output = try outputs!.output(index: 0) as? [[NSNumber]] let probabilities = output?[0] guard let labelsPath = Bundle.main.path(forResource: "conv_labels", ofType: "txt") else { return } let fileContents = try? String.init(contentsOf: URL.init(fileURLWithPath: labelsPath)) guard let labels = fileContents?.components(separatedBy: "\n") else {return} for i in 0 ..&lt; labels.count { if let probability = probabilities?[i] { print("\(labels[i]) : \(probability)") } } }catch let error as NSError { print("Error in parsing the Output \(error.debugDescription)") return } } } </code></pre> <p>But when i run this i get the following error output <code>Failed to create a TFLite interpreter for the given model</code>. The Complete Log of the sample app is as below</p> <pre><code> 2019-01-07 18:22:31.447917+0530 sample_core_ML[67500:3515789] - &lt;AppMeasurement&gt;[I-ACS036002] Analytics screen reporting is enabled. Call +[FIRAnalytics setScreenName:setScreenClass:] to set the screen name or override the default screen class name. To disable screen reporting, set the flag FirebaseScreenReportingEnabled to NO (boolean) in the Info.plist 2019-01-07 18:22:33.354449+0530 sample_core_ML[67500:3515686] libMobileGestalt MobileGestalt.c:890: MGIsDeviceOneOfType is not supported on this platform. 
2019-01-07 18:22:34.789665+0530 sample_core_ML[67500:3515812] 5.15.0 - [Firebase/Analytics][I-ACS023007] Analytics v.50400000 started 2019-01-07 18:22:34.790814+0530 sample_core_ML[67500:3515812] 5.15.0 - [Firebase/Analytics][I-ACS023008] To enable debug logging set the following application argument: -FIRAnalyticsDebugEnabled (see ) 2019-01-07 18:22:35.542993+0530 sample_core_ML[67500:3515823] [BoringSSL] nw_protocol_boringssl_get_output_frames(1301) [C1.1:2][0x7f9db0701d70] get output frames failed, state 8196 2019-01-07 18:22:35.543205+0530 sample_core_ML[67500:3515823] [BoringSSL] nw_protocol_boringssl_get_output_frames(1301) [C1.1:2][0x7f9db0701d70] get output frames failed, state 8196 2019-01-07 18:22:35.543923+0530 sample_core_ML[67500:3515823] TIC Read Status [1:0x0]: 1:57 2019-01-07 18:22:35.544070+0530 sample_core_ML[67500:3515823] TIC Read Status [1:0x0]: 1:57 2019-01-07 18:22:39.981492+0530 sample_core_ML[67500:3515823] 5.15.0 - [Firebase/MLKit][I-MLK002000] ModelInterpreterErrorReporter: Didn't find custom op for name 'DecodeWav' with version 1 2019-01-07 18:22:39.981686+0530 sample_core_ML[67500:3515823] 5.15.0 - [Firebase/MLKit][I-MLK002000] ModelInterpreterErrorReporter: Registration failed. Failed to set IO Error Domain=com.firebase.ml Code=3 "input format 0 has invalid nil or empty dimensions." UserInfo={NSLocalizedDescription=input format 0 has invalid nil or empty dimensions.} 2019-01-07 18:22:40.604961+0530 sample_core_ML[67500:3515812] 5.15.0 - [Firebase/MLKit][I-MLK002000] ModelInterpreterErrorReporter: Didn't find custom op for name 'DecodeWav' with version 1 2019-01-07 18:22:40.605199+0530 sample_core_ML[67500:3515812] 5.15.0 - [Firebase/MLKit][I-MLK002000] ModelInterpreterErrorReporter: Registration failed. Error running the model Optional(Error Domain=com.firebase.ml Code=2 "Failed to create a TFLite interpreter for the given model (/Users/minimaci73/Library/Developer/CoreSimulator/Devices/7FE413C1-3820-496A-B0CE-033BE2F3212A/data/Containers/Bundle/Application/868CB2FE-77D8-4B1F-8853-C2E17ECA63F2/sample_core_ML.app/converted_model.tflite)." UserInfo={NSLocalizedDescription=Failed to create a TFLite interpreter for the given model (/Users/minimaci73/Library/Developer/CoreSimulator/Devices/7FE413C1-3820-496A-B0CE-033BE2F3212A/data/Containers/Bundle/Application/868CB2FE-77D8-4B1F-8853-C2E17ECA63F2/sample_core_ML.app/converted_model.tflite).}) </code></pre> <p>When looked at this line <code>Didn't find custom op for name 'DecodeWav'</code> I looked up on the custom supported ops and found that Tensorflow already supports this in the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/ops/audio_ops.cc" rel="nofollow noreferrer">audio_ops.cc</a> by default.</p> <p><strong>Details</strong></p> <p>My Tensorflow Version : 1.12.0</p> <p>Environment : Conda</p> <p>OS Version : Mac OSX Mojave 10.14.2</p> <p>Deployment target : ios 12.0</p> <p>Installation type : Pod Installation (pod 'Firebase/MLModelInterpreter')</p> <p>But i ran my training model first in v1.9.0. Then updated the Tensorflow to latest v1.12.0 to run the TFLite Convertor. 
Both are master branch.</p> <p><strong>My TFLite Convertor code Python</strong></p> <pre><code>import tensorflow as tf graph_def_file = "my_frozen_graph.pb" input_arrays = ["wav_data"] output_arrays = ["labels_softmax"] input_shape = {"wav_data" : [1,99,40,1]} converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph( graph_def_file, input_arrays, output_arrays, input_shape) converter.allow_custom_ops = True tflite_model = converter.convert() open("converted_model.tflite", "wb").write(tflite_model) </code></pre>
<p>I posted this same question in the Firebase quickstart iOS repository, and I got the following response: <a href="https://github.com/firebase/quickstart-ios/issues/614#issuecomment-453218055" rel="nofollow noreferrer">DecodeWav op is never supported by TensorFlowLite</a>. So at present TensorFlow Lite does not support audio processing, even though TensorFlow itself supports audio processing.</p>
python|ios|firebase|tensorflow|firebase-mlkit
0
9,096
53,844,459
Drop duplicates but keep rows having largest value in a given column per group
<p>I have a DF like this:</p> <pre><code> Name Gender Age Level Pikachu Male 4 8 Charmander Female 5 7 Charmander Female 5 7 Squirtle Male 3 6 Squirtle Male 3 9 Squirtle Female 4 9 </code></pre> <p>I want it to look like this:</p> <pre><code> Name Gender Age Level Pikachu Male 4 8 Charmander Female 5 7 Squirtle Male 3 9 Squirtle Female 4 9 </code></pre> <p>I don't know how to explain what I want to do in English so I'll write it in pseudocode.</p> <p>Basically:</p> <pre><code>If Name, Gender and Age are the same: If there is a difference in levels: Keep the row with higher level If there is a tie: Keep a random one </code></pre> <p>Any idea is appreciated!</p>
<p>Check with <code>sort_values</code> + <code>drop_duplicates</code>: sorting by <code>Level</code> and keeping the last duplicate retains the row with the highest level, and when levels are tied, whichever tied row ends up last after the sort is kept, which satisfies the &quot;keep a random one&quot; requirement.</p> <pre><code>df=df.sort_values('Level').drop_duplicates(['Name','Gender','Age'],keep='last')
df
         Name  Gender  Age  Level
2  Charmander  Female    5      7
0     Pikachu    Male    4      8
4    Squirtle    Male    3      9
5    Squirtle  Female    4      9
</code></pre>
python|pandas|dataframe|group-by
3
9,097
66,231,322
Determining the count and percentage of numbers in a string (pandas)
<p>I have a column in a dataset df which contains strings like these</p> <pre><code>Webs https://www.mhouse.com/107462464135489/posts/please-lets-be-guidedun-is-where-the-code/142970213918047/ https://www.msed.com/IKONINIBWANASEEDMARCH2020.html https://www.msed.com/ https://carrice.com/jen/stat/1241025420562178050?lang=en </code></pre> <p>...</p> <p>I would like to determine the count and the percentage of numbers within them; so, for instance</p> <pre><code>Count Percentage 15 (and the percentage compared to the length of the string) 4 ... 0 ... 19 ... </code></pre> <p>If I am not wrong I'd use a combination of is digit for determining the number of digits in the strings and len() for determining the length of the string, then the percentage.</p>
<p>You can count the number of digits in a string using <code>Series.str.count</code> with a regular expression. Additionally, you can get the length of each string in a series with <code>Series.str.len()</code>. Once you do that, calculating the percentage is straight forward!</p> <pre><code>df[&quot;digit_count&quot;] = df[&quot;Webs&quot;].str.count(&quot;\d&quot;) df[&quot;total_characters&quot;] = df[&quot;Webs&quot;].str.len() df[&quot;digit_percentage&quot;] = df[&quot;digit_count&quot;] / df[&quot;total_characters&quot;] * 100 print(df) Webs digit_count total_characters digit_percentage 0 https://www.mhouse.com/107462464135489/posts/p... 30 103 29.126214 1 https://www.msed.com/IKONINIBWANASEEDMARCH2020... 4 51 7.843137 2 https://www.msed.com/ 0 21 0.000000 3 https://carrice.com/jen/stat/12410254205621780... 19 56 33.928571 </code></pre>
python|regex|pandas
5
9,098
66,246,168
Pandas - how to build an expanding window dataframe from a series
<p>I'm not allowed to use any df.expanding().apply() solutions but need to go through an approach as the following. Therefore, given a pd.Series such as</p> <pre><code>2008-12-31 1.4174 2009-01-01 1.4184 2009-01-02 1.4098 2009-01-05 1.4000 2009-01-06 1.3882 2009-01-07 1.4079 2009-01-08 1.4045 2009-01-09 1.4148 2009-01-12 1.4716 2009-01-13 1.4979 </code></pre> <p>I would like to build a dataframe such as</p> <pre><code>2008-12-31 1.4174 1.4174 1.4174 1.4174 1.4174 2009-01-01 1.4184 1.4184 1.4184 1.4184 1.4184 2009-01-02 1.4098 1.4098 1.4098 1.4098 1.4098 2009-01-05 1.4000 1.4000 1.4000 1.4000 1.4000 2009-01-06 1.3882 1.3882 1.3882 1.3882 1.3882 2009-01-07 NaN 1.4079 1.4079 1.4079 1.4079 2009-01-08 NaN NaN 1.4045 1.4045 1.4045 2009-01-09 NaN NaN NaN 1.4148 1.4148 2009-01-12 NaN NaN NaN NaN 1.4716 2009-01-13 NaN NaN NaN NaN NaN </code></pre> <p>How can I proceed? Thanks</p>
<p>You can use <code>shift</code> and construct the data frame with a dict comprehension:</p> <pre><code>pd.DataFrame({ i: s.shift(i).shift(-i) for i in range(1,6) })
</code></pre> <p>Output:</p> <pre><code>                 1       2       3       4       5
Date
2008-12-31  1.4174  1.4174  1.4174  1.4174  1.4174
2009-01-01  1.4184  1.4184  1.4184  1.4184  1.4184
2009-01-02  1.4098  1.4098  1.4098  1.4098  1.4098
2009-01-05  1.4000  1.4000  1.4000  1.4000  1.4000
2009-01-06  1.3882  1.3882  1.3882  1.3882  1.3882
2009-01-07  1.4079  1.4079  1.4079  1.4079     NaN
2009-01-08  1.4045  1.4045  1.4045     NaN     NaN
2009-01-09  1.4148  1.4148     NaN     NaN     NaN
2009-01-12  1.4716     NaN     NaN     NaN     NaN
2009-01-13     NaN     NaN     NaN     NaN     NaN
</code></pre>
python|pandas|apply|rolling-computation
0
9,099
66,125,362
Programmatically picking an inequality operator
<p>I'm trying to perform actions based on input from a config file. In the config, there will be specifications for a signal, a comparison, and a value. I'd like to translate that comparison string into a choice of inequality operator. Right now, this looks like</p> <pre class="lang-py prettyprint-override"><code>def compute_mask(self, signal, comparator, value, df): if comparator == '&lt;': mask = df[signal] &lt; value elif comparator == '&lt;=': mask = df[signal] &lt;= value elif comparator == '=': mask = df[signal] == value elif comparator == '&gt;=': mask = df[signal] &gt;= value elif comparator == '&gt;': mask = df[signal] &gt; value elif comparator == '!=': mask = df[signal] != value return mask </code></pre> <p>In other applications, I was able to do something like</p> <pre class="lang-py prettyprint-override"><code>func = { 'a': func_a, 'b': func_b, 'c': func_c }.get(func_choice) func(value_to_process) </code></pre> <p>in order to easily avoid having to repeat code over and over. How would I go about doing the same thing here?</p>
<p>You can use the <code>operator</code> module to get functions equivalent to each of the operators.</p> <pre><code>import operator funcs = { '&lt;': operator.lt, '&lt;=': operator.le, '=': operator.eq, '&gt;': operator.gt, '&gt;=': operator.ge, '!=': operator.ne } def compute_mask(self, signal, comparator, value, df): return funcs[comparator](df[signal], value) </code></pre>
python|pandas|dataframe
3