Unnamed: 0: int64, values 0 to 378k
id: int64, values 49.9k to 73.8M
title: string, lengths 15 to 150
question: string, lengths 37 to 64.2k
answer: string, lengths 37 to 44.1k
tags: string, lengths 5 to 106
score: int64, values -10 to 5.87k
7,700
62,567,073
Invalid syntax error when setting root = to in PyTorch
<pre><code>import torchvision from torchvision import transforms train_data_path=&quot;./train/&quot; transforms = transforms.Compose([ transforms.Resize(64), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] ) ]) train_data = torchvision.datasets.ImageFolder (root=train_data_path,transform=transforms) </code></pre> <p>This is the error message:</p> <pre><code>File &quot;&lt;ipython-input-4-e470172b3902&gt;&quot;, line 8 (root=train_data_path,transform=transforms) ^ SyntaxError: invalid syntax </code></pre> <p>How would I be able to fix this?</p>
<p>You need the opening parentheses to be directly connected to the function, without any whitespace separating them. Try replacing the final two lines with:</p> <pre class="lang-py prettyprint-override"><code>train_data = torchvision.datasets.ImageFolder( root=train_data_path, transform=transforms ) </code></pre>
python|syntax-error|pytorch
1
7,701
62,777,583
Keras timeseriesgenerator: how to predict multiple data points in one step?
<p>I have meteorological data that looks like this:</p> <pre><code>DateIdx winddir windspeed hum press temp 2017-04-17 00:00:00 0.369397 0.155039 0.386792 0.196721 0.238889 2017-04-17 00:15:00 0.363214 0.147287 0.429245 0.196721 0.233333 2017-04-17 00:30:00 0.357032 0.139535 0.471698 0.196721 0.227778 2017-04-17 00:45:00 0.323029 0.127907 0.429245 0.204918 0.219444 2017-04-17 01:00:00 0.347759 0.116279 0.386792 0.213115 0.211111 2017-04-17 01:15:00 0.346213 0.127907 0.476415 0.204918 0.169444 2017-04-17 01:30:00 0.259660 0.139535 0.566038 0.196721 0.127778 2017-04-17 01:45:00 0.205564 0.073643 0.523585 0.172131 0.091667 2017-04-17 02:00:00 0.157650 0.007752 0.481132 0.147541 0.055556 2017-04-17 02:15:00 0.122101 0.003876 0.476415 0.122951 0.091667 </code></pre> <p>My aim: to use the keras timeseriesgenerator (<code>from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator</code>) to train and predict multiple data points (multiple rows) at once, e.g. not to do</p> <pre><code>[input X] | [targets y] [dp1, dp2, dp3, dp4, dp5] | [dp6] [dp2, dp3, dp4, dp5, dp6] | [dp7] [dp3, dp4, dp5, dp6, dp7] | [dp8] ... </code></pre> <p>but to do</p> <pre><code>[input X] | [targets y] [dp1, dp2, dp3, dp4, dp5] | [dp6, dp7, dp8] [dp2, dp3, dp4, dp5, dp6] | [dp7, dp8, dp9] [dp3, dp4, dp5, dp6, dp7] | [dp8, dp9, dp10] ... </code></pre> <p>I can achieve the top kind of predictions with</p> <pre><code>generator = TimeseriesGenerator( X, X, length=5, sampling_rate=1, stride=1, start_index=0, end_index=None, shuffle=False, reverse=False, batch_size=1, ) </code></pre> <p>, but I haven't figured out how I can tweak the generator options for the second kind of predictions.</p> <p>Is there an easy way to achieve the desired prediction window of 3 data points with the timeseriesgenerator? If not, can you suggest me some code to bin my predictions <code>y</code> to achieve the task? Tnx</p>
<p>What you can do with the TimeSeries generator is to change the target entry. Concretely, since you want to predict the next thee timesteps, your target should be something of the form</p> <pre><code> target=np.concatenate((np.roll(X, -1, axis=0), np.roll(X, -2, axis=0), np.roll(X, -3, axis=0) ),axis=1) </code></pre> <p>The roll will shift your rows downward, you should probably throw away the last two rows of the target. Therefore when you define your generator , you can now use the <code>target</code> object as a parameter:</p> <pre><code>generator = TimeseriesGenerator( X, target, length=5, sampling_rate=1, stride=1, start_index=0, end_index=None, shuffle=False, reverse=False, batch_size=1, ) </code></pre> <p>Note that now, when you do call <code>model.fit</code> it expect output shaped like 3<em>dim_col</em>X, so your model architecture and/or loss function needs to account for this, you should therefore change the output dim of your last layer directly, or combine 3 models using <code>layer.concatenate([model_timeplus1,model_timeplus2,model_timeplus3], axis=-1)</code> if you choose a shared weight model (the three predicted values generated by one single nn <code>model_timeplus1</code>):</p> <pre><code>layer.concatenate([model_timeplus1,model_timeplus1,model_timeplus3], axis=-1) </code></pre> <p>It is equivalent to an unrolled <a href="https://stanford.edu/%7Eshervine/teaching/cs-230/cheatsheet-recurrent-neural-networks" rel="nofollow noreferrer">recursive neural network</a>.</p>
python|tensorflow|keras|time-series|recurrent-neural-network
0
7,702
54,526,939
Converting column in pandas to datetime
<p>I am trying to convert a pandas data frame column from string to a datetime type.I am sure I am doing it correctly but am getting an error saying that the format does not match and I can not work out why. My strings I want to convert look like this</p> <pre><code>Date 2019-02-03 04:09:34 2019-02-02 14:21:03 2019-02-02 16:54:13 2019-02-02 17:39:19 2019-02-02 09:13:38 2019-01-05 09:03:24 2019-02-02 16:50:34 2019-02-02 16:05:50 2019-02-02 07:28:10 </code></pre> <p>I am trying this on the file containing this data </p> <pre><code>file['Date1'] = pd.to_datetime(file['Date'], format='%Y-%m-%d :%H:%M:%S')` </code></pre> <p>But repetitively get the error </p> <pre><code> ValueError: time data ' Date' does not match format '%Y-%m-%d :%H:%M:%S' (match)` </code></pre> <p>I have been able to make this work but only for one row not the entire column</p> <pre><code>file['Date1'] = datetime.strptime(file['Date'][1], '%Y-%m-%d %H:%M:%S') </code></pre> <p>Please let me know what I am doing wrong, Thank you </p>
<p>The problem is your additional <code>:</code> before <code>%:H</code> in your format string. <code>pandas</code> is looking for a colon and can't find it in the data you provide.</p> <p>Additionally, I tested <code>pd.to_datetime</code> without a format string, and it appears to be able to infer the format, so you could do that too.</p>
python|pandas|datetime
1
7,703
54,534,963
Make my Nested loops Works simpler (Operating Time is Higher)
<p>I am a learner in nested loops in python.</p> <p>Below I have written my code. I want to make my code simpler, since when I run the code it takes so much time to produce the result.</p> <p>I have a list which contains 1000 values:</p> <pre><code>Brake_index_values = [ 44990678, 44990679, 44990680, 44990681, 44990682, 44990683, 44997076, 44990684, 44997077, 44990685, ... 44960673, 8195083, 8979525, 100107546, 11089058, 43040161, 43059162, 100100533, 10180192, 10036189] </code></pre> <p>I am storing the element no 1 in another list</p> <pre><code>original_top_brake_index = [Brake_index_values[0]] </code></pre> <p>I created a temporary list called temp and a numpy array for iteration through Loop:</p> <pre><code>temp =[] arr = np.arange(0,1000,1) </code></pre> <p>Loop operation:</p> <pre><code>for i in range(1, len(Brake_index_values)): if top_15_brake &lt;= 15: a1 = Brake_index_values[i] #a2 = Brake_index_values[j] a3 = arr[:i] for j in a3: a2 = range(Brake_index_values[j] - 30000, Brake_index_values[j] + 30000) if a1 in a2: pass else: temp.append(a1) if len(temp)== len(a3): original_top_brake_index.append(a1) top_15_brake += 1 del temp[:] else: del temp[:] continue </code></pre> <p>I am comparing the <code>Brake_index_values[1]</code> element available between the range of 30000 before and after <code>Brake_index_values[0]</code> element, that is `range(Brake_index_values[0]-30000, Brake_index_values[0]+30000).</p> <p>If the <code>Brake_index_values[1]</code> available between the range, I should ignore that element and go for the next element <code>Brake_index_values[2]</code> and follow the same process as before for <code>Brake_index_values[0]</code> &amp; <code>Brake_index_values[1]</code></p> <p>If it is available, store the Value, in <code>original_top_brake_index</code> thorough append operation. </p> <p>In other words :</p> <p>(Lets take 3 values a,b &amp; c. I am checking whether the value b is in range between (a-30000 to a+30000). Possibility 1: If b is in between (a-30000 to a+30000) , neglect that element (Here I am storing inside a temporary list). Then the same process continues with c (next element) Possibility 2: If b is not in b/w those range put b in another list called original_top_brake_index (this another list is the actual result what i needed)</p> <p>The result I get:</p> <p>It is working, but it takes so much time to complete the operation and sometimes it shows MemoryError.</p> <p>I just want my code to work simpler and efficient with simple operations.</p>
<p>We can use the <code>bisect</code> module to shorten the elements we actually have to lookup by finding the smallest element that's greater or less than the current value. We will use recipes from <a href="https://docs.python.org/3/library/bisect.html#searching-sorted-lists" rel="nofollow noreferrer">here</a></p> <p>Let's look at this example:</p> <pre><code>from bisect import bisect_left, bisect_right def find_lt(a, x): 'Find rightmost value less than x' i = bisect_left(a, x) if i: return a[i-1] return def find_gt(a, x): 'Find leftmost value greater than x' i = bisect_right(a, x) if i != len(a): return a[i] return vals = [44990678, 44990679, 44990680, 44990681, 44990682, 589548954, 493459734, 3948305434, 34939349534] vals.sort() # we have to sort the values for bisect to work passed = [] originals = [] for val in vals: passed.append(val) l = find_lt(passed, val) m = find_gt(passed, val) cond1 = (l and l + 30000 &gt;= val) cond2 = (m and m - 30000 &lt;= val) if not l and not m: originals.append(val) continue elif cond1 or cond2: continue else: originals.append(val) </code></pre> <p>Which gives us:</p> <pre><code>print(originals) [44990678, 493459734, 589548954, 3948305434, 34939349534] </code></pre> <p>There might be another, more mathematical way to do this, but this should at least simplify your code.</p>
python|arrays|loops|numpy|for-loop
1
7,704
73,590,483
How to append multiple lists in Python
<p>I want to append two lists <code>A[0]</code> and <code>A</code> but I am getting an error. I present the expected output.</p> <pre><code>import numpy as np A=[] A[0]=[np.array([[0.4]])] A=[np.array([[0.15]])] print(&quot;A =&quot;,A) </code></pre> <p>The error is</p> <pre><code>in &lt;module&gt; A[0]=[np.array([[0.4]])] IndexError: list assignment index out of range </code></pre> <p>The expected output is</p> <pre><code>[array([[0.4]]), array([[0.15]])] </code></pre>
<p>In your code, <code>A[0]</code> is not a list; it's an element (the first one, to be precise) in list <code>A</code>. But <code>A</code> is empty, so there is no element <code>A[0]</code>in it that you can assign a value to, hence the &quot;index out of range&quot; error. Try the following instead:</p> <pre><code>import numpy as np A=[] A.append(np.array([[0.4]])) A.append(np.array([[0.15]])) print(&quot;A =&quot;,A) </code></pre>
python|list|numpy
2
7,705
73,787,437
Convert python output to csv file
<p>I have 500 files with their file size. Now I want to put them in a csv file.</p> <pre><code>Name size A.jpg 16.3 B.jpg 310.11 </code></pre> <p>I have converted Name and Value in two lists.</p> <pre><code>Result=[] for i in name: for j in value: Result.append(zip(i,j)) print(Result) </code></pre> <p>It is not working</p>
<pre><code>import pandas as pd read_file = pd.read_csv (r'Path where the Text file is stored\File name.txt') read_file.to_csv (r'Path where the CSV will be saved\File name.csv', index=None) </code></pre>
python|pandas|csv
0
7,706
71,432,173
Pandas Time series manipulation with large panel data
<p>Here is my large panel dataset:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Date</th> <th>x1</th> <th>x2</th> <th>x3</th> </tr> </thead> <tbody> <tr> <td>2017-07-20</td> <td>50</td> <td>60</td> <td>Kevin</td> </tr> <tr> <td>2017-07-21</td> <td>51</td> <td>80</td> <td>Kevin</td> </tr> <tr> <td>2016-05-23</td> <td>100</td> <td>200</td> <td>Cathy</td> </tr> <tr> <td>2016-04-20</td> <td>20</td> <td>20</td> <td>Cathy</td> </tr> <tr> <td>2019-01-02</td> <td>50</td> <td>60</td> <td>Leo</td> </tr> </tbody> </table> </div> <p>This dataset contains <strong>billions of rows</strong>. What I would like to do is that I would like to calculate the 1-day different in terms of percentage for x1 and x2: Denote t and t+1 to the time representing today and tomorrow. I would like to calculate <code>(x1_{t+1} - x2_t) / x2_t</code></p> <p><strong>First</strong> I used the fastest way in terms of writing:</p> <p>I created a nested list containing all the target values of each group of <code>x3</code>:</p> <pre><code>nested_list = [] flatten_list = [] for group in df.x3.unique(): df_ = df[df.x3 == group] nested_list.append((df_.x1.shift(-1) / df_.x2) / df_.x2)) for lst in nested_list: for i in lst: flatten_list.append(i) df[&quot;target&quot;] = flatten_list </code></pre> <p>However, this method will literality take a year to run, which is not implementable.</p> <p>I also tried the native pandas <code>groupby</code> method for potentially runnable outcome but it <strong>DID NOT</strong> seem to work:</p> <pre><code>def target_calculation(x): target = (x.x1.shift(-1) - x.x2) / x.x2 return target df[&quot;target&quot;] = df.groupby(&quot;x3&quot;)[[&quot;x1&quot;, &quot;x2&quot;]].apply(target_calculation) </code></pre> <p>How can I calculate this without using for loop or possibly vectorize the whole process?</p>
<p>You could <code>groupby</code> + <code>shift</code> &quot;x1&quot; and subtract &quot;x2&quot; from it:</p> <pre><code>df['target'] = (df.groupby('x3')['x1'].shift(-1) - df['x2']) / df['x2'] </code></pre> <p>Output:</p> <pre><code> Date x1 x2 x3 target 0 2017-07-20 50 60 Kevin -0.15 1 2017-07-21 51 80 Kevin NaN 2 2016-05-23 100 200 Cathy -0.90 3 2016-04-20 20 20 Cathy NaN 4 2019-01-02 50 60 Leo NaN </code></pre> <p>Note that</p> <pre><code>(df.groupby('x3')['x1'].shift(-1) / df['x2']) / df['x2'] </code></pre> <p>produces the output equivalent to <code>flatten_list</code> but I don't think this is your true desired output but rather a typo.</p>
python|pandas|dataframe|pandas-groupby
1
7,707
71,399,081
How to save empty pyspark dataframe with header into csv file?
<p>Hi I have dataframe which is having only columns. There is no data for columns. But I am trying to save into file, no header is saving. File is totally blank.</p> <p>Example:</p> <p>df.show()</p> <pre><code>+-----+----------------------+-------+---------------------+------------------------+----------------------------+--------------------------+----------------------+---------------+------------------------+-------------+-----------------+-----------------------+--------------+---------------+-----------+-----------------+-----------+------+--------+----------------+----------------------+--------------+-----+-------+---------+------+--------+ |owner|account_priority_score|account|call_objective_clm_id|call_objective_from_date|call_objective_on_by_default|call_objective_record_type|call_objective_to_date|display_dismiss|display_mark_as_complete|display_score|email_template_id|email_template_vault_id|email_template|expiration_date|no_homepage|planned_call_date|posted_date|reason|priority|record_type_name|suggestion_external_id|supress_reason|title|product|survey_id|groups|insrt_dt| +-----+----------------------+-------+---------------------+------------------------+----------------------------+--------------------------+----------------------+---------------+------------------------+-------------+-----------------+-----------------------+--------------+---------------+-----------+-----------------+-----------+------+--------+----------------+----------------------+--------------+-----+-------+---------+------+--------+ +-----+----------------------+-------+---------------------+------------------------+----------------------------+--------------------------+----------------------+---------------+------------------------+-------------+-----------------+-----------------------+--------------+---------------+-----------+-----------------+-----------+------+--------+----------------+----------------------+--------------+-----+-------+---------+------+--------+ </code></pre> <p>But while saving into file headers are not coming. I am using below code-</p> <pre><code>df.coalesce(1).write.mode('overwrite').csv(output_path, sep=output_delimiter,quote='',escape='\&quot;', header='True', nullValue=None) </code></pre>
<p>To do what you are asking you will have to define a schema.</p> <p>So for example:</p> <pre><code>schema = StructType([ \ StructField(&quot;firstname&quot;,StringType(),True), \ StructField(&quot;middlename&quot;,StringType(),True), \ StructField(&quot;lastname&quot;,StringType(),True), \ StructField(&quot;id&quot;, StringType(), True), \ StructField(&quot;gender&quot;, StringType(), True), \ StructField(&quot;salary&quot;, IntegerType(), True) \ ]) df = spark.createDataFrame([],schema=schema) df.coalesce(1).write.csv(&quot;/tmp/csv_data/&quot;, header=True) </code></pre> <p>this will output single csv file with just the headers.</p>
pyspark|apache-spark-sql|pyspark-pandas
-1
7,708
71,382,478
Difficulty in using subplots to shows multiple boxplots side-by-side
<p>I am using the heart_failure_clinical_records_dataset.csv dataset, you can find it here: <a href="https://www.kaggle.com/abdallahwagih/heart-failure-clinical-records-dataset-eda" rel="nofollow noreferrer">https://www.kaggle.com/abdallahwagih/heart-failure-clinical-records-dataset-eda</a> Now I created two subsets of the initial dataframe using the code below:</p> <pre><code>group1 = a[(a[&quot;sex&quot;] == 1) &amp; (a[&quot;diabetes&quot;] == 1) &amp; (a[&quot;high_blood_pressure&quot;] == 0)]group1.head() group2 = a[(a[&quot;sex&quot;] == 1) &amp; (a[&quot;diabetes&quot;] == 1) &amp; (a[&quot;high_blood_pressure&quot;] == 1)]group2.head() </code></pre> <p>I want the resulting boxplots for each of them to show up side by side as an output I have plotted them individually by using the code below:</p> <pre><code>sns.boxplot(x = group1['age'], y = group1['creatinine_phosphokinase']) sns.boxplot(x = group2['age'], y = group2['creatinine_phosphokinase']) </code></pre> <p>So I have been going around looking at subplots and subplot2grid and all that and this is what I have come up with so far</p> <pre><code>x1 = group1['age'] y1 = group1['creatinine_phosphokinase'] x2 = group2['age'] y2 = group2['creatinine_phosphokinase'] figure, axis = plt.subplots(1, 2) axis[0, 0].plot(x1, y1) </code></pre> <p>I get hit with this error:</p> <pre><code>IndexError Traceback (most recent call last) &lt;ipython-input-86-126627fbbb24&gt; in &lt;module&gt; ----&gt; 1 axis[0, 0].plot(x1, y1) 2 #axis[1, 0].plot(x2, y2) IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed </code></pre> <p>I do not understand this error so any help will be appreciated.</p> <p>I have also tried this:</p> <pre><code>plot1 = plt.subplot2grid((3, 3), (0, 0), colspan=2) plot2 = plt.subplot2grid((3, 3), (0, 2), rowspan=3, colspan=2) plot2.plot(x1, y1) </code></pre> <p>And I get this error:</p> <pre><code>TypeError Traceback (most recent call last) &lt;ipython-input-89-f3ee75fcd504&gt; in &lt;module&gt; ----&gt; 1 plot2.plot('x1', 'y1') ~\anaconda3\lib\site-packages\matplotlib\axes\_axes.py in plot(self, scalex, scaley, data, *args, **kwargs) 1741 &quot;&quot;&quot; 1742 kwargs = cbook.normalize_kwargs(kwargs, mlines.Line2D) -&gt; 1743 lines = [*self._get_lines(*args, data=data, **kwargs)] 1744 for line in lines: 1745 self.add_line(line) ~\anaconda3\lib\site-packages\matplotlib\axes\_base.py in __call__(self, data, *args, **kwargs) 271 this += args[0], 272 args = args[1:] --&gt; 273 yield from self._plot_args(this, kwargs) 274 275 def get_next_color(self): ~\anaconda3\lib\site-packages\matplotlib\axes\_base.py in _plot_args(self, tup, kwargs) 394 self.axes.xaxis.update_units(x) 395 if self.axes.yaxis is not None: --&gt; 396 self.axes.yaxis.update_units(y) 397 398 if x.shape[0] != y.shape[0]: ~\anaconda3\lib\site-packages\matplotlib\axis.py in update_units(self, data) 1464 neednew = self.converter != converter 1465 self.converter = converter -&gt; 1466 default = self.converter.default_units(data, self) 1467 if default is not None and self.units is None: 1468 self.set_units(default) ~\anaconda3\lib\site-packages\matplotlib\category.py in default_units(data, axis) 105 # the conversion call stack is default_units -&gt; axis_info -&gt; convert 106 if axis.units is None: --&gt; 107 axis.set_units(UnitData(data)) 108 else: 109 axis.units.update(data) ~\anaconda3\lib\site-packages\matplotlib\axis.py in set_units(self, u) 1539 self.units = u 1540 self._update_axisinfo() -&gt; 1541 self.callbacks.process('units') 1542 
self.callbacks.process('units finalize') 1543 self.stale = True ~\anaconda3\lib\site-packages\matplotlib\cbook\__init__.py in process(self, s, *args, **kwargs) 227 except Exception as exc: 228 if self.exception_handler is not None: --&gt; 229 self.exception_handler(exc) 230 else: 231 raise ~\anaconda3\lib\site-packages\matplotlib\cbook\__init__.py in _exception_printer(exc) 79 def _exception_printer(exc): 80 if _get_running_interactive_framework() in [&quot;headless&quot;, None]: ---&gt; 81 raise exc 82 else: 83 traceback.print_exc() ~\anaconda3\lib\site-packages\matplotlib\cbook\__init__.py in process(self, s, *args, **kwargs) 222 if func is not None: 223 try: --&gt; 224 func(*args, **kwargs) 225 # this does not capture KeyboardInterrupt, SystemExit, 226 # and GeneratorExit ~\anaconda3\lib\site-packages\matplotlib\lines.py in recache_always(self) 646 647 def recache_always(self): --&gt; 648 self.recache(always=True) 649 650 def recache(self, always=False): ~\anaconda3\lib\site-packages\matplotlib\lines.py in recache(self, always) 651 if always or self._invalidx: 652 xconv = self.convert_xunits(self._xorig) --&gt; 653 x = _to_unmasked_float_array(xconv).ravel() 654 else: 655 x = self._x ~\anaconda3\lib\site-packages\matplotlib\cbook\__init__.py in _to_unmasked_float_array(x) 1287 return np.ma.asarray(x, float).filled(np.nan) 1288 else: -&gt; 1289 return np.asarray(x, float) 1290 1291 ~\anaconda3\lib\site-packages\numpy\core\_asarray.py in asarray(a, dtype, order, like) 100 return _asarray_with_like(a, dtype=dtype, order=order, like=like) 101 --&gt; 102 return array(a, dtype, copy=False, order=order) 103 104 TypeError: float() argument must be a string or a number, not 'pandas._libs.interval.Interval' </code></pre>
<p>Changing <code>axis[0, 0]</code> to <code>axis[0]</code> will fix the issue.</p> <pre><code>axis[0].plot(x1, y1) axis[1].plot(x2, y2) </code></pre> <p>If you only want to plot those boxplots side by side then:</p> <pre><code>fig, ax =plt.subplots(1, 2, figsize=(25, 8)) sns.boxplot(x = group1['age'], y = group1['creatinine_phosphokinase'], ax=ax[0]) sns.boxplot(x = group2['age'], y = group2['creatinine_phosphokinase'], ax=ax[1]) fig.show() </code></pre> <p>this should work</p>
python|pandas|matplotlib|jupyter-notebook
1
7,709
52,312,950
Replicate a numpy list m times
<p>I am trying to pair two different databases of images that I have in python (lets say database A and database B, they are stored in list of numpy arrays). For the database A I have x images (for example 6) and for database B y images (for example 78). Each image from database A is correspond to 78/6 = 13 images from database B and the order is the same. Thus,</p> <pre><code>A B -------------- 1 [1, 13] 2 [14, 26] 3 [27, 39] 4 [38, 52] 5 [51, 65] 6 [66, 78] </code></pre> <p>What I want to do is to replicate each image from database A 13 times in order to have the same number of images with database B. The problem is that those amount of numbers are not fixed (x and y). Therefore,</p> <pre><code>len1 = len(database_A) len2 = len(database_B) m = round(len2/len1) </code></pre> <p>How can I return a replicate m times the elements of the database list A (m times the image 1 then m times the image 2 etc.).</p> <p><strong>EDIT:</strong> Sometimes the division produces some module. How can I handle that module. There is the case i have for database A 6 images while for database B 68 and that produce mod = 2. I need in that case to store only 66 images from both databases.</p>
<p>Assuming the images are in <code>(n, x, y)</code> or <code>(n, x, y, RGB)</code> shaped arrays:</p> <pre><code>np.repeat(A, B.shape[0] // A.shape[0], axis = 0) </code></pre> <p>If you really want to keep lists, I still recommend doing the repetition in <code>numpy</code></p> <pre><code>list(np.repeat(np.array(A), len(B) // len(A), axis = 0)) </code></pre> <p>if you really want list comprehension</p> <pre><code>A = [a for a in A for _ in range(len(B) // len(A))] if len(B) % lenA &gt; 0: B = B[:-(len(B) % len(A))] </code></pre>
python|list|numpy
2
7,710
52,371,578
How to detect and remove lines above data set while reading from csv?
<p>I have a csv that looks like this: </p> <pre><code>name: john date modified: 2018-09 from: jane colum1 column2 column3 data data data </code></pre> <p>Is there any function I can apply that would strip off any lines before the tabular data begins when reading from csv? currently the lines above <code>column</code> look like strange characters when I read them in. </p> <p>New table should look like this: </p> <pre><code>colum1 column2 column3 data data data </code></pre>
<p>I would do something like this:</p> <pre><code>from io import StringIO with open('filename.csv') as f: lines = f.readlines() s = StringIO(''.join((l for l in lines if ':' not in l))) pd.read_csv(s) </code></pre> <p>Alternatively:</p> <pre><code>with open('filename.csv') as f: lines = f.readlines() skip_rows_idx = [i for i, l in enumerate(lines) if ':' in l] pd.read_csv('filename.csv', skiprows=skip_rows_idx) </code></pre> <p>If there are no colons in the header, then one could adapt the above code (first example) to drop first lines like this:</p> <pre><code>import itertools s = StringIO(''.join(itertools.dropwhile(lambda l: ':' in l, lines))) </code></pre> <p>(assuming there are no "bad" lines <em>after</em> the header).</p>
python|python-3.x|pandas|csv
2
7,711
60,619,900
Numerical errors in Keras vs Numpy
<p>In order to really understand convolutional layers, I have reimplemented the forward method of a single keras Conv2D layer in basic numpy. The outputs of both seam almost identical, but there are some minor differences.</p> <p>Getting the keras output:</p> <pre><code>inp = K.constant(test_x) true_output = model.layers[0].call(inp).numpy() </code></pre> <p>My output:</p> <pre><code>def relu(x): return np.maximum(0, x) def forward(inp, filter_weights, filter_biases): result = np.zeros((1, 64, 64, 32)) inp_with_padding = np.zeros((1, 66, 66, 1)) inp_with_padding[0, 1:65, 1:65, :] = inp for filter_num in range(32): single_filter_weights = filter_weights[:, :, 0, filter_num] for i in range(64): for j in range(64): prod = single_filter_weights * inp_with_padding[0, i:i+3, j:j+3, 0] filter_sum = np.sum(prod) + filter_biases[filter_num] result[0, i, j, filter_num] = relu(filter_sum) return result my_output = forward(test_x, filter_weights, biases_weights) </code></pre> <p>The results are largely the same, but here are some examples of differences:</p> <pre><code>Mine: 2.6608338356018066 Keras: 2.660834312438965 Mine: 1.7892705202102661 Keras: 1.7892701625823975 Mine: 0.007190803997218609 Keras: 0.007190565578639507 Mine: 4.970898151397705 Keras: 4.970897197723389 </code></pre> <p>I've tried converting everything to float32, but that does not solve it. Any ideas?</p> <p>Edit: I plotted the distribution over errors, and it might give some insight into what is happening. As can be seen, the errors all have very similar values, falling into four groups. However, these errors are not exactly these four values, but are almost all unique values around these four peaks. <a href="https://i.stack.imgur.com/1uN0j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1uN0j.png" alt="enter image description here"></a></p> <p>I am very interested in how to get my implementation to exactly match the keras one. Unfortunately, the errors seem to increase exponentially when implementing multiple layers. Any insight would help me out a lot!</p>
<p>Given how small the differences are, I would say that they are rounding errors.<br> I recommend using <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.isclose.html" rel="nofollow noreferrer">np.isclose</a> (or <a href="https://docs.python.org/dev/library/math.html#math.isclose" rel="nofollow noreferrer">math.isclose</a>) to check if floats are "equal".</p>
python|numpy|keras|floating-point|precision
2
7,712
60,639,069
Remove rows from dataframe whose text does not contain items from a list
<p>I am importing data from a table with inconsistent naming conventions. I have created a list of manufacturer names that I would like to use as a basis of comparison against the imported name. Ideally, I will delete all rows from the dataframe that do not align with the manufacturer list. I am trying to create an index vector using a for loop to iterate through each element of the dataframe column and compare against the list. If the text is there, update my index vector to true. If not, index vector is updated to false. Finally, I want to use the index vector to drop rows from the original data frame. </p> <p>I have tried generators and sets, but to no avail. I thought a for loop would be less elegant but ultimately work, yet I'm still stuck. My code is below.</p> <ul> <li>meltdat.Products is my dataframe column that contains the imported data</li> <li>mfgs is my list of manufacturer names</li> <li>prodex is my index vector</li> </ul> <pre><code> meltdat = pd.DataFrame( {"Location":["S1","S1","S1","S1","S1"], "Date":["1/1/2020", "1/1/2020", "1/1/2020", "1/1/2020", "1/1/2020"], "Products":['CC304RED','COHoney','EtainXL','Med467','MarysTop'], "Sold":[1,3,0,1,2]}) mfgs = ['CC', 'Etain', 'Marys'] for prods in meltdat.Products: if any(mfg in meltdat.Products[prods] for mfg in mfgs): prodex[prods] = TRUE else: prodex[prods] = FALSE </code></pre> <p>I added example data in the dataframe that mirrors my imported data. </p>
<p>you can use <code>pd.DataFrame.apply</code>:</p> <pre><code>meltdat[meltdat.Products.apply(lambda x: any(m in x for m in mfgs))] </code></pre>
python|pandas|list|dataframe|text
0
7,713
60,406,058
How do you load a csv file format to jupyter notebook?
<p>I tried reading a file with a CSV extension from my local storage but it gave me the following error message.</p> <pre><code>File "&lt;ipython-input-12-bbfa50761a08&gt;", line 5 data=pd.read_csv("C:\Users\user\Documents\Projects\Data Science\weather_data.csv") ^ SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape </code></pre> <p>How do I go about this?</p>
<p>First try loading packages like numpy and pandas and then try loading the data</p> <pre><code>import numpy as np import pandas as pd data=pd.read_csv(&quot;C:\Users\user\Documents\Projects\Data Science\weather_data.csv&quot;) print(data) data.head </code></pre>
python|pandas
0
7,714
60,736,766
How to create the given jSON format from a pandas dataframe?
<blockquote> <p>The data looks like this :</p> </blockquote> <p><a href="https://i.stack.imgur.com/WmDRw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WmDRw.png" alt="enter image description here"></a></p> <blockquote> <p>The expected Json fomat is like this</p> </blockquote> <pre><code> { "DataExtractName": "SalesDataExtract", "BusinessName" : { "InvoiceDate": { "SourceSystem": { "MYSQL" : "Invc_Dt", "CSV" : "Invc_Date" }, "DataType": { "MYSQL" : "varchar", "CSV" : "string" } }, "Description": { "SourceSystem": { "MYSQL" : "Prod_Desc", "CSV" : "Prod_Descr" }, "DataType": { "MYSQL" : "varchar", "CSV" : "string" } } } }, { "DataExtractName": "DateDataExtract", "BusinessName" : { "InvoiceDate": { "SourceSystem": { "MYSQL" : "Date" }, "DataType": { "MYSQL" : "varchar" } } } } </code></pre> <p>How do i achieve this using python dataframes? Or do i need to write some script to make the data like this?</p> <p><strong>Note</strong></p> <p>I've tried using - </p> <ol> <li>df.to_json</li> <li>df.to_dict</li> </ol>
<p>With so many nested structures, you should use marshmallow. It is built with your use case in mind. Please check out the excellent documentation: <a href="https://marshmallow.readthedocs.io/en/stable/" rel="nofollow noreferrer">https://marshmallow.readthedocs.io/en/stable/</a> . All you need is the masic usage.</p> <p>It is a lot of code, but better be explicit than clever. I am sure a shorter solution exists, but it is probably unmaintainable. Also I had to build your dataframe. Please provide it in a data format next time.</p> <pre><code>import pandas as pd import marshmallow as ma # build test data df = pd.DataFrame.from_records([ ['InvoiceDate', 'MYSQL', 'Invc_Dt', 'varchar', 'SalesDataExtract'], ['InvoiceDate', 'CSV', 'Invc_Date', 'string', 'SalesDataExtract'], ['Description', 'MYSQL', 'Prod_Descr', 'varchar', 'SalesDataExtract'], ['Description', 'CSV', 'Prod_Descr', 'string', 'SalesDataExtract'], ['InvoiceDate', 'MYSQL', 'Date', 'varchar', 'DateDataExtract'],] ) df.columns = ['BusinessName', 'SourceSystem', 'FunctionalName', 'DataType', 'DataExtractName'] # define marshmallow schemas class SourceSystemTypeSchema(ma.Schema): MYSQL = ma.fields.String() CSV = ma.fields.String() class DataTypeSchema(ma.Schema): MYSQL = ma.fields.String() CSV = ma.fields.String() class InvoiceDateSchema(ma.Schema): InvoiceDate = ma.fields.Nested(SourceSystemTypeSchema()) DataType = ma.fields.Nested(DataTypeSchema()) class DescriptionSchema(ma.Schema): SourceSystem = ma.fields.Nested(SourceSystemTypeSchema()) DataType = ma.fields.Nested(DataTypeSchema()) class BusinessNameSchema(ma.Schema): InvoiceDate = ma.fields.Nested(InvoiceDateSchema()) Description = ma.fields.Nested(DescriptionSchema()) class DataSchema(ma.Schema): DataExtractName = ma.fields.String() BusinessName = ma.fields.Nested(BusinessNameSchema()) # building json result = [] mask_business_name_invoicedate = df.BusinessName == 'InvoiceDate' mask_business_name_description = df.BusinessName == 'Description' for data_extract_name in set(df['DataExtractName'].to_list()): mask_data_extract_name = df.DataExtractName == data_extract_name # you need these two helper dfs to get the dictionaries df_source_system = df[mask_data_extract_name &amp; mask_business_name_invoicedate].set_index('SourceSystem').to_dict(orient='dict') df_description = df[mask_data_extract_name &amp; mask_business_name_description].set_index('SourceSystem').to_dict(orient='dict') # all dictionaries are defined, so you can use your schemas source_system_type = SourceSystemTypeSchema().dump(df_source_system['FunctionalName']) data_type = DataTypeSchema().dump(df_source_system['DataType']) source_system = SourceSystemTypeSchema().dump(df_description['FunctionalName']) invoice_date = InvoiceDateSchema().dump({'SourceSystemType': source_system_type, 'DataType': data_type}) description = DescriptionSchema().dump({'SourceSystem': source_system, 'DataType': data_type}) business_name = BusinessNameSchema().dump({'InvoiceDate': invoice_date, 'Description': description}) data = DataSchema().dump({'DataExtractName': data_extract_name, 'BusinessName': business_name}) # end result result.append(data) </code></pre> <p>Now, </p> <pre><code>ma.pprint(result) </code></pre> <p>returns</p> <pre><code>[{'BusinessName': {'Description': {'DataType': {'CSV': 'string', 'MYSQL': 'varchar'}, 'SourceSystem': {'CSV': 'Prod_Descr', 'MYSQL': 'Prod_Descr'}}, 'InvoiceDate': {'DataType': {'CSV': 'string', 'MYSQL': 'varchar'}}}, 'DataExtractName': 'SalesDataExtract'}, {'BusinessName': {'Description': {'DataType': 
{'MYSQL': 'varchar'}, 'SourceSystem': {}}, 'InvoiceDate': {'DataType': {'MYSQL': 'varchar'}}}, 'DataExtractName': 'DateDataExtract'}] </code></pre>
python|json|pandas|csv|dictionary
2
7,715
32,395,890
plotting data from columns from the same dataframe in pandas
<p>I have a dataframe with 60 columns of data (column 1 = I 1, column 2 = S 1.... column 3 = I 2, column 4 =S 2.. and so on)... </p> <p>I want to create a function that selects two columns at a time for slicing, plotting and finding the integral of the slice. I can do this for two columns but I don't know how to implement a function to run all 60 columns. So far I have the following: </p> <pre><code>df = pd.DataFrame.from_csv(filepath, index_col =None) df_slice =df.iloc[23500:25053] R = df_slice['I 1'] I = df_slice['S 1'] rcParams['figure.figsize']= 10,5 plt.plot(R, I) plt.xlabel('cm-1') plt.ylabel('Hz') #integration of peak area = trapz(R) print area </code></pre> <p><a href="https://i.stack.imgur.com/WULe7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WULe7.png" alt="enter image description here"></a></p> <p>for the function:</p> <pre><code>def integrate_peak(filepath): df = pd.DataFrame.from_csv(filepath, index_col =None) for row in df: ..........slice ..........overlay plots ..........get integral for each plot curve </code></pre> <p>output: </p> <p>30 integral answers in a separate dataframe</p> <p>Any help would be appreciated.</p> <p>EDIT: </p> <p>I've tried this: </p> <pre><code>def get_slice(): df = pd.DataFrame.from_csv(filepath, index_col =None) for i in range(1,31): df_slice = df.iloc[23500:25053] R = df_slice['I %i' %i] I = df_slice['S %i' %i] plt.plot(R,I) area = trapz(R) print area get_slice() </code></pre> <p>This gives me an overlay of 30 plots, however gives me 30 values for integral (all being the same number)</p>
<p>Alas, i have found the solution. </p> <pre><code>def get_slice(): area_list = [] df = pd.DataFrame.from_csv(filepath, index_col =None) Raman = df['I 1'] Intensity = df['S 1'] for i in range(1,31): df_slice = df.iloc[23500:25053] R = df_slice['I %i' %i] I = df_slice['S %i' %i] for i in R: area = trapz(R, x = I) area_list.append(area) a = np.mean(area_list) </code></pre>
python|numpy|pandas|matplotlib
0
7,716
40,662,773
Image similarity detection with TensorFlow
<p>Recently I started to play with tensorflow, while trying to learn the popular algorithms i am in a situation where i need to find similarity between images.</p> <p>Image A is supplied to the system by me, and userx supplies an image B and the system should retrieve image A to the userx if image B is similar(color and class).</p> <p>Now i have got few questions:</p> <ol> <li>Do we consider this scenario to be supervised learning? I am asking because i don't see it as a classification problem(confused!!)</li> <li>What algorithms i should use to train etc..</li> <li>Re-training should be done quite often, how should i tackle this problem so i don't train everytime from scratch( fine-tuning??)</li> </ol>
<blockquote> <p>Do we consider this scenario to be supervised learning?</p> </blockquote> <p>It is supervised learning when you have labels to optimize your model. So for most neural networks, it is supervised.</p> <p>However, you might also look at the complete task. I guess you don't have any ground truth for image pairs and the "desired" similarity value your model should output?</p> <p>One way to solve this problem which sounds inherently unsupervised is to take a CNN (convolutional neural network) trained (in a supervised way) on the 1000 classes of image net. To get the similarity of two images, you could then simply take the euclidean distance of the output probability distribution. This will not lead to excellent results, but is probably a good starter.</p> <blockquote> <ol start="2"> <li>What algorithms i should use to train etc..</li> </ol> </blockquote> <p>First, you should define what "similar" means for you. Are two images similar when they contain the same object (classes)? Are they similar if the general color of the image is the same?</p> <p>For example, how similar are the following 3 pairs of images?</p> <p><a href="https://i.stack.imgur.com/LBa8D.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/LBa8D.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/U1icu.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/U1icu.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/ozgEl.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ozgEl.png" alt="enter image description here"></a></p> <p>Have a look at <a href="https://arxiv.org/abs/1503.03832" rel="noreferrer">FaceNet</a> and search for "Content based image retrieval" (CBIR):</p> <ul> <li><a href="https://en.wikipedia.org/wiki/Content-based_image_retrieval" rel="noreferrer">Wikipedia</a></li> <li><a href="https://scholar.google.de/scholar?hl=en&amp;q=content+based+image+retrival&amp;btnG=&amp;as_sdt=1%2C5&amp;as_sdtp=" rel="noreferrer">Google Scholar</a></li> </ul>
machine-learning|scikit-learn|computer-vision|tensorflow|deep-learning
9
7,717
61,740,933
Python/Pandas: How to detect if trend is suddenly increasing "X" amount
<p>I want to detect if I have some certain log event that is increasing "X" amount of percent, and then get the top 10 increased trends.</p> <p>I would have thought that pct_change().mean() would give me what I needed, but it seems I am getting some weird results.</p> <p>So this is what I got</p> <pre><code>import pandas as pd import numpy as np import csv from datetime import date, datetime, timedelta from matplotlib import pyplot as plt sample = "sampledata.csv" df = pd.read_csv(sample, sep=";") df['DATE'] = pd.to_datetime(df['DATE'], format='%d-%m-%Y') grp = df.groupby(['DATE','EVENT'])['COUNT'].sum() grp DATE EVENT 2020-05-01 DOE711 2 ODO001 32 2020-05-02 ODO001 3 2020-05-03 DOE711 1 2020-05-04 DOE711 62 ODO001 46 2020-05-05 DOE711 101 ODO001 43 2020-05-06 DOE711 65 ODO001 61 2020-05-07 DOE711 102 ODO001 26 2020-05-08 ODO001 16 2020-05-09 ODO001 3 2020-05-10 ODO001 5 Name: COUNT, dtype: int64 grp.groupby('EVENT').apply(lambda x: x.pct_change().mean()).reset_index(name='avg_change').nlargest(10,'avg_change') EVENT avg_change 0 DOE711 12.268365 1 ODO001 1.584531 grp = grp.reset_index() grp = grp.set_index('DATE') grp[grp.EVENT == "ODO001"].COUNT.plot() </code></pre> <p><a href="https://i.stack.imgur.com/wD2Un.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wD2Un.png" alt="enter image description here"></a></p> <p>Now, ODO001 is 1.58. which should indicate that the trend is increasing, BUT: If I import the data to excel, and ask excel to create a linear trend line, it says it's decreasing</p> <p><a href="https://i.stack.imgur.com/Pp0bV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pp0bV.png" alt="enter image description here"></a></p> <p>Does anyone have a suggestion for how to solve this?</p> <p>After the answer from: @Marco Cerliani this is the result.</p> <p>So this should be translatable to this:</p> <pre><code>def trend(series): return np.polyfit(np.arange(0,len(series)), series.values, 1)[0] trend(grp[grep.EVENT == "ODO001"].COUNT) </code></pre> <p>or in groupby</p> <pre><code>df.groupby('EVENT').apply(lambda x: trend(x.count)) </code></pre>
<p>the mean pct change and the linear trend have different behavior. look at my simulate example:</p> <pre><code>start = 100 end = 0 peak = 1000 steps = 50 series = pd.Series(np.append(start, np.arange(end, peak+steps, steps)[::-1])) series.plot() </code></pre> <p><a href="https://i.stack.imgur.com/jkqOm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jkqOm.png" alt="enter image description here"></a></p> <p>this series has a pct mean change of 0.257 (<code>series.pct_change().mean()</code>) and negative linear coefficient -38.73 (<code>np.polyfit(np.arange(0,len(series)), series.values, 1)[0]</code>)</p> <p>We have a single huge positive pct increase while all the other pct changes are negative, but this is sufficient to have a mean positive (this is classical for mean index when there are extreme outliers). The trend instead is more significative for the linear pattern in the data</p> <p>I suggest you use directly the linear coefficient. you can simply compute it with <code>np.polyfit</code></p>
python-3.x|pandas
1
7,718
61,841,921
pandas create column with names from values and substitute with True/False
<p>I have a dataframe like this:</p> <pre><code>df = pd.DataFrame({"id":[1, 1, 1, 2, 2, 2, 2, 3, 3], "val":["A12", "B23", "C34", "A12", "C34", "E45", "F56", "G67", "B23"]}) print(df) </code></pre> <pre><code> id val 0 1 A12 1 1 B23 2 1 C34 3 2 A12 4 2 C34 5 2 E45 6 2 F56 7 3 G67 8 3 B23 </code></pre> <p>How do I convert it to look like this?</p> <pre><code> id A12 B23 C34 E45 F56 G67 0 1 1 1 1 0 0 0 1 2 1 0 1 1 1 0 2 3 0 1 0 0 0 1 </code></pre> <p>I tried pivot and unstack but since the number of values in the 'val' column can be different for each 'id', I'm not able to create a master list of columns and then somehow fill the values in those columns. Please help.</p>
<p>Try crosstab:</p> <pre><code>pd.crosstab(df.id, df.val).reset_index() </code></pre>
python|pandas
4
7,719
62,019,234
Lifetimes package: float() argument must be a string or a number, not 'Day'
<p>Getting the following error while using the summary_data_from_transaction_data utility function included within the Lifestyles python package. Using pandas version 0.2 on Google Colab.</p> <p>TypeError: float() argument must be a string or a number, not 'Day'</p> <p>Any help will be much appreciated.</p> <h1>Code:</h1> <pre><code>data_summary = summary_data_from_transaction_data(data_final, customer_id_col = "CustomerID", datetime_col = "InvoiceDate", monetary_value_col = "Sales", observation_period_end = "2011-12-09", freq = "D") </code></pre> <h1>Stacktrace:</h1> <pre><code>/usr/local/lib/python3.6/dist-packages/lifetimes/utils.py in summary_data_from_transaction_data(transactions, customer_id_col, datetime_col, monetary_value_col, datetime_format, observation_period_end, freq) 194 summary_columns.append('monetary_value') 195 --&gt; 196 return customers[summary_columns].astype("float64") 197 198 /usr/local/lib/python3.6/dist-packages/pandas/core/generic.py in astype(self, dtype, copy, errors, **kwargs) 5880 # else, only a single dtype is given 5881 new_data = self._data.astype( -&gt; 5882 dtype=dtype, copy=copy, errors=errors, **kwargs 5883 ) 5884 return self._constructor(new_data).__finalize__(self) /usr/local/lib/python3.6/dist-packages/pandas/core/internals/managers.py in astype(self, dtype, **kwargs) 579 580 def astype(self, dtype, **kwargs): --&gt; 581 return self.apply("astype", dtype=dtype, **kwargs) 582 583 def convert(self, **kwargs): /usr/local/lib/python3.6/dist-packages/pandas/core/internals/managers.py in apply(self, f, axes, filter, do_integrity_check, consolidate, **kwargs) 436 kwargs[k] = obj.reindex(b_items, axis=axis, copy=align_copy) 437 --&gt; 438 applied = getattr(b, f)(**kwargs) 439 result_blocks = _extend_blocks(applied, result_blocks) 440 /usr/local/lib/python3.6/dist-packages/pandas/core/internals/blocks.py in astype(self, dtype, copy, errors, values, **kwargs) 557 558 def astype(self, dtype, copy=False, errors="raise", values=None, **kwargs): --&gt; 559 return self._astype(dtype, copy=copy, errors=errors, values=values, **kwargs) 560 561 def _astype(self, dtype, copy=False, errors="raise", values=None, **kwargs): /usr/local/lib/python3.6/dist-packages/pandas/core/internals/blocks.py in _astype(self, dtype, copy, errors, values, **kwargs) 641 # _astype_nansafe works fine with 1-d only 642 vals1d = values.ravel() --&gt; 643 values = astype_nansafe(vals1d, dtype, copy=True, **kwargs) 644 645 # TODO(extension) /usr/local/lib/python3.6/dist-packages/pandas/core/dtypes/cast.py in astype_nansafe(arr, dtype, copy, skipna) 727 if copy or is_object_dtype(arr) or is_object_dtype(dtype): 728 # Explicit copy, or required since NumPy can't view from / to object. --&gt; 729 return arr.astype(dtype, copy=True) 730 731 return arr.view(dtype) TypeError: float() argument must be a string or a number, not 'Day' </code></pre> <p>Sample data in the data_final df and associated dtypes are as per the attachments.</p> <p><a href="https://i.stack.imgur.com/lv70T.jpg" rel="nofollow noreferrer">sample data</a></p> <p><a href="https://i.stack.imgur.com/GApqb.jpg" rel="nofollow noreferrer">dtypes</a></p> <p>Thanks for any help.</p>
<p>Apologies folks - I was able to resolve my issue after updating the Lifetimes package to the latest 0.11.1 version in Colab!</p>
python|pandas|lifetimes-python
1
7,720
61,622,123
Best way to read data from S3 into pandas
<p>I have two CSV files one is around 60 GB and other is around 70GB in S3. I need to load both the CSV files into pandas dataframes and perform operations such as joins and merges on the data.</p> <p>I have an EC2 instance with sufficient amount of memory for both the dataframes to be loaded into memory at a time.</p> <p>What is the best way to read that huge file from S3 to pandas dataframe?</p> <p>Also after I perform the required operations on the dataframes the output dataframe should be re-uploaded to S3.</p> <p>What is th best way of uploading the huge csv file to S3?</p>
<p>For reading from S3, you can do:</p> <pre><code>import pandas as pd df = pd.read_csv('s3://bucket-name/file.csv') </code></pre> <p>Then do all the joins and merges on this dataframe and upload it back to S3:</p> <pre><code>df.to_csv('s3://bucket-name/file.csv', index=False) </code></pre>
python|pandas|amazon-web-services|amazon-s3|amazon-ec2
2
7,721
57,775,930
Value error when populating a new column in a dataframe based on conditional values
<p>I am trying to write a function that will populate a new column called <code>'BS_Trigger'</code> based on the values in another column in the same <code>dataframe</code> (<code>'cnms_df'</code>). </p> <pre><code>today = datetime.datetime.today().strftime('%Y%m%d') .... def bs_trigger(dataframe): dataframe['BS_Trigger'] = np.where((dataframe['PRELIM_DATE'] != None) and (dataframe['PRELIM_DATE'] &lt;= today), "Yes", "No") bs_trigger(cnms_df) </code></pre> <p>With the above code, I keep getting a Value Error thrown:</p> <blockquote> <p>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p> </blockquote> <p>What am I doing incorrectly here? <strong><em>Note:</em></strong> If `cnms_df['PRELIM_DATE'][i] = None' , that value is a NoneType, not a string *** </p>
<p>Try replacing:</p> <pre class="lang-py prettyprint-override"><code>dataframe['BS_Trigger'] = np.where((dataframe['PRELIM_DATE'] != None) and (dataframe['PRELIM_DATE'] &lt;= today), "Yes", "No") </code></pre> <p>with:</p> <pre class="lang-py prettyprint-override"><code>dataframe["BS_Trigger"]="No" dataframe.BS_Trigger[(dataframe['PRELIM_DATE'] != None) &amp; (dataframe['PRELIM_DATE'] &lt;= today)]= "Yes" </code></pre>
python-3.x|pandas|dataframe|conditional-statements
1
7,722
58,015,383
how to generate dynamic columns in pandas
<p>I have following dataframe in pandas</p> <pre><code>code tank nozzle_1 nozzle_2 nozzle_var 123 1 1 1 10 123 1 2 2 12 123 2 1 1 10 123 2 2 2 12 </code></pre> <p>I want to calculate cumulative sum of columns nozzle_1 and nozzle_2 grouping over tank. Following is my desired dataframe.</p> <pre><code>code tank nozzle_1 nozzle_2 nozzle_var nozzle_1_cumsum nozzle_2_cumsum 123 1 1 1 10 1 1 123 1 2 2 12 3 3 123 2 1 1 10 1 1 123 2 2 2 12 3 3 </code></pre> <p>I am getting nozzle_1 and nozzle_2 from following code in pandas</p> <pre><code>cols= df.columns[df.columns.str.contains(pat='nozzle_\d+$', regex=True)] </code></pre> <p>How can I calculate cumsum from above list of columns </p>
<p>How about this fancy solution:</p> <pre><code>cols= df.columns[df.columns.str.contains(pat='nozzle_\d+$', regex=True)] df.assign(**df.groupby('tank')[cols].agg(['cumsum'])\ .pipe(lambda x: x.set_axis(x.columns.map('_'.join), axis=1, inplace=False))) </code></pre> <p>Output:</p> <pre><code> tank nozzle_1 nozzle_2 nozzle_var nozzle_1_cumsum nozzle_2_cumsum 0 1 1 1 10 1 1 1 1 2 2 12 3 3 2 2 1 1 10 1 1 3 2 2 2 12 3 3 </code></pre> <hr> <p>In steps:</p> <pre><code>df_cumsum = df.groupby('tank')[cols].agg(['cumsum']) df_cumsum.columns = df_cumsum.columns.map('_'.join) pd.concat([df, df_cumsum], axis=1) </code></pre> <p>Output:</p> <pre><code> tank nozzle_1 nozzle_2 nozzle_var nozzle_1_cumsum nozzle_2_cumsum 0 1 1 1 10 1 1 1 1 2 2 12 3 3 2 2 1 1 10 1 1 3 2 2 2 12 3 3 </code></pre>
python|pandas
2
7,723
57,942,516
Understanding the output of fftfreq function and the fft plot for a single row in an image
<p>I am trying to understand the function <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fftfreq.html" rel="nofollow noreferrer">fftfreq</a> and the resulting plot generated by adding real and imaginary components for one row in the image. Here is what I did:</p> <pre><code>import numpy as np import cv2 import matplotlib.pyplot as plt image = cv2.imread("images/construction_150_200_background.png", 0) image_fft = np.fft.fft(image) real = image_fft.real imag = image_fft.imag real_row_bw = image_fft[np.ceil(image.shape[0]/2).astype(np.int),0:image.shape[1]] imag_row_bw = image_fft[np.ceil(image.shape[0]/2).astype(np.int),0:image.shape[1]] sum = real_row_bw + imag_row_bw plt.plot(np.fft.fftfreq(image.shape[1]), sum) plt.show() </code></pre> <p>Here is image of the plot generated : <a href="https://i.stack.imgur.com/I8eIM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I8eIM.png" alt="FIgure-1"></a></p> <p>I read the image from the disk, calculate the Fourier transform and extract the real and imaginary parts. Then I sum the <code>sine</code> and <code>cosine</code> components and plot using the <code>pyplot</code> library.</p> <p>Could someone please help me understand the <code>fftfreq</code> function? Also what does the peak represent in the plot for the following image: <a href="https://i.stack.imgur.com/HRXZv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HRXZv.png" alt="Figure-2"></a></p> <p>I understand that Fourier transform maps the image from spatial domain to the frequency domain but I cannot make much sense from the graph. </p> <p><strong>Note</strong>: I am unable to upload the images directly here, as at the moment of asking the question, I am getting an upload error.</p>
<p>I don't think that you really need <code>fftfreq</code> to look for frequency-domain information in images, but I'll try to explain it anyway.</p> <p><code>fftfreq</code> is used to calculate the frequencies that correspond to each bin in an FFT that you calculate. You are using <code>fftfreq</code> to define the x coordinates on your graph.</p> <p><code>fftfreq</code> has two arguments: one mandatory, one optional. The mandatory first argument is an integer, the window length you used to calculate an FFT. You will have the same number of frequency bins in the FFT as you had samples in the window. The optional second argument is the time period per window. If you don't specify it, the default is a period of 1. I don't know whether a sample rate is a meaningful quantity for an image, so I can understand you not specifying it. Maybe you want to give the period in pixels? It's up to you.</p> <p>Your FFT's frequency bins start at the negative Nyquist frequency, which is half the sample rate (default = -0.5), or a little higher; and it ends at the positive Nyquist frequency (+0.5), or a little lower.</p> <p>The <code>fftfreq</code> function returns the frequencies in a funny order though. The zero frequency is always the zeroth element. The frequencies count up to the maximum positive frequency, and then flip to the maximum negative frequency and count upwards towards zero. The reason for this strange ordering is that if you're doing FFT's with real-valued data (you are, image pixels do not have complex values), the negative frequency data is exactly equal to the corresponding positive frequency data and is redundant. This ordering makes it easy to throw the negative frequencies away: just take the first half of the array. Since you aren't doing that, you're plotting the negative frequencies too. If you should choose to ignore the second half of the array, the negative frequencies will be removed.</p> <p>As for the strong spike that you see at the zero frequency in your image, this is probably because your image data is RGB values which range from 0 to 255. There's a huge "DC offset" in your data. It looks like you're using Matplotlib. If you are plotting in an interactive window, you can use the zoom rectangle to look at that horizontal line. If you push the DC offset off scale, setting the Y axis scale to perhaps ±500, I bet you will start to see that the horizontal line isn't exactly horizontal after all.</p> <p>Once you know which bin contains your DC offset, if you don't want to see it, you can just assign the value of the fft in that bin to zero. Then the graph will scale automatically.</p> <p>By the way, these two lines of code perform identical calculations, so you aren't actually taking the sine and cosine components like your text says:</p> <pre><code>real_row_bw = image_fft[np.ceil(image.shape[0]/2).astype(np.int),0:image.shape[1]] imag_row_bw = image_fft[np.ceil(image.shape[0]/2).astype(np.int),0:image.shape[1]] </code></pre> <p>And one last thing: to sum the sine and cosine components properly (once you have them), since they're at right angles, you need to use a vector sum rather than a scalar sum. Look at the function <code>numpy.linalg.norm</code>.</p>
python|numpy|opencv|image-processing|fft
2
7,724
58,067,051
Pandas - read_csv scientific notation large number
<p>I am trying to read a csv file with pandas that has some rows in scientific notation.</p> <p>When it reads the values it is not capturing the true underlying number. When I re-purpose the data the true value gets lost. </p> <pre><code>df = pd.read_csv('0_IDI_Submitter_out.csv') </code></pre> <p>The underlying true values that I am trying to preserve are as follows:</p> <pre><code> INPUT: Extra 1 0 8921107 1 56300839420000 2 56207557000000 </code></pre> <p>However, pandas reads it as</p> <pre><code> INPUT: Extra 1 0 8921107 1 5.63008E+13 2 5.62076E+13 </code></pre> <p>If I try to write a new csv or use this data the values show as:</p> <pre><code> INPUT: Extra 1 0 8921107 1 56300800000000 2 56207600000000 </code></pre> <p>How can I get pandas to read the true number rather than the scientific notation which causes it to convert incorrectly?</p>
<p>The problem appears to be that when a CSV file containing large numbers, or strings that look like large numbers (product codes, SKUs, UPCs, etc.), is opened and saved in Excel, those values are automatically converted into scientific notation. Once this has been done, you'll have to manually go into Excel and re-format; recovering the lost digits from Pandas is not possible, and the data integrity is lost.</p> <p>However, if I never open the file in Excel and work on it purely through Pandas then it is all good. Similarly, if you work purely in Excel you're also good.</p> <p>My ultimate conclusion is that when working with large numbers, or strings that appear as large numbers such as product codes or UPCs, it is best not to mix pandas with Excel. As an alternative, I just started saving all my dataframes as pickle files instead of csv.</p> <p>Hope that helps anyone in the future.</p> <p>Thanks</p>
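<p>As a postscript: if the CSV itself still contains the full digits (i.e. it hasn't been saved from Excel yet), forcing the identifier column to be read as text prevents any float conversion on the pandas side. The column name below is taken from the question and may differ in your file:</p> <pre><code>import pandas as pd

# Read the identifier column as strings so pandas never converts it to float
df = pd.read_csv('0_IDI_Submitter_out.csv', dtype={'Extra 1': str})
</code></pre>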
python|pandas|scientific-notation
2
7,725
34,241,426
pandas automatically create dataframe from list of series with column names
<p>I have a list of pandas series objects. I have a list of functions that generate them. How do I create a dataframe of the objects with the column names being the names of the functions that created the objects?</p> <p>So, to create the regular dataframe, I've got:</p> <pre><code>pandas.concat([list of series objects],axis=1,join='inner') </code></pre> <p>But I don't currently have a way to insert all the <code>functionA.__name__, functionB.__name__, etc.</code> as column names in the dataframe. </p> <p>How would I preserve the same conciseness, and set the column names?</p>
<p>You can set the column names in a second step:</p> <pre><code>df = pandas.concat([list of series objects],axis=1,join='inner') df.columns = [functionA.__name__, functionB.__name__] </code></pre>
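<p>Alternatively, <code>concat</code> itself accepts a <code>keys</code> argument that labels the columns in the same call; a sketch assuming each function returns one of the series when called:</p> <pre><code>funcs = [functionA, functionB]  # the functions that generate the series
df = pandas.concat([f() for f in funcs], axis=1, join='inner',
                   keys=[f.__name__ for f in funcs])
</code></pre>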
python|pandas
2
7,726
34,252,755
How to compute integrals dependent upon two variables with SciPy?
<p>How do I compute integrals of this kind with SciPy?</p> <p>The product of the functions P1 and P2 depends on <em>x</em> and on the integration variable <em>u</em>.</p> <p>It would be nice to express the result as a lambda function, like:</p> <p>joint_p = lambda x: quad(<em>[some code here]</em>, ...</p> <p><a href="https://i.stack.imgur.com/zvNeG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zvNeG.jpg" alt="enter image description here"></a></p>
<p>Is there any reason why a straightforward use of <code>scipy.integrate.quad</code> won't work? I mean:</p> <pre><code>import scipy as sp import scipy.integrate #define some dummy p1 and p2 def p1(y): return 3*y+2 def p2(y): return -4*y-4 #define p_{xi1+xi2} def pplus(x): return sp.integrate.quad(lambda u,x=x: p1(u)*p2(x-u), 0, x)[0] #define p_{xi1/xi2} def pdivide(x): return sp.integrate.quad(lambda u,x=x: u*p1(u)*p2(u/x), 0, sp.minimum(x,1))[0]/x**2 #use it x = 0.2 outplus = pplus(x) outdivide = pdivide(x) </code></pre> <p>This will result in</p> <pre><code>print(outplus, outdivide) -2.016 -8.06666666667 </code></pre> <p>You might want to define a proper function instead of the latter <code>lambda</code>s, in order to catch the full output of <code>quad</code> to check if everything went OK with the integration.</p> <hr> <p>Let's check with <code>sympy</code>:</p> <pre><code>import sympy as sym U,X = sym.symbols('U,X') pplus_sym = sym.lambdify(X, sym.integrate((3*U+2)*(-4*(X-U)-4), (U,0,X))) dct = {'Min': sp.minimum}; #it's best if we tell lambdify what to use for Min pdivide_sym = sym.lambdify(X, sym.integrate(U*(3*U+2)*(-4*(U/X)-4), (U,0,sym.Min(X,1)))/(X**2), dct) </code></pre> <p>Then the result is</p> <pre><code>print(pplus_sym(x), pdivide_sym(x)) -2.016 -8.06666666667 </code></pre>
python|numpy|scipy
1
7,727
34,226,605
Bizarre behavior of scipy.linalg.eigvalsh in Python
<p>I wanted to compare <code>scipy</code>'s and <code>numpy</code>'s routine for calculating eigenvalues of Hermitian matrices (<code>eigvalsh</code>), and ran into some unexpected behavior.</p> <p>In particular, <code>scipy</code>'s <code>eigvalsh</code> routine returns practically the same eigenvalues as <code>numpy</code>'s <code>eigvalsh</code> routine, but only for matrices with dimension smaller than 2000 x 2000. Example from Spyder 2.3.7 (Python 3.5):</p> <pre><code>In [1]: import scipy In [2]: import numpy as np In [3]: H = np.random.rand(1999,1999) + np.random.rand(1999,1999) * 1j ...: H = H + H.conj().T ...: Es = scipy.linalg.eigvalsh(H) ...: En = np.linalg.eigvalsh(H) ...: sum(abs(En - Es)) Out[3]: 1.85734656821257e-10 In [4]: H = np.random.rand(2000,2000) + np.random.rand(2000,2000) * 1j ...: H = H + H.conj().T ...: Es = scipy.linalg.eigvalsh(H) ...: En = np.linalg.eigvalsh(H) ...: sum(abs(En - Es)) Out[4]: 89786.239714130075 </code></pre> <p>In fact, if <code>H.shape &gt;= (2000,2000)</code>, the array of eigenvalues <code>Es</code> contains only zeros, apart from the last element, which equals 24 times the length of the array:</p> <pre><code>In [5]: np.unique(Es[:-1]) Out[5]: array([ 0.]) In [6]: Es[-1] Out[6]: 48000.0 </code></pre> <p>What is happening here?</p>
<p>For <code>*.linalg.eigvalsh</code> routines, <code>numpy</code> invokes LAPACK <code>-EVD</code> routines, which are based on a divide-and-conquer algorithm. <code>scipy</code> invokes LAPACK <code>-EVR</code>, which uses a different algorithm.</p> <p>For LAPACK docs, <a href="http://www.netlib.org/lapack/lug/node30.html" rel="nofollow noreferrer">see here</a>. This explains the slight numerical differences you're seeing in the under-2000 case.</p> <p>The "all zeros" situation does indeed look like a bug, but I cannot replicate on <code>scipy == '0.17.1'</code> and <code>np == '1.11.0'</code>. Does the issue persist after upgrading?</p>
python|numpy|scipy
0
7,728
36,776,494
How to load Image as binary image with threshold in Scipy?
<p>I have a grayscale image--the MNIST dataset, actually--and I need to convert it to a binary image with a threshold of, say, 240 so that all values below 240 are ones and all values above are zeros. </p> <p>This is a function in matlab, so I'm sure there is some corresponding function in scipy...but it is eluding my searches. </p> <hr> <p>Equivalently, if I have a (60000,28,28) shaped ndarray, how do I conditionally inspect all of the values and set values above 240 to zero and the rest to 1?</p> <p>In pseudo-numpy code, </p> <pre><code>image_array = big_array_of_28x28_images bw_image_array = image_array[image_array &gt; 240 yield 0, else yield 1] </code></pre>
<p>If <code>A</code> is your matrix, the binary matrix <code>B</code> is:</p> <pre><code>B = np.where(A &lt;= 240, 1, 0) </code></pre>
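<p>This works unchanged on the stacked (60000, 28, 28) array from the question, since <code>np.where</code> operates element-wise; a small sketch with random stand-in data, including an equivalent boolean-cast spelling:</p> <pre><code>import numpy as np

image_array = np.random.randint(0, 256, size=(60000, 28, 28))  # stand-in for MNIST
bw_image_array = np.where(image_array &lt;= 240, 1, 0)

# equivalent: a boolean comparison cast to integers
bw_image_array = (image_array &lt;= 240).astype(np.uint8)
</code></pre>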
python|numpy|scipy
1
7,729
37,049,887
Print highest peak value of the frequency domain plot
<p>I've attempted to plot the oscillations of my home made quad in the time and frequency domain. How do I print the value of my highest peak in the frequency domain plot?</p> <p>code:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np from scipy import fft, arange csv = np.genfromtxt ('/Users/shaunbarney/Desktop/Results/quadOscillations.csv', delimiter=",",dtype=float) x = csv[:,0] y = csv[:,1] x = x - 6318 #Remove start offset av=0 for i in xrange(1097): #Calculate average sampling time in seconds oscillations if i == 1076: avSampleTime = av/1097000 # break av = av + (x[i+1]-x[i]) Fs = 1/avSampleTime #Average sampling freq. n = 1079 #no.Samples k = arange(n) Ts = n/Fs frq = k/Ts #Frequency range two sided frq = frq[range(n/2)] #Frequency range one sided Y = fft(y)/n #Fast fourier transform Y = Y[range(n/2)] #Normalise # PLOTS plt.subplot(2,1,1) plt.plot(frq,abs(Y),'r') # plotting the spectrum plt.xlabel('Freq (Hz)') plt.ylabel('|Y(freq)|') plt.grid('on') plt.subplot(2,1,2) plt.plot(x,y) plt.xlabel('Time (ms)') plt.ylabel('Angle (degrees)') plt.grid('on') plt.show() </code></pre> <p>The results look like:</p> <p><a href="https://i.stack.imgur.com/B65jX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B65jX.png" alt="enter image description here"></a></p> <p>Thanks, Shaun</p>
<p>Since you're using <code>numpy</code>, just simply use <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.ndarray.argmax.html#numpy.ndarray.argmax" rel="noreferrer"><code>numpy.max</code></a> and <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.argmax.html" rel="noreferrer"><code>numpy.argmax</code></a> to determine the peak as well as the location of the peak so you can print this out to your screen. Once you find this location, index into your frequency array to obtain the final coordinate. </p> <p>Assuming that all of your variables have been created when you run your code, simply do the following:</p> <pre><code>mY = np.abs(Y) # Find magnitude peakY = np.max(mY) # Find max peak locY = np.argmax(mY) # Find its location frqY = frq[locY] # Get the actual frequency value </code></pre> <p><code>peakY</code> contains the magnitude value that is the largest in your graph and <code>frqY</code> contains the frequency that this largest value (i.e. peak) is located at. As a bonus, you can plot that on your graph in a different colour and with a larger marker to distinguish it from the main magnitude graph. Remember that invoking multiple <code>plot</code> calls will simply append on top of the current figure of focus. Therefore, plot your spectrum then plot this point on top of the spectrum. I'll make the size of the point larger than the thickness of the plot as well as marking the point with a different colour. You could also perhaps make a title that reflects this largest peak value and the corresponding location.</p> <p>Also remember that this is to be done on the magnitude, so before you plot your actual magnitude, simply do this:</p> <pre><code># PLOTS # New - Find peak of spectrum - Code from above mY = np.abs(Y) # Find magnitude peakY = np.max(mY) # Find max peak locY = np.argmax(mY) # Find its location frqY = frq[locY] # Get the actual frequency value # Code from before plt.subplot(2,1,1) plt.plot(frq,abs(Y),'r') # plotting the spectrum # New - Plot the max point plt.plot(frqY, peakY, 'b.', markersize=18) # New - Make title reflecting peak information plt.title('Peak value: %f, Location: %f Hz' % (peakY, frqY)) # Rest of the code is the same plt.xlabel('Freq (Hz)') plt.ylabel('|Y(freq)|') plt.grid('on') plt.subplot(2,1,2) plt.plot(x,y) plt.xlabel('Time (ms)') plt.ylabel('Angle (degrees)') plt.grid('on') plt.show() </code></pre>
python|matlab|numpy|matplotlib|scipy
5
7,730
36,861,380
Merge pandas dataframe with unequal length
<p>I have two Pandas dataframes that I would like to merge into one. They have unequal length, but contain some of the same information.</p> <p>Here is the first dataframe:</p> <pre><code>BOROUGH TYPE TCOUNT MAN SPORT 5 MAN CONV 3 MAN WAGON 2 BRO SPORT 2 BRO CONV 3 </code></pre> <p>Where column <code>A</code> specifies a location, <code>B</code> a category and <code>C</code> a count.</p> <p>And the second:</p> <pre><code>BOROUGH CAUSE CCOUNT MAN ALCOHOL 5 MAN SIZE 3 BRO ALCOHOL 2 </code></pre> <p>Here <code>A</code> is again the same Location as in the other dataframe. But <code>D</code> is another category, and <code>E</code> is the count for <code>D</code> in that location.</p> <p>What I want (and haven't been able to do) is to get the following:</p> <pre><code>BOROUGH TYPE TCOUNT CAUSE CCOUNT MAN SPORT 5 ALCOHOL 5 MAN CONV 3 SIZE 3 MAN WAGON 2 NaN NaN BRO SPORT 2 ALCOHOL 2 BRO CONV 3 NaN NaN </code></pre> <p>&quot;-&quot; can be anything. Preferably a string saying &quot;Nothing&quot;. If they default to NaN values, I guess it's just a matter of replacing those with a string.</p> <p><strong>EDIT</strong>:<br /> Output:</p> <pre><code>&lt;class 'pandas.core.frame.DataFrame'&gt; Int64Index: 233 entries, 0 to 232 Data columns (total 3 columns): BOROUGH 233 non-null object CONTRIBUTING FACTOR VEHICLE 1 233 non-null object RCOUNT 233 non-null int64 dtypes: int64(1), object(2) memory usage: 7.3+ KB None &lt;class 'pandas.core.frame.DataFrame'&gt; Int64Index: 83 entries, 0 to 82 Data columns (total 3 columns): BOROUGH 83 non-null object VEHICLE TYPE CODE 1 83 non-null object VCOUNT 83 non-null int64 dtypes: int64(1), object(2) memory usage: 2.6+ KB None </code></pre>
<p>Perform a <code>left</code> type <a href="http://pandas.pydata.org/pandas-docs/version/0.18.0/merging.html#database-style-dataframe-joining-merging" rel="nofollow"><code>merge</code></a> on columns 'A','B' for the lhs and 'A','D' for the rhs, as these are your key columns:</p> <pre><code>In [16]: df.merge(df1, left_on=['A','B'], right_on=['A','D'], how='left') Out[16]: A B C D E 0 1 1 3 1 5 1 1 2 2 2 3 2 1 3 1 NaN NaN 3 2 1 1 1 2 4 2 2 4 NaN NaN </code></pre> <p><strong>EDIT</strong></p> <p>Your question has changed, but essentially here you can use <code>combine_first</code>:</p> <pre><code>In [26]: merged = df.combine_first(df1) merged Out[26]: BOROUGH CAUSE CCOUNT TCOUNT TYPE 0 MAN ALCOHOL 5 5 SPORT 1 MAN SIZE 3 3 CONV 2 MAN ALCOHOL 2 2 WAGON 3 BRO NaN NaN 2 SPORT 4 BRO NaN NaN 3 CONV </code></pre> <p>The <code>NaN</code> entries you see for 'CAUSE' and 'CCOUNT' are missing values; we can use <code>fillna</code> to replace them:</p> <pre><code>In [27]: merged['CAUSE'] = merged['CAUSE'].fillna('Nothing') merged['CCOUNT'] = merged['CCOUNT'].fillna(0) merged Out[27]: BOROUGH CAUSE CCOUNT TCOUNT TYPE 0 MAN ALCOHOL 5 5 SPORT 1 MAN SIZE 3 3 CONV 2 MAN ALCOHOL 2 2 WAGON 3 BRO Nothing 0 2 SPORT 4 BRO Nothing 0 3 CONV </code></pre>
python|pandas|dataframe|merge
5
7,731
54,789,591
How to find all triplets of nodes (connected components of size 3) from a tsv file?
<p>From the matrix given below, I have to make a network and find all connected components of size 3. The dataset I use is:</p> <pre><code>0 1 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 </code></pre> <p>Can anybody help? And the expected connected triplets will be:</p> <pre><code>1 2 3 1 3 4 2 1 3 3 1 4 </code></pre> <p>My code:</p> <pre><code>import networkx as nx from itertools import chain import csv import numpy as np import pandas as pd adj = [] infile="2_mutual_info_adjacency.tsv" df=pd.read_csv(infile,delimiter="\t",header=None) arr=np.array(df.iloc[0:10,:]) arr1=np.array(df.iloc[:,0:10]) for i in range(arr): for j in range(arr1): if (i,j)==1: for k in range(j+1,arr1): if (i,k)==1: adj.append(i,j,k) for l in range(i+1,arr): if(l,j)==1: adj.append(i,j,l) </code></pre> <p>A little bit of help will be appreciated. Thank you in advance.</p>
<p>You can find all connected components using the function <code>connected_components()</code>. Subsequently you can keep only the components which consist of three nodes:</p> <pre><code>import networkx as nx import pandas as pd from itertools import chain adj_matrix = [ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 0, 0, 0], [0, 0, 1, 0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ] df = pd.DataFrame(adj_matrix) G = nx.from_pandas_adjacency(df) # filter components of size 3 triplets = [c for c in nx.connected_components(G) if len(c) == 3] triplets = set(chain.from_iterable(triplets)) color = ['lime' if n in triplets else 'pink' for n in G.nodes()] # jupyter notebook %matplotlib inline nx.draw(G, with_labels=True, node_color=color, node_size=1000) </code></pre> <p><a href="https://i.stack.imgur.com/GMRzF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GMRzF.png" alt="Graph" /></a></p>
python|python-3.x|pandas|networkx
2
7,732
49,530,410
Python `expm` of an `(N,M,M)` matrix
<p>Let <code>A</code> be an <code>(N,M,M)</code> matrix (with <code>N</code> very large) and I would like to compute <code>scipy.linalg.expm(A[n,:,:])</code> for each <code>n in range(N)</code>. I can of course just use a <code>for</code> loop but I was wondering if there was some trick to do this in a better way (something like <code>np.einsum</code>).</p> <p>I have the same question for other operations like inverting matrices (inverting solved in comments).</p>
<p>Depending on the size and structure of your matrices you can do better than a loop.</p> <p>Assuming your matrices can be diagonalized as <code>A = V D V^(-1)</code> (where <code>D</code> has the eigenvalues in its diagonal and <code>V</code> contains the corresponding eigenvectors as columns), you can compute the matrix exponential as</p> <pre><code>exp(A) = V exp(D) V^(-1) </code></pre> <p>where <code>exp(D)</code> simply contains <code>exp(lambda)</code> for each eigenvalue <code>lambda</code> in its diagonal. This is really easy to prove if we use the power series definition of the exponential function. If the matrix <code>A</code> is furthermore normal, the matrix <code>V</code> is unitary and thus its inverse can be computed by simply taking its adjoint.</p> <p>The good news is that <code>numpy.linalg.eig</code> and <code>numpy.linalg.inv</code> both work with stacked matrices just fine:</p> <pre><code>import numpy as np import scipy.linalg A = np.random.rand(1000,10,10) def loopy_expm(A): expmA = np.zeros_like(A) for n in range(A.shape[0]): expmA[n,...] = scipy.linalg.expm(A[n,...]) return expmA def eigy_expm(A): vals,vects = np.linalg.eig(A) return np.einsum('...ik, ...k, ...kj -&gt; ...ij', vects,np.exp(vals),np.linalg.inv(vects)) </code></pre> <p>Note that there's probably some room for optimization in specifying the order of operations in the call to <code>einsum</code>, but I didn't investigate that.</p> <p>Testing the above for the random array:</p> <pre><code>In [59]: np.allclose(loopy_expm(A),eigy_expm(A)) Out[59]: True In [60]: %timeit loopy_expm(A) 824 ms ± 55.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [61]: %timeit eigy_expm(A) 138 ms ± 992 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) </code></pre> <p>That's already nice. If you're lucky enough that your matrices are all normal (say, because they are real symmetric):</p> <pre><code>A = np.random.rand(1000,10,10) A = (A + A.transpose(0,2,1))/2 def eigy_expm_normal(A): vals,vects = np.linalg.eig(A) return np.einsum('...ik, ...k, ...jk -&gt; ...ij', vects,np.exp(vals),vects.conj()) </code></pre> <p>Note the symmetric definition of the input matrix and the transpose inside the pattern of <code>einsum</code>. Results:</p> <pre><code>In [80]: np.allclose(loopy_expm(A),eigy_expm_normal(A)) Out[80]: True In [79]: %timeit loopy_expm(A) 878 ms ± 89.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [80]: %timeit eigy_expm_normal(A) 55.8 ms ± 868 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) </code></pre> <p>That is a 15-fold speedup for the above example shapes.</p> <hr> <p>It should be noted though that <code>scipy.linalg.expm</code> uses Padé approximation according to the documentation. This might imply that if your matrices are ill-conditioned, the eigenvalue decomposition may yield different results than <code>scipy.linalg.expm</code>. I'm not familiar with how this function works, but I expect it to be safer for pathological inputs.</p>
python|numpy|scipy|linear-algebra|numpy-einsum
4
7,733
49,639,523
What is going on behind the Pandas scenes that is causing a level in a MultiIndex not to be dropped?
<p>Consider the data frame <code>df</code><br> Note that the columns object is a single level MultiIndex.</p> <pre><code>midx = pd.MultiIndex.from_product([list('AB')]) df = pd.DataFrame(1, range(3), midx) A B 0 1 1 1 1 1 2 1 1 </code></pre> <p>Now when I reference column <code>'A'</code></p> <pre><code>df.A A 0 1 1 1 2 1 </code></pre> <p>I get a single column data frame and not the series object I expected. Consequently, I can indefinitely reference this column.</p> <pre><code>df.A.A.A.A.A A 0 1 1 1 2 1 </code></pre> <p>As another check, I used <code>xs</code></p> <pre><code>df.xs('A', axis=1) A 0 1 1 1 2 1 </code></pre> <p>Same problem.<br> <code>pd.IndexSlice</code>?</p> <pre><code>df.loc[:, pd.IndexSlice['A']] A 0 1 1 1 2 1 </code></pre> <p>How about <code>squeeze</code></p> <pre><code>df.A.squeeze() 0 1 1 1 2 1 Name: (A,), dtype: int64 </code></pre> <p>This isn't at all what I expected.</p> <ol> <li>What is preventing this from turning into a series object with the name of <code>'A'</code>?</li> <li>What is the most intuitive way to fix this?</li> <li>Is there any good reason why we should ever want a single level <code>MultiIndex</code>?</li> </ol>
<p>I wrote this to fix the problem.</p> <pre><code>def fix_single_level_multiindex(midx): return midx.get_level_values(0) if midx.nlevels == 1 else midx </code></pre> <p>Or</p> <pre><code>def fix_single_level_multiindex(midx): return midx.levels[0][midx.labels[0]] if midx.nlevels == 1 else midx </code></pre>
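<p>For example, applied to the frame from the question (assuming the same <code>df</code> as above), this restores the expected behaviour:</p> <pre><code>df.columns = fix_single_level_multiindex(df.columns)

df.A  # now returns a Series named 'A' instead of a one-column DataFrame
</code></pre>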
python|pandas|multi-index
1
7,734
28,095,803
Using linalg.block_diag for variable number of blocks
<p>So I have a code that generates various matrices. These matrices need to be stored in a block diagonal matrix. This should be fairly simple as I can use scipy's:</p> <pre><code>scipy.linalg.block_diag(*arrs) </code></pre> <p>However the problem I have is I don't know how many matrices will need to be stored like this. I want to keep things as simple as possible (naturally). I thought of doing something like:</p> <pre><code>scipy.linalg.block_diag( matrix_list[ii] for ii in range(len(matrix_list)) ) </code></pre> <p>But this doesn't work. I can think of a few other ways to do it... but they all become quite convoluted for something I feel should be much simpler.</p> <p>Does anyone have an idea (or know) a simple way of carrying this out?</p> <p>Thanks in advance!</p>
<p>When you do:</p> <pre><code>scipy.linalg.block_diag( matrix_list[ii] for ii in range(len(matrix_list)) ) </code></pre> <p>you're passing a generator expression to <code>block_diag</code>, which is not the way to use it.</p> <p>Instead, use the <code>*</code> operator to expand the argument list in the function call, like:</p> <pre><code>scipy.linalg.block_diag(*matrix_list) </code></pre>
python|numpy|matrix|scipy|parameter-passing
6
7,735
28,353,414
Converting a matlab script into python
<p>I'm an undergrad at university participating in a research credit with a professor, so this is pretty much an independent project for me. </p> <p>I am converting a matlab script into a python (3.4) script for easier use on the rest of my project. The 'find' function is employed in the script, like so:</p> <pre><code>keyindx = find(summags&gt;=cumthresh,1) </code></pre> <p>Keyindx would contain the location of the first value inside summags at or above cumthresh. </p> <p>So, as an example:</p> <pre><code>summags = [ 1 4 8 16 19] cumthresh = 5 </code></pre> <p>then keyindx would return with an index of 2, whose element corresponds to 8. </p> <p>My question is, I am trying to find a similar function in python (I am also using numpy and can use whatever library I need) that will work the same way. I mean, coming from a background in C I know how to get everything I need, but I figure there's a better way to do this than just writing some C style code. </p> <p>So, any hints about where to look in the python docs and about finding useful functions in general?</p>
<p>A quick search led me to the <code>argwhere</code> function which you can combine with <code>[0]</code> to get the first index satisfying your condition. For example,</p> <pre><code>&gt;&gt; import numpy as np &gt;&gt; x = np.array(range(1,10)) &gt;&gt; np.argwhere(x &gt; 5)[0] array([5]) </code></pre> <p>This isn't quite the same as saying</p> <pre><code>find(x &gt; 5, 1) </code></pre> <p>in MATLAB, since the Python code will throw an <code>IndexError</code> if none of the values satisfy your condition (whereas MATLAB returns an empty array). However, you can catch this and deal with it appropriately, for example</p> <pre><code>try: ind = np.argwhere(x &gt; 5)[0] except IndexError: ind = np.array([1]) </code></pre>
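<p>A closely related idiom, which avoids the exception entirely, uses <code>np.argmax</code> on the boolean mask (it returns the index of the first <code>True</code>); just be aware of the caveat in the comment:</p> <pre><code>import numpy as np

x = np.array(range(1, 10))
ind = np.argmax(x &gt; 5)  # index of the first element satisfying the condition

# caveat: argmax returns 0 when *no* element matches, so when a match
# is not guaranteed, check the mask first:
if not (x &gt; 5).any():
    ind = None
</code></pre>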
matlab|numpy|python-3.4
2
7,736
73,241,736
Fill the missing value in Age column values by (means of the age of the players belonging to that particular game)
<p>Let's suppose there is a missing value of Age where the sport is Swimming; then replace that missing value of age with the mean age of all the players who belong to Swimming. Similarly for all other sports. How can I do that?</p> <p><a href="https://i.stack.imgur.com/OTROf.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>This is how you can fill the age with the overall mean value of the column:</p> <pre><code>df['Age'].fillna(int(df['Age'].mean()), inplace=True) </code></pre> <p>You can also use sklearn to achieve that; note that <code>SimpleImputer</code> expects a 2D array, so keep the column selection two-dimensional:</p> <pre><code>import pandas as pd import numpy as np from sklearn.impute import SimpleImputer df = pd.read_csv(&quot;data.csv&quot;) X = df.iloc[:, [0]].values # 2D slice of the column to impute imputer = SimpleImputer(missing_values=np.nan, strategy='mean') imputer = imputer.fit(X) X = imputer.transform(X) print(X) </code></pre>
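<p>Since the question asks for the mean within each sport specifically, a grouped fill does that directly; this is a sketch assuming the columns are named <code>Age</code> and <code>Sport</code>:</p> <pre><code># fill each missing Age with the mean Age of the rows sharing the same Sport
df['Age'] = df.groupby('Sport')['Age'].transform(lambda s: s.fillna(s.mean()))
</code></pre>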
pandas
0
7,737
73,242,757
using a for loop to rename columns of a list of data frames
<p>I have a list of dataframes I have saved in a variable x.</p> <pre><code>x=[df_1963,df_1974,df_1985,df_1996,df_2007,df_2018] </code></pre> <p>I wish to change all the headers to lowercase, but nothing happens after running the code below.</p> <pre><code>for df in x: for column in df.columns: df = df.withColumnRenamed(column, '_'.join(column.split()).lower()) </code></pre>
<p>You can do it with a list and a dict comprehension. Try:</p> <pre><code>renamed_x = [a.rename(columns={y:y.lower() for y in a}) for a in x] </code></pre> <p>The inner dict comprehension builds a mapping from each column's original name to its lowercase version, which is how each dataframe's columns are renamed, while the outer list comprehension iterates over all the dfs.</p>
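<p>Two side notes, in case they help: <code>withColumnRenamed</code> is a PySpark method, not a pandas one, and in any case <code>df = ...</code> inside the loop only rebinds the loop variable, so the frames stored in <code>x</code> are never touched. If you want to modify the pandas frames in place instead of building a new list, you can assign to <code>.columns</code> directly:</p> <pre><code>for df in x:
    # assigning to .columns mutates the frame itself, so the list sees the change
    df.columns = ['_'.join(c.split()).lower() for c in df.columns]
</code></pre>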
python|pandas|dataframe
0
7,738
73,365,975
Appending series to list, then converting to dataframe. Everything works, but if list gets too large it returns a numpy memory error
<p>The code works fine and outputs exactly what I need, but when db_diff_3 gets above about 5k lines, it spits out a memory error and breaks. It is using a lot of memory here (almost 7gbs) when running. A 5K list should be small for a list of series, which is why I'm not understanding why it's doing this. Any help would be appreciated. A sample of db_diff_full_2 is below the code.</p> <pre><code>db_diff_3 = [] db_diff_drops = [] db_diff_adds = [] for r, c in db_diff_full_2.iterrows(): for i in db_diff_full_2.columns.get_level_values(0): if db_diff_full_2.loc[r, (i, 'Old')] != db_diff_full_2.loc[r, (i, 'New')]: #compare old vs new values for each column (i) in each row. if at least one col has a difference, append entire row. try: if len(db_diff_full_2.loc[r, ('CompanyName', 'New')]) &lt; 1: #db_diff_drops.append(db_diff_full_2.loc[r]) # add new add records to separate frame. can shut this line off to aid in memory useage db_diff_full_2.drop(index=[r], inplace=True) #drop new add records from file #print('Dropped', r) break elif len(db_diff_full_2.loc[r, ('CompanyName', 'Old')]) &lt; 1: #db_diff_adds.append(db_diff_full_2.loc[r]) # add deleted records to separate frame - these likely were dissolved. can shut this line off to aid in memory useage db_diff_full_2.drop(index=[r], inplace=True) #drop deleted records from main list #print('Added', r) break else: db_diff_3.append(db_diff_full_2.loc[r]) # add any records with changes in old vs new columns values to changes file break except np.core._exceptions.MemoryError: #exception if export file is too large print(&quot;File too large to export!!&quot;) return #end script if cannot add any more lines to main db_diff_3 </code></pre> <p>Sample of <code>db_diff_full_2</code>:</p> <pre><code>CompanyName ... IncorporationDate Old New ... Old New CompanyNumber ... 08209948 ! LTD ! LTD ... 11/09/2012 11/09/2012 11399177 !? LTD !? LTD ... 05/06/2018 05/06/2018 11743365 !BIG IMPACT GRAPHICS LIMITED !BIG IMPACT GRAPHICS LIMITED ... 28/12/2018 28/12/2018 13404790 !GOBERUB LTD !GOBERUB LTD ... 17/05/2021 17/05/2021 13522064 !NFOGENIE LTD !NFOGENIE LTD ... 21/07/2021 21/07/2021 </code></pre>
<p>I'm not sure why your code gives a memory error but I wanted to share an approach that doesn't require the <code>for</code> loop and I believe shouldn't cause a memory issue.</p> <p>The main idea is to separate the <code>old</code> and <code>new</code> parts of your <code>df2</code> table into separate tables to make comparing easier. I've created an example test table and the variable <code>df2</code> refers to your <code>db_diff_full_2</code></p> <pre><code>import pandas as pd #Create an example table df2 = pd.DataFrame({ ('Company_name','Old'):['A','B','C','D','E'], ('Company_name','New'):['A','','C','D','E'], ('IncorporationDate','Old'):['Jan 1','Jan 2','Jan 3','Jan 4','Jan 5'], ('IncorporationDate','New'):['Jan 8','Jan 2','Jan 3','Jan 4','Jan 5'], }) df2.index.name = 'CompanyNumber' print(df2) #Separate old and new columns into separate tables old = df2.loc[:,pd.IndexSlice[:,'Old']] new = df2.loc[:,pd.IndexSlice[:,'New']] #Drop the multi-column index on old/new old.columns = old.columns.droplevel(1) new.columns = new.columns.droplevel(1) #Test whether each row has ANY mismatches between old and new in any column any_column_mismatch = old.ne(new).any(axis=1) #Test if the old or new name is blank (in which case we don't want it to go to df3) old_name_not_blank = old['Company_name'].apply(len).gt(0) new_name_not_blank = new['Company_name'].apply(len).gt(0) df3 = df2.loc[any_column_mismatch &amp; old_name_not_blank &amp; new_name_not_blank].copy() print(df3) </code></pre> <p>Here's what <code>df2</code> looks like:</p> <p><a href="https://i.stack.imgur.com/ZzCkt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZzCkt.png" alt="enter image description here" /></a></p> <p>Only the CompanyNumber 0 row is copied to <code>df3</code>. The CompanyNumber 1 is not copied over because the <code>New</code> name is blank</p> <p><a href="https://i.stack.imgur.com/d9nKa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d9nKa.png" alt="enter image description here" /></a></p>
python|pandas|dataframe|memory|append
0
7,739
35,284,883
How to search by Date given Datetime index
<p>Given a DataFrame, where the index is Datetime, how can I retrieve the row(s) by matching only on the Date portion?</p> <p>For example:</p> <pre><code>df1 = A B C D 2011-01-13 16:00:00 344 144 616 73 2011-01-14 16:00:00 346 145 624 74 2011-01-18 16:00:00 339 146 639 77 ... </code></pre> <p>And given:</p> <pre><code>df2['Date'] = 0 2011-01-13 1 2011-01-13 2 2011-01-26 3 2011-02-02 4 2011-02-10 5 2011-03-03 6 2011-03-03 7 2011-06-03 8 2011-05-03 9 2011-06-10 10 2011-08-01 11 2011-08-01 12 2011-12-20 </code></pre> <p>I want something like this:</p> <pre><code>for indx, row in df2.iterrows(): print df1.loc[df1.index.date() == row['Date'].date()] </code></pre>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.to_series.html" rel="nofollow"><code>to_series</code></a>, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.date.html" rel="nofollow"><code>date</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a>:</p> <pre><code>print df1.index.to_series().dt.date 2011-01-13 16:00:00 2011-01-13 2011-01-14 16:00:00 2011-01-14 2011-01-18 16:00:00 2011-01-18 dtype: object print df1.index.to_series().dt.date.isin(df2['Date'].dt.date) Name: Date, dtype: object 2011-01-13 16:00:00 True 2011-01-14 16:00:00 False 2011-01-18 16:00:00 False dtype: bool print df1[df1.index.to_series().dt.date.isin(df2['Date'].dt.date)] A B C D 2011-01-13 16:00:00 344 144 616 73 </code></pre> <p>Or maybe you need:</p> <pre><code>print df1.index.date [datetime.date(2011, 1, 13) datetime.date(2011, 1, 14) datetime.date(2011, 1, 18)] print df2['Date'].dt.date.isin(df1.index.date) 0 True 1 True 2 False 3 False 4 False 5 False 6 False 7 False 8 False 9 False 10 False 11 False 12 False Name: Date, dtype: bool print df2[df2['Date'].dt.date.isin(df1.index.date)] Date 0 2011-01-13 1 2011-01-13 </code></pre>
python|datetime|pandas
3
7,740
31,214,916
How can I read successive arrays from a binary file using `np.fromfile`?
<p>I want to read a binary file in Python, the exact layout of which is stored in the binary file itself.</p> <p>The file contains a sequence of two-dimensional arrays, with the row and column dimensions of each array stored as a pair of integers preceding its contents. I want to successively read all of the arrays contained within the file.</p> <p>I know this can be done with <code>f = open("myfile", "rb")</code> and <code>f.read(numberofbytes)</code>, but this is quite clumsy because I would then need to convert the output into meaningful data structures. I would like to use numpy's <code>np.fromfile</code> with a custom <code>dtype</code>, but have not found a way to read part of the file, leaving it open, and then continue reading with a modified <code>dtype</code>.</p> <p>I know I can use <code>os</code> to <code>f.seek(numberofbytes, os.SEEK_SET)</code> and <code>np.fromfile</code> multiple times, but this would mean a lot of unnecessary jumping around in the file. </p> <p>In short, I want MATLAB's <code>fread</code> (or at least something like C++ <code>ifstream</code> <code>read</code>).</p> <p>What is the best way to do this?</p>
<p>You can pass an open file object to <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfile.html" rel="noreferrer"><code>np.fromfile</code></a>, read the dimensions of the first array, then read the array contents (again using <code>np.fromfile</code>), and repeat the process for additional arrays within the same file.</p> <p>For example:</p> <pre><code>import numpy as np import os def iter_arrays(fname, array_ndim=2, dim_dtype=np.int, array_dtype=np.double): with open(fname, 'rb') as f: fsize = os.fstat(f.fileno()).st_size # while we haven't yet reached the end of the file... while f.tell() &lt; fsize: # get the dimensions for this array dims = np.fromfile(f, dim_dtype, array_ndim) # get the array contents yield np.fromfile(f, array_dtype, np.prod(dims)).reshape(dims) </code></pre> <p>Example usage:</p> <pre><code># write some random arrays to an example binary file x = np.random.randn(100, 200) y = np.random.randn(300, 400) with open('/tmp/testbin', 'wb') as f: np.array(x.shape).tofile(f) x.tofile(f) np.array(y.shape).tofile(f) y.tofile(f) # read the contents back x1, y1 = iter_arrays('/tmp/testbin') # check that they match the input arrays assert np.allclose(x, x1) and np.allclose(y, y1) </code></pre> <p>If the arrays are large, you might consider using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html" rel="noreferrer"><code>np.memmap</code></a> with the <code>offset=</code> parameter in place of <code>np.fromfile</code> to get the contents of the arrays as memory-maps rather than loading them into RAM.</p>
python|numpy
5
7,741
31,057,197
Should I use `random.seed` or `numpy.random.seed` to control random number generation in `scikit-learn`?
<p>I'm using scikit-learn and numpy and I want to set the global seed so that my work is reproducible.</p> <p>Should I use <code>numpy.random.seed</code> or <code>random.seed</code>?</p> <p>From the link in the comments, I understand that they are different, and that the numpy version is not thread-safe. I want to know specifically which one to use to create IPython notebooks for data analysis. Some of the algorithms from scikit-learn involve generating random numbers, and I want to be sure that the notebook shows the same results on every run.</p>
<blockquote> <p>Should I use np.random.seed or random.seed?</p> </blockquote> <p>That depends on whether in your code you are using numpy's random number generator or the one in <code>random</code>.</p> <p>The random number generators in <code>numpy.random</code> and <code>random</code> have totally separate internal states, so <code>numpy.random.seed()</code> will not affect the random sequences produced by <code>random.random()</code>, and likewise <code>random.seed()</code> will not affect <code>numpy.random.randn()</code> etc. If you are using both <code>random</code> and <code>numpy.random</code> in your code then you will need to separately set the seeds for both.</p> <h2>Update</h2> <p>Your question seems to be specifically about scikit-learn's random number generators. As far as I can tell, scikit-learn uses <code>numpy.random</code> throughout, so you should use <code>np.random.seed()</code> rather than <code>random.seed()</code>.</p> <p>One important caveat is that <code>np.random</code> is not threadsafe - if you set a global seed, then launch several subprocesses and generate random numbers within them using <code>np.random</code>, each subprocess will inherit the RNG state from its parent, meaning that you will get identical random variates in each subprocess. The usual way around this problem is to pass a different seed (or <code>numpy.random.RandomState</code> instance) to each subprocess, such that each one has a separate local RNG state.</p> <p>Since some parts of scikit-learn can run in parallel using joblib, you will see that some classes and functions have an option to pass either a seed or an <code>np.random.RandomState</code> instance (e.g. the <code>random_state=</code> parameter to <a href="http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.MiniBatchSparsePCA.html#sklearn.decomposition.MiniBatchSparsePCA" rel="noreferrer"><code>sklearn.decomposition.MiniBatchSparsePCA</code></a>). I tend to use a single global seed for a script, then generate new random seeds based on the global seed for any parallel functions.</p>
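<p>A minimal sketch of these points (the estimator and its parameters are just examples):</p> <pre><code>import random

import numpy as np
from sklearn.decomposition import MiniBatchSparsePCA

random.seed(0)     # seeds the stdlib generator only
np.random.seed(0)  # seeds numpy's generator only -- the one scikit-learn uses

# for estimators that accept it, an explicit RandomState avoids relying
# on the global numpy seed entirely
rng = np.random.RandomState(0)
pca = MiniBatchSparsePCA(n_components=5, random_state=rng)
</code></pre>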
python|numpy|random|scikit-learn|random-seed
57
7,742
31,069,951
Python: list vs. np.array: switching to use certain attributes
<p>I know, there are plenty of threads about list vs. array but I've got a slightly different problem.</p> <p>Using Python, I find myself converting between np.array and list quite often as I want to use attributes like</p> <p>remove, append, extend, sort, index, … for lists</p> <p>and on the other hand modify the content by things like</p> <p>*, /, +, -, np.exp(), np.sqrt(), … which only works for arrays.</p> <p>It must be pretty messy to switch between data types with list(array) and np.asarray(list), I assume. But I just can't think of a proper solution. I don't really want to write a loop every time I want to find and remove something from my array.</p> <p>Any suggestions?</p>
<p>A numpy array:</p> <pre><code>&gt;&gt;&gt; A=np.array([1,4,9,2,7]) </code></pre> <p>delete:</p> <pre><code>&gt;&gt;&gt; A=np.delete(A, [2,3]) &gt;&gt;&gt; A array([1, 4, 7]) </code></pre> <p>append (beware: it's <strong>O(n)</strong>, unlike list.append which is <strong>O(1)</strong>):</p> <pre><code>&gt;&gt;&gt; A=np.append(A, [5,0]) &gt;&gt;&gt; A array([1, 4, 7, 5, 0]) </code></pre> <p>sort:</p> <pre><code>&gt;&gt;&gt; np.sort(A) array([0, 1, 4, 5, 7]) </code></pre> <p>index:</p> <pre><code>&gt;&gt;&gt; A array([1, 4, 7, 5, 0]) &gt;&gt;&gt; np.where(A==7) (array([2]),) </code></pre>
python|arrays|list|numpy|type-conversion
3
7,743
67,452,037
How to preprocess a dataset for BERT model implemented in Tensorflow 2.x?
<h2>Overview</h2> <p>I have a dataset made for a classification problem. There are two columns: one is <code>sentences</code> and the other is <code>labels</code> (total: 10 labels). I'm trying to convert this dataset to implement it in a BERT model made for classification and that is implemented in Tensorflow 2.x. However, I can't correctly preprocess the dataset to make a <code>PrefetchDataset</code> used as input.</p> <h2>What I did</h2> <ul> <li>Dataframe is balanced and shuffled (every label has 18708 rows)</li> <li>Dataframe shape: (187080, 2)</li> <li><code>from sklearn.model_selection import train_test_split</code> was used to split the dataframe</li> <li>80% train data, 20% test data</li> </ul> <h3>Training data:</h3> <p><strong>X_train</strong></p> <pre><code>array(['i hate megavideo stupid time limits', 'wow this class got wild quick functions are a butt', 'got in trouble no cell phone or computer for a you later twitter', ..., 'we lied down around am rose a few hours later party still going lt', 'i wanna miley cyrus on brazil i love u my diva miley rocks', 'i know i hate it i want my dj danger bck'], dtype=object) </code></pre> <p><strong>y_train</strong></p> <pre class="lang-py prettyprint-override"><code>array(['unfriendly', 'unfriendly', 'unfriendly', ..., 'pos_hp', 'friendly', 'friendly'], dtype=object) </code></pre> <p><strong>BERT preprocessing Xy_dataset</strong></p> <pre class="lang-py prettyprint-override"><code>AUTOTUNE = tf.data.AUTOTUNE # autotune the buffer_size: optional = 1 train_Xy_slices = tf.data.Dataset.from_tensor_slices(tensors=(X_train, y_train)) dataset_train_Xy = train_Xy_slices.batch(batch_size=32) </code></pre> <p><strong>output</strong></p> <pre class="lang-py prettyprint-override"><code>dataset_train_Xy &lt;PrefetchDataset shapes: ((None,), (None,)), types: (tf.string, tf.string)&gt; for i in dataset_train_Xy: print(i) ( &lt;tf.Tensor: shape=(32,), dtype=string, numpy= array([b'some of us had to work al day', ... b'feels claudia cazacus free falling feat audrey gallagher amp thomas bronzwaers look ahead are the best trance offerings this summer'], dtype=object)&gt;, &lt;tf.Tensor: shape=(32,), dtype=string, numpy= array([b'interested', b'uninterested', b'happy', b'friendly', b'neg_hp', ... b'friendly', b'insecure', b'pos_hp', b'interested', b'happy'], dtype=object)&gt; ) </code></pre> <h2>Expected output (example)</h2> <pre class="lang-py prettyprint-override"><code>dataset_train_Xy &lt;PrefetchDataset shapes: ({input_word_ids: (None, 128), input_mask: (None, 128), input_type_ids: (None, 128)}, (None,)), types: ({input_word_ids: tf.int32, input_mask: tf.int32, input_type_ids: tf.int32}, tf.int64)&gt; </code></pre> <p><a href="https://i.stack.imgur.com/iqmu6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iqmu6.png" alt="enter image description here" /></a></p> <h2>Observations/problem:</h2> <p>I know I need to tokenize <code>X_train</code> and <code>y_train</code>, but when I tried to tokenize, I got an error:</p> <pre class="lang-py prettyprint-override"><code>AUTOTUNE = tf.data.AUTOTUNE # autotune the buffer_size: optional = 1 train_Xy_slices = tf.data.Dataset.from_tensor_slices(tensors=(X_train, y_train)) dataset_train_Xy = train_Xy_slices.batch(batch_size=batch_size) # 32 print(type(dataset_train_Xy)) # Tokenize the text to word pieces.
bert_preprocess = hub.load(tfhub_handle_preprocess) tokenizer = hub.KerasLayer(bert_preprocess.tokenize, name='tokenizer') dataset_train_Xy = dataset_train_Xy.map(lambda ex: (tokenizer(ex), ex[1])) # print(i[1]) # correspond to labels dataset_train_Xy = dataset_train_Xy.prefetch(buffer_size=AUTOTUNE) </code></pre> <h3>Traceback</h3> <pre class="lang-py prettyprint-override"><code>&lt;class 'tensorflow.python.data.ops.dataset_ops.BatchDataset'&gt; --------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-69-8e486f7b671b&gt; in &lt;module&gt;() 14 tokenizer = hub.KerasLayer(bert_preprocess.tokenize, name='tokenizer') 15 ---&gt; 16 dataset_train_Xy = dataset_train_Xy.map(lambda ex: (tokenizer(ex), ex[1])) # print(i[1]) #labels 17 dataset_train_Xy = dataset_train_Xy.prefetch(buffer_size=AUTOTUNE) 10 frames /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py in wrapper(*args, **kwargs) 668 except Exception as e: # pylint:disable=broad-except 669 if hasattr(e, 'ag_error_metadata'): --&gt; 670 raise e.ag_error_metadata.to_exception(e) 671 else: 672 raise TypeError: in user code: TypeError: &lt;lambda&gt;() takes 1 positional argument but 2 were given </code></pre>
<p><strong>Working sample BERT model</strong></p> <pre><code># importing necessary modules import tensorflow as tf import tensorflow_hub as hub data = {'input' :['i hate megavideo stupid time limits', 'wow this class got wild quick functions are a butt', 'got in trouble no cell phone or computer for a you later twitter', 'we lied down around am rose a few hours later party still going lt', 'i wanna miley cyrus on brazil i love u my diva miley rocks', 'i know i hate it i want my dj danger bck'], 'label' : ['unfriendly', 'unfriendly', 'unfriendly', 'unfriendly', 'friendly', 'friendly']} import pandas as pd df = pd.DataFrame(data) df['category']=df['label'].apply(lambda x: 1 if x=='friendly' else 0) from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(df['input'],df['category'], stratify=df['category']) bert_preprocess = hub.KerasLayer(&quot;https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3&quot;) bert_encoder = hub.KerasLayer(&quot;https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4&quot;) def get_sentence_embedding(sentences): preprocessed_text = bert_preprocess(sentences) return bert_encoder(preprocessed_text)['pooled_output'] get_sentence_embedding([ &quot;we lied down around am rose&quot;, &quot;i hate it i want my dj&quot;] ) # Build model # Bert layers text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text') preprocessed_text = bert_preprocess(text_input) outputs = bert_encoder(preprocessed_text) # Neural network layers l = tf.keras.layers.Dropout(0.1, name=&quot;dropout&quot;)(outputs['pooled_output']) l = tf.keras.layers.Dense(1, activation='sigmoid', name=&quot;output&quot;)(l) # Use inputs and outputs to construct a final model model = tf.keras.Model(inputs=[text_input], outputs = [l]) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) model.fit(X_train, y_train, epochs=10) </code></pre>
python|tensorflow|tokenize|bert-language-model
1
7,744
67,455,124
How to combine data from two columns based on multipe conditions in pandas?
<p>I have this dataframe where I am trying to combine <code>Email_x</code> and <code>Email_y</code> to be a column <code>email</code>.</p> <ol> <li>If both are <code>NaN</code> then the result should be <code>&quot;&quot;</code> or <code>np.nan</code>.</li> <li>If <code>Email_y</code> has a value and <code>Email_x</code> does not, then use <code>Email_y</code> as the result (and vice versa).</li> <li>If both have values then take the first one only.</li> </ol> <pre><code> Email_x Email_y Verification status_x Verification status_y 489 sample3@gmail.com NaN valid valid 975 samle4@gmail.com NaN accept_all NaN 1192 NaN NaN NaN NaN 1370 sample5@gmail.com NaN unknown NaN 2001 NaN NaN NaN NaN 2565 sample2@gmail.com sample2@gmail.com valid NaN 3900 NaN NaN NaN NaN 3998 NaN NaN NaN NaN 4192 NaN NaN NaN NaN 4757 NaN sample@gmail.com NaN NaN </code></pre> <p>I have tried using this but it fails when both have a value, and results in both values being combined.</p> <pre><code>df[&quot;email&quot;] = df[[&quot;Email_x&quot;, &quot;Email_y&quot;]].fillna(&quot;&quot;).agg(&quot; &quot;.join, axis=1) </code></pre> <h3>Result:</h3> <pre><code> Email_x Email_y Verification status_x Verification status_y email 489 sample3@gmail.com NaN valid valid sample3@gmail.com 975 samle4@gmail.com NaN accept_all NaN samle4@gmail.com 1192 NaN NaN NaN NaN 1370 sample5@gmail.com NaN unknown NaN sample5@gmail.com 2001 NaN NaN NaN NaN 2565 sample2@gmail.com sample2@gmail.com valid NaN sample2@gmail.com sample2@gmail.com 3900 NaN NaN NaN NaN 3998 NaN NaN NaN NaN 4192 NaN NaN NaN NaN 4757 NaN sample@gmail.com NaN NaN sample@gmail.com </code></pre> <h3>Expected result:</h3> <pre><code> Email_x Email_y Verification status_x Verification status_y email 489 sample3@gmail.com NaN valid valid sample3@gmail.com 975 samle4@gmail.com NaN accept_all NaN samle4@gmail.com 1192 NaN NaN NaN NaN 1370 sample5@gmail.com NaN unknown NaN sample5@gmail.com 2001 NaN NaN NaN NaN 2565 sample2@gmail.com sample2@gmail.com valid NaN sample2@gmail.com 3900 NaN NaN NaN NaN 3998 NaN NaN NaN NaN 4192 NaN NaN NaN NaN 4757 NaN sample@gmail.com NaN NaN sample@gmail.com </code></pre> <h3>Sample dictionary:</h3> <pre><code>import numpy as np a = { &quot;Email_x&quot;: { 489: &quot;sample3@gmail.com&quot;, 975: &quot;samle4@gmail.com&quot;, 1192: np.nan, 1370: &quot;sample5@gmail.com&quot;, 2001: np.nan, 2565: &quot;sample2@gmail.com&quot;, 3900: np.nan, 3998: np.nan, 4192: np.nan, 4757: np.nan, }, &quot;Email_y&quot;: { 489: np.nan, 975: np.nan, 1192: np.nan, 1370: np.nan, 2001: np.nan, 2565: &quot;sample2@gmail.com&quot;, 3900: np.nan, 3998: np.nan, 4192: np.nan, 4757: &quot;sample@gmail.com&quot;, }, &quot;Verification status_x&quot;: { 489: &quot;valid&quot;, 975: &quot;accept_all&quot;, 1192: np.nan, 1370: &quot;unknown&quot;, 2001: np.nan, 2565: &quot;valid&quot;, 3900: np.nan, 3998: np.nan, 4192: np.nan, 4757: np.nan, }, &quot;Verification status_y&quot;: { 489: &quot;valid&quot;, 975: np.nan, 1192: np.nan, 1370: np.nan, 2001: np.nan, 2565: np.nan, 3900: np.nan, 3998: np.nan, 4192: np.nan, 4757: np.nan, }, } </code></pre>
<p>Via <code>np.select</code>:</p> <pre><code>condlist = [ (df.Email_x.isna()) &amp; (~df.Email_y.isna()), # 1st column NaN but 2nd is not (df.Email_y.isna()) &amp; (~df.Email_x.isna()), # 2nd column NaN but 1st is not (~df.Email_x.isna()) &amp; (~df.Email_y.isna()) # both are not NaN ] choicelist = [ df.Email_y, df.Email_x, df.Email_x ] df['Email'] = np.select(condlist, choicelist, default='') # default value '' </code></pre>
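<p>Since the three conditions boil down to "take <code>Email_x</code> whenever it exists, otherwise take <code>Email_y</code>", the same result can also be written as an index-aligned one-liner (leaving NaN where both are missing):</p> <pre><code>df['Email'] = df['Email_x'].fillna(df['Email_y'])
</code></pre>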
python|pandas
2
7,745
67,591,044
Docker container with Python modules gets too big
<p>I want my Docker container to use tensorflow lite (tflite) in a python script. My Dockerfile looks like this:</p> <pre><code>FROM arm32v7/python:3.7-slim-buster COPY model.tflite / COPY docker_tflite.py / COPY numpy-1.20.2-cp37-cp37m-linux_armv7l.whl / RUN apt-get update \ &amp;&amp; apt-get -y install libatlas-base-dev RUN pip install numpy-1.20.2-cp37-cp37m-linux_armv7l.whl \ &amp;&amp; pip install --no-build-isolation --extra-index-url https://google-coral.github.io/py-repo/ tflite_runtime CMD [&quot;python&quot;, &quot;docker_tflite.py&quot;] </code></pre> <p>The Docker container is too big for my microcontroller at 197 MB; is there any way to make it smaller?</p> <hr /> <p><strong>UPDATE:</strong></p> <p>Following Itamar's answer, I have adjusted my Dockerfile:</p> <pre><code>FROM arm32v7/python:3.7-slim-buster as dev COPY model.tflite / COPY docker_tflite.py / COPY numpy-1.20.2-cp37-cp37m-linux_armv7l.whl / RUN apt-get update \ &amp;&amp; apt-get -y install libatlas-base-dev RUN pip install --user numpy-1.20.2-cp37-cp37m-linux_armv7l.whl \ &amp;&amp; pip install --user --no-build-isolation --extra-index-url https://google-coral.github.io/py-repo/ tflite_runtime FROM arm32v7/python:3.7-slim-buster as runtime COPY model.tflite / COPY docker_tflite.py / COPY --from=dev /root/.local /root/.local RUN apt-get update \ &amp;&amp; apt-get -y install libatlas-base-dev CMD [&quot;python&quot;, &quot;docker_tflite.py&quot;] </code></pre> <p>Meanwhile the Docker container is at 179 MB, which is already progress, thank you very much. Is there any more optimization potential in my Dockerfile, e.g. in the apt-get statements?</p>
<p>You end up with two copies of numpy: the wheel, and the installed version. The way to solve that is with a multi-stage build, where the second stage doesn't have the wheel, or development headers, or any other unnecessary build files.</p> <pre><code>FROM arm32v7/python:3.7-slim-buster as dev # ... RUN pip install --user numpy.whl &amp;&amp; pip install --user --no-build-isolation ... FROM arm32v7/python:3.7-slim-buster as runtime COPY --from=dev /root/.local /root/.local </code></pre> <p>Something like that. See <a href="https://pythonspeed.com/articles/multi-stage-docker-python/" rel="nofollow noreferrer">https://pythonspeed.com/articles/multi-stage-docker-python/</a></p>
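<p>Regarding the apt-get steps specifically: a common pattern (just a sketch; the actual savings depend on what the recommended packages pull in) is to skip recommended packages and delete the apt cache in the same layer:</p> <pre><code>RUN apt-get update \
 &amp;&amp; apt-get -y install --no-install-recommends libatlas-base-dev \
 &amp;&amp; rm -rf /var/lib/apt/lists/*
</code></pre>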
python|docker|arm|dockerfile|tensorflow-lite
2
7,746
67,577,054
Pandas and python: deduplication of dataset by several fields
<p>I have a dataset of companies. Each company has a tax payer number, address, phone and some other fields. Here is some Pandas code I took from Roméo Després:</p> <pre><code>import pandas as pd df = pd.DataFrame({ &quot;tax_id&quot;: [&quot;A&quot;, &quot;B&quot;, &quot;C&quot;, &quot;D&quot;, &quot;E&quot;, &quot;A&quot;, &quot;B&quot;, &quot;C&quot;, &quot;F&quot;, &quot;E&quot;], &quot;phone&quot;: [0, 1, 2, 3, 4, 5, 0, 0, 6, 3], &quot;address&quot;: [&quot;x&quot;, &quot;y&quot;, &quot;z&quot;, &quot;x&quot;, &quot;y&quot;, &quot;x&quot;, &quot;t&quot;, &quot;z&quot;, &quot;u&quot;, &quot;v&quot;], }) print(df) tax_id phone address 0 A 0 x 1 B 1 y 2 C 2 z 3 D 3 x 4 E 4 y 5 A 5 x 6 B 0 t 7 C 0 z 8 F 6 u 9 E 3 v </code></pre> <p>I need to deduplicate the dataset by these fields, meaning that non-unique companies can be linked by just one of these fields. I.e. a company is definitely unique in my list only if it doesn't have ANY matches by ANY of the key fields. If a company shares its tax payer number with some other entity, and that entity shares an address with a third one, then all three companies are the same one. Expected output in terms of unique companies should be:</p> <pre><code> tax_id phone address 0 A 0 x 1 B 1 y 2 C 2 z 8 F 6 u </code></pre> <p>Expected output along with a unique company index for each duplicate should look like:</p> <pre><code> tax_id phone address representative_index 0 A 0 x 0 1 B 1 y 1 2 C 2 z 2 3 D 3 x 0 4 E 4 y 1 5 A 5 x 0 6 B 0 t 0 7 C 0 z 0 8 F 6 u 8 9 E 3 v 3 </code></pre> <p>How can I filter out duplicates in this case using python/pandas?</p> <p>The only algorithm which comes to mind is the following direct approach:</p> <ol> <li>I group the dataset by the first key, collecting the other keys as sets in the resulting dataset.</li> <li>Then I iteratively walk over the sets of 2nd-key values, adding new 2nd-key values to the groups that share a 1st-key value, over and over.</li> <li>Finally, when there is nothing more to add, I repeat this for the 3rd key.</li> </ol> <p>This doesn't look very promising in terms of performance and simplicity of coding.</p> <p>Any other ways for removing duplicates by one of several keys?</p>
<p>You could solve this using the graph analysis library <a href="https://networkx.org" rel="nofollow noreferrer"><code>networkx</code></a>.</p> <pre class="lang-py prettyprint-override"><code>import itertools import networkx as nx import pandas as pd df = pd.DataFrame({ &quot;tax_id&quot;: [&quot;A&quot;, &quot;B&quot;, &quot;C&quot;, &quot;D&quot;, &quot;E&quot;, &quot;A&quot;, &quot;B&quot;, &quot;C&quot;, &quot;F&quot;, &quot;E&quot;], &quot;phone&quot;: [0, 1, 2, 3, 4, 5, 0, 0, 6, 3], &quot;address&quot;: [&quot;x&quot;, &quot;y&quot;, &quot;z&quot;, &quot;x&quot;, &quot;y&quot;, &quot;x&quot;, &quot;t&quot;, &quot;z&quot;, &quot;u&quot;, &quot;v&quot;], }) def iter_edges(df): &quot;&quot;&quot;Yield all relationships between rows.&quot;&quot;&quot; for name, series in df.iteritems(): for nodes in df.groupby(name).indices.values(): yield from itertools.combinations(nodes, 2) def iter_representatives(graph): &quot;&quot;&quot;Yield all elements and their representative.&quot;&quot;&quot; for component in nx.connected_components(graph): representative = min(component) for element in component: yield element, representative graph = nx.Graph() graph.add_nodes_from(df.index) graph.add_edges_from(iter_edges(df)) df[&quot;representative_index&quot;] = pd.Series(dict(iter_representatives(graph))) </code></pre> <p>In the end <code>df</code> looks like:</p> <pre class="lang-py prettyprint-override"><code> tax_id phone address representative_index 0 A 0 x 0 1 B 1 y 0 2 C 2 z 0 3 D 3 x 0 4 E 4 y 0 5 A 5 x 0 6 B 0 t 0 7 C 0 z 0 8 F 6 u 8 9 E 3 v 0 </code></pre> <p>Note you can go <code>df.drop_duplicates(&quot;representative_index&quot;)</code> to obtain unique rows:</p> <pre class="lang-py prettyprint-override"><code> tax_id phone address representative_index 0 A 0 x 0 8 F 6 u 8 </code></pre>
python|pandas|algorithm|duplicates|dataset
1
7,747
34,605,249
sklearn Convert Text Series to Sparse Matrix, Then Scale Numeric, Then Combine Into Single X
<p>If I have both text and numeric values, and I want to:</p> <ol> <li>Convert the text to numeric (I'm using <code>CountVectorizer</code> as a general example)</li> <li>Convert numeric data to the same scale</li> <li>Combine 1 and 2 into a single <code>X</code> matrix to pass to an estimator</li> </ol> <p>How do I combine a sparse matrix and numpy array into a single <code>X</code> while being mindful about memory limitations when dealing with huge sparse matrices?</p> <p>Here is an example dataframe:</p> <pre><code>df = pd.DataFrame({ 'Term': [ 'johns company', 'johns company home', 'home repair', 'home remodeling', 'johns company home repair system', 'home repair systems', 'home systems', 'repair a home', 'home remodeling ideas', 'home repair system'], 'Metric1': [ 319434, 21644, 113185, 73210, 8907, 23016, 36789, 48025, 29624, 6944], 'Metric2': [13270, 5015, 4301, 3722, 2502, 2190, 1934, 2468, 2706, 904], 'Metric3': [ 24170.83, 11034.36, 24137.57, 16548.53, 4777.27, 9565.45, 8014.29, 9041.97, 7612.31, 4045.37], 'Metric4': [1.0, 1.1, 2.9, 2.7, 1.1, 2.0, 3.0, 1.9, 1.6, 1.5], 'y': [712, 406, 297, 215, 190, 0, 125, 100, 94, 93] }, columns=['Term', 'Metric1', 'Metric2', 'Metric3', 'Metric4', 'y']) ## df looks like this Term Metric1 Metric2 Metric3 Metric4 y 0 johns company 319434 13270 24170.83 1.0 712 1 johns company home 21644 5015 11034.36 1.1 406 2 home repair 113185 4301 24137.57 2.9 297 3 home remodeling 73210 3722 16548.53 2.7 215 4 johns company home repair system 8907 2502 4777.27 1.1 190 5 home repair systems 23016 2190 9565.45 2.0 0 6 home systems 36789 1934 8014.29 3.0 125 7 repair a home 48025 2468 9041.97 1.9 100 8 home remodeling ideas 29624 2706 7612.31 1.6 94 9 home repair system 6944 904 4045.37 1.5 93 </code></pre> <p>My intent here is to convert text to numbers.</p> <pre><code>from sklearn.feature_extraction.text import CountVectorizer cv = CountVectorizer() text_features = cv.fit_transform(df['Term']) text_features &lt;10x8 sparse matrix of type '&lt;class 'numpy.int64'&gt;' with 27 stored elements in Compressed Sparse Row format&gt; </code></pre> <p>My intent here is to normalize numeric X values.</p> <pre><code>from sklearn.preprocessing import StandardScaler ss = StandardScaler() num_features = ss.fit_transform(df[['Metric1', 'Metric2', 'Metric3', 'Metric4']]) num_features array([[ 2.81861161, 2.81931317, 1.76781103, -1.22081006], [-0.52069075, 0.3351711 , -0.12390699, -1.08208165], [ 0.50581477, 0.12031011, 1.76302143, 1.41502985], [ 0.05755051, -0.05392589, 0.67016134, 1.13757301], [-0.66351856, -0.42105531, -1.02495954, -1.08208165], [-0.50530567, -0.51494414, -0.33543744, 0.1664741 ], [-0.35086055, -0.59198114, -0.55881232, 1.55375826], [-0.22486438, -0.43128678, -0.41082121, 0.02774568], [-0.4312061 , -0.35966646, -0.61669947, -0.38843957], [-0.68553089, -0.90193466, -1.13035684, -0.52716798]]) </code></pre> <p>My intent here is join <code>text_features</code> and <code>num_features</code> in efforts to make one single <code>X</code> to pass to an estimator.</p> <pre><code>from sklearn.pipeline import FeatureUnion fu = FeatureUnion([('text', text_features), ('num', num_features)]) from sklearn.linear_model import LinearRegression lr = LinearRegression() lr.fit(fu, df['y']) Traceback (most recent call last): File "&lt;pyshell#230&gt;", line 1, in &lt;module&gt; lr.fit(fu, df['y']) File "C:\Python34\lib\site-packages\sklearn\linear_model\base.py", line 427, in fit y_numeric=True, multi_output=True) File "C:\Python34\lib\site-packages\sklearn\utils\validation.py", 
line 510, in check_X_y ensure_min_features, warn_on_dtype, estimator) File "C:\Python34\lib\site-packages\sklearn\utils\validation.py", line 393, in check_array array = array.astype(np.float64) TypeError: float() argument must be a string or a number, not 'FeatureUnion' </code></pre> <p>Is <code>FeatureUnion</code> what I should be trying to use here to join text and numeric data into a single <code>X</code> matrix?</p>
<p>I think you are misunderstanding how <code>FeatureUnion</code> works. <code>FeatureUnion</code> applies multiple feature extractors / preprocessors and combines the resulting features into a single matrix. Since you do not have multiple preprocessors, but instead have multiple matrices, you should probably use <code>hstack</code> instead. Note that <code>numpy.hstack()</code> requires dense matrices; if you want to keep the result sparse, use <code>scipy.sparse.hstack()</code>.</p>
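<p>A minimal sketch of the <code>hstack</code> approach, reusing the variable names from the question (I'm assuming <code>text_features</code>, <code>num_features</code> and <code>df</code> are built exactly as shown there):</p> <pre><code># combine the sparse text features and the dense scaled numerics
# without densifying the text matrix
from scipy.sparse import hstack, csr_matrix
from sklearn.linear_model import LinearRegression

X = hstack([text_features, csr_matrix(num_features)])  # result stays sparse

lr = LinearRegression()
lr.fit(X, df['y'])   # LinearRegression accepts sparse input
</code></pre>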
python|pandas|scikit-learn
1
7,748
34,865,791
NumPy unique() returns indices that are out-of-bounds
<p>I am trying to remove points from a point cloud that are too close to each other. My input is an mx3 matrix where the columns represent xyz coordinates. Code is as follows:</p> <pre><code>def remove_duplicates(points, threshold): # Convert to numpy points = np.array(points) # Round to within the threshold rounded_points = points if threshold &gt; 0.0: rounded_points = np.round(points/threshold)*threshold # Remove duplicate points point_tuples = [tuple(point) for point in rounded_points] unique_rounded_points, unique_indices = np.unique(point_tuples, return_index = True) points = points[unique_indices] return points </code></pre> <p>The issue I am running into is that unique_indices contains values larger than the length of points (2265 and 1000 for my test data). Am I doing something wrong, or is this a bug in NumPy?</p> <p>Edit: I should note that for very small inputs (tried 27 points), unique() appears to work correctly.</p>
<p>So <code>points</code> is a 2d array, <code>(m,3)</code> in shape, right?</p> <p><code>point_tuples</code> is a list of tuples, i.e. each row of <code>rounded_points</code> is now a tuple of 3 floats.</p> <p><code>np.unique</code> is going to turn that into an array to do its thing.</p> <p><code>np.array(point_tuples)</code> is an <code>(m,3)</code> array (again 2d like <code>points</code>). The tuple did nothing.</p> <p><code>unique</code> will act on the raveled form of this array, so <code>unique_indices</code> could have values between 0 and 3*m. Hence your error.</p> <p>I see 2 problems: if you want <code>unique</code> to find unique 'rows', you need to make a structured array</p> <pre><code>np.array(point_tuples, 'f,f,f') </code></pre> <p>Also, applying <code>unique</code> to floats is tricky. It's next to impossible to find 2 floats that are equal. Rounding reduces this problem but does not eliminate it.</p> <p>So it is probably better to round in such a way that <code>rounded_points</code> is an array of integers. The values don't need to be scaled back to match <code>points</code>.</p> <p>I can add an example if needed, but first try these suggestions. I'm making a lot of guesses about your data, and I'd like to get some feedback before going further.</p>
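<p>A minimal sketch combining both suggestions (integer rounding plus a structured dtype, so <code>unique()</code> compares whole rows); it assumes <code>threshold &gt; 0</code> and exactly 3 columns:</p> <pre><code>import numpy as np

def remove_duplicates(points, threshold):
    points = np.asarray(points)
    # integer grid cells, so equality tests are exact
    keys = np.round(points / threshold).astype(np.int64)
    # one record per row, so np.unique compares rows instead of the raveled array
    records = np.array([tuple(row) for row in keys], dtype='i8,i8,i8')
    _, unique_indices = np.unique(records, return_index=True)
    return points[np.sort(unique_indices)]
</code></pre>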
python|numpy|matrix|indexing|point-clouds
2
7,749
60,096,717
Pandas groupby and subtract rows
<p>I have the following dataframe:</p> <pre><code>id variable year value 1 a 2020 2 1 a 2021 3 1 a 2022 5 1 b 2020 3 1 b 2021 8 1 b 2022 10 </code></pre> <p>I want to groupby id and variable and subtract 2020 values from all the rows of the group. So I will get:</p> <pre><code>id variable year value 1 a 2020 0 1 a 2021 1 1 a 2022 3 1 b 2020 0 1 b 2021 5 1 b 2022 7 </code></pre> <p>How can I do that?</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>DataFrame.merge</code></a> if not sure if <code>2020</code> is first per groups:</p> <pre><code>df1 = df[df['year'].eq(2020)] df['value'] -= df.merge(df1,how='left',on=['id','variable'],suffixes=('_',''))['value'].values print (df) id variable year value 0 1 a 2020 0 1 1 a 2021 1 2 1 a 2022 3 3 1 b 2020 0 4 1 b 2021 5 5 1 b 2022 7 </code></pre> <p>If <code>2020</code> is always first per groups use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.first.html" rel="nofollow noreferrer"><code>GroupBy.first</code></a>:</p> <pre><code>df['value'] -= df.groupby(['id','variable'])['value'].transform('first') print (df) id variable year value 0 1 a 2020 0 1 1 a 2021 1 2 1 a 2022 3 3 1 b 2020 0 4 1 b 2021 5 5 1 b 2022 7 </code></pre> <p>EDIT:</p> <p>If in data are duplicated <code>2020</code> rows per groups solution first remove dupes and subtract only first value:</p> <pre><code>print (df) id variable year value 0 1 a 2020 3 1 1 a 2020 2 2 1 a 2022 5 3 1 b 2020 3 4 1 b 2021 8 5 1 b 2022 10 df1 = df[df['year'].eq(2020)] df['value'] -= df.merge(df1.drop_duplicates(['id','variable']), how='left', on=['id','variable'], suffixes=('_',''))['value'].values print (df) id variable year value 0 1 a 2020 0 1 1 a 2020 -1 2 1 a 2022 2 3 1 b 2020 0 4 1 b 2021 5 5 1 b 2022 7 </code></pre> <p>Or aggregate values, e.g. by <code>sum</code> for deduplicate data:</p> <pre><code>print (df) id variable year value 0 1 a 2020 3 1 1 a 2020 1 2 1 a 2022 5 3 1 b 2020 3 4 1 b 2021 8 5 1 b 2022 10 df = df.groupby(['id','variable','year'], as_index=False).sum() print (df) id variable year value 0 1 a 2020 4 1 1 a 2022 5 2 1 b 2020 3 3 1 b 2021 8 4 1 b 2022 10 df1 = df[df['year'].eq(2020)] df['value'] -= df.merge(df1, how='left', on=['id','variable'], suffixes=('_',''))['value'].values print (df) id variable year value 0 1 a 2020 0 1 1 a 2022 1 2 1 b 2020 0 3 1 b 2021 5 4 1 b 2022 7 </code></pre>
python|pandas|dataframe|group-by
3
7,750
60,053,378
How do I count the features percentage when only the label is true in Python machine learning?
<p>I'm using Jupyter to learn machine learning.</p> <p>I would like to know how to count the features percentage (Style, Typo, Layout percentage) when only the "Like" column is 1?</p> <p><img src="https://i.stack.imgur.com/bYpP4.png" alt="enter image description here"></p>
<p>I'm assuming you want to find the percentage of each unique value for each column when Like == 1. If that's the case, you can do:</p> <pre><code>df[df['Like'] == 1]['Style'].value_counts(normalize=True) * 100 df[df['Like'] == 1]['Typo'].value_counts(normalize=True) * 100 df[df['Like'] == 1]['Layout'].value_counts(normalize=True) * 100 </code></pre>
python|pandas|machine-learning|count|jupyter
0
7,751
60,171,858
Why do machine learning algorithms focus on speed and not accuracy?
<p>I study ML and I see that most of the time the focus of the algorithms is run time and not accuracy: reducing features, sampling from the data set, using approximations and so on.</p> <p>I'm not sure why this is the focus, since once I have trained my model I don't need to train it anymore if my accuracy is high enough. So whether it takes me 1 hour or 10 days to train my model doesn't really matter, because I do it only once and my goal is to predict my outcomes as well as I can (minimum loss).</p> <p>If I train a model to distinguish between cats and dogs I want it to be as accurate as it can be and not the fastest, since once I have trained this model I don't need to train any more models. I can understand why models that depend on fast-changing data need this focus on speed, but for general training of models I don't understand why the focus is on speed.</p>
<p>Speed is a relative term. Accuracy is also relative, depending on the difficulty of the task. Currently the goal is to achieve human-like performance for applications at reasonable cost, because this will replace human labor and cut costs.</p> <p>From what I have seen reading papers, people usually focus on accuracy first to produce something that works. Then they do ablation studies - studies where pieces of the model are removed or modified - to achieve the same performance with less time or memory.</p> <p>The field is very experimentally validated. There really isn't much of a theory that states why CNNs work so well, other than that they can model any function given non-linear activation functions. (<a href="https://en.wikipedia.org/wiki/Universal_approximation_theorem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Universal_approximation_theorem</a>) There have been some recent efforts to explain why they work well. One I recall is MobileNetV2: Inverted Residuals and Linear Bottlenecks. The explanation of <strong>embedding data into a low dimensional space without losing information</strong> might be worth reading.</p>
tensorflow|machine-learning
1
7,752
65,215,320
OpenCV doesn't always refresh the display/show the image
<p>I am displaying some very simple numpy array with openCV by using <code>cv2.imshow()</code> and <code>cv2.waitKey()</code>. Sometimes, I want a display to stay for a few seconds and then resume my program. I simply add a <code>time.sleep()</code> with the desired sleep value. Most of the time, everything works fine; however, sometimes the display doesn't change and remains on the previous one. It is very inconsistent. The same code might work 80% of the time, and might not work for the remaining 20%. Interestingly enough, it always seems to be the same image/display which doesn't work across multiple runs; although it does work as well sometimes. As I said, it's very inconsistent.</p> <p>Here is a small portion of the code I used to create and show the displays:</p> <pre><code>#!/usr/bin/env python3 # -*- coding: utf-8 -*- import sys import numpy as np import cv2 import time from matplotlib import colors class Visual: def __init__(self, window, screen_size=None): self.window = window # screen size and message setting if screen_size is None: if sys.platform.startswith('win'): from win32api import GetSystemMetrics self.screen_width = GetSystemMetrics(0) self.screen_height = GetSystemMetrics(1) else: self.screen_width = 1024 self.screen_height = 768 else: self.screen_width, self.screen_height = screen_size self.screen_center = (self.screen_width//2, self.screen_height//2) def draw_background(self, color=(210, 210, 210)): if isinstance(color, str): r, g, b, _ = colors.to_rgba(color) color = [int(c*255) for c in (b, g, r)] self.img = np.full((self.screen_height, self.screen_width, 3), fill_value=color, dtype=np.uint8) def show(self, wait=1): cv2.imshow(self.window, self.img) cv2.waitKey(wait) class CrossVisual(Visual): def __init__(self, window, screen_size=None): super().__init__(window, screen_size) def draw_cross(self, width_ratio=0.2, height_ratio=0.02, position='center', color='black'): if isinstance(color, str): r, g, b, _ = colors.to_rgba(color) color = [int(c*255) for c in (b, g, r)] rect_width = int(width_ratio*self.screen_width) rect_height = int(height_ratio*self.screen_height) if position == 'center': x = self.screen_center[0] y = self.screen_center[1] else: x, y = position # Rectangle 1 xP1 = x - rect_width//2 yP1 = y + rect_height//2 xP2 = x + rect_width//2 yP2 = y - rect_height//2 cv2.rectangle(self.img, (xP1, yP1), (xP2, yP2), color, -1) # Rectangle 2 xP1 = x - rect_height//2 yP1 = y + rect_width//2 xP2 = x + rect_height//2 yP2 = y - rect_width//2 cv2.rectangle(self.img, (xP1, yP1), (xP2, yP2), color, -1) class TextVisual(Visual): def __init__(self, window, screen_size=None): super().__init__(window, screen_size) def putText(self, text, fontFace=cv2.FONT_HERSHEY_DUPLEX, fontScale=2, color='white', thickness=2, vertical_offset=0): if isinstance(color, str): r, g, b, _ = colors.to_rgba(color) color = [int(c*255) for c in (b, g, r)] # Measuring text and choosing position on the screen textWidth, textHeight = cv2.getTextSize(text, fontFace, fontScale, thickness)[0] xtext = int(self.screen_center[0] - textWidth//2) ytext = int(self.screen_center[1] - textHeight//2 + vertical_offset) # print warnings if textWidth &gt; self.screen_width or textHeight &gt; (self.screen_height - ytext): print ('WARNING: The text and text settings lead to an output too large for the screen size.') cv2.putText(self.img, text=text, org=(xtext, ytext), fontFace=fontFace, fontScale=fontScale, color=color, thickness=thickness, lineType=cv2.LINE_AA) </code></pre> <p>Sadly, I am unable to reproduce the 
problem with small piece of code. But I can demonstrate with the following function:</p> <pre><code>def display_3s_countdown(window, images=None): if images is None: images = list() Visual = TextVisual(window=WINDOW) Visual.draw_background(color='lightgrey') Visual.putText('3', color='black') images.append(Visual.img) Visual = TextVisual(window=WINDOW) Visual.draw_background(color='lightgrey') Visual.putText('2', color='black') images.append(Visual.img) Visual = TextVisual(window=WINDOW) Visual.draw_background(color='lightgrey') Visual.putText('1', color='black') images.append(Visual.img) for image in images: cv2.imshow(window, image) cv2.waitKey(1) time.sleep(1) </code></pre> <p>As you can see, this small function simply creates 3 images, each with a different number written, in order to create a small 3 seconds countdown. When I do something like:</p> <pre><code>v = TextVisual('test window') v.draw_background(color='lightgrey') v.putText('some random text', color='black') time.sleep(3) # To let enough time to read the text display_3s_countdown(window='test window', images=None) </code></pre> <p>Sometimes, the number 3 will not be displayed, and the display in the window <code>test window</code> will directly go from <code>'some random text'</code> to the number 2.</p> <p>Finally, usually, this behavior goes in pairs with a spinning wheel mouse when I have the mouse on the active cv2 window (only one window is opened at a time anyway). As you might have guessed from the header of my file, I am working on macOS. However, this behavior also appears on Windows.</p> <p>I am not very familiar with openCV, thank you for any tip and information you could provide.</p> <p>EDIT: Some of my displays are dynamic and are shown as soon as the image is created within a while loop. Thus, efficiency and speed in the display are important.</p>
<p>I would suggest that you <em>not</em> use <code>time.sleep</code> in GUI programs (OpenCV's imshow counts as a GUI).</p> <p>Work with the delay argument to <code>waitKey()</code>; pass 1000 for one second of maximum delay.</p> <p>The GUI locks up if <code>waitKey()</code> isn't running often enough, because <code>waitKey</code> runs the event/message loop.</p> <p>When you simply sleep for a second, the whole GUI doesn't get to do its vital housekeeping.</p>
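<p>As a rough sketch, applying this to the countdown helper from the question would look something like the following (the <code>window</code> and <code>images</code> variables are the ones from that snippet):</p> <pre><code>for image in images:
    cv2.imshow(window, image)
    # waits up to 1000 ms while still pumping GUI events;
    # note that a key press will end the wait early
    cv2.waitKey(1000)
</code></pre>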
python|numpy|opencv
0
7,753
50,048,786
Apply property pandas.DataFrame.shape to multiple dataframe stored in a tuple
<p>I have a function which returns as output a tuple containing multiple <code>pd.DataFrame</code> objects.</p> <p>Take the example:</p> <pre><code>import pandas as pd def myfunction(): x = pd.DataFrame(data = [1,2,3]) y = pd.DataFrame(data = [[1,2,3],[4,5,6]]) return x, y myfunction() </code></pre> <p>I am looking for a concise way to apply the property <code>pandas.DataFrame.shape</code> to each of the object stored in the tuple resulting from the call <code>myfunction()</code>.</p> <h3>The solution must not include a cycle!</h3>
<p>One way is to use <code>operator.attrgetter</code>:</p> <pre><code>from operator import attrgetter shapes = list(map(attrgetter('shape'), myfunction())) [(3, 1), (2, 3)] </code></pre> <p>Although you are not looking for an explicit loop, this is the more readable version:</p> <pre><code>shapes = [x.shape for x in myfunction()] </code></pre>
python|pandas
1
7,754
49,810,707
compare list of data with CSV file and sort the matching
<p>I have a data set of product names and a brands list. I need to find the how much branded products are there in my list.</p> <pre><code>**Brands sample :** ['HM International', 'Sara', 'Wildcraft', 'Nike'] **Product name sample :** [Attache backpack11Green Waterproof Backpack Simba BTSPOKEMON POKÈMON POKÈ BALLS 18 BP Waterproof S... HM International HMHTPB 24304MK Waterproof Multipurpos... Chris &amp; Kate CKB_122SS Waterproof School Bag Simba BTSPRINCESS FOLLOW YOUR DREAMS 16 BP Waterproof ... Kuber Industries School Bag, Backpack Waterproof School... Minnie Trio School Bag Waterproof School Bag Thomas School Bag Waterproof School Bag Sara Green 002 Shoulder Bag Disney Frozen Anna &amp; Elsa Pink Sequins 16' ' Backpack Disney Princess Pink Flap 18' ' Backpack My Baby Excel Peppa Side Sling Bag Sling Bag Ranger Black School Bag with laptop compartment Waterpr... HM International HMHTPB 73279AV Waterproof Multipurpos... Peppa Peppa Pig Pink Plush Toy Wallet Round Shape Plush... Disney Frozen Anna &amp; Elsa Pink Sequins 14' ' Backpack Disney Frozen Magic Blue 16' ' School Bag Good Friends stylish Waterproof School Bag ZEVORA Pink 3D Design Children Travel &amp; School Bag, 1 L... Gleam A103 School Bag SARA BAGS TG15 Waterproof Backpack Despicable Me Favourite Subject School Bag 16 inches Tr... AARIP LTB037 Waterproof School Bag Simba BTSSMURFS FOOTBALL 18 BP Waterproof School Bag Gleam JB0402C Waterproof School Bag Simba BTSSMURFS SMURFETTE SINGING STAR 18 BP Waterproo... ] </code></pre>
<p>I suggest use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.findall.html" rel="nofollow noreferrer"><code>str.findall</code></a> with <code>word boundary regex</code> for search multiple values, then flatten nested lists and use <a href="https://docs.python.org/3.5/library/collections.html#collections.Counter" rel="nofollow noreferrer"><code>Counter</code></a>:</p> <pre><code>from collections import Counter Brands = ['HM International', 'Sara', 'Wildcraft', 'Nike'] pat = r'\b{}\b'.format('|'.join(Brands)) d = Counter([y for x in df['Product'].str.findall(pat) for y in x]) print (d) Counter({'HM International': 2, 'Sara': 1}) </code></pre> <p>Or if want <code>Series</code> in output use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>Series.value_counts</code></a>:</p> <pre><code>s = pd.Series(np.concatenate(df['Product'].str.findall(pat))).value_counts() print (s) HM International 2 Sara 1 dtype: int64 </code></pre> <p><strong>Setup</strong>:</p> <pre><code>d = {'Product': ['Attache backpack11Green Waterproof Backpack', 'Simba BTSPOKEMON POKÈMON POKÈ BALLS 18 BP Waterproof S...', 'HM International HMHTPB 24304MK Waterproof Multipurpos...', 'Chris &amp; Kate CKB_122SS Waterproof School Bag', 'Simba BTSPRINCESS FOLLOW YOUR DREAMS 16 BP Waterproof ...', 'Kuber Industries School Bag, Backpack Waterproof School...', 'Minnie Trio School Bag Waterproof School Bag', 'Thomas School Bag Waterproof School Bag', 'Sara Green 002 Shoulder Bag', "Disney Frozen Anna &amp; Elsa Pink Sequins 16' ' Backpack", "Disney Princess Pink Flap 18' ' Backpack", 'My Baby Excel Peppa Side Sling Bag Sling Bag', 'Ranger Black School Bag with laptop compartment Waterpr...', 'HM International HMHTPB 73279AV Waterproof Multipurpos...', 'Peppa Peppa Pig Pink Plush Toy Wallet Round Shape Plush...', "Disney Frozen Anna &amp; Elsa Pink Sequins 14' ' Backpack", "Disney Frozen Magic Blue 16' ' School Bag", 'Good Friends stylish Waterproof School Bag', 'ZEVORA Pink 3D Design Children Travel &amp; School Bag, 1 L...', 'Gleam A103 School Bag', 'SARA BAGS TG15 Waterproof Backpack', 'Despicable Me Favourite Subject School Bag 16 inches Tr...', 'AARIP LTB037 Waterproof School Bag', 'Simba BTSSMURFS FOOTBALL 18 BP Waterproof School Bag', 'Gleam JB0402C Waterproof School Bag', 'Simba BTSSMURFS SMURFETTE SINGING STAR 18 BP Waterproo']} df = pd.DataFrame(d) print (df.head()) Product 0 Attache backpack11Green Waterproof Backpack 1 Simba BTSPOKEMON POKÈMON POKÈ BALLS 18 BP Wate... 2 HM International HMHTPB 24304MK Waterproof Mul... 3 Chris &amp; Kate CKB_122SS Waterproof School Bag 4 Simba BTSPRINCESS FOLLOW YOUR DREAMS 16 BP Wat... </code></pre>
python|pandas|analytics|data-analysis|text-analysis
0
7,755
49,977,236
TensorFlow broadcasting
<p>Broadcasting is the process of making arrays with different shapes have compatible shapes for arithmetic operations. In numpy, we can broadcast arrays. Does TensorFlow graph support broadcasting similar to the numpy one?</p>
<p>yes it is supported. Open a terminal and try this:</p> <pre><code>import tensorflow as tf #define tensors a=tf.constant([[10,20],[30,40]]) #Dimension 2X2 b=tf.constant([5]) c=tf.constant([2,2]) d=tf.constant([[3],[3]]) sess=tf.Session() #start a session #Run tensors to generate arrays mat,scalar,one_d,two_d = sess.run([a,b,c,d]) #broadcast multiplication with scalar sess.run(tf.multiply(mat,scalar)) #broadcast multiplication with 1_D array (Dimension 1X2) sess.run(tf.multiply(mat,one_d)) #broadcast multiply 2_d array (Dimension 2X1) sess.run(tf.multiply(mat,two_d)) sess.close() </code></pre>
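<p>For reference, the same broadcasting also works eagerly on TensorFlow 2.x, without a <code>Session</code> (a small sketch):</p> <pre><code>import tensorflow as tf

mat = tf.constant([[10, 20], [30, 40]])   # shape (2, 2)

print(mat * tf.constant([5]))             # broadcast with a 1-element tensor
print(mat * tf.constant([2, 2]))          # shape (2,) broadcast across rows
print(mat * tf.constant([[3], [3]]))      # shape (2, 1) broadcast across columns
</code></pre>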
python|tensorflow|array-broadcasting
3
7,756
50,199,829
How to calculate cost time
<pre><code>id|order_id|order_no|order_status|remark|handle_time|create_time|update_time 11237|3942|2018050307542800005985|新建订单||20180503075428|2018/5/3 07:54:28|2018/5/3 07:54:28 11238|3943|2018050307591600005986|新建订单||20180503075916|2018/5/3 07:59:16|2018/5/3 07:59:16 11239|3943|2018050307591600005986|新建订单||20180503082115|2018/5/3 08:21:15|2018/5/3 08:21:15 11240|3943|2018050307591600005986|新建订单||20180503083204|2018/5/3 08:32:04|2018/5/3 08:32:04 11241|3941|2018050308564400005991|新建订单||20180503085644|2018/5/3 08:56:02|2018/5/3 08:56:44 11242|3941|2018050222320800001084|初审成功||20180503085802|2018/5/3 08:58:02|2018/5/3 08:58:02 11243|3941|2018050222320800001084|审核成功||20180503085821|2018/5/3 08:59:21|2018/5/3 08:58:21 11244|3945|2018050309152000005993|新建订单||20180503091520|2018/5/3 09:15:21|2018/5/3 09:15:21 </code></pre> <p>Above is the data from my txt file. It contains order information for stock trades. </p> <p>I want to calculate the time difference for the create_time column for each unique order_id. How do I do this with Pandas?</p> <p>For example order_id 3941, there are three entries. The difference in create_time from the first to the second entry is 2 minutes, and the difference from the 2nd to the 3rd entry is 1 minute.</p> <p>The final output looks like below:</p> <pre><code>order_id,stage1_time,stage2_time,... 3941,2,1,... </code></pre> <p>Sorry for my poor English.</p>
<p>I think I understand what you're asking. You just want to have a new dataframe that calculates the time difference between the three different entries for each unique order id?</p> <p>So, I start by creating the dataframe:</p> <pre><code>data = [ [11238,3943,201805030759165986,'新建订单',20180503075916,'2018/5/3 07:59:16','2018/5/3 07:59:16'], [11239,3943,201805030759165986,'新建订单',20180503082115,'2018/5/3 08:21:15','2018/5/3 08:21:15'], [11240,3943,201805030759165986,'新建订单',20180503083204,'2018/5/3 08:32:04','2018/5/3 08:32:04'], [11241,3941,201805030856445991,'新建订单',20180503085644,'2018/5/3 08:56:02','2018/5/3 08:56:44'], [11242,3941,201805022232081084,'初审成功',20180503085802,'2018/5/3 08:58:02','2018/5/3 08:58:02'], [11243,3941,201805022232081084,'审核成功',20180503085821,'2018/5/3 08:59:21','2018/5/3 08:58:21'] ] df = pd.DataFrame(data, columns=['id','order_id','order_no','order_status','handle_time','create_time','update_time']) df.loc[:, 'create_time'] = pd.to_datetime(df.loc[:, 'create_time']) </code></pre> <p>Sort values by order_id and then create_time:</p> <pre><code>df = df.sort_values(by=['order_id', 'create_time']) </code></pre> <p>Next, I group by order id and select the 1st, 2nd, and 3rd entry:</p> <pre><code>first_df = df.groupby('order_id').nth(0) second_df = df.groupby('order_id').nth(1) third_df = df.groupby('order_id').nth(2) </code></pre> <p>Subtract the 1st from the second to get the 1st stage, and subtract the 2nd from the 3rd to get the second stage. Then combine them into an output dataframe:</p> <pre><code>stage_two = third_df.loc[:, 'create_time'] - second_df.loc[:, 'create_time'] stage_one = second_df.loc[:, 'create_time'] - first_df.loc[:, 'create_time'] stages = pd.concat([stage_one, stage_two], axis=1, keys=['stage_one', 'stage_two']) print(stages) </code></pre> <p>And the output looks like:</p> <pre><code> stage_one stage_two order_id 3941 00:02:00 00:01:19 3943 00:21:59 00:10:49 </code></pre>
python|pandas|dataframe
0
7,757
64,015,321
Escaping missing parenthesis using pandas str.match
<p>I'm having trouble with regex. I'm trying to check if my database fully matches with the item name I'm working. The problem is that sometimes the data is incomplete and I'll get errors. I would like to ignore regex completely as it is not necessary at this point.</p> <p>For example the code below returns <code>re.error: missing ), unterminated subpattern at position 10</code> as the last item on the list is missing a parenthesis. I've tried using <code>if database['Item Name'].str.match(item, regex=False).any():</code> but it's not enough as the items can be named quite similarly and I would need perfect match. I've also tried to read re module documentation but I do not understand it well enough to get rid of the problem.</p> <p>Any ideas how could I bypass the issue?</p> <pre><code>database = pd.read_csv(&quot;database.csv&quot;, sep=&quot;;&quot;) list = [&quot;Test Name !&quot;, &quot;Test Name (2020)&quot;, &quot;Test name (&quot;] for item in list: if database['Item Name'].str.match(item).any(): # do something pass else: #do something else pass </code></pre>
<p>If I understand your post correctly, you are trying to use the data you read to create a regex. Since you don't want these treated as regexes, you might simply use string comparisons.</p> <p>However, if your application requires the use of regex, you can use re.escape() to render the string as a literal, so the parenthesis loses its special regex meaning.</p> <p>For example:</p> <pre><code>import re string1 = 'this is a magic ( that will break your regex' string2 = re.escape(string1) # escapes your string re.match(string2, &quot;this won't cause issues&quot;) #re.match(string1, &quot;this will cause issues&quot;) </code></pre>
python|pandas|match
0
7,758
63,835,532
Input 0 of layer sequential is incompatible with the layer expected ndim=3, found ndim=2. Full shape received: [None, 1]
<p>I am working with keras for text classification. After pre-processing and vectorization my train and validation data details is like bellow:</p> <pre><code>print(X_train.shape, ',', X_train.ndim, ',', type(X_train)) print(y_train.shape, ',', y_train.ndim, ',', type(y_train)) print(X_valid.shape, ',', X_valid.ndim, ',', type(X_valid)) print(y_valid.shape, ',', y_valid.ndim, ',', type(y_valid)) print(data_dim) </code></pre> <p>output is:</p> <pre><code>(14904,) , 1 , &lt;class 'numpy.ndarray'&gt; (14904,) , 1 , &lt;class 'numpy.ndarray'&gt; (3725,) , 1 , &lt;class 'numpy.ndarray'&gt; (3725,) , 1 , &lt;class 'numpy.ndarray'&gt; 15435 </code></pre> <p>then model definition is:</p> <pre><code>model = Sequential() model.add(LSTM(100, input_shape=(data_dim,1 ), return_sequences=True)) model.add(Dropout(0.2)) model.add(LSTM(200)) model.add(Dropout(0.2)) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics = ['accuracy']) model.summary() </code></pre> <p>model summury:</p> <p><a href="https://i.stack.imgur.com/iwZyg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iwZyg.png" alt="enter image description here" /></a></p> <p>model fitting:</p> <pre><code>model.fit(X_train,y_train, validation_data = (X_valid, y_valid), batch_size=batch_size, epochs=epochs) </code></pre> <p>Why does this error occur?</p> <pre><code>----&gt; 1 model.fit(X_train,y_train, validation_data = (X_valid, y_valid), 2 batch_size=batch_size, epochs=epochs) ... ... ValueError: Input 0 of layer sequential is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 1] </code></pre>
<p>I finally overcame the problem with the help of <a href="https://www.kaggle.com/hassanamin/time-series-analysis-using-lstm-keras/notebook" rel="noreferrer">this kaggle notebook</a>.</p> <p>I change data dimensions to:</p> <pre><code>print(X_train.shape) print(y_train.shape) print(X_valid.shape) print(y_valid.shape) print(X_test.shape) print(y_test.shape) print(data_dim) ########################## output ########################### (14904, 15435) (14904,) (3725, 15435) (3725,) (5686, 15435) (5686,) 15435 </code></pre> <p>and then reshape data to:</p> <pre><code>X_train = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1])) X_valid = np.reshape(X_valid, (X_valid.shape[0], 1, X_valid.shape[1])) X_test = np.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1])) ########################## output ########################### (14904, 1, 15435) (3725, 1, 15435) (5686, 1, 15435) </code></pre> <p>finally change <code>LSTM</code> <code>input_shape</code> to:</p> <pre><code>model.add(LSTM(units=50, input_shape=(1, data_dim), return_sequences=True)) </code></pre> <p>now, model summary is: <br><br> <a href="https://i.stack.imgur.com/flZQx.png" rel="noreferrer"><img src="https://i.stack.imgur.com/flZQx.png" alt="" /></a><br /> There is no problem right now and <code>model.fit</code> executes fine.</p>
python|tensorflow|keras|model-fitting
8
7,759
46,883,276
How long does tensorflow object detection API train.py complete training using CPU only?
<p>I am a beginner in machine learning. Recently, I had successfully running a machine learning application using <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" rel="nofollow noreferrer">Tensorflow object detection API</a>. My dataset is 200 images of object with 300*300 resolution. However, the training had been running for two days and yet to be completed.</p> <p>I wonder how long would it take to complete a training?? At the moment it is running at global step 9000, how many global step needed to complete the training?</p> <p>P.S: the training used only CPUs</p>
<p>It depends on your desired accuracy and data set of course but I generally stop training when the loss value gets around 4 or less. What is your current loss value after 9000 steps?</p>
python|tensorflow|object-detection
2
7,760
47,048,846
How can I output csv of groupby object?
<p>I get the following data with this code:</p> <pre><code>import pandas as pd df = {'ID': ['H1','H2','H3','H4','H5','H6'], 'AA1': ['C','B','B','X','G','G'], 'AA2': ['W','K','K','A','B','B'], 'name':['n1','n2','n3','n4','n5','n6'] } df = pd.DataFrame(df) df.groupby('AA1').apply(lambda x:x.sort_values('name')) </code></pre> <p>The output: </p> <pre><code> AA1 AA2 ID name AA1 B 1 B K H2 n2 2 B K H3 n3 C 0 C W H1 n1 G 4 G B H5 n5 5 G B H6 n6 X 3 X A H4 n4 </code></pre> <p>When I use <code>to_csv</code> it loses the first index <code>AA1</code>. I hope I can output the CSV just like the <code>groupby</code> result, not like this:</p> <pre><code> AA1 AA2 ID name 1 B K H2 n2 2 B K H3 n3 0 C W H1 n1 4 G B H5 n5 5 G B H6 n6 3 X A H4 n4 </code></pre> <p>I mean, when I open the CSV file in <code>excel</code> I hope to see the same format as the output in jupyter!</p>
<p>CSV formats have its limitations. One of them being keeping information about multi-indexes. You will have to keep track and judiciously load your data. Here's an example. </p> <pre><code>df AA1 AA2 ID name AA1 B 1 B K H2 n2 2 B K H3 n3 C 0 C W H1 n1 G 4 G B H5 n5 5 G B H6 n6 X 3 X A H4 n4 df.to_csv('test.csv') !cat test.csv AA1,,AA1,AA2,ID,name B,1,B,K,H2,n2 B,2,B,K,H3,n3 C,0,C,W,H1,n1 G,4,G,B,H5,n5 G,5,G,B,H6,n6 X,3,X,A,H4,n4 </code></pre> <p>That's how the CSV is saved. Now, when loading it back, specify <code>index_col</code> and the multi-index will be loaded as before.</p> <pre><code>(pd.read_csv('test.csv', index_col=[0, 1]) .rename_axis(['AA1', None]) .rename(columns=lambda x: x.split('.')[0])) AA1 AA2 ID name AA1 B 1 B K H2 n2 2 B K H3 n3 C 0 C W H1 n1 G 4 G B H5 n5 5 G B H6 n6 X 3 X A H4 n4 </code></pre> <p>Keep in mind that your column names are mangled when saving and re-loading - this is another CSV limitation. </p> <p>As the other answer mentions, it would be better to explicitly save with an <code>index_label</code> when calling <code>to_csv</code> so you don't have to unmangle your columns. </p>
python|pandas|csv
4
7,761
38,667,350
function to return copy of np.array with some elements replaced
<p>I have a Numpy array and a list of indices, as well as an array with the values which need to go into these indices.</p> <p>The quickest way I know how to achieve this is:</p> <pre><code>In [1]: a1 = np.array([1,2,3,4,5,6,7]) In [2]: x = np.array([10,11,12]) In [3]: ind = np.array([2,4,5]) In [4]: a2 = np.copy(a1) In [5]: a2.put(ind,x) In [6]: a2 Out[6]: array([ 1, 2, 10, 4, 11, 12, 7]) </code></pre> <p>Notice I had to make a copy of <code>a1</code>. What I'm using this for is to wrap a function which takes an array as input, so I can give it to an optimizer which will vary <em>some</em> of those elements.</p> <p>So, ideally, I'd like to have something which returns a modified copy of the original, in one line, that works like this:</p> <pre><code>a2 = np.replace(a1, ind, x) </code></pre> <p>The reason for that is that I need to apply it like so:</p> <pre><code>def somefunction(a): .... costfun = lambda x: somefunction(np.replace(a1, ind, x)) </code></pre> <p>With <code>a1</code> and <code>ind</code> constant, that would then give me a costfunction which is only a function of x.</p> <p>My current fallback solution is to define a small function myself:</p> <pre><code>def replace(a1, ind, x): a2 = np.copy(a1) a2.put(ind,x) return(a2) </code></pre> <p>...but this appears not very elegant to me.</p> <p>=> Is there a way to turn that into a lambda function?</p>
<p>Well you asked for a one-liner, here's one using sparse matrices with <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html" rel="nofollow"><code>Scipy's csr_matrix</code></a> -</p> <pre><code>In [280]: a1 = np.array([1,2,3,4,5,6,7]) ...: x = np.array([10,11,12]) ...: ind = np.array([2,4,5]) ...: In [281]: a1+csr_matrix((x-a1[ind], ([0]*x.size, ind)), (1,a1.size)).toarray() Out[281]: array([[ 1, 2, 10, 4, 11, 12, 7]]) </code></pre>
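<p>To get the one-argument cost function the question asks for, the one-liner can simply be closed over in a lambda (a sketch; <code>somefunction</code>, <code>a1</code> and <code>ind</code> come from the question, and the helper name is just for illustration):</p> <pre><code>from scipy.sparse import csr_matrix

replace = lambda a1, ind, x: (a1 + csr_matrix((x - a1[ind], ([0] * x.size, ind)),
                                              (1, a1.size)).toarray()).ravel()

costfun = lambda x: somefunction(replace(a1, ind, x))   # only a function of x
</code></pre>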
python|python-2.7|numpy
2
7,762
38,584,494
Python generator to read large CSV file
<p>I need to write a Python generator that yields tuples (X, Y) coming from two different CSV files. </p> <p>It should receive a batch size on init, read line after line from the two CSVs, yield a tuple (X, Y) for each line, where X and Y are arrays (the columns of the CSV files).</p> <p>I've looked at examples of lazy reading but I'm finding it difficult to convert them for CSVs: </p> <ul> <li><a href="https://stackoverflow.com/questions/519633/lazy-method-for-reading-big-file-in-python">Lazy Method for Reading Big File in Python?</a></li> <li><a href="https://stackoverflow.com/questions/6475328/read-large-text-files-in-python-line-by-line-without-loading-it-in-to-memory">Read large text files in Python, line by line without loading it in to memory</a></li> </ul> <p>Also, unfortunately Pandas Dataframes are not an option in this case.</p> <p>Any snippet I can start from?</p> <p>Thanks</p>
<p>You can have a generator, that reads lines from two different csv readers and yield their lines as pairs of arrays. The code for that is:</p> <pre><code>import csv import numpy as np def getData(filename1, filename2): with open(filename1, "rb") as csv1, open(filename2, "rb") as csv2: reader1 = csv.reader(csv1) reader2 = csv.reader(csv2) for row1, row2 in zip(reader1, reader2): yield (np.array(row1, dtype=np.float), np.array(row2, dtype=np.float)) # This will give arrays of floats, for other types change dtype for tup in getData("file1", "file2"): print(tup) </code></pre>
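<p>If you also need the batch size mentioned in the question, here is a hedged sketch of a batched variant (assuming Python 3, files of equal length and purely numeric columns):</p> <pre><code>import csv
import numpy as np

def get_batches(filename1, filename2, batch_size):
    with open(filename1, newline='') as csv1, open(filename2, newline='') as csv2:
        reader1, reader2 = csv.reader(csv1), csv.reader(csv2)
        batch_x, batch_y = [], []
        for row1, row2 in zip(reader1, reader2):
            batch_x.append(np.array(row1, dtype=np.float64))
            batch_y.append(np.array(row2, dtype=np.float64))
            if len(batch_x) == batch_size:
                yield np.stack(batch_x), np.stack(batch_y)
                batch_x, batch_y = [], []
        if batch_x:  # yield the final, possibly smaller, batch
            yield np.stack(batch_x), np.stack(batch_y)
</code></pre>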
python|csv|numpy|bigdata
28
7,763
38,694,292
When creating an optimizer in TensorFlow, how to deal with the AttributeError that happens?
<p>I try to incorporate a self-designed optimization algorithm PSGLD into TensorFlow. And that algorithm is similar to the concept of RMSProp. So I didn't create a new Op, but complement PSGLD following RMSProp. My procedure of incorporating is as follows:</p> <ol> <li><p>In Python side, create a <code>psgld.py</code> under the folder of <code>tensorflow\python\training</code>,which represents the Python wrapper. And in <code>psgld.py</code>, define the class of <code>PSGLDOptimizer</code>.</p> <p><code>class PSGLDOptimizer(optimizer.Optimizer)</code> </p></li> <li><p>Then, in <code>tensorflow\python\training\training_ops.py</code>, define the shape function of <code>_ApplyPSGLDShape</code> and <code>_SparseApplyPSGLD</code>, for dense and sparse circumstances respectively.</p></li> <li><p>For C++ side, in <code>tensorflow\core\ops\training_ops.cc</code>, define the input, output and attribute of ApplyPSGLD Op:<br> <code>REGISTER_OP("ApplyPSGLD") .Input("var: Ref(T)") .Input("ms: Ref(T)") .Input("mom: Ref(T)") .Input("lr: T") .Input("decay: T") .Input("epsilon: T") .Input("grad: T") .Output("out: Ref(T)") .Attr("T: numbertype") .Attr("use_locking: bool = false")</code> </p></li> <li><p>Meanwhile, also define <code>ApplyPSGLD</code> in the header file of <code>tensorflow\core\kernels\training_ops.h</code></p> <p><code>template &lt;typename Device, typename T&gt; struct ApplyPSGLD { ... };</code> </p></li> <li><p>To realize the computation of our algorithm on C++ side, complement corresponding code in the kernel of <code>tensorflow\core\kernels\training_ops.cc</code>.</p></li> </ol> <p>After all, when I run <code>tensorflow/models/image/mnist/convolutional.py</code>, and the optimizer is adjusted,</p> <p><code>optimizer = tf.train.PSGLDOptimizer(learning_rate).minimize(loss, global_step=batch)</code></p> <p>an AttributeError happens: <code>AttributeError: 'module' object has no attribute 'PSGLDOptimizer'</code></p> <p>And the environment is TF-0.9, cudnn5. So I ask if someone can give me any advice on this issue or the whole procedure of adding an optimizer.</p>
<p>(I'm assuming that you've rebuilt TensorFlow from source, as <a href="https://stackoverflow.com/questions/38694292/when-create-an-optimizer-in-tensorflow-how-to-deal-with-attributeerror-happens#comment64768151_38694292">Olivier suggested in his comment</a>, and you are trying to construct your optimizer as <code>optimizer = tf.train.PSGLDOptimizer(...)</code>.)</p> <p>To add a symbol to the <code>tf.train</code> namespace, you have to do the following:</p> <ol> <li><p>Add an explicit import to the file <a href="https://github.com/tensorflow/tensorflow/blob/27eeb441bad8bcaa1bcba42a4b4ee49fb50ea0d3/tensorflow/python/training/training.py" rel="nofollow noreferrer"><code>tensorflow/python/training/training.py</code></a>. In that file, you can see imports for, e.g., the <a href="https://github.com/tensorflow/tensorflow/blob/27eeb441bad8bcaa1bcba42a4b4ee49fb50ea0d3/tensorflow/python/training/training.py#L160" rel="nofollow noreferrer"><code>tf.train.RMSPropOptimizer</code></a> class.</p></li> <li><p>Either:</p> <ol> <li><p>Add documentation for your new class, and add <code>@@PSGLDOptimizer</code> to the module docstring. The corresponding line for <code>tf.train.RMSPropOptimizer</code> is <a href="https://github.com/tensorflow/tensorflow/blob/27eeb441bad8bcaa1bcba42a4b4ee49fb50ea0d3/tensorflow/python/training/training.py#L36" rel="nofollow noreferrer">here</a>. This marks the class as a publicly documented API symbol.</p></li> <li><p>Add an exception to the <a href="https://github.com/tensorflow/tensorflow/blob/27eeb441bad8bcaa1bcba42a4b4ee49fb50ea0d3/tensorflow/python/training/training.py#L222" rel="nofollow noreferrer">whitelist</a> of symbols that are added to <a href="https://stackoverflow.com/q/2187583/3574081"><code>__all__</code></a> in that file. For example, <a href="https://github.com/tensorflow/tensorflow/blob/27eeb441bad8bcaa1bcba42a4b4ee49fb50ea0d3/tensorflow/python/training/training.py#L231" rel="nofollow noreferrer">this line</a> whitelists the symbol <code>tf.train.LooperThread</code>.</p></li> </ol></li> </ol> <p>For most TensorFlow modules*, the rule is that a symbol can appear in <code>__all__</code> if it is either (i) publicly documented, or (ii) explicitly whitelisted. If neither condition holds, it will not be accessible through a <code>tf.*</code> name. This is intended to keep the API surface small, and avoid exposing private implementation details that might change between versions.</p> <p>*&nbsp;Note however that this is a work in progress. At present, a method is considered to be stable only if it is documented in the <a href="https://www.tensorflow.org/versions/r0.10/api_docs/index.html" rel="nofollow noreferrer">public API docs</a>.</p>
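<p>For concreteness, a sketch of what the Python-side registration might look like, modeled on the existing <code>RMSPropOptimizer</code> entries the answer links to (the module path is an assumption based on the <code>psgld.py</code> file described in the question):</p> <pre><code># in tensorflow/python/training/training.py

# step 1: explicit import of the new optimizer class
from tensorflow.python.training.psgld import PSGLDOptimizer

# step 2: either add the line
#     @@PSGLDOptimizer
# to the module docstring (option 2.1), or append 'PSGLDOptimizer' to the
# whitelist of extra symbols allowed into __all__ (option 2.2)
</code></pre>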
optimization|tensorflow|deep-learning|attributeerror
0
7,764
63,135,485
How to use PyTorch's torchaudio in Android?
<p>Starting from <a href="https://pytorch.org/mobile/android/" rel="nofollow noreferrer">here</a>, I downloaded the tutorial project and got it to build and run. Then I tried adding this to the project's app build.gradle (after upping the pytorch version to 1.5.0):</p> <pre><code>implementation 'org.pytorch:pytorch_android_torchaudio:1.5.0' </code></pre> <p>And I got this error:</p> <pre><code>Could not find org.pytorch:pytorch_android_torchaudio:1.5.0. </code></pre> <p>Anyone else had any luck getting PyTorch's torchaudio to work in Android?</p>
<p>Got this answer from <a href="https://github.com/pytorch/audio/issues/408#issuecomment-665052963" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>There is no android package dedicated for torchaudio. You build your model or pipeline in Python, then dump it as a Torchscript file, then load it from your app and run it with Torchscript runtime. Please refer to the following.</p> </blockquote> <blockquote> <p><a href="https://pytorch.org/mobile/home/" rel="nofollow noreferrer">https://pytorch.org/mobile/home/</a></p> </blockquote> <blockquote> <p><a href="https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html" rel="nofollow noreferrer">https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html</a></p> </blockquote>
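<p>A minimal sketch of the Python-side workflow that answer describes: build the audio pipeline with <code>torchaudio</code>, script it with TorchScript and save the file that the Android app will load with the PyTorch Mobile runtime (the wrapper class, sample rate and file name here are assumptions, not part of any API):</p> <pre><code>import torch
import torchaudio

class SpectrogramPipeline(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.transform = torchaudio.transforms.MelSpectrogram(sample_rate=16000)

    def forward(self, waveform: torch.Tensor) -&gt; torch.Tensor:
        return self.transform(waveform)

scripted = torch.jit.script(SpectrogramPipeline())
scripted.save('audio_pipeline.pt')  # ship this file with the app and load it
                                    # on Android via org.pytorch.Module.load()
</code></pre>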
android|pytorch
0
7,765
67,869,267
'Subset' object is not an iterator for updating torch' legacy IMDB dataset
<p>I'm updating a pytorch network from legacy code to the current code. Following documentation such as that <a href="https://colab.research.google.com/github/pytorch/text/blob/master/examples/legacy_tutorial/migration_tutorial.ipynb#scrollTo=opQ6LcnigTKx" rel="nofollow noreferrer">here</a>.</p> <p>I used to have:</p> <pre><code>import torch from torchtext import data from torchtext import datasets # setting the seed so our random output is actually deterministic SEED = 1234 torch.manual_seed(SEED) torch.backends.cudnn.deterministic = True # defining our input fields (text) and labels. # We use the Spacy function because it provides strong support for tokenization in languages other than English TEXT = data.Field(tokenize = 'spacy', include_lengths = True) LABEL = data.LabelField(dtype = torch.float) from torchtext import datasets train_data, test_data = datasets.IMDB.splits(TEXT, LABEL) import random train_data, valid_data = train_data.split(random_state = random.seed(SEED)) example = next(iter(test_data)) example.text MAX_VOCAB_SIZE = 25_000 TEXT.build_vocab(train_data, max_size = MAX_VOCAB_SIZE, vectors = &quot;glove.6B.100d&quot;, unk_init = torch.Tensor.normal_) # how to initialize unseen words not in glove LABEL.build_vocab(train_data) </code></pre> <p>Now in the new code I am struggling to add the validation set. All goes well until here:</p> <pre><code>from torchtext.datasets import IMDB train_data, test_data = IMDB(split=('train', 'test')) </code></pre> <p>I can print the outputs, while they look different (problems later on?), they have all the info. I can print test_data fine with next(train_data.</p> <p>Then after I do:</p> <pre><code>test_size = int(len(train_dataset)/2) train_data, valid_data = torch.utils.data.random_split(train_dataset, [test_size,test_size]) </code></pre> <p>It tells me:</p> <p>next(train_data)</p> <pre><code>TypeError: 'Subset' object is not an iterator </code></pre> <p>This makes me think I am not correct in applying random_split. How to correctly create the validation set for this dataset? Without causing issues.</p>
<p>Try <code>next(iter(train_data))</code>. It seems one has to create an iterator over the <code>dataset</code> explicitly. And use a <code>DataLoader</code> when efficiency is required.</p>
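<p>A small sketch of the <code>DataLoader</code> option (assuming <code>train_data</code> and <code>valid_data</code> are the <code>Subset</code> objects returned by <code>random_split</code>; for the raw-text IMDB samples you will usually also want a <code>collate_fn</code> that tokenizes and pads the strings):</p> <pre><code>from torch.utils.data import DataLoader

train_loader = DataLoader(train_data, batch_size=32, shuffle=True)
valid_loader = DataLoader(valid_data, batch_size=32)

sample = next(iter(train_data))   # peek at a single (label, text) example
for batch in train_loader:        # or iterate over batches during training
    pass
</code></pre>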
pytorch|sentiment-analysis|imdb
0
7,766
67,834,173
AttributeError: has no attribute 'BatchNormalizationBase'
<p>when I run the script, there is error</p> <pre><code>File &quot;akurasi.py&quot;, line 3, in &lt;module&gt; import keras File &quot;C:\projeku\Lib\site-packages\keras\__init__.py&quot;, line 21, in &lt;module&gt; from tensorflow.python import tf2 File &quot;C:\projeku\Lib\site-packages\tensorflow\__init__.py&quot;, line 41, in &lt;module&gt; from tensorflow.python.tools import module_util as _module_util File &quot;C:\projeku\Lib\site-packages\tensorflow\python\__init__.py&quot;, line 48, in &lt;module&gt; from tensorflow.python import keras File &quot;C:\projeku\Lib\site-packages\tensorflow\python\keras\__init__.py&quot;, line 25, in &lt;module&gt; from tensorflow.python.keras import models File &quot;C:\projeku\Lib\site-packages\tensorflow\python\keras\models.py&quot;, line 20, in &lt;module&gt; from tensorflow.python.keras import metrics as metrics_module File &quot;C:\projeku\Lib\site-packages\tensorflow\python\keras\metrics.py&quot;, line 37, in &lt;module&gt; from tensorflow.python.keras import activations File &quot;C:\projeku\Lib\site-packages\tensorflow\python\keras\activations.py&quot;, line 18, in &lt;module&gt; from tensorflow.python.keras.layers import advanced_activations File &quot;C:\projeku\Lib\site-packages\tensorflow\python\keras\layers\__init__.py&quot;, line 147, in &lt;module&gt; from tensorflow.python.keras.layers.normalization_v2 import SyncBatchNormalization File &quot;C:\projeku\Lib\site-packages\tensorflow\python\keras\layers\normalization_v2.py&quot;, line 29, in &lt;module&gt; class SyncBatchNormalization(normalization.BatchNormalizationBase): AttributeError: *module 'tensorflow.python.keras.layers.normalization'* has no attribute **'BatchNormalizationBase'** </code></pre> <p>in my script there is no &quot;import tensorflow.python.keras.layers.normalization&quot;. could somebody help me? how to solve this problem? thank you :)</p> <pre><code>import matplotlib.pyplot as plt import keras from keras.models import Sequential from keras.layers import Dense, Conv2D , MaxPool2D , Flatten , Dropout from keras.preprocessing.image import ImageDataGenerator from keras.optimizers import Adam from sklearn.metrics import classification_report,confusion_matrix import tensorflow as tf import cv2 import os import numpy as np </code></pre>
<p>Your issue can be resolved, once you modify imports as shown below</p> <pre><code>import matplotlib.pyplot as plt from tensorflow import keras from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Conv2D , MaxPool2D , Flatten , Dropout from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.optimizers import Adam from sklearn.metrics import classification_report,confusion_matrix import cv2 import os import numpy as np </code></pre>
python|tensorflow|keras|conv-neural-network
1
7,767
67,944,419
set values to columns but using indexes
<p>I have a dataframe like this:</p> <pre><code>a b c d e 42 1 0 1 0 42 0 0 0 1 42 0 1 0 0 42 1 1 0 0 </code></pre> <p>I want to replace every 1 in columns b, c, d and e with the value from column a, so it will basically become this:</p> <pre><code>a b c d e 42 42 0 42 0 42 0 0 0 42 42 0 42 0 0 42 42 42 0 0 </code></pre> <p>So it should be something like df.loc[df['b']==1,'b'] = df['a'] but for all of b, c, d and e. The whole dataframe has hundreds of columns, so I cannot use .loc to set values, and iloc cannot set values like loc.</p>
<p>Edit: You can simply use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.where.html" rel="nofollow noreferrer">pandas.DataFrame.where()</a> and give it the <code>df[&quot;a&quot;]</code> Series as the replacement. This way, it will work for both numbers and strings.</p> <pre><code># df is your DataFrame # New DataFrame... new_df = df.where(df != 1, df[&quot;a&quot;], axis=&quot;index&quot;) # ...or in place: df.where(df != 1, df[&quot;a&quot;], axis=0, inplace=True) </code></pre>
python|pandas
0
7,768
67,668,520
group gps data on haversine distance and calculate the mean
<p>I have a dataframe like following (cannot show from original due to NDA):</p> <pre><code>points = [(-57.213878612138828, 17.916958304169601), (76.392039480378514, 0.060882542482108504), (0.12417670682730897, 1.0417670682730924), (-64.840321976787706, 21.374279296143762), (-48.966302937359913, 81.336323778066188), (11.122014925372399, 85.001119402984656), (8.6383049769438465, 84.874829066623917), (-57.349835526315836, 16.683634868421084), (83.051530302006697, 97.450469562867383), (8.5405200433369473, 83.566955579631625), (81.620435769843965, 48.106831247886376), (78.713027357450656, 19.547209139192304), (82.926153287322933, 81.026080639302577)] x = [i[0] for i in points] y = [i[1] for i in points] df = pd.DataFrame(list(zip(x, y)), columns =['lat', 'lon']) df </code></pre> <p><a href="https://i.stack.imgur.com/erl2U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/erl2U.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/VqXEa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VqXEa.png" alt="enter image description here" /></a> I want to cluster/group all points where distance &lt;= 400 and after I want to calculate the mean of each group -- PLEASE NO CLUSTERING TECHNIQUE - ONLY HARD CODED:</p> <p>What I have done is - first wrote the haversine funtion:</p> <pre><code>from math import radians, cos, sin, asin, sqrt from scipy.spatial.distance import pdist, squareform import pandas as pd def haversine(lon1,lat1, lon2,lat2): &quot;&quot;&quot; Calculate the great circle distance between two points on the earth (specified in decimal degrees) &quot;&quot;&quot; # convert decimal degrees to radians lat1, lon1 = lon1,lat1 lat2, lon2 = lon2,lat2 lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2]) # haversine formula dlon = lon2 - lon1 dlat = lat2 - lat1 a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2 c = 2 * asin(sqrt(a)) r = 6371 # Radius of earth in kilometers. 
Use 3956 for miles return c * r </code></pre> <p>Then I have wrote a function to combine all the gps data - without recurrings - is that a MISTAKE?</p> <pre><code>def pairs(number): list_num = [] for i in range(number): for j in range(number): if j &gt;= i+1: list_num.append([i,j]) return list_num pair_list = pairs(12) print(pair_list) </code></pre> <p><a href="https://i.stack.imgur.com/AuRzM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AuRzM.png" alt="enter image description here" /></a></p> <p><strong>!!!HERE HELP NEEDED!!!</strong> In the following code I tried to implement the code using a dictionary but I didn't workout</p> <pre><code>df = df.sort_values(['lat', 'lon']) df = df.reset_index(drop=True) max_index = 0 for index, row in df.iterrows(): max_index = index pair_list = pairs(len(list(df['lat']))) cluster = -1 cluster_dict = {} for i, j in pair_list: #print(i, j) index = i #stat_nr = df.loc[i]['vwbhfnrprz'] lat = df.loc[i]['lat'] lon = df.loc[i]['lon'] index_next = j #stat_nr_next = df.loc[j]['vwbhfnrprz'] lat_next = df.loc[j]['lat'] lon_next = df.loc[j]['lon'] distance = haversine(lon,lat, lon_next, lat_next) if distance &lt;= 400: cluster += 1 print('cluster ' + str(cluster)) if len(cluster_dict) == 0: cluster_dict[cluster]=[i,j] elif len(cluster_dict) != 0: for k in cluster_dict.copy(): print(cluster_dict) print('k '+str(k)) print(i,j) if i not in cluster_dict[k]: cluster_dict[cluster] = [i] print(cluster_dict) #cluster_dict[k].append(i) if j not in cluster_dict[k]: #print(cluster_dict[k]) break if j in cluster_dict[k]: break print(cluster, index, index_next, lat, lon, lat_next, lon_next, distance) </code></pre> <p>The idea behind this was - dictionary=cluster_dict ie. pair (5,6):</p> <ul> <li>if cluster_dict was empty then 5 and 6 should be appended to cluster_dict with cluster = 0 --&gt; cluster_dict = {0:[5,6]}</li> <li>if cluster_dict was not empty cluster_dict = {0:[1,2]} and 5,6 were not in cluster_dict then 5 and 6 should be appended to cluster_dict with cluster = 1 --&gt; cluster_dict = {0:[1,2], 1:[5,6]}</li> <li>if cluster_dict was not empty cluster_dict = {0:[1,2], 0:[4,5]} and 5 or 6 were in cluster_dict then 5 or 6 should be appended to cluster_dict with cluster = cluster of existing one --&gt; cluster_dict = {0:[1,2], 0:[4,5,6]}</li> </ul> <p>Thanks!</p>
<p>I took a different approach on some steps by utilizing pandas functionality, but that should do it.</p> <p>First, from your provided data set, I created a helper dataframe that gives me a point id and the point as a list.</p> <pre><code>points = (df
    .assign(
        id = lambda x : np.arange(len(x)),
        point = lambda x : x.apply(lambda x : [x.lat, x.lon], axis = 1)
    )
)
points
</code></pre> <p>looking like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">lat</th> <th style="text-align: left;">lon</th> <th style="text-align: left;">id</th> <th style="text-align: left;">point</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">-57.213879</td> <td style="text-align: left;">17.916958</td> <td style="text-align: left;">0</td> <td style="text-align: left;"><code>[-57.21387861213883, 17.9169583041696]</code></td> </tr> <tr> <td style="text-align: left;">76.392039</td> <td style="text-align: left;">0.060883</td> <td style="text-align: left;">1</td> <td style="text-align: left;"><code>[76.39203948037851, 0.060882542482108504]</code></td> </tr> <tr> <td style="text-align: left;">0.124177</td> <td style="text-align: left;">1.041767</td> <td style="text-align: left;">2</td> <td style="text-align: left;"><code>[0.12417670682730897, 1.0417670682730924]</code></td> </tr> <tr> <td style="text-align: left;">-64.840322</td> <td style="text-align: left;">21.374279</td> <td style="text-align: left;">3</td> <td style="text-align: left;"><code>[-64.8403219767877, 21.37427929614376]</code></td> </tr> <tr> <td style="text-align: left;">-48.966303</td> <td style="text-align: left;">81.336324</td> <td style="text-align: left;">4</td> <td style="text-align: left;"><code>[-48.96630293735991, 81.33632377806619]</code></td> </tr> </tbody> </table> </div> <p>Then the following steps are applied:</p> <ul> <li>expand the ids to create the pairs</li> <li>reduce pairs to unique ones</li> <li>left join the point coordinates for &quot;from&quot; and &quot;to&quot;</li> <li>calculate haversine (using your provided function)</li> <li>group and calculate the mean</li> </ul> <pre><code>distances = (
    # create the pairs
    pd.DataFrame(index = pd.MultiIndex
        .from_product(
            [points.id.to_list(), points.id.to_list()],
            names = [&quot;from_id&quot;, &quot;to_id&quot;]
        )
    )
    .reset_index()
    # reduce to unique pairs (including itself, to get single clusters later)
    # (if you imagine this as a from-to matrix, it takes the upper half in a
    # very simplified way)
    .query(&quot;from_id &lt;= to_id&quot;)
    # add the coordinates from the helper table (please be aware of the required renaming.
    # This also creates the id columns as additional columns you would not want if
    # this step is the required output. You would have to .drop() them...)
    .merge(
        points.filter([&quot;id&quot;, &quot;point&quot;]).rename(columns={&quot;point&quot; : &quot;from_point&quot;}),
        how=&quot;left&quot;, left_on = &quot;from_id&quot;, right_on = &quot;id&quot;
    )
    .merge(
        points.filter([&quot;id&quot;, &quot;point&quot;]).rename(columns={&quot;point&quot; : &quot;to_point&quot;}),
        how=&quot;left&quot;, left_on = &quot;to_id&quot;, right_on = &quot;id&quot;
    )
    # calculate haversine, then group and collect the clusters (I find it confusing
    # that your points are (lat, lon) but your function takes (lon, lat). Better
    # check this for errors!)
    .assign(
        haversine = lambda x : x.apply(lambda x : haversine(x.from_point[1],
                                                            x.from_point[0],
                                                            x.to_point[1],
                                                            x.to_point[0]), axis = 1),
        # !!!! important change to original
        distance_group = lambda x : (x.haversine &lt;= 400).map({True : &quot;&lt; 400&quot;, False : &quot;&gt;= 400&quot;}).astype('category'),
        # this avoids points showing up in smaller clusters
        already_clustered = lambda x : x.apply(lambda y : y.from_id in x.query(&quot;distance_group == '&lt; 400' and from_id != to_id&quot;).to_id.to_list(), axis = 1)
    )
    .groupby([&quot;from_id&quot;, &quot;distance_group&quot;])
    .agg(cluster = (&quot;to_id&quot;, lambda x : x.to_list()))
    .reset_index()
)
print(distances)
</code></pre> <p>This gives us the following table (example):</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;"></th> <th style="text-align: left;">from_id</th> <th style="text-align: left;">distance_group</th> <th style="text-align: left;">cluster</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">&lt; 400</td> <td style="text-align: left;"><code>[0, 7]</code></td> </tr> <tr> <td style="text-align: left;">1</td> <td style="text-align: left;">0</td> <td style="text-align: left;">&gt;= 400</td> <td style="text-align: left;"><code>[1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 12]</code></td> </tr> <tr> <td style="text-align: left;">2</td> <td style="text-align: left;">1</td> <td style="text-align: left;">&lt; 400</td> <td style="text-align: left;"><code>[1]</code></td> </tr> <tr> <td style="text-align: left;">3</td> <td style="text-align: left;">1</td> <td style="text-align: left;">&gt;= 400</td> <td style="text-align: left;"><code>[2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]</code></td> </tr> <tr> <td style="text-align: left;">4</td> <td style="text-align: left;">2</td> <td style="text-align: left;">&lt; 400</td> <td style="text-align: left;"><code>[2]</code></td> </tr> <tr> <td style="text-align: left;">5</td> <td style="text-align: left;">2</td> <td style="text-align: left;">&gt;= 400</td> <td style="text-align: left;"><code>[3, 4, 5, 6, 7, 8, 9, 10, 11, 12]</code></td> </tr> <tr> <td style="text-align: left;">6</td> <td style="text-align: left;">3</td> <td style="text-align: left;">&lt; 400</td> <td style="text-align: left;"><code>[3]</code></td> </tr> </tbody> </table> </div> <p>From here on I am somewhat confused about your intended result, but this would give us the closest to what I think you want:</p> <pre><code>(distances
    .query(&quot;distance_group == '&lt; 400'&quot;)
    # comment the next two lines to also get single-length clusters
    .assign(cluster_length = lambda x : x.cluster.map(lambda x : len(x)))
    .query(&quot;cluster_length &gt; 1&quot;)
).cluster.to_list()
</code></pre> <p>With those two lines in place the output is <code>[[0, 7], [5, 6, 9]]</code>; if you comment them out, the output is <code>[[0, 7], [1], [2], [3], [4], [5, 6, 9], [8], [10], [11], [12]]</code></p>
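<p>Finally, a sketch of the mean step the question asks for, assuming the cluster list from above has been stored in a variable, e.g. <code>clusters = [[0, 7], [5, 6, 9]]</code>:</p> <pre><code>means = [df.iloc[idx][['lat', 'lon']].mean() for idx in clusters]
</code></pre>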
python|pandas|dataframe|grouping|cluster-analysis
1
7,769
67,793,711
Plot argmax of numpy array
<p>I have a numpy array with shape (20,20,6).</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.cm as cm num_channels = 6 a = np.random.choice(range(100),(20,20,num_channels)) </code></pre> <p>I want to get an array with shape (20,20,4), 20x20 times an RGBA colour indicating which value in <code>a</code> is the argmax along the last axis. Therefore, I generate 6 RGBA colours and take the argmax over <code>a</code>.</p> <pre class="lang-py prettyprint-override"><code>color_list = cm.rainbow(np.linspace(0, 1, num_channels)) a = a.argmax(axis=-1) </code></pre> <p>now <code>a</code> has dimensions <code>20x20</code>. How do I get it to <code>20x20x4</code> using the colour list?</p> <p>This is what the colour list looks like:</p> <pre><code>[[0.5 0. 1. 1. ] [0.1 0.59 0.95 1. ] [0.3 0.95 0.81 1. ] [0.7 0.95 0.59 1. ] [1. 0.59 0.31 1. ] [1. 0. 0. 1. ]] </code></pre>
<p>This should do what you want.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.cm as cm

# get colormap
num_channels = 6
color_list = cm.rainbow(np.linspace(0, 1, num_channels))

# generate an array and take the argmax along the last axis
a = np.random.rand(20, 20, num_channels)
a = np.argmax(a, axis=-1)
print(a)

# create the new (20, 20, 4) array
new_data = np.zeros((20, 20, 4))
for i in range(20):
    for j in range(20):
        # put the RGBA value of the winning channel in new_data
        color = color_list[a[i][j]]
        new_data[i][j][:] = color
print(new_data)
</code></pre>
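<p>As a side note, the double loop can be replaced by a single NumPy fancy-indexing step: indexing the <code>(6, 4)</code> colour table with a <code>(20, 20)</code> integer array broadcasts to <code>(20, 20, 4)</code>. A minimal sketch, assuming <code>color_list</code> and the argmax array <code>a</code> from above:</p> <pre class="lang-py prettyprint-override"><code>new_data = color_list[a]  # fancy indexing: (20, 20) indices into (6, 4) rows
print(new_data.shape)     # (20, 20, 4)
</code></pre>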
python|arrays|numpy
0
7,770
41,236,061
python pandas how to combine the pandas with the same column value
<p>How can I convert this frame:</p> <pre><code>1, 2
----
a, g
a, a
a, j
d, b
c, e
</code></pre> <p>into:</p> <pre><code>1, 2
----
a, g,a,j
d, b
c, e
</code></pre> <p>What can I do? Can I use groupby? What other methods are there?</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> with <code>apply</code> function <code>join</code>:</p> <pre><code>df.columns = list('AB') print (df) A B 0 a g 1 a a 2 a j 3 d b 4 c e df = df.groupby('A')['B'].apply(','.join).reset_index() print (df) A B 0 a g,a,j 1 c e 2 d b </code></pre>
python|python-2.7|pandas
2
7,771
41,538,692
Using sparse matrices with Keras and Tensorflow
<p>My data can be viewed as a matrix of 10B entries (100M x 100), which is very sparse (&lt; 1/100 * 1/100 of entries are non-zero). I would like to feed the data into a Keras Neural Network model which I have made, using a Tensorflow backend.</p> <p>My first thought was to expand the data to be dense, that is, write out all 10B entries into a series of CSVs, with most entries zero. However, this is quickly overwhelming my resources (even doing the ETL overwhelmed pandas and is causing postgres to struggle). So I need to use true sparse matrices.</p> <p>How can I do that with Keras (and Tensorflow)? While numpy doesn't support sparse matrices, scipy and tensorflow both do. There's lots of discussion (e.g. <a href="https://github.com/fchollet/keras/pull/1886" rel="noreferrer">https://github.com/fchollet/keras/pull/1886</a> <a href="https://github.com/fchollet/keras/pull/3695/files" rel="noreferrer">https://github.com/fchollet/keras/pull/3695/files</a> <a href="https://github.com/pplonski/keras-sparse-check" rel="noreferrer">https://github.com/pplonski/keras-sparse-check</a> <a href="https://groups.google.com/forum/#!topic/keras-users/odsQBcNCdZg" rel="noreferrer">https://groups.google.com/forum/#!topic/keras-users/odsQBcNCdZg</a> ) about this idea - either using scipy's sparse matrices or going directly to Tensorflow's sparse matrices. But I can't find a clear conclusion, and I haven't been able to get anything to work (or even know clearly which way to go!).</p> <p>How can I do this?</p> <p>I believe there are two possible approaches:</p> <ol> <li>Keep it as a scipy sparse matrix, then, when giving Keras a minibatch, make it dense</li> <li>Keep it sparse all the way through, and use Tensorflow Sparse Tensors</li> </ol> <p>I also think #2 is preferred, because you'll get much better performance all the way through (I believe), but #1 is probably easier and will be adequate. I'll be happy with either.</p> <p>How can either be implemented?</p>
<p>I think you should take a look at the answer here: <a href="https://stackoverflow.com/questions/37609892/keras-sparse-matrix-issue">Keras, sparse matrix issue</a>. I have tried it and it works correctly. Just one note though: at least in my case, the shuffling led to really bad results, so I used this slightly modified non-shuffled alternative:</p> <pre><code>import numpy as np

def nn_batch_generator(X_data, y_data, batch_size):
    samples_per_epoch = X_data.shape[0]
    number_of_batches = samples_per_epoch/batch_size
    counter = 0
    index = np.arange(np.shape(y_data)[0])
    while 1:
        index_batch = index[batch_size*counter:batch_size*(counter+1)]
        X_batch = X_data[index_batch,:].todense()  # densify one mini-batch at a time
        y_batch = y_data[index_batch]
        counter += 1
        yield np.array(X_batch), y_batch
        if (counter &gt; number_of_batches):
            counter = 0
</code></pre> <p>It produces accuracies comparable to the ones achieved by keras's shuffled implementation (setting <code>shuffle=True</code> in <code>fit</code>).</p>
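<p>A hypothetical usage sketch (the names <code>model</code>, <code>X_train</code> and <code>y_train</code> are assumptions: a compiled Keras model, a scipy sparse matrix and its labels). In Keras 2 the generator would be consumed with <code>fit_generator</code>:</p> <pre><code>batch_size = 32
model.fit_generator(
    nn_batch_generator(X_train, y_train, batch_size),
    steps_per_epoch=X_train.shape[0] // batch_size,
    epochs=10,
)
</code></pre>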
tensorflow|sparse-matrix|keras
19
7,772
41,510,045
Pandas groupby().apply() - returning None from the applied function messes up the results
<p>I perform groupby() and apply() on a few data frames with the same structure:</p> <pre><code>d = d.groupby( 'groupby_col', as_index = False ).apply( some_function ) </code></pre> <p>For some it works as expected, for some it fails. The way it fails is that the dataframe becomes a series where each element contains just column names. It looks like this:</p> <pre><code>In [18]: d.head() Out[18]: groupby_col 134663372801 some_col_1 some_col_2 some_col_3 some_col_4... 134663372802 some_col_1 some_col_2 some_col_3 some_col_4... 134663372803 some_col_1 some_col_2 some_col_3 some_col_4... 134663372804 some_col_1 some_col_2 some_col_3 some_col_4... 134663372805 some_col_1 some_col_2 some_col_3 some_col_4... dtype: object </code></pre> <p>BTW, the applied function returns either a data frame with a correct number of columns or None.</p> <p>What might be the reason for this and how to debug it?</p>
<p>The problem goes away if, instead of returning None from the applied function, I always return a frame - I replaced</p> <pre><code>if some_condition:
    return
</code></pre> <p>with</p> <pre><code>if some_condition:
    return d[:0]  # return the empty frame so that the columns match
</code></pre>
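<p>A minimal sketch of the same idea (the frame, the condition and <code>some_function</code> here are made up for illustration):</p> <pre><code>import pandas as pd

def some_function(g):
    if g['value'].sum() &lt; 0:  # stand-in for some_condition
        return g[:0]          # empty frame, columns preserved
    return g

d = pd.DataFrame({'groupby_col': [1, 1, 2], 'value': [1, 2, -3]})
result = d.groupby('groupby_col', as_index=False).apply(some_function)
print(result)  # the empty group is dropped, the columns stay intact
</code></pre>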
python|pandas|data-structures|dataframe
0
7,773
61,304,463
Issue with keras fit_generator epoch
<p>I'm creating an LSTM model for text generation using Keras. As the dataset I'm using (around 25 novels, about 1.4 million words) can't be processed at once (a memory issue when converting my outputs with to_categorical()), I created a custom generator function to read the data in.</p> <pre><code># Data generator for fit and evaluate
def generator(batch_size):
    start = 0
    end = batch_size
    while True:
        x = sequences[start:end,:-1]
        #print(x)
        y = sequences[start:end,-1]
        y = to_categorical(y, num_classes=vocab_size)
        #print(y)
        yield x, y
        if batch_size == len(lines):
            break
        else:
            start += batch_size
            end += batch_size
</code></pre> <p>When I execute the model.fit() method, after 1 epoch of training the following error is thrown:</p> <pre><code>UnknownError:  [_Derived_]  CUDNN_STATUS_BAD_PARAM
in tensorflow/stream_executor/cuda/cuda_dnn.cc(1459): 'cudnnSetTensorNdDescriptor( tensor_desc.get(), data_type, sizeof(dims) / sizeof(dims[0]), dims, strides)'
     [[{{node CudnnRNN}}]]
     [[sequential/lstm/StatefulPartitionedCall]] [Op:__inference_train_function_25138]

Function call stack:
train_function -&gt; train_function -&gt; train_function
</code></pre> <p>Does anyone know how to solve this issue? Thanks</p>
<p>From many sources on the Internet, this issue seems to occur when using an <code>LSTM</code> layer together with a <code>Masking</code> layer while training on <code>GPU</code>.</p> <p>The workarounds below may help with this problem:</p> <ol> <li><p>If you can compromise on speed, train your model on <code>CPU</code> rather than on <code>GPU</code>. It works without any error.</p></li> <li><p>As per <a href="https://github.com/tensorflow/tensorflow/issues/33069#issuecomment-539038068" rel="nofollow noreferrer">this comment</a>, check whether any of your input sequences consists of all zeros, as the <code>Masking</code> layer may then mask the entire input.</p></li> <li><p>If possible, disable eager execution. As per <a href="https://github.com/tensorflow/tensorflow/issues/33148#issuecomment-622998846" rel="nofollow noreferrer">this comment</a>, it then works without any error.</p></li> <li><p>Instead of using a <code>Masking</code> layer, you can try the alternatives mentioned in <a href="https://www.tensorflow.org/guide/keras/masking_and_padding#masking" rel="nofollow noreferrer">this link</a>:</p> <p>a. Add the argument <code>mask_zero = True</code> to the <code>Embedding</code> layer (see the sketch after this answer), or</p> <p>b. Pass a mask argument manually when calling layers that support it.</p></li> <li><p>A last resort is to remove the <code>Masking</code> layer, if that is possible.</p></li> </ol> <p>If none of the above workarounds solves your problem: the Google TensorFlow team is working to resolve this error, so we may have to wait until it is fixed.</p> <p>Hope this information helps!</p>
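<p>A minimal sketch of workaround 4a on a toy text model (<code>vocab_size</code> is a hypothetical vocabulary size, the layer sizes are made up):</p> <pre><code>import tensorflow as tf

vocab_size = 10000
model = tf.keras.Sequential([
    # mask_zero=True lets downstream layers skip padded zero timesteps,
    # replacing an explicit Masking layer
    tf.keras.layers.Embedding(vocab_size, 64, mask_zero=True),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(vocab_size, activation='softmax'),
])
</code></pre>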
python|tensorflow|keras|deep-learning
0
7,774
61,607,694
pandas df.fillna() not replacing na values
<p>I have a dataframe that looks like this <strong>(For clarity: This represents a df with 5 rows and 8 columns)</strong>: </p> <pre><code> BTC-USD_close BTC-USD_volume LTC-USD_close LTC-USD_volume \ time 1528968660 6489.549805 0.587100 96.580002 9.647200 1528968720 6487.379883 7.706374 96.660004 314.387024 1528968780 6479.410156 3.088252 96.570000 77.129799 1528968840 6479.410156 1.404100 96.500000 7.216067 1528968900 6479.979980 0.753000 96.389999 524.539978 BCH-USD_close BCH-USD_volume ETH-USD_close ETH-USD_volume time 1528968660 871.719971 5.675361 NaN NaN 1528968720 870.859985 26.856577 486.01001 26.019083 1528968780 870.099976 1.124300 486.00000 8.449400 1528968840 870.789978 1.749862 485.75000 26.994646 1528968900 870.000000 1.680500 486.00000 77.355759 </code></pre> <p>And I would like to replace the nan-values in the ETH-USD_close and ETH-USD_volume column. However, when i call <code>df.fillna(method='ffill', inplace=True)</code>, nothing seems to happen; the missing values are still there and nothing changes in the columns when I step through the program with a debugger.</p> <p>When i use <code>df.isna()</code> to check whether my nan values are correctly interpreted by pandas, this does seem to be the case; check the output of the first few rows when I check by <code>print(df.isna())</code>:</p> <pre><code> BTC-USD_close BTC-USD_volume LTC-USD_close LTC-USD_volume \ time 1528968660 False False False False 1528968720 False False False False 1528968780 False False False False 1528968840 False False False False 1528968900 False False False False BCH-USD_close BCH-USD_volume ETH-USD_close ETH-USD_volume time 1528968660 False False True True 1528968720 False False False False 1528968780 False False False False 1528968840 False False False False 1528968900 False False False False </code></pre> <p>A call like <code>df.dropna(inplace=True)</code> <em>does</em> remove the entire row, but this is not what I want. Any suggestions?</p> <p>EDIT: In case anyone wants to reproduce the problem, one can download the data from <a href="https://pythonprogramming.net/static/downloads/machine-learning-data/crypto_data.zip" rel="nofollow noreferrer">https://pythonprogramming.net/static/downloads/machine-learning-data/crypto_data.zip</a>, unzip it and run the following code in the same directory:</p> <pre><code>import pandas as pd #Initialize empty df main_df = pd.DataFrame() ratios = ["BTC-USD", "LTC-USD", "BCH-USD", "ETH-USD"] for ratio in ratios: #SET CORRECT PATH HERE dataset = f'crypto_data/{ratio}.csv' #Use f-strings so we know which close/volume is which df_ratio = pd.read_csv(dataset, names=['time', 'low', 'high', 'open', f"{ratio}_close", f"{ratio}_volume"]) #Set time as index so we can join them on this shared time df_ratio.set_index("time", inplace=True) #ignore the other columns besides price and volume df_ratio = df_ratio[[f"{ratio}_close", f"{ratio}_volume"]] if main_df.empty: main_df = df_ratio else: main_df = main_df.join(df_ratio) main_df.fillna(method='ffill', inplace=True) #THIS DOESN'T SEEM TO WORK </code></pre>
<p>Ah.</p> <p>You cannot <code>ffill</code> a <code>NaN</code> value if it is the first value of a series: there is no previous value to copy.</p> <p>Using <code>.ffill().bfill()</code> could solve this, but it may create false data.</p>
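<p>A small sketch of the behaviour (made-up values):</p> <pre><code>import numpy as np
import pandas as pd

s = pd.Series([np.nan, 486.01, np.nan, 486.00])
print(s.ffill())          # index 0 stays NaN: nothing before it to copy
print(s.ffill().bfill())  # the leading NaN is back-filled with 486.01
</code></pre>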
python|pandas
1
7,775
61,600,636
Adding new element into 3 dimensional array NUMPY
<p>I have an array whose shape is equal to (1,59,1). It looks like this:</p> <pre><code>[[[0.93169003]
  [0.96923472]
  [0.97881434]
  [0.99266784]
  [0.97358235]
  ............
  [0.83777312]
  [0.82086134]]]
</code></pre> <p>I would like to add a new element, equal to [[0.86442673]], at the end, so that the shape of my array becomes (1,60,1) and it looks like this:</p> <pre><code>[[[0.93169003]
  [0.96923472]
  [0.97881434]
  [0.99266784]
  [0.97358235]
  ............
  [0.83777312]
  [0.82086134]
  [0.86442673]]]
</code></pre> <p>I tried np.append but it doesn't work for me. Please help me.</p>
<p>Try:</p> <pre class="lang-py prettyprint-override"><code>arr = np.append(arr, [[[0.86442673]]], axis=1)
</code></pre> <p>where <code>arr</code> is your input array.</p>
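<p>An equivalent sketch with <code>np.concatenate</code>, which avoids the flattening behaviour of <code>np.append</code> when the <code>axis</code> argument is forgotten (the input array here is random, just for illustration):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

arr = np.random.rand(1, 59, 1)            # stand-in for your array
new = np.array([[[0.86442673]]])          # shape (1, 1, 1)
arr = np.concatenate([arr, new], axis=1)  # shape (1, 60, 1)
print(arr.shape)
</code></pre>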
python|arrays|numpy
0
7,776
61,443,044
Find words from one DataFrame in string of another DataFrame
<p>I have two DataFrames and need to find out how many times a word from the second DataFrame occurs in the first DataFrame.</p> <pre><code>df = pd.DataFrame({"Field_6" : ["THE SURGEON RECEIVED A WARNING MESSAGE ON THE SCREEN INDICATING THE INSTRUMENT WAS BROKEN.", "ON MAY 10 2015, THE REPORTER CONTACTED THE COMPANY ALLEGING THAT THE DISPLAY SCREEN WAS BLANK.", "THE NUMBERS DID NOT APPEAR ON THE SCREEN, AND THE BEEPS COULD NOT BE HEARD."]}) df2 = pd.DataFrame({"Search" : ["beep", "screen", "trigger"], "Bucket": ["Tones", "Screen", "Trigger"]}) </code></pre> <p>The result should be:</p> <pre><code>df3 = pd.DataFrame("Bucket": ["Tones, Screen", "Screen"], "Count": [1, 2]}) </code></pre> <p>My script so far:</p> <pre><code>df2["Search"] = df2["Search"].str.upper() def find_keyword(field_6): for word in df2["Search"]: if word in field_6: flag = 1 df["flag"] = df2["Search"].apply(find_keyword) </code></pre> <p>The function doesn't return anything and I'm trying to get my head around this problem. How do I find a string from one df in a string of another df? Once I've got that I think I can solve it myself with group_by.</p>
<p>You can use existing packages to meet your goal here and process the data much faster.</p> <p>The packages needed for the solution below are flashtext and inflection (there are other options as well; we need any package that can change plural English words to singular). A better option is to extend your df2 and include the plural forms directly, if possible.</p> <pre><code>#Package install command
pip install flashtext
pip install inflection
</code></pre> <hr> <pre><code>#Script
import pandas as pd
from flashtext import KeywordProcessor
from collections import Counter
import inflection as inf

keyword_processor = KeywordProcessor(case_sensitive=False)

df = pd.DataFrame({"Field_6" : ["THE SURGEON RECEIVED A WARNING MESSAGE ON THE SCREEN INDICATING THE INSTRUMENT WAS BROKEN.", "ON MAY 10 2015, THE REPORTER CONTACTED THE COMPANY ALLEGING THAT THE DISPLAY SCREEN WAS BLANK.", "THE NUMBERS DID NOT APPEAR ON THE SCREEN, AND THE BEEPS COULD NOT BE HEARD."]})
df2 = pd.DataFrame({"Search" : ["beep", "screen", "trigger"], "Bucket": ["Tones", "Screen", "Trigger"]})

df2 = df2.set_index('Bucket')
keyword_dict = df2.groupby('Bucket')['Search'].apply(list).to_dict()
keyword_processor.add_keywords_from_dict(keyword_dict)

def extract_key(row):
    field_string = row['Field_6'].split()
    temp = []
    #Below for loop is computationally expensive and
    #can be avoided by extending the keyword dictionary
    for element in field_string:
        temp.append(inf.singularize(element))
    final_string = ' '.join(temp)
    keywords_found = keyword_processor.extract_keywords(final_string)
    counter = Counter(keywords_found)
    # extracted_list = list(counter.items()) # use if counts of strings are important
    extracted_list = list(counter)
    return extracted_list

df['Bucket'] = df.apply(lambda row: extract_key(row), axis = 1)
df3 = df['Bucket'].value_counts().to_frame().reset_index()
df3.columns = ['Bucket', 'Count']
</code></pre> <p>Result output:</p> <pre><code>#df3
            Bucket  Count
0         [Screen]      2
1  [Screen, Tones]      1
</code></pre>
python|pandas
0
7,777
68,836,573
Sorting 2 single dimensional arrays into a 1 dimensional array
<p>I am trying to write code that alternates between <code>a</code> and <code>b</code>, picking one element at a time. I want to build a 2-dimensional array where the first index is either <code>0</code> or <code>1</code> (<code>0</code> representing <code>a</code> and <code>1</code> representing <code>b</code>) and the second index is the value taken from <code>a</code> or <code>b</code>, so it will be something like <code>[[0 7][1 13]]</code>. The output should alternate between the two arrays: if it starts with <code>a</code>, the pattern is <code>a,b,a,b,a...</code>; the other way around it is <code>b,a,b,a,b...</code>. Which array comes first is decided by comparing first elements: since the first element of <code>b</code> is 0 and the first element of <code>a</code> is 7, and 0 &lt; 7, the result starts with b, giving <code>[[1 0]]</code>, then takes the next element from <code>a</code>, which is <code>7</code>, giving <code>[[1 0],[0, 7]]</code>. It keeps doing this until it reaches the end of both <code>a</code> and <code>b</code>. How can I get the expected output below?</p> <pre><code>import numpy as np
a = np.array([ 7,  9, 12, 15, 17, 22])
b = np.array([ 0, 13, 17, 18])
</code></pre> <p>Expected output:</p> <pre><code>[[ 1  0]
 [ 0  7]
 [ 1 13]
 [ 0 15]
 [ 1 17]
 [ 0 17]
 [ 1 18]
 [ 0 22]]
</code></pre>
<p>This isn't a Numpy solution, but may work if you are okay processing these as lists. You can make iterators out of the lists, then alternate between them using <code>itertools.dropwhile</code> to proceed through the elements until you get the next in line. It might look something like:</p> <pre><code>from itertools import dropwhile def pairs(a, b): index = 0 if a[0] &lt;= b[0] else 1 iters = [iter(a), iter(b)] while True: try: current = next(iters[index]) yield [index,current] index = int(not index) except StopIteration: break iters[index] = dropwhile(lambda n: n &lt; current, iters[index]) list(pairs(a, b)) </code></pre> <p>Which results in:</p> <pre><code>[[1, 0], [0, 7], [1, 13], [0, 15], [1, 17], [0, 17], [1, 18], [0, 22]] </code></pre>
python|arrays|function|numpy|multidimensional-array
1
7,778
68,775,953
Can't read xlsx file with pandas
<p>I am trying to read an .xlsx file as a dataframe. The file itself has two worksheets, but when I try to read it, an empty list of worksheets is returned. Even when I specify the sheet_name, it tells me there is no worksheet with the name I provided.</p> <p>I have tried several methods, but they all return [].</p> <pre><code>from openpyxl import load_workbook

workbook = load_workbook(filename=&quot;filename.xlsx&quot;, read_only=True, data_only=True)
print(workbook.sheetnames)
</code></pre> <pre><code>xl = pd.read_excel('filename.xlsx', engine='openpyxl')
xl.sheet_names
</code></pre>
<p>Thanks everyone, I found the problem: it was an issue with the Excel file itself.</p>
python|excel|pandas
0
7,779
68,474,569
How to skip duplicates and blank values from JSON dataframe and store into an array?
<p>The following code displays data from a JSON Lines file.</p> <pre><code>import time
import pandas as pd
import numpy as np

start = time.time()
with open('stela_zerrl_t01_201222_084053_test_edited.json', 'r') as fin:
    df = pd.read_json(fin, lines=True)

parsed_data = df[[&quot;SRC/Word1&quot;]].drop_duplicates().replace('', np.NAN).dropna().values.tolist()
print(parsed_data)
</code></pre> <p>The output is:</p> <pre><code>[[' '], ['E1F25701'], ['E15511D7']]
</code></pre> <p>Is there a way to remove the blank data and the duplicates, and store the result as an array?</p> <p><a href="https://i.stack.imgur.com/jrhOY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jrhOY.png" alt="enter image description here" /></a></p>
<p>Yup! Pandas has built-in functions for all of these operations:</p> <pre><code>import pandas as pd df = pd.read_json('stela_zerrl_t01_201222_084053_test_edited.json', lines=True) series = df['SRC/Word1'] no_dupes = series.drop_duplicates() no_blanks = no_dupes.dropna() final_list = no_blanks.tolist() </code></pre> <p>If you want an numpy array rather than a python list, you can change the last line to the following:</p> <pre><code>final_array = no_blanks.to_numpy() </code></pre>
python|json|pandas
1
7,780
36,521,388
multi column selection with pandas xs function is failed
<p>I have the following multi-index time-series data.</p> <pre><code>first                    001                                            \
second                  open     high      low    close jdiff_vol  value
date     time
20150721 90100       2082.18  2082.18  2082.18  2082.18     11970  99466
         90200       2082.72  2083.01  2082.18  2083.01      4886  40108
         90300       2083.68  2084.20  2083.68  2083.98      6966  48847
         90400       2083.63  2084.21  2083.63  2084.00      6817  48020
         90500       2084.03  2084.71  2083.91  2084.32     10193  58399
20150721 90100       2084.14  2084.22  2083.59  2083.65      7860  39128
         90200       2084.08  2084.08  2083.47  2083.50      7171  39147
         90300       2083.25  2083.65  2083.08  2083.60      4549  34373
         90400       2084.06  2084.06  2083.66  2083.80      6980  38088
         90500       2083.61  2084.04  2083.27  2083.89      5292  33466
</code></pre> <p>The code below works:</p> <pre><code>opens = data.xs('open', level='second', axis=1, drop_level=True)
</code></pre> <p>But selecting multiple columns with the code below fails:</p> <pre><code>opens = data.xs(('open','close'), level='second', axis=1, drop_level=True)
</code></pre> <p>How can I modify it in order to select multiple columns?</p>
<p>Example:</p> <pre><code>df = pd.DataFrame(
    [[1,2,3,4,5,6,7,8]],
    columns=pd.MultiIndex.from_product([['A','B'], ['a', 'b', 'c', 'd']])
)

Out:
   A           B
   a  b  c  d  a  b  c  d
   1  2  3  4  5  6  7  8
</code></pre> <p>We want to select the sub-columns <code>a</code> and <code>b</code>:</p> <pre><code>Out:
   A     B
   a  b  a  b
   1  2  5  6
</code></pre> <h2>Solution 1: positive selection (same idea as jezrael)</h2> <p>Find the positions of the columns using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.get_loc.html" rel="nofollow noreferrer">pandas.MultiIndex.get_loc</a> and select them:</p> <pre><code>select = df.columns.get_level_values(1).isin(['a', 'b'])
df.loc[:, select]
</code></pre> <h2>Solution 2: negative selection</h2> <p>To solve this problem, it can be more convenient not to select the columns of interest, but rather to remove the unwanted columns using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html" rel="nofollow noreferrer">pandas.DataFrame.drop</a>. It lets you remove several columns at once.</p> <p>To select <code>a</code> and <code>b</code>, remove <code>c</code> and <code>d</code>:</p> <pre><code>df.drop(['c', 'd'], level=1, axis=1)
</code></pre>
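<p>For completeness, a sketch of a third option using <code>pd.IndexSlice</code>, which expresses the same selection declaratively (shown on the toy frame above):</p> <pre><code>idx = pd.IndexSlice
df.loc[:, idx[:, ['a', 'b']]]
</code></pre>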
python|pandas|multiple-columns|multi-index
2
7,781
52,922,131
Adjusting dataframe columns in Pandas
<p>I have a dataframe output that looks like this:</p> <pre><code>Index  Region        Date
0      W S CENTRAL   Sep 2018
1      388
0      MOUNTAIN      Sep 2018
1      229
0      PACIFIC       Sep 2018
1      145
</code></pre> <p>I would like to take the numerical value that appears <em>underneath</em> every Region (e.g. 388 under W S CENTRAL) and place it in a new column called <em>Total</em>, right beside the Region column.</p> <p>The data begins in .txt form and reads into the script as a list within a list, like:</p> <pre><code>[[W S CENTRAL, 388], [MOUNTAIN, 229], [PACIFIC, 145]]
</code></pre> <p>I'd like my output to be:</p> <pre><code>Region      Total  Date
WS CENTRAL  388    Sep 2018
MOUNTAIN    229    Sep 2018
PACIFIC     145    Sep 2018
</code></pre> <p>So then I can groupby() the Date for each region.</p> <p>The code that parses the lists into the dataframe is:</p> <pre><code>def join_words(n):
    frames = list()
    for listy in n:
        grouper = groupby(listy, key=str.isalpha)
        joins = [[' '.join(v)] if alpha_flag else list(v) for alpha_flag, v in grouper]
        res = list(chain.from_iterable(joins))
        df = pd.DataFrame(res, columns = ['Region'])
        df['Date'] = os.path.split(file)[-1]
        frames.append(df)
    new_df = pd.concat(frames)
    return new_df
</code></pre> <p>The issue arises when changing the res variable into a dataframe, as res prints as the list version of what I want as an output. The grouper and joins variables are used to pass through strings next to one another and join them into a single string (for country name purposes).</p>
<p>Given how your dataframe looks, you can use the <code>shift</code> function:</p> <pre><code>df['Total'] = df['Region'].shift(-1)  # pull the value row up next to its region
df = df[df.index % 2 == 0]            # keep only the region rows
order = [0, 2, 1]                     # reorder to Region, Total, Date
df = df[df.columns[order]]
</code></pre>
python|python-3.x|pandas|list|dataframe
1
7,782
53,274,668
Creating a data frame during a for loop
<p>So, I have the following bit of code</p> <pre><code>for category in ['a','b','c','d']: 'HML_Flag_'+ category = pd.merge(category,HML_base_table,'inner','random') 'HML_Flag_'+ category = 'HML_Flag_'+ category[['random','HML']] 'HML_Flag_'+ category = 'HML_Flag_'+ category.groupby('HML').count() </code></pre> <p>The error I am getting is as follows </p> <blockquote> <p>SyntaxError: can't assign to operator</p> </blockquote> <p>How do I create dataframes and change them for every cycle within the loop?</p>
<p>Try this.</p> <p>Note: this method (creating variable names dynamically with <code>exec</code>) is strongly discouraged:</p> <pre><code>for category in ['a','b','c','d']:
    exec("%s=%d" % ('HML_Flag_' + category, 5))

print(HML_Flag_a)
</code></pre> <p>Instead of the code above, use a dictionary:</p> <pre><code>dic = {}
for category in ['a','b','c','d']:
    dic['HML_Flag_' + category] = 5

print(dic['HML_Flag_a'])
</code></pre> <p>Note: as in your code, each loop iteration reassigns the value to the same variable/key.</p>
python|pandas
0
7,783
53,008,798
Complete pandas dataframe with zero values for large datasets
<p>I have a dataframe that looks like this:</p> <pre><code>&gt;&gt; df index week day hour count 5 10 2 10 70 5 10 3 11 80 7 10 2 18 15 7 10 2 19 12 </code></pre> <p>where <code>week</code> is the week of the year, <code>day</code> is day of the week (<code>0-6</code>), and <code>hour</code> is hour of the day (<code>0-23</code>). However, since I plan to convert this to a 3D array (week x day x hour) later, I have to include hours where there are no items in the <code>count</code> column. Example:</p> <pre><code>&gt;&gt; target_df index week day hour count 5 10 0 0 0 5 10 0 1 0 ... 5 10 2 10 70 5 10 2 11 0 ... 7 10 0 0 0 ... ... </code></pre> <p>and so on. What I do is to generate a dummy dataframe containing all index-week-day-hour combinations possible (basically <code>target_df</code> without the <code>count</code> column):</p> <pre><code>&gt;&gt; dummy_df index week day hour 5 10 0 0 5 10 0 1 ... 5 10 2 10 5 10 2 11 ... 7 10 0 0 ... ... </code></pre> <p>and then using</p> <pre><code>target_df = pd.merge(df, dummy_df, on=['index','week','day','hour'], how='outer').fillna(0) </code></pre> <p>This works fine for small datasets, but I'm working with a lot of rows. With the case I'm working on now, I get 82M rows for <code>dummy_df</code> and <code>target_df</code>, and it's painfully slow.</p> <p><em>EDIT</em>: The slowest part is actually constructing <code>dummy_df</code>!!! I can generate the individual lists but combining them into a pandas dataframe is the slowest part.</p> <pre><code>num_weeks = len(week_list) num_idxs = len(df['index'].unique()) print('creating dummies') _dummy_idxs = list(itertools.chain.from_iterable( itertools.repeat(x, 24*7*num_weeks) for x in df['index'].unique())) print('\t_dummy_idxs') _dummy_weeks = list(itertools.chain.from_iterable( itertools.repeat(x, 24*7) for x in week_list)) * num_idxs print('\t_dummy_weeks') _dummy_days = list(itertools.chain.from_iterable( itertools.repeat(x, 24) for x in range(0,7))) * num_weeks * num_idxs print('\t_dummy_days') _dummy_hours = list(range(0,24)) * 7 * num_weeks * num_idxs print('\t_dummy_hours') print('Creating dummy_hour_df with {0} rows...'.format(len(_dummy_hours))) # the part below takes the longest time dummy_hour_df = pd.DataFrame({'index': _dummy_idxs, 'week': _dummy_weeks, 'day': _dummy_days, 'hour': _dummy_hours}) print('dummy_hour_df completed') </code></pre> <p>Is there a faster way to do this?</p>
<p>As an alternative, you can use <code>itertools.product</code> to create <code>dummy_df</code> as a product of lists:</p> <pre><code>import itertools
import pandas as pd

index = range(100)
weeks = range(53)
days = range(7)
hours = range(24)

dummy_df = pd.DataFrame(list(itertools.product(index, weeks, days, hours)),
                        columns=['index','week','day','hour'])
dummy_df.head()

   index  week  day  hour
0      0     0    0     0
1      0     0    0     1
2      0     0    0     2
3      0     0    0     3
4      0     0    0     4
</code></pre>
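<p>From there, the counts can be attached with a left join, which keeps every dummy row and fills the gaps with zero (a sketch assuming <code>df</code> holds the observed counts as in the question):</p> <pre><code>target_df = (dummy_df
             .merge(df, on=['index', 'week', 'day', 'hour'], how='left')
             .fillna({'count': 0}))
</code></pre>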
python|pandas
1
7,784
53,163,862
What is the maximum number of pseudo-random numbers that can be generated with numpy.random before the sequence begins to repeat?
<p>I need to generate many hundreds of millions of random numbers for a clustering analysis. I am using numpy.random and was wondering if anyone knows the maximum number of pseudo-randoms that can be generated with numpy.random before the sequence begins to repeat? A quick look in the numpy documentation didn't help.</p> <p>I know I can generate numbers in chunks using different seeds, but I'm curious as to the maximum number.</p>
<p>It is, I believe, Mersenne Twister with period 2<sup>19937</sup>-1</p> <p><a href="https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.set_state.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.set_state.html</a></p>
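<p>A quick sketch to confirm which generator backs <code>numpy.random</code> on your installation:</p> <pre><code>import numpy as np
print(np.random.get_state()[0])  # 'MT19937'
</code></pre>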
python|numpy|random|numpy-random
3
7,785
21,189,107
Getting the index and value from a Series
<p>I'm having a bit of a slow moment with selection and indexing in pandas.</p> <p>I have a Date Time series from which I am trying to select certain elements, along with their date time index, in order to append them to a new series. Example:</p> <pre><code>import pandas as pd
x=pd.Series([11,12,13,14,15,16,17,18,19,20])
for i in range(len(x)):
    print x.ix[i]
</code></pre> <p>Gives the output:</p> <pre><code>runfile('C:/Users/AClayton/WinPython-64bit-2.7.5.3/python-2.7.5.amd64/untitled6.py', wdir='C:/Users/AClayton/WinPython-64bit-2.7.5.3/python-2.7.5.amd64')
11
12
13
14
15
16
17
18
19
20
</code></pre> <p>I would like to have the output</p> <pre><code>1 11
2 12
3 13
4 14
5 15
etc
</code></pre> <p>Any way to get the index too? (I don't just want to print it, I want to append some to a new series. For example, add all the numbers divisible by two to a Series, along with their index.)</p> <p>Sorry if this is obvious, been a long day.</p>
<p>To get both index and value, you can iterate over series. Note that default index starts from <code>0</code>, not from <code>1</code>:</p> <pre><code>&gt;&gt;&gt; for i, v in x.iteritems(): ... print i, v ... 0 11 1 12 2 13 3 14 4 15 5 16 6 17 7 18 8 19 9 20 </code></pre> <p>Sure you can assign a custom index to a series:</p> <pre><code>&gt;&gt;&gt; x.index = range(1, 6)*2 &gt;&gt;&gt; for i, v in x.iteritems(): ... print i, v ... 1 11 2 12 3 13 4 14 5 15 1 16 2 17 3 18 4 19 5 20 </code></pre> <p>Don't completely get what you mean with <em>"I want to append some to a new series"</em>, but you can access index with <code>index</code> property:</p> <pre><code>&gt;&gt;&gt; x.index Int64Index([1, 2, 3, 4, 5, 1, 2, 3, 4, 5], dtype='int64') </code></pre> <p>or move it into series, making a dataframe:</p> <pre><code>&gt;&gt;&gt; x.reset_index() index 0 0 1 11 1 2 12 2 3 13 3 4 14 4 5 15 5 1 16 6 2 17 7 3 18 8 4 19 9 5 20 [10 rows x 2 columns] </code></pre>
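<p>For the divisible-by-two example from the question, a boolean mask keeps the values together with their index; a minimal sketch on the original series (before the index was reassigned):</p> <pre><code>&gt;&gt;&gt; evens = x[x % 2 == 0]
&gt;&gt;&gt; evens
1    12
3    14
5    16
7    18
9    20
dtype: int64
</code></pre>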
python|indexing|pandas|append|series
4
7,786
63,355,338
extracting data with different columns from different rows from pd dataframe
<p>Is there any way to extract data from a dataframe by selecting different columns from different rows? For example, I want the 3rd column's data from rows 1-100 and the 2nd column's data from rows 101-200. I am currently using a for loop, but it would be nice if there were a faster option.</p> <pre><code>low_data = []
up_data = []
for i in range(200):
    low = df.iloc[i, 2]
    up = df.iloc[i, 1]
    low_data.append(low)
    up_data.append(up)
</code></pre>
<p>You could create an indexer:</p> <pre><code>ix = np.array([np.arange(100), np.arange(100, 200)]).T
iy = np.array([[1, 2]])
</code></pre> <p>and the values are then</p> <pre class="lang-py prettyprint-override"><code>df.values[ix, iy]
</code></pre> <p>where <code>arr</code>-style broadcasting pairs each row block with its column.</p>
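<p>An equivalent sketch without the fancy-indexing machinery, using plain positional slices for the two blocks the question describes (rows 0-99 from the 3rd column, rows 100-199 from the 2nd):</p> <pre><code>low_data = df.iloc[0:100, 2].to_numpy()
up_data = df.iloc[100:200, 1].to_numpy()
</code></pre>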
python|pandas
0
7,787
63,630,933
Expected input batch_size (1) to match target batch_size (10)
<p>I'm trying to train on the MNIST dataset in PyTorch. I'm using cross-entropy loss and a simple neural network with layer sizes (784, 512, 128, 10). But I'm getting this error:</p> <pre><code>ValueError: Expected input batch_size (1) to match target batch_size (10).
</code></pre> <p>My model:</p> <pre><code>class Netz(nn.Module):
    def __init__(self,n_input_features):
        super(Netz,self).__init__()
        self.linear=nn.Linear(784,512,bias=True)
        self.l1=nn.Linear(512,128,bias=True)
        self.l2=nn.Linear(128,10,bias=True)

    def forward(self,x):
        x=F.relu(self.linear(x))
        x=F.relu(self.l1(x))
        return F.softmax(self.l2(x),dim=1)

model=Netz(784)
</code></pre> <p>Data preprocessing:</p> <pre><code>mnist = keras.datasets.mnist

#Copying data
(x_train, y_train),(x_test, y_test) = mnist.load_data()

#One-hot encoding the labels
y_train = keras.utils.to_categorical(y_train)
y_test = keras.utils.to_categorical(y_test)

#Flattening the images
x_train_reshaped = x_train.reshape((60000,784))
x_test_reshaped = x_test.reshape((10000,784))

#Normalizing the inputs
x_train_nml = x_train_reshaped/255.0
x_test_nml = x_test_reshaped/255.0

comb_data = np.hstack([x_train_nml,y_train])
#np.random.shuffle(comb_data)

x_train = comb_data[:,:-10]
y_train = comb_data[:,-10:]

x_train=torch.from_numpy(x_train.astype(np.float32))
x_test=torch.from_numpy(x_test.astype(np.float32))
y_train=torch.from_numpy(y_train.astype(np.float32))
y_test=torch.from_numpy(y_test.astype(np.float32))
</code></pre> <p>Data loading:</p> <pre><code>class Data(Dataset):
    def __init__(self):
        self.x=x_train
        self.y=y_train
        self.len=self.x.shape[0]
    def __getitem__(self,index):
        return self.x[index],self.y[index]
    def __len__(self):
        return self.len
</code></pre> <p>Main:</p> <pre><code>criterion=nn.CrossEntropyLoss()
print(criterion)
optimizer=torch.optim.SGD(model.parameters(),lr=0.05)
dataset=Data()
train_data=DataLoader(dataset=dataset,batch_size=1,shuffle=False)

num_epochs=5
for epoch in range(num_epochs):
    for x,y in train_data:
        x = x.view(x.shape[0], -1)
        y_pred=model(x)
        #y=y.squeeze_()
        print(y_pred)
        loss=criterion(y_pred,y.flatten())
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
</code></pre> <p>Full error:</p> <pre><code>ValueError                                Traceback (most recent call last)
&lt;ipython-input-85-142a974fa5cd&gt; in &lt;module&gt;()
     10         #y=y.squeeze_()
     11         print(y_pred)
---&gt; 12         loss=criterion(y_pred,y.flatten())
     13         loss.backward()
     14         optimizer.step()

3 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    720             result = self._slow_forward(*input, **kwargs)
    721         else:
--&gt; 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(
    724                 _global_forward_hooks.values(),

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
    946     def forward(self, input: Tensor, target: Tensor) -&gt; Tensor:
    947         return F.cross_entropy(input, target, weight=self.weight,
--&gt; 948                                ignore_index=self.ignore_index, reduction=self.reduction)
    949
    950

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   2420     if size_average is not None or reduce is not None:
   2421         reduction = _Reduction.legacy_get_string(size_average, reduce)
-&gt; 2422     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
   2423
   2424

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   2214     if input.size(0) != target.size(0):
   2215         raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
-&gt; 2216                          .format(input.size(0), target.size(0)))
   2217     if dim == 2:
   2218         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

ValueError: Expected input batch_size (1) to match target batch_size (10).
</code></pre> <p>Can someone please tell me how to fix this error?</p>
<p>I believe that your</p> <pre><code>loss=criterion(y_pred,y.flatten()) </code></pre> <p>should be modified to</p> <pre><code>loss=criterion(y_pred.squeeze(0),y.flatten()) </code></pre> <p>to match the size at the first dimension.</p>
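<p>A related sketch worth knowing (an alternative approach, not taken from the answer above): <code>nn.CrossEntropyLoss</code> expects integer class indices of shape <code>(batch,)</code>, so the one-hot targets from the question can instead be converted before the loss call:</p> <pre><code>loss = criterion(y_pred, y.argmax(dim=1))  # (batch, 10) one-hot -&gt; (batch,) class indices
</code></pre>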
python|pytorch|mnist
0
7,788
21,603,275
How could I make logarithmic bins between 0 and 1?
<p>I want to bin my data logarithmically; the values are distributed between zero and one. I use this command:</p> <pre><code>nstep=10
loglvl=np.logspace(np.log10(0.0),np.log10(1.0),nstep)
</code></pre> <p>but it doesn't work. Any idea how it can be done in Python?</p>
<p>How about:</p> <pre><code>np.logspace(0.0, 1.0, nstep) / 10.
</code></pre> <p>Note that <code>np.log10(0.0)</code> is <code>-inf</code>, which is why your original call fails: the lower bound of a logarithmic range must be strictly positive. The expression above gives log-spaced values from 0.1 to 1.0.</p>
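<p>If the bins need to reach further down toward zero, a sketch with an explicit (strictly positive) lower bound:</p> <pre><code>import numpy as np

nstep = 10
loglvl = np.logspace(np.log10(1e-3), np.log10(1.0), nstep)  # 0.001 ... 1.0, log-spaced
</code></pre>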
python|numpy|scipy
2
7,789
21,750,012
Is there a way to make a complex number in NumPy with the accuracy of a Decimal type?
<p>I'm working in Python with NumPy arrays of complex numbers that extend well past the normal floating point limits of NumPy’s default <em>Complex</em> type (numbers greater than 10^500). I wanted to know if there was some way I could extend NumPy so that it will be able to handle complex numbers with this sort of magnitude. For example, is there a way to make a NumPy complex type that uses functionality from the Decimal module?</p> <p>I know there are resources available such as <a href="https://code.google.com/p/mpmath/" rel="nofollow noreferrer">mpmath</a> that could probably do what I need. However, it is a requirement of my project that I use NumPy.</p> <p>For anyone that's interested in why I need these enormous numbers, it's because I'm working on a <a href="https://en.wikipedia.org/wiki/Numerical_relativity" rel="nofollow noreferrer">numerical relativity</a> simulation of the early universe.</p>
<p>Depending on your platform, you may have support for complex192 and/or complex256. These are generally not available on Intel platforms under Windows, but they are on some others—if your code is running on <a href="https://en.wikipedia.org/wiki/Solaris_%28operating_system%29" rel="nofollow noreferrer">Solaris</a> or on a supercomputer cluster, it may support one of these types. On Linux, you may even find them available or you could create your array using <code>dtype=object</code> and then use bigfloat or gmpy2.mpc.</p> <ol> <li>complex256 numbers up to 10<sup>4932</sup> + 10<sup>4932</sup>j</li> <li>complex192 ditto, but with less precision</li> <li>I have even seen mention of NumPy and complex512...</li> </ol>
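<p>A sketch of the <code>dtype=object</code> route mentioned above, assuming gmpy2 is installed (the precision and values here are made up for illustration):</p> <pre><code>import numpy as np
import gmpy2

gmpy2.get_context().precision = 1700  # bits; comfortably covers ~10**500
big = lambda re, im: gmpy2.mpc(gmpy2.mpfr(re), gmpy2.mpfr(im))

z = np.array([big('1e500', '2e500'), big('3e400', '0')], dtype=object)
print(z * z)  # element-wise arithmetic dispatches to gmpy2
</code></pre>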
python|numpy|decimal|precision|complex-numbers
2
7,790
24,821,390
Read columns from file where first row is string
<p>What I would like to do is to read in columns from a <code>.dat</code> file. I have been able to do this using <code>scitools.filetable.read_columns()</code>. The problem that I am having is that the first row of my <code>.dat</code> file contains <code>strings</code>. How can I skip the first row?</p> <p>So for a short example I have the following <code>.dat</code> file:</p> <pre><code>a b c d e 1 3 5 7 9 2 4 6 8 10 </code></pre> <p>From this type of <code>.dat</code> file I want to create arrays for each column without the <code>string</code>. When the .dat file would not contain <code>a,b,c,d,e</code> it would be very easy as it would just be:</p> <pre><code>import scitools.filetable fp = open("blabla.dat", "r") a, b, c, d, e = scitools.filetable.read_columns(fp) </code></pre> <p>Now <code>a</code> would read: <code>[1 2]</code>.</p> <p>However, when I try to do the same thing when <code>a</code>, <code>b</code>, <code>c</code>, <code>d</code>, and <code>e</code> are part of the <code>.dat</code> file, as indicated in my example, <code>scitools</code> does not work since it cannot convert a <code>string</code> to a <code>float</code>. How can I open the file without the first row or create the desired columns?</p>
<h1>Use <code>fp.next</code> to advance one line further</h1> <p>As <code>fp</code> is a file object opened in text mode, iterating over it reads it line by line.</p> <p>You can ask <code>fp</code> to read one line further with <code>fp.next()</code> and then pass it to <code>scitools.filetable.read_columns(fp)</code>.</p> <p>Your code would look like this after the modification:</p> <pre><code>import scitools.filetable
fp = open("blabla.dat", "r")
fp.next()
a, b, c, d, e = scitools.filetable.read_columns(fp)
</code></pre> <p>Or using a context manager for closing the file:</p> <pre><code>import scitools.filetable
with open("blabla.dat", "r") as fp:
    fp.next()
    a, b, c, d, e = scitools.filetable.read_columns(fp)
</code></pre>
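<p>One caveat worth flagging: <code>fp.next()</code> is Python 2 only; file objects in Python 3 have no <code>.next()</code> method. If this code ever runs under Python 3, the built-in <code>next()</code> does the same job:</p> <pre><code>import scitools.filetable

with open("blabla.dat", "r") as fp:
    next(fp)  # skip the header row
    a, b, c, d, e = scitools.filetable.read_columns(fp)
</code></pre>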
python|numpy|scitools
1
7,791
24,777,820
Numpy array within a specific range
<p>I have a numpy array, <code>z</code>, of around 400,000 values. The range of <code>z</code> is from <code>0 to 2.9</code>.</p> <p>I want to divide this array into four parts:</p> <pre><code>z1 = 0.0&lt;z&lt;=0.5
z2 = 0.5&lt;z&lt;=1.0
z3 = 1.0&lt;z&lt;=1.5
z4 = 1.5&lt;z&lt;=2.9
</code></pre> <p>I have been using:</p> <pre><code>z1 = np.where(np.logical_and(z&gt;0, z&lt;=0.5))
z2 = np.where(np.logical_and(z&gt;0.5, z&lt;=1.0))
</code></pre> <p>The above does not seem to give me <code>z1</code> or <code>z2</code> within the required range (approximately, <code>z1</code> should be an array of length 100,000 with values in the range <code>0&lt;z&lt;=0.5</code>)!! <strong>I have tried it with simple arrays of length 100 or so and it works.</strong></p> <p>What am I doing wrong here? Or is there another way of dividing my array into four parts?</p>
<p>The problem is that <code>np.where()</code> returns the indices. To get the values you can do:</p> <pre><code>z1 = np.take(z, np.where(np.logical_and(z&gt;0, z&lt;=0.5))[0]) </code></pre> <p>and so forth...</p> <p>It is perhaps faster to directly use the mask to obtain the values through fancy indexing:</p> <pre><code>z1 = z[np.logical_and(z&gt;0, z&lt;=0.5)] </code></pre>
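<p>For completeness, a sketch of the fancy-indexing form applied to all four requested ranges (the parenthesised masks combined with <code>&amp;</code> are equivalent to <code>np.logical_and</code>):</p> <pre><code>z1 = z[(z &gt; 0.0) &amp; (z &lt;= 0.5)]
z2 = z[(z &gt; 0.5) &amp; (z &lt;= 1.0)]
z3 = z[(z &gt; 1.0) &amp; (z &lt;= 1.5)]
z4 = z[(z &gt; 1.5) &amp; (z &lt;= 2.9)]
</code></pre>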
python|arrays|numpy|range
0
7,792
24,847,560
best way to plot web based graph in Python
<p>Is there a python package to generate web based interactive bar graphs?</p> <p>I have the following requirements:</p> <ul> <li><p>I cannot use plotly, matplotlib as they depend on numpy (lots of dependencies). My environment cannot install any such packages, however I can try using the source of the package.</p></li> <li><p>I need cross-platform package</p></li> </ul>
<p>You need to rely on d3.js if you want to do it without any packages: generate the data in Python and render it in d3.js for plotting and interactivity. This is not very reusable and not suitable if the project is huge. <a href="http://d3js.org" rel="nofollow noreferrer">http://d3js.org</a></p> <p>If you are looking for a full stack (which will generate the plots for you, so you can host them on a web server), look at docs.bokeh.org. It depends on:</p> <pre><code>Jinja2
numpy
packaging
pillow
python-dateutil
PyYAML
six
tornado
</code></pre> <p>These are automatically installed by using the continuum.io Anaconda/Miniconda Python distribution.</p> <p>Using the conda package manager, you do not need to worry about installing binary packages; the Anaconda Python distribution offers everything you need in your scenario. It has the <code>conda</code> package manager, which installs platform-independent binaries with all the dependencies. That means you do not need an extra package manager or a compiler (GCC) to build binaries from scratch at all.</p> <p>I have tested conda on bare-bones Linux systems without packages and it works perfectly. <a href="http://conda.pydata.org/miniconda.html" rel="nofollow noreferrer">http://conda.pydata.org/miniconda.html</a> To do it:</p> <p>Download Miniconda:</p> <pre><code>wget http://repo.continuum.io/miniconda/Miniconda-3.5.5-Linux-x86_64.sh
</code></pre> <p>Install it (no root needed):</p> <pre><code>bash Miniconda-3.5.5-Linux-x86_64.sh
</code></pre> <p>then do:</p> <pre><code>conda create -n plotting_env python
conda update conda
conda install bokeh
</code></pre> <p>Conda is a full (dependency-resolving), cross-platform package manager which already has virtualenv-style support. It installs binaries of all needed libraries (including C libraries) without needing the OS-provided package manager. Then code away!</p>
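<p>A minimal Bokeh sketch of an interactive bar chart written to a standalone HTML file (the data is made up; the API is as in recent Bokeh versions):</p> <pre><code>from bokeh.plotting import figure, output_file, show

categories = ['a', 'b', 'c']
p = figure(x_range=categories, title='interactive bars')
p.vbar(x=categories, top=[3, 7, 5], width=0.8)

output_file('bars.html')  # standalone HTML, no server required
show(p)
</code></pre>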
python|numpy|plotly
1
7,793
24,528,218
Aligning array values
<p>Lets say I have two arrays, both with values representing a brightness of the sun. The first array has values measured in the morning and second one has values measured in the evening. In the real case I have around 80 arrays. I'm going to plot the pictures using matplotlib. The plotted circle will (in both cases) be the same size. However the position of the image will change a bit because of the Earth's motion and this should be avoided.</p> <pre><code>&gt;&gt;&gt; array1 [0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0] [0, 0, 1, 3, 1, 0] [0, 0, 1, 1, 2, 0] [0, 0, 1, 1, 1, 0] [0, 0, 0, 0, 0, 0] &gt;&gt;&gt; array2 [0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0] [0, 0, 1, 2, 1, 0] [0, 0, 1, 1, 4, 0] [0, 0, 1, 1, 1, 0] </code></pre> <p>In the example above larger values mean brighter spots and zero values are plotted as black space. The arrays are always the same size. How do I align the significant values (not zero) in array2 with the ones in array1? So the outcome should be like this.</p> <pre><code>&gt;&gt;&gt; array2(aligned) [0, 0, 0, 0, 0, 0] [0, 0, 0, 0, 0, 0] [0, 0, 1, 2, 1, 0] [0, 0, 1, 1, 4, 0] [0, 0, 1, 1, 1, 0] [0, 0, 0, 0, 0, 0] </code></pre> <p>This must be done in order to post-process arrays in a meaningful way e.q. calculating average or sum etc. Note! Finding a mass center point and aligning accordingly doesn't work because of possible high values on the edges that change during a day.</p>
<p>One thing that may cause problems with this kind of data is that the images are not nicely aligned with the pixels. I try to illustrate my point with two arrays with a square in them:</p> <pre><code>array1:
0 0 0 0 0
0 2 2 2 0
0 2 2 2 0
0 2 2 2 0
0 0 0 0 0

array2:
0 0 0 0 0
0 1 2 2 1
0 1 2 2 1
0 1 2 2 1
0 0 0 0 0
</code></pre> <p>As you see, the limited resolution is a challenge, as the image has moved 0.5 pixels.</p> <p>Of course, it is easy to calculate the COG of both of these, and see that it is (row,column=2,2) for the first array and (2, 2.5) for the second array. But if we move the second array by .5 to the left, we get:</p> <pre><code>array2_shifted:
0    0    0    0    0
0.5  1.5  2.0  1.5  0.5
0.5  1.5  2.0  1.5  0.5
0.5  1.5  2.0  1.5  0.5
0    0    0    0    0
</code></pre> <p>So that things start to spread out.</p> <p>Of course, it may be that your arrays are large enough so that you can work without worrying about subpixels, but if you only have a few or a few dozen pixels in each direction, this may become a nuisance.</p> <p>One way out of this is to first increase the image size by suitable extrapolation (such as is done in an image processing program; the <code>cv2</code> module is full of possibilities here). Then the images can be fitted together with single-pixel precision and downsampled back.</p> <hr /> <p>In any case you'll need a method to find out where the fit between the images is the best. There are a lot of choices to make. One important thing to notice is that you may not want to align the images with the first image; you may want to align all images with a reference. The reference could in this case be a perfect circle in the center of the image. Then you will just need to move all images to match this reference.</p> <p>Once you have chosen your reference, you need to choose the method which gives you some metrics about the alignment of the images. There are several possibilities, but you may start with these:</p> <ol> <li><p>Calculate the center of gravity of the image.</p> </li> <li><p>Calculate the correlation between an image and the reference. The highest point(s) of the resulting correlation array give you the best match.</p> </li> <li><p>Do either of the above but only after doing some processing on the image (typically limiting the dynamic range at each or both ends).</p> </li> </ol> <p>I would start by something like this:</p> <ul> <li>possibly upsample the image (if the resolution is low)</li> <li>limit the high end of the dynamic range (e.g. <code>clipped=np.clip(image,0,max_intensity)</code>)</li> <li>calculate the center of gravity (e.g. <code>scipy.ndimage.center_of_mass(clipped)</code>)</li> <li>translate the image by the offset of the center of gravity</li> </ul> <p>Translation of a 2D array requires a bit of code but should not be excessively difficult. If you are sure you have black all around, you can use:</p> <pre><code>translated = np.roll(np.roll(original, deltar, axis=0), deltac, axis=1)
</code></pre> <p>This rolls the leftmost pixels to the right (or vice versa). If that is bad, then you'll need to zero them out. (Or have a look at: <a href="https://stackoverflow.com/questions/2777907/python-numpy-roll-with-padding">python numpy roll with padding</a>.)</p> <p>A word of warning about the alignment procedures: the simplest methods (COG, correlation) fail if you have an intensity gradient across the image. Because of this you may want to look for edges and then correlate. The intensity limiting also helps here, if your background is really black.</p>
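<p>A sketch tying the suggested steps together (clip, centre of gravity via <code>scipy.ndimage</code>, then roll); the rounding to whole pixels deliberately ignores the subpixel issue discussed above:</p> <pre><code>import numpy as np
from scipy import ndimage

def align_to_center(img, max_intensity=None):
    # optionally limit the high end of the dynamic range before the COG
    clipped = np.clip(img, 0, max_intensity) if max_intensity is not None else img
    r, c = ndimage.center_of_mass(clipped)
    dr = int(round(img.shape[0] / 2.0 - r))
    dc = int(round(img.shape[1] / 2.0 - c))
    # np.roll wraps around; this is fine if the borders are truly black
    return np.roll(np.roll(img, dr, axis=0), dc, axis=1)
</code></pre>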
python|arrays|numpy|matplotlib
3
7,794
53,474,945
Cannot plot dataframe as barh because TypeError: Empty 'DataFrame': no numeric data to plot
<p>I have been all over this site and google trying to solve this problem.<br> It appears as though I'm missing a fundamental concept in making a plottable dataframe.<br> I've tried to ensure that I have a column of strings for the "Teams" and a column of ints for the "Points"<br> Still I get: TypeError: Empty 'DataFrame': no numeric data to plot<br></p>

<pre><code>import csv
import pandas
import numpy
import matplotlib.pyplot as plt
from matplotlib.ticker import StrMethodFormatter

set_of_teams = set()

def load_epl_games(file_name):
    with open(file_name, newline='') as csvfile:
        reader = csv.DictReader(csvfile)
        raw_data = {"HomeTeam": [], "AwayTeam": [], "FTHG": [], "FTAG": [], "FTR": []}
        for row in reader:
            set_of_teams.add(row["HomeTeam"])
            set_of_teams.add(row["AwayTeam"])
            raw_data["HomeTeam"].append(row["HomeTeam"])
            raw_data["AwayTeam"].append(row["AwayTeam"])
            raw_data["FTHG"].append(row["FTHG"])
            raw_data["FTAG"].append(row["FTAG"])
            raw_data["FTR"].append(row["FTR"])
        data_frame = pandas.DataFrame(data=raw_data)
        return data_frame

def calc_points(team, table):
    points = 0
    for row_number in range(table["HomeTeam"].count()):
        home_team = table.loc[row_number, "HomeTeam"]
        away_team = table.loc[row_number, "AwayTeam"]
        if team in [home_team, away_team]:
            home_team_points = 0
            away_team_points = 0
            winner = table.loc[row_number, "FTR"]
            if winner == 'H':
                home_team_points = 3
            elif winner == 'A':
                away_team_points = 3
            else:
                home_team_points = 1
                away_team_points = 1
            if team == home_team:
                points += home_team_points
            else:
                points += away_team_points
    return points

def get_goals_scored_conceded(team, table):
    scored = 0
    conceded = 0
    for row_number in range(table["HomeTeam"].count()):
        home_team = table.loc[row_number, "HomeTeam"]
        away_team = table.loc[row_number, "AwayTeam"]
        if team in [home_team, away_team]:
            if team == home_team:
                scored += int(table.loc[row_number, "FTHG"])
                conceded += int(table.loc[row_number, "FTAG"])
            else:
                scored += int(table.loc[row_number, "FTAG"])
                conceded += int(table.loc[row_number, "FTHG"])
    return (scored, conceded)

def compute_table(df):
    raw_data = {"Team": [], "Points": [], "GoalDifference":[], "Goals": []}
    for team in set_of_teams:
        goal_data = get_goals_scored_conceded(team, df)
        raw_data["Team"].append(team)
        raw_data["Points"].append(calc_points(team, df))
        raw_data["GoalDifference"].append(goal_data[0] - goal_data[1])
        raw_data["Goals"].append(goal_data[0])
    data_frame = pandas.DataFrame(data=raw_data)
    data_frame = data_frame.sort_values(["Points", "GoalDifference", "Goals"], ascending=[False, False, False]).reset_index(drop=True)
    data_frame.index = numpy.arange(1,len(data_frame)+1)
    data_frame.index.names = ["Finish"]
    return data_frame

def get_finish(team, table):
    return table[table.Team==team].index.item()

def get_points(team, table):
    return table[table.Team==team].Points.item()

def display_hbar(tables):
    raw_data = {"Team": [], "Points": []}
    for row_number in range(tables["Team"].count()):
        raw_data["Team"].append(tables.loc[row_number+1, "Team"])
        raw_data["Points"].append(int(tables.loc[row_number+1, "Points"]))
    df = pandas.DataFrame(data=raw_data)
    #df = pandas.DataFrame(tables, columns=["Team", "Points"])
    print(df)
    print(df.dtypes)
    df["Points"].apply(int)
    print(df.dtypes)
    df.plot(kind='barh',x='Points',y='Team')

games = load_epl_games('epl2016.csv')
final_table = compute_table(games)
#print(final_table)
#print(get_finish("Tottenham", final_table))
#print(get_points("West Ham", final_table))
display_hbar(final_table)
</code></pre> <p><br> The output:<br></p> <pre><code>              Team  Points
0          Chelsea      93
1        Tottenham      86
2         Man City      78
3        Liverpool      76
4          Arsenal      75
5       Man United      69
6          Everton      61
7      Southampton      46
8      Bournemouth      46
9        West Brom      45
10        West Ham      45
11       Leicester      44
12           Stoke      44
13  Crystal Palace      41
14         Swansea      41
15         Burnley      40
16         Watford      40
17            Hull      34
18   Middlesbrough      28
19      Sunderland      24

Team      object
Points     int64
dtype: object
Team      object
Points     int64
dtype: object

Traceback (most recent call last):
  File "C:/Users/Michael/Documents/Programming/Python/Premier League.py", line 99, in &lt;module&gt;
    display_hbar(final_table)
  File "C:/Users/Michael/Documents/Programming/Python/Premier League.py", line 92, in display_hbar
    df.plot(kind='barh',x='Points',y='Team')
  File "C:\Program Files (x86)\Python36-32\lib\site-packages\pandas\plotting\_core.py", line 2941, in __call__
    sort_columns=sort_columns, **kwds)
  File "C:\Program Files (x86)\Python36-32\lib\site-packages\pandas\plotting\_core.py", line 1977, in plot_frame
    **kwds)
  File "C:\Program Files (x86)\Python36-32\lib\site-packages\pandas\plotting\_core.py", line 1804, in _plot
    plot_obj.generate()
  File "C:\Program Files (x86)\Python36-32\lib\site-packages\pandas\plotting\_core.py", line 258, in generate
    self._compute_plot_data()
  File "C:\Program Files (x86)\Python36-32\lib\site-packages\pandas\plotting\_core.py", line 373, in _compute_plot_data
    'plot'.format(numeric_data.__class__.__name__))
TypeError: Empty 'DataFrame': no numeric data to plot
</code></pre> <p><br> What am I doing wrong in my display_hbar function that is preventing me from plotting my data?</p> <p>Here is the <a href="http://www.filedropper.com/epl2016" rel="nofollow noreferrer">csv file</a></p>
<pre><code>df.plot(x = "Team", y="Points", kind="barh"); </code></pre> <p><a href="https://i.stack.imgur.com/1kN2a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1kN2a.png" alt="enter image description here"></a></p>
python-3.x|pandas|matplotlib
3
7,795
53,562,669
how to average all rows of the data set when a certain column's row has similar value
<p>Consider the Time column: there are two rows whose Time value is 18, so I want to average those two rows that contain the value 18. <a href="https://i.stack.imgur.com/II1hS.png" rel="nofollow noreferrer">This is a picture of the data set.</a></p>
<p>I'm not sure how your columns and rows are indexed in your actual code, or how the table was generated, so you can't just copy and paste this, but something in this vein might work.</p> <pre><code>for i in range(len(time) - 1):
    if time[i] == time[i + 1]:          # two neighbouring rows share the same Time value
        rows[i] = (rows[i] + rows[i + 1]) / 2
        rows.pop(i + 1)
</code></pre> <p>Again, this code will almost certainly not work as is; it is basically pseudocode in Python syntax. I have no idea what your code looks like, so I can't provide anything more specific, but it should give you an idea of how to approach this sort of problem.</p>
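<p>Since the question is tagged pandas, a more idiomatic sketch is to let <code>groupby</code> do the averaging. This assumes the data already sits in a DataFrame <code>df</code> with a column literally named <code>Time</code>; both names are assumptions, since the original table is only shown as an image:</p> <pre><code>import pandas as pd

# hypothetical stand-in for the table shown in the question
df = pd.DataFrame({"Time": [17, 18, 18, 19],
                   "Value": [1.0, 2.0, 4.0, 5.0]})

# rows sharing the same Time are collapsed into their column-wise mean
averaged = df.groupby("Time", as_index=False).mean()
print(averaged)
</code></pre>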
python|pandas
0
7,796
53,753,929
I can't solve issue "axis -1 is out of bounds for array of dimension 0"
<p>I'm trying to model the motion of a spring pendulum and this is my code:</p> <pre><code>import numpy as np
from scipy.integrate import odeint
from numpy import sin, cos, pi, array
import matplotlib.pyplot as plt

#Specify initial conditions
init = array([pi/18, 0]) # initial values

def deriv(z, t):
    x,y=z
    dy=np.diff(y,1)
    dy2=np.diff(y,2)
    dx=np.diff(x,1)
    dx2=np.diff(x,2)
    dt=np.diff(t,1)
    dt2=np.diff(t,1)
    dx2dt2=(4+x)*(dydt)^2-5*x+9.81*cos(y)
    dy2dt2=(-9.81*sin(y)-2*(dxdt)*(dydt))/(l+x)
    return np.array([dx2dt2,dy2dt2])

time = np.linspace(0.0,10.0,1000)
y = odeint(deriv,init,time)

plt.xlabel("time")
plt.ylabel("y")
plt.plot(time, y)
plt.show()
</code></pre> <p>I keep getting the error </p> <pre><code>Traceback (most recent call last):
  File "/Users/cnoxon/Desktop/GRRR.py", line 24, in &lt;module&gt;
    y = odeint(deriv,init,time)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/integrate/odepack.py", line 233, in odeint
    int(bool(tfirst)))
  File "/Users/cnoxon/Desktop/GRRR.py", line 13, in deriv
    dy=np.diff(y,1)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/lib/function_base.py", line 1163, in diff
    axis = normalize_axis_index(axis, nd)
numpy.core._internal.AxisError: axis -1 is out of bounds for array of dimension 0
</code></pre> <p>I'm a complete beginner to Python so I won't understand most of the terminology, so please bear with me. How do I fix this problem? I'm trying to plot the solutions to the two equations </p> <pre><code>dx2dt2=(4+x)*(dydt)^2-5*x+9.81*cos(y)
dy2dt2=(-9.81*sin(y)-2*(dxdt)*(dydt))/(l+x)
</code></pre> <p>but I'm having a lot of trouble. Would someone please explain to me how I should rewrite my code to resolve this?</p> <p>Thank you!</p>
<p>The problem happens because <code>x</code> and <code>y</code> are scalars, not arrays, so you can't do <code>np.diff(y,1)</code>.</p> <p>But your problem is deeper. Each entry of the state vector must fully describe your system; this means that every value needed to compute <code>dx2dt2</code> and <code>dy2dt2</code> must be in this vector. So the state has to be a list of <code>[x, y, dxdt, dydt]</code>. (Adapt <code>init</code> to correspond to this.)</p> <p>Then your <code>deriv</code> function just has to return the derivative of such a vector, which is <code>[dxdt, dydt, dx2dt2, dy2dt2]</code>. Your <code>deriv</code> function becomes very simple!</p> <pre><code>def deriv(z, t):
    x, y, dxdt, dydt = z
    dx2dt2 = (4+x)*(dydt)**2 - 5*x + 9.81*cos(y)
    dy2dt2 = (-9.81*sin(y) - 2*(dxdt)*(dydt))/(l+x)
    return np.array([dxdt, dydt, dx2dt2, dy2dt2])
</code></pre> <hr> <p>And you have two other little errors in your original code: use <code>**</code> instead of <code>^</code> in Python (already done above), and I think you changed a <code>1</code> into an <code>l</code>...</p>
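<p>For completeness, a minimal runnable sketch of the whole setup could look like the following. The constant <code>l</code> is assumed to be <code>1</code> here (per the remark above), and the initial velocities are assumed to be zero; both are guesses, not values taken from the question:</p> <pre><code>import numpy as np
from numpy import sin, cos, pi
from scipy.integrate import odeint
import matplotlib.pyplot as plt

l = 1.0                            # assumed value of the constant l
init = [0.0, pi/18, 0.0, 0.0]      # [x, y, dx/dt, dy/dt], assumed initial state

def deriv(z, t):
    x, y, dxdt, dydt = z
    dx2dt2 = (4 + x)*dydt**2 - 5*x + 9.81*cos(y)
    dy2dt2 = (-9.81*sin(y) - 2*dxdt*dydt)/(l + x)
    return [dxdt, dydt, dx2dt2, dy2dt2]

time = np.linspace(0.0, 10.0, 1000)
sol = odeint(deriv, init, time)    # columns: x, y, dx/dt, dy/dt

plt.plot(time, sol[:, 1])          # plot the angle y against time
plt.xlabel("time")
plt.ylabel("y")
plt.show()
</code></pre>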
python|numpy|indexoutofboundsexception|odeint|index-error
3
7,797
53,645,041
CSV Files and pandas
<p>I assume this is a trick question on this homework I'm working on, but maybe it's not?</p> <p>What object do you get after reading a csv file?</p> <p>data frame</p> <p>character vector</p> <p>panel</p> <p>all of the above</p> <p>From what I know, you can use pandas to read a csv file into a dataframe. But I know a panel is a data structure in pandas too... a character vector I've never even heard of.</p> <p>Anyone got any ideas? I'm fairly certain the answer is just dataframe, but hey, you never know.</p>
<p>When you read a CSV file into a variable, it is stored as a <code>pandas.core.frame.DataFrame</code> object, which you are familiar with.</p> <p>As for <code>Panel</code>, which represents wide-format panel data stored as a 3-dimensional array: it has been deprecated since version 0.20.0, as listed in the <a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.Panel.html" rel="nofollow noreferrer">Pandas Panel</a> documentation.</p>
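<p>You can verify this yourself with a quick check; the file name below is just a placeholder for any CSV you have at hand:</p> <pre><code>import pandas as pd

df = pd.read_csv("some_file.csv")    # any CSV file will do
print(type(df))                      # &lt;class 'pandas.core.frame.DataFrame'&gt;
</code></pre>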
python|pandas|csv
1
7,798
53,619,239
How to adjust Moving Average for weekly analysis?
<p>I need to plot a moving average on the basis of weekly intervals, like a 3-week interval (21 days), but when adjusting for the missing dates it now counts 0 values and thus gives an incorrect result.</p> <pre><code>from nsepy import get_history as gh
from datetime import date
import pandas as pd

nifty = gh(symbol="NIFTY IT", start=date(2015,1,1), end=date(2016,1,3), index=True)

idx = pd.date_range('01-01-2015', '01-01-2016')
nifty.index = pd.DatetimeIndex(nifty.index)
nifty = nifty.reindex(idx, fill_value=0)

nifty["3weekMA"] = nifty["Close"].rolling(21).mean()
nifty[nifty.Open != 0]
</code></pre> <p>What can be done to tackle that?</p> <p>This is the actual result: <a href="https://i.stack.imgur.com/4xlj9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4xlj9.png" alt="enter image description here"></a></p> <p>And the desired result should be something like:</p> <p><a href="https://i.stack.imgur.com/UXA0k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UXA0k.png" alt="enter image description here"></a></p> <p>This is because the moving average for Close must be in the range of 11000 and not 8000.</p>
<p>The simplest thing that comes to mind is to just remove the weekend values from your data:</p> <pre><code>nifty = nifty[nifty['Close'] != 0]
</code></pre> <p>And then perform the moving average:</p> <pre><code>nifty["3weekMA"] = nifty["Close"].rolling(15).mean()
</code></pre> <p>Just use 15 instead of 21, and it will work as it should. There are a few caveats, though. The rolling mean gives the mean of the last 15 values, but it places this result at the 15th value (the 21st in your calendar-day version), so the resulting plot would look something like this:</p> <p><a href="https://i.stack.imgur.com/qwady.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qwady.png" alt="enter image description here"></a></p> <p>So to tackle this, all we need to do is shift the newly found moving average up, or just plot the Close values after the first 7 and before the last 7 along with the moving-average values, which would look something like:</p> <pre><code>import matplotlib.pyplot as plt

plt.figure(figsize=(10,8))
plt.plot(nifty['Close'].values.tolist()[7:-7])
plt.plot(nifty['3weekMA'].values.tolist()[14:])
</code></pre> <p><a href="https://i.stack.imgur.com/jZONv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jZONv.png" alt="enter image description here"></a></p> <p>Well, the visualization is just for representation purposes; I hope you get the gist of what to do with such data. I hope this solves your problem, and yes, the moving-average values are indeed in the 11,000s and not in the 8,000s.</p> <p>Sample Output:</p> <pre><code>       Date      Open      High       Low     Close    Volume      Turnover       3weekMA
-------------------------------------------------------------------------------------------------
2015-01-15  11672.30  11774.50  11575.10  11669.85  13882213  1.764560e+10           NaN
2015-01-16  11708.85  11708.85  11582.85  11659.60  12368107  1.714690e+10           NaN
2015-01-19  11732.50  11797.60  11629.05  11642.75  13696381  1.183750e+10           NaN
2015-01-20  11681.80  11721.90  11635.70  11695.00  11021415  1.234730e+10           NaN
2015-01-21  11732.45  11838.30  11659.70  11813.70  18679282  1.973070e+10  11418.113333
2015-01-22  11832.55  11884.50  11782.95  11850.85  15715515  1.655670e+10  11460.456667
2015-01-23  11877.90  11921.00  11767.40  11885.15  30034833  2.001210e+10  11494.660000
2015-01-27  11915.60  11917.25  11679.55  11693.45  17005337  1.866840e+10  11524.320000
2015-01-28  11712.55  11821.80  11693.80  11809.55  16876897  1.937590e+10  11580.963333
2015-01-29  11812.35  11861.50  11728.75  11824.15  15520902  2.160790e+10  11641.506667
2015-01-30  11998.35  12003.35  11799.35  11824.75  18559078  2.905950e+10  11695.280000
2015-02-02  11871.35  11972.60  11847.80  11943.95  17272113  2.304050e+10  11731.566667
2015-02-03  11963.75  12000.65  11849.00  11963.90  21053605  1.770590e+10  11759.583333
</code></pre>
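<p>As a side note, one way to avoid the manual shifting altogether is to center the rolling window. This is only a sketch of the idea, applied to the same <code>nifty</code> frame after the zero rows have been dropped:</p> <pre><code>import matplotlib.pyplot as plt

# center=True places each 15-day mean in the middle of its window,
# so the moving average lines up with the Close values it summarises
nifty["3weekMA"] = nifty["Close"].rolling(15, center=True).mean()

nifty[["Close", "3weekMA"]].plot(figsize=(10, 8))
plt.show()
</code></pre>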
python|pandas|algorithm|dataframe|nsepy
1
7,799
53,363,060
keras unable to call model.predict_classes for multiple times
<pre><code>def predictOne(imgPath):
    model = load_model("withImageMagic.h5")
    image = read_image(imgPath)
    test_sample = preprocess(image)
    predicted_class = model.predict_classes(([test_sample]))
    return predicted_class
</code></pre> <p>I have already trained a model. In this function, I load my model, read a new image, do some preprocessing and finally predict its label. </p> <p>When I run my main.py file, this function is called and everything goes smoothly. However, after a couple of seconds, this function will be called again with another image and I get this error:</p> <blockquote> <pre><code>'Cannot interpret feed_dict key as Tensor: ' + e.args[0])
</code></pre> <p>TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder:0", shape=(5, 5, 1, 32), dtype=float32) is not an element of this graph.</p> </blockquote> <p>It's very strange that the function only works the first time. I tested multiple images and got the same behavior.</p> <p>Windows 10 - tensorflow-gpu with keras</p>
<p>Try loading the model from file outside the function, and pass the model object as an argument to the function: <code>def predictOne(imgPath, model)</code>. This will also be much faster, since the weights don't need to be loaded from disk every time a prediction is needed.</p> <p>If you want to keep loading the model inside the function, import the backend:</p> <pre><code>from keras import backend as K
</code></pre> <p>and call</p> <pre><code>K.clear_session()
</code></pre> <p>before loading the model.</p>
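<p>A rough sketch of the first suggestion, reusing the <code>read_image</code> and <code>preprocess</code> helpers from the question (they are assumed to exist exactly as in the original code, and the image path below is only a placeholder):</p> <pre><code>from keras.models import load_model

model = load_model("withImageMagic.h5")   # load the weights once, at module level

def predictOne(imgPath, model):
    image = read_image(imgPath)
    test_sample = preprocess(image)
    return model.predict_classes([test_sample])

# the function can now be called repeatedly without reloading the model
# predicted = predictOne("some_image.png", model)
</code></pre>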
python|tensorflow|machine-learning|keras|prediction
2