Dataset columns (dtype and min/max; string columns report length ranges):

| column     | dtype          | min   | max   |
|------------|----------------|-------|-------|
| Unnamed: 0 | int64          | 0     | 378k  |
| id         | int64          | 49.9k | 73.8M |
| title      | string lengths | 15    | 150   |
| question   | string lengths | 37    | 64.2k |
| answer     | string lengths | 37    | 44.1k |
| tags       | string lengths | 5     | 106   |
| score      | int64          | -10   | 5.87k |
2,000
55,870,212
How to convert a numpy array dtype=object to a sparse matrix?
I have a numpy array of `dtype=object` containing multiple other arrays as elements, and I need to convert it to a sparse matrix.

Ex:

```
a = np.array([np.array([1,0,2]),np.array([1,3])])

array([array([1, 0, 2]), array([1, 3])], dtype=object)
```

I have tried the solution given in [Convert numpy object array to sparse matrix](https://stackoverflow.com/questions/47845327/convert-numpy-object-array-to-sparse-matrix) with no success.

```
In [45]: M = sparse.coo_matrix(a)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-45-d75020bb3a38> in <module>()
----> 1 M = sparse.coo_matrix(a)

/home/arturcastiel/.local/lib/python3.6/site-packages/scipy/sparse/coo.py in __init__(self, arg1, shape, dtype, copy)
    183                     self._shape = check_shape(M.shape)
    184
--> 185                     self.row, self.col = M.nonzero()
    186                     self.data = M[self.row, self.col]
    187                     self.has_canonical_format = True

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```

As was explained in the comments, this is actually a jagged array. In essence, this array represents a graph that I have to convert to a sparse matrix so I can use the `scipy.sparse.csgraph.shortest_path` routine.

Thus,

```
np.array([np.array([1,0,2]),np.array([1,3])])
```

should become something such as:

```
(1,1)   1
(1,2)   0
(1,3)   2
(2,1)   1
(2,2)   3
```
You can't: this error arises when `coo_matrix` tries to find the nonzero elements of `a`. A sparse matrix just stores the nonzero elements of a matrix. Try

```
np.nonzero(a)
```

If your array contained lists instead of arrays, it would work, sort of:

```
In [615]: a = np.array([[1,0,1],[1,3]])
In [616]: np.nonzero(a)
Out[616]: (array([0, 1]),)
In [618]: sparse.coo_matrix(a)
Out[618]:
<1x2 sparse matrix of type '<class 'numpy.object_'>'
        with 2 stored elements in COOrdinate format>
In [619]: print(_)
  (0, 0)        [1, 0, 1]
  (0, 1)        [1, 3]
```

Note this is a (1,2)-shaped array with 2 nonzero elements, both of which are the lists (objects) of the original.

But `coo` format does little processing. It can't, for example, be converted to `csr` for computations:

```
In [622]: _618.tocsr()
---------------------------------------------------------------------------
TypeError: no supported conversion for types: (dtype('O'),)
```

If the array weren't jagged, it could be made into a useful sparse matrix:

```
In [623]: a = np.array([[1,0,1],[1,3,0]])
In [624]: a
Out[624]:
array([[1, 0, 1],
       [1, 3, 0]])
In [626]: sparse.coo_matrix(a)
Out[626]:
<2x3 sparse matrix of type '<class 'numpy.int64'>'
        with 4 stored elements in COOrdinate format>
In [628]: print(_)
  (0, 0)        1
  (0, 2)        1
  (1, 0)        1
  (1, 1)        3
```

Note that the 0 values have been omitted. In large useful sparse matrices, more than 90% of the elements are zero.

===

Here's a way of constructing a sparse matrix from your array of arrays. I build the `row`, `col`, `data` attributes of a `coo`-format matrix from the individual arrays in `a`:

```
In [630]: a = np.array([np.array([1,0,1]),np.array([1,3])])
In [631]: row, col, data = [],[],[]
In [632]: for i,n in enumerate(a):
     ...:     row.extend([i]*len(n))
     ...:     col.extend(np.arange(len(n)))
     ...:     data.extend(n)
     ...:
In [633]: row,col,data
Out[633]: ([0, 0, 0, 1, 1], [0, 1, 2, 0, 1], [1, 0, 1, 1, 3])
In [634]: M = sparse.coo_matrix((data, (row,col)))
In [635]: M
Out[635]:
<2x3 sparse matrix of type '<class 'numpy.int64'>'
        with 5 stored elements in COOrdinate format>
In [636]: print(M)
  (0, 0)        1
  (0, 1)        0
  (0, 2)        1
  (1, 0)        1
  (1, 1)        3
In [637]: M.A
Out[637]:
array([[1, 0, 1],
       [1, 3, 0]])
```

An alternative is to pad `a` to make a 2d numeric array, and make the sparse matrix from that. Padding a jagged list/array has been asked before, with various solutions. This is one of the easier ones to remember and use:

```
In [658]: alist = list(zip(*(itertools.zip_longest(*a,fillvalue=0))))
In [659]: alist
Out[659]: [(1, 0, 1), (1, 3, 0)]
In [661]: sparse.coo_matrix(alist)
Out[661]:
<2x3 sparse matrix of type '<class 'numpy.int64'>'
        with 4 stored elements in COOrdinate format>
In [662]: _.A
Out[662]:
array([[1, 0, 1],
       [1, 3, 0]])
```
python|numpy|scipy
2
2,001
64,815,718
Filter column value from other columns' values and turn the results into multiple lists Pandas
```
import pandas as pd

data = {"Country": ["AA", "BB","CC","DD","EE","FF","GG"],
        "1990": [0,1,1,1,0,1,1],
        "1991": [0,0,1,1,1,0,1],
        "1992": [1,1,1,1,1,0,0],
        "1993": [0,1,1,1,1,0,0]}
df = pd.DataFrame(data)
```

The goal is: for columns 1990-1993, if the value == 1, return the Country into 4 lists. I also want to give each list a name for its year, and I don't know how to do that.

Here is my try:

```
for i in range(1,5):
    print(df[(df == 1)].iloc[:7,0].to_list())
```

I got the output as 4 lists of NaNs... The desired output would be

```
c1990=["BB", "CC", "DD", "FF", "GG"]
c1991=["CC", "DD", "EE", "GG"]
c1992=["AA", "BB","CC","DD","EE"]
c1993=["BB","CC","DD","EE"]
```
One way, using a dict comprehension with `groupby` on `axis=1`:

```
res = {name: i.index[i[name]].tolist()
       for name, i in df.set_index("Country").astype(bool).groupby(level=0, axis=1)}

print (res)
{'1990': ['BB', 'CC', 'DD', 'FF', 'GG'],
 '1991': ['CC', 'DD', 'EE', 'GG'],
 '1992': ['AA', 'BB', 'CC', 'DD', 'EE'],
 '1993': ['BB', 'CC', 'DD', 'EE']}
```
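A simpler sketch that avoids the column-wise `groupby`, assuming the year columns are exactly these four, filters each column directly and produces the same dictionary:

```python
years = ["1990", "1991", "1992", "1993"]
res = {y: df.loc[df[y] == 1, "Country"].tolist() for y in years}
```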
python|pandas|list
0
2,002
64,945,683
Walk along 2D numpy array as long as values remain the same
**Short description**

I want to walk along a numpy 2D array, starting from different points in specified directions (either 1 or -1), until another column's value changes (see below).

**Current code**

First let's generate a dataset:

```
# Generate a big random dataset
# first column is id and second one is a number
np.random.seed(123)
c1 = np.random.randint(0,100,size = 1000000)
c2 = np.random.randint(0,20,size = 1000000)
c3 = np.random.choice([1,-1],1000000 )
m = np.vstack((c1, c2, c3)).T
m = m[m[:,0].argsort()]
```

Then I wrote the following code that starts at specific rows in the matrix (`start_points`), then keeps extending in the specified direction (`direction_array`) until the metadata changes:

```
def walk(mat, start_array):
    start_mat = mat[start_array]
    metadata = start_mat[:,1]
    direction_array = start_mat[:,2]

    walk_array = start_array
    while True:
        walk_array = np.add(walk_array, direction_array)
        try:
            walk_mat = mat[walk_array]
            walk_metadata = walk_mat[:,1]
            if sorted(metadata) != sorted(walk_metadata):
                raise IndexError
        except IndexError:
            return start_mat, mat[walk_array + (direction_array *-1)]

s = time.time()
for i in range(100000):
    start_points = np.random.randint(0,1000000,size = 3)
    res = walk(m, start_points)
```

**Question**

While the above code works fine, I think there must be an easier/more elegant way to walk along a numpy 2D array from different start points until the value of another column changes. For example, this requires me to slice the input array for every step in the while loop, which seems quite inefficient (especially when I have to run `walk` millions of times).
You don't have to index the whole input array inside the while loop. You can just use the column whose values you want to check.

I also refactored your code a little so there is no `while True` statement, and no `if` that raises an error for no particular reason.

Code:

```py
def walk(mat, start_array):
    start_mat = mat[start_array]
    metadata = sorted(start_mat[:,1])
    direction_array = start_mat[:,2]

    data = mat[:,1]
    walk_array = np.add(start_array, direction_array)
    try:
        while metadata == sorted(data[walk_array]):
            walk_array = np.add(walk_array, direction_array)
    except IndexError:
        pass
    return start_mat, mat[walk_array - direction_array]
```

In this particular case, if `len(start_array)` is a big number (thousands of elements) you could use `collections.Counter` instead of `sorted`, as it will be much faster.

I was also thinking of another approach: build an iterator of slices already oriented in the correct direction. This approach seems rather dirty, but I will post it anyway in case you find it useful:

```py
def walk(mat, start_array):
    start_mat = mat[start_array]
    metadata = sorted(start_mat[:,1])
    direction_array = start_mat[:,2]

    data = mat[:,1]
    walk_slices = zip(*[
        data[start_array[i]+direction_array[i]::direction_array[i]]
        for i in range(len(start_array))
    ])
    for step, walk_metadata in enumerate(walk_slices):
        if metadata != sorted(walk_metadata):
            break
    return start_mat, mat[start_array + (direction_array * step)]
```
python|arrays|numpy
0
2,003
64,731,533
Great accuracy on IMDB Sentiment Analysis. Is there any train data leakage I'm missing?
I'm getting an unusually high accuracy on a sentiment analysis classifier I'm testing with the Python `sklearn` library. This is usually some sort of training data leakage, but I can't figure out if that's the case.

My dataset has ~50k non-duplicated IMDB reviews.

```
import pandas as pd
import sklearn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from pprint import pprint
from time import time
from sklearn.metrics import classification_report,confusion_matrix,accuracy_score, roc_curve, auc, plot_confusion_matrix
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

imdb_data=pd.read_csv('../../data/home/data/tm/en-sentiment/imdb_reviews_train.csv')
imdb_data=imdb_data.drop_duplicates().reset_index(drop=True)
imdb_data['label'] = imdb_data.label.map(lambda x: int(1) if x =='pos' else int(0) if x =='neg' else np.nan)

x_train, x_test, y_train, y_test = train_test_split(imdb_data.text, imdb_data.label, test_size=0.30, random_state=2)

pipeline = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', SGDClassifier()),
])

parameters_final = {
    'vect__max_df': [0.3],
    'vect__min_df': [1],
    'vect__max_features': [None],
    'vect__ngram_range': [(1, 2)],
    'tfidf__use_idf': (True, False),
    'tfidf__norm': ['l2'],
    'tfidf__sublinear_tf': (True, False),
    'clf__alpha': (0.00001, 0.000001),
    'clf__penalty': ['elasticnet'],
    'clf__max_iter': [50],
}

grid_search = GridSearchCV(pipeline, parameters_final, n_jobs=-1, verbose=1, cv=3)
grid_search.fit(x_train, y_train)

y_pred = grid_search.predict(x_test)
print("Accuracy: ", sklearn.metrics.accuracy_score(y_true=y_test, y_pred=y_pred))
```

Output:

```
Accuracy:  0.8967533466687183
```

The review dataset can be found [here](https://github.com/ricardorei/lightning-text-classification/blob/master/data/imdb_reviews_train.csv).

Any clues?
A good way to test whether there is data leakage would be to check the performance on the validation set in the repository you linked, [here](https://github.com/ricardorei/lightning-text-classification/blob/master/data/imdb_reviews_test.csv).

I downloaded the dataset and constructed a Naive Bayes classifier with a pipeline like so:

```
pipeline = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', MultinomialNB()),
])
```

Using the same train-test split as you, I got an accuracy score of 0.86 on the hold-out data from the training set, and 0.83 on the validation set. If you get similar results, I think it might just be the case that the dataset isn't too difficult to learn from. I also checked whether there were any `NA` values that might be causing the strange performance, but `imdb_data.isnull().any()` does indeed return false.
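For reference, a minimal sketch of that sanity check; the local file path is an assumption, so point it at the linked `imdb_reviews_test.csv`:

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.metrics import accuracy_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', MultinomialNB()),
])
pipeline.fit(x_train, y_train)  # same split as in the question

# hold-out score on the training file
print("hold-out:", accuracy_score(y_test, pipeline.predict(x_test)))

# score on the separate validation file (hypothetical local path)
val = pd.read_csv('imdb_reviews_test.csv')
val['label'] = val.label.map(lambda x: 1 if x == 'pos' else 0)
print("validation:", accuracy_score(val.label, pipeline.predict(val.text)))
```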
python|pandas|machine-learning|scikit-learn|confusion-matrix
1
2,004
64,865,618
Create ID column in a pandas dataframe
I have a dataframe containing a trading log. My problem is that I do not have any ID to match the buy and sell of a stock. A stock could be traded many times, and I would like to have an ID that matches each finished trade. My original dataframe is a sequential time series with timestamps. The below (very simplified) example illustrates my problem; I need to match and ID each traded stock in sequential order:

```
df1 = pd.DataFrame({'stock': ['A', 'B', 'C', 'A','C', 'A', 'A'],
                    'deal': ['buy', 'buy', 'buy', 'sell','sell', 'buy', 'sell']})

df1
Out[84]:
  stock  deal
0     A   buy
1     B   buy
2     C   buy
3     A  sell
4     C  sell
5     A   buy
6     A  sell
```

Here is my desired output:

```
df1 = pd.DataFrame({'stock': ['A', 'B', 'C', 'A','C', 'A', 'A'],
                    'deal': ['buy', 'buy', 'buy', 'sell','sell', 'buy', 'sell'],
                    'ID': [1, 2, 3, 1,3, 4, 4]})

df1
Out[82]:
  stock  deal  ID
0     A   buy   1
1     B   buy   2
2     C   buy   3
3     A  sell   1
4     C  sell   3
5     A   buy   4
6     A  sell   4
```

Any ideas?
Try this:

```
m = df1['deal'] == 'buy'
df1['ID'] = m.cumsum().where(m)
df1['ID'] = df1.groupby('stock')['ID'].ffill()
df1
```

Output:

```
  stock  deal   ID
0     A   buy  1.0
1     B   buy  2.0
2     C   buy  3.0
3     A  sell  1.0
4     C  sell  3.0
5     A   buy  4.0
6     A  sell  4.0
```

Details:

- Create a boolean series, True where deal equals 'buy'.
- Cumsum and assign the result to 'ID' for the buy records.
- Use groupby and ffill to assign each 'ID' to the next 'sell' record, by 'stock'.
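If integer IDs are preferred over the floats that `ffill` leaves behind, a final cast works, assuming every row ends up with an ID as in the example:

```python
df1['ID'] = df1['ID'].astype(int)
```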
python|pandas|dataframe
3
2,005
69,534,875
Exclude df rows where a dates field: time/seconds are between a specific period
Morning all,

I have a very large df, but I need to strip out data NOT between 8.30am AEST and 5pm UTC.

```
# Dates are dd/mm/yyyy
df ={
     'rfq_create_date_time': ['01/10/2021 00:00:00 AM', '02/10/2021 01:01:01 AM',
                              '03/10/2021 05:00:00 AM', '04/10/2021 10:15:15 AM',
                              '05/10/2021 01:01:01 PM', '21/10/2021 10:29:29 PM',
                              '22/10/2021 10:30:00 PM', '23/10/2021 10:30:01 PM',],
     'Other_Field': ['A', 'B', 'C','D','E','F','G','H',],
    }

df = pd.DataFrame.from_dict(df)
print(df)
```

Required df:

```
  rfq_create_date_time     Other_Field
2 03/10/2021 05:00:00 AM   C
3 04/10/2021 10:15:15 AM   D
4 05/10/2021 01:01:01 PM   E
5 21/10/2021 10:29:29 PM   F
6 22/10/2021 10:30:00 PM   G
```

First issue: I tried the `between_time` function, but I don't want the date to be the index. This line was added because I was getting `TypeError: Index must be DatetimeIndex`:

```
df.index = pd.to_datetime(df['rfq_create_date_time'])
```

Second issue: I just want to do counts of before and after, but am getting `TypeError: bad operand type for unary ~: 'str'` when assigning `mask = ~`:

```
# Count the number of rows excluded
dfUTC_05_To_2230 = ((df['rfq_create_date_time'].between_time('5:00', '22:30')))
print(dfUTC_05_To_2230)
Total_UTC_Removed = np.sum(dfUTC_05_To_2230)
print(" Total records filtered out due to exclusion of RFQ's from UTC 0500 to UTC 2230 " + str(Total_UTC_Removed), end='\n')

# Mask to exclude these rows
mask = ~((df['rfq_create_date_time'].between_time('5:00', '22:30')))
Total_Rows_After_Mask = df.shape[0]
Difference = Total_Rows_Db - Total_UTC_Removed - Total_Rows_After_Mask
print("Total records in df after exclusion of RFQ's from UTC 0500 to UTC 2230 " + str(Total_Rows_After_Mask), end='\n')
print("Difference after exclusion of RFQ's from UTC 0500 to UTC 2230 " + str(Difference), end='\n')
```
To use `between_time`, as you've probably realised, the date/time needs to be the index of the dataframe.

When the date/time is a column in the dataframe you can use 'standard' filtering:

```
from datetime import time

import pandas as pd

# Dates are dd/mm/yyyy
data = {
    "rfq_create_date_time": [
        "01/10/2021 00:00:00 AM",
        "02/10/2021 01:01:01 AM",
        "03/10/2021 05:00:00 AM",
        "04/10/2021 10:15:15 AM",
        "05/10/2021 01:01:01 PM",
        "21/10/2021 10:29:29 PM",
        "22/10/2021 10:30:00 PM",
        "23/10/2021 10:30:01 PM",
    ],
    "Other_Field": ["A", "B", "C", "D", "E", "F", "G", "H"],
}

df = pd.DataFrame.from_dict(data)
df["rfq_create_date_time"] = pd.to_datetime(df["rfq_create_date_time"])

mask = (df["rfq_create_date_time"].dt.time >= time(5, 0)) & (
    df["rfq_create_date_time"].dt.time <= time(23, 30)
)
df_filtered = df[~mask]
print(df_filtered)
print(
    f"""There were {df.shape[0]} records in the original data, and
after filtering there are {df_filtered.shape[0]} records left."""
)
```

| rfq_create_date_time | Other_Field |
|----------------------|-------------|
| 10/01/2021 00:00:00  | A           |
| 10/02/2021 01:01:01  | B           |
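For completeness, `between_time` itself gives the in-window count once the timestamp is the index; a short sketch against the same parsed frame:

```python
# between_time includes both endpoints by default
in_window = df.set_index("rfq_create_date_time").between_time("5:00", "23:30")
print(f"{len(in_window)} records fall inside the 05:00-23:30 window.")
```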
python|pandas
1
2,006
69,599,377
Python Congressional Plotly TypeError: Object of type MultiPolygon is not JSON serializable for Congressional Districts
```
import pandas as pd
from census import Census
import geopandas as gpd
import numpy as np
import plotly.io as pio
import plotly.express as px

pio.renderers.default='browser'
file_path = "Path"

# Load Census Median Age by District Data
c = Census("KEY")
district_df = c.acs1.state_congressional_district(['NAME', 'B01002_001E'], '*', '*')
district_df = pd.DataFrame(district_df)
district_df.rename(columns={'B01002_001E': 'median_age'}, inplace=True)
district_df['GEOID'] = district_df['state'] + district_df['congressional district']
district_df.sort_values(by=['GEOID'], ascending=True)
district_df['id'] = np.arange(1,438)

# Import District Geojson
geojson_path = 'https://raw.githubusercontent.com/CivilServiceUSA/us-house/master/us-house/geojson/us-house.geojson'
geojson_file = gpd.read_file(geojson_path)

# Transformations
# Fill at-large districts nans with 0 to align with census
geojson_file[['district']] = geojson_file[['district']].fillna(value=0)
geojson_file['district'] = geojson_file.district.astype(float)

# drop congressional districts that have no voting power (State Number 98)
district_df['congressional district'] = district_df['congressional district'].astype(float)
# district_df = district_df.loc[district_df['state'] != 98]

# Create index state_name to inner join
new = district_df['NAME'].str.split(", ", n = 1, expand = True)
district_df['district_name'] = new[0]
district_df['state_name'] = new[1]
district_df.drop(columns = 'NAME', inplace = True)

# Inner Join
merged_data = pd.merge(district_df, geojson_file,
                       left_on=['state_name', 'congressional district'],
                       right_on=['state_name', 'district'])

# Plot
fig = px.choropleth(merged_data, geojson=merged_data.geometry,
                    locations=merged_data.index, color='median_age',
                    color_continuous_scale='Viridis',
                    scope='usa',
                    projection='mercator', basemap_visible=True)
fig.update_geos(fitbounds="locations", visible=True)
fig.update_layout(title_text='Map')
fig.show()
```

When I run this code, I get the following error.

Traceback:

```
Traceback (most recent call last):
  File "/Users/colby/Desktop/Colby's Folder/Congressional Demo ETL Project/scripts/etl.py", line 81, in <module>
    fig.show()
  File "/Users/colby/opt/anaconda3/lib/python3.8/site-packages/plotly/basedatatypes.py", line 3398, in show
    return pio.show(self, *args, **kwargs)
  File "/Users/colby/opt/anaconda3/lib/python3.8/site-packages/plotly/io/_renderers.py", line 404, in show
    renderers._perform_external_rendering(fig_dict, renderers_string=renderer, **kwargs)
  File "/Users/colby/opt/anaconda3/lib/python3.8/site-packages/plotly/io/_renderers.py", line 341, in _perform_external_rendering
    renderer.render(fig_dict)
  File "/Users/colby/opt/anaconda3/lib/python3.8/site-packages/plotly/io/_base_renderers.py", line 747, in render
    html = to_html(
  File "/Users/colby/opt/anaconda3/lib/python3.8/site-packages/plotly/io/_html.py", line 141, in to_html
    jdata = to_json_plotly(fig_dict.get("data", []))
  File "/Users/colby/opt/anaconda3/lib/python3.8/site-packages/plotly/io/_json.py", line 124, in to_json_plotly
    return json.dumps(plotly_object, cls=PlotlyJSONEncoder, **opts)
  File "/Users/colby/opt/anaconda3/lib/python3.8/json/__init__.py", line 234, in dumps
    return cls(
  File "/Users/colby/opt/anaconda3/lib/python3.8/site-packages/_plotly_utils/utils.py", line 59, in encode
    encoded_o = super(PlotlyJSONEncoder, self).encode(o)
  File "/Users/colby/opt/anaconda3/lib/python3.8/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/Users/colby/opt/anaconda3/lib/python3.8/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/Users/colby/opt/anaconda3/lib/python3.8/site-packages/_plotly_utils/utils.py", line 136, in default
    return _json.JSONEncoder.default(self, obj)
  File "/Users/colby/opt/anaconda3/lib/python3.8/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type MultiPolygon is not JSON serializable
```

Solutions that I have tried:

1. I have also tried using geojson=geometry and locations=index, based on the [documentation](https://plotly.com/python/mapbox-county-choropleth/).

2. I have taken out geojson as an argument (since it is included in the df) and named locations='congressional district'. This gives me a blank white plot in the browser.

3. I even used the Congressional District shapefile data from the census, but that did not work either. I suspect this is because I am not sure how to properly pass the arguments with a gpd df.
- It's not clear to me whether your *geojson* is valid. Given you are plotting US census data, you may as well use US census mapping data: https://www.census.gov/geographies/mapping-files/time-series/geo/carto-boundary-file.html
- There is no need to merge the **geopandas** dataframe and the **pandas** dataframe; you just need a link between the two, the *GEOID*.
- Full solution below (you need to define `KEY` to be your key). Pan and zoom are not great by default.

```
import requests
import urllib
from pathlib import Path
from zipfile import ZipFile
import geopandas as gpd
import pandas as pd
from census import Census
import plotly.express as px

# get geometry data as a geopandas dataframe
# https://www.census.gov/geographies/mapping-files/time-series/geo/carto-boundary-file.html
src = [{"name": "counties", "suffix": ".shp",
        "url": "https://www2.census.gov/geo/tiger/GENZ2018/shp/cb_2018_us_cd116_20m.zip"}]

data = {}
for s in src:
    f = Path.cwd().joinpath(urllib.parse.urlparse(s["url"]).path.split("/")[-1])
    if not f.exists():
        r = requests.get(s["url"], stream=True)
        with open(f, "wb") as fd:
            for chunk in r.iter_content(chunk_size=128):
                fd.write(chunk)
    fz = ZipFile(f)
    fz.extractall(f.parent.joinpath(f.stem))
    data[s["name"]] = gpd.read_file(
        f.parent.joinpath(f.stem).joinpath(
            [f.filename for f in fz.infolist() if Path(f.filename).suffix == s["suffix"]][0]
        )
    )

gdf = pd.concat(data.values()).to_crs("EPSG:4326")

# get census measures...
c = Census(KEY)
district_df = (
    pd.DataFrame(c.acs1.state_congressional_district(["NAME", "B01002_001E"], "*", "*"))
    .rename(columns={"B01002_001E": "median_age"})
    .assign(
        GEOID=lambda d: d.loc[:, ["state", "congressional district"]]
        .astype(str)
        .apply("".join, axis=1)
    )
)

# plot
fig = px.choropleth(
    district_df,
    geojson=gdf.set_index("GEOID").geometry,
    locations="GEOID",
    color="median_age",
    color_continuous_scale="Viridis",
    # scope="usa",
    projection="mercator",
    basemap_visible=True,
)
fig.update_geos(fitbounds="geojson", visible=True)
fig.update_layout(title_text="Map")
```

### mapbox

I prefer Mapbox. The data sourcing is identical, and the **plotly** API is very similar:

```
px.choropleth_mapbox(
    district_df,
    geojson=gdf.set_index("GEOID").geometry,
    locations="GEOID",
    color="median_age",
    color_continuous_scale="Viridis",
).update_layout(
    mapbox={
        "style": "carto-positron",
        "zoom": 3,
        "center": {"lat": 39.50, "lon": -98.35},
    },
    title_text="Map",
)
```

[map screenshot](https://i.stack.imgur.com/jNVzg.png)

### original geojson

- This only partially works: some states are missing.
- The geojson does not provide a way to directly build **FIPS** codes, hence the extra source mapping state alpha-2 codes to numeric FIPS codes.

```
import requests
import geopandas as gpd
import pandas as pd
from census import Census
import plotly.express as px

res = requests.get("https://raw.githubusercontent.com/CivilServiceUSA/us-house/master/us-house/geojson/us-house.geojson")
gdf = gpd.GeoDataFrame.from_features(res.json())

df_fips = pd.read_html(
    "https://www.nrcs.usda.gov/wps/portal/nrcs/detail/?cid=nrcs143_013696"
)[0]
df_fips = df_fips.dropna().assign(
    FIPS=lambda d: d["FIPS"].apply(lambda x: str(int(x)).zfill(2))
)

gdf = gdf.merge(df_fips, left_on="state_code", right_on="Postal Code").assign(
    GEOID=lambda d: d.apply(lambda r: r["FIPS"] + str(r["district"]).zfill(2), axis=1)
)

# get census measures...
c = Census(KEY)
district_df = (
    pd.DataFrame(c.acs1.state_congressional_district(["NAME", "B01002_001E"], "*", "*"))
    .rename(columns={"B01002_001E": "median_age"})
    .assign(
        GEOID=lambda d: d.loc[:, ["state", "congressional district"]]
        .astype(str)
        .apply("".join, axis=1)
    )
)

# plot
px.choropleth_mapbox(
    district_df,
    geojson=gdf.set_index("GEOID").geometry,
    locations="GEOID",
    color="median_age",
    color_continuous_scale="Viridis",
).update_layout(
    mapbox={
        "style": "carto-positron",
        "zoom": 3,
        "center": {"lat": 39.50, "lon": -98.35},
    },
    title_text="Map",
)
```
python|pandas|plotly|choropleth
0
2,007
69,314,600
Python - Error in String literal str.replace
I have attempted to replace a `string` in a column with both of the two commands below. For each of them, I get a "SyntaxError: EOL while scanning string literal" error. Please help/guide. Thanks.

```
df['filename'] = df['filename'].str.replace("H:\May2017\hb_ymvid\HB_ED_S\Pictures1\05cropped_PC\","", inplace=True)

df["filename"] = df["filename"].apply(lambda x: x.replace("H:\May2017\hb_ymvid\HB_ED_S\Pictures1\05cropped_PC\", ""))
```
`\` starts an escape sequence in Python string literals; if you mean a literal `\`, use `\\`. I.e., replace

```
"H:\May2017\hb_ymvid\HB_ED_S\Pictures1\05cropped_PC\"
```

with

```
"H:\\May2017\\hb_ymvid\\HB_ED_S\\Pictures1\\05cropped_PC\\"
```

and so on.
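A raw string is often a more readable sketch for Windows paths, with one caveat: a raw literal cannot end in a single backslash, so the trailing one has to be appended separately. The path below is the question's own, and the `regex=False` flag assumes a reasonably recent pandas:

```python
# raw string for the path body, plain "\\" for the trailing backslash
prefix = r"H:\May2017\hb_ymvid\HB_ED_S\Pictures1\05cropped_PC" + "\\"
df['filename'] = df['filename'].str.replace(prefix, "", regex=False)
```

Note too that `Series.str.replace` has no `inplace` argument, so assign the result back as shown.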
python|pandas|dataframe
0
2,008
41,175,797
How to create a list of dictionaries
I want to calculate data on the frequencies of words in documents, grouped by year, and then place the data in a pandas dataframe.

My routine creates a dictionary for each row, containing words and frequencies as keys and values. I then want to loop through the years, appending the dictionaries to each other to create a list of dictionaries which I convert into a dataframe.

Creating dataframes out of lists of dictionaries seems standard, and I can do it by manually creating the list.

I'd like to be able to do something like this:

```
wordtable = {'year':'1965','word1':20, 'word2': 250, 'word3': 125}
newrow={'year':'1966','word1':150, 'word4': 250, 'word2': 125}

wordtable.append(newrow)

df = pandas.DataFrame(wordtable, index=[0])
df.to_csv('testdata.csv')
```

But `.append()` leads to an error message stating that `.append()` doesn't work with dictionary types.
As the previous poster mentioned, append() is a list method but not a dict method. This should work, though:

```
import pandas

word_data = []  # list type

word_counts_1 = {'year': '1965', 'word1':20, 'word2': 250, 'word3': 125}  # dict type
word_counts_2 = {'year':'1966','word1':150, 'word4': 250, 'word2': 125}   # dict type

word_data.append(word_counts_1)  # append 1st word count data to list, word_data
word_data.append(word_counts_2)  # append 2nd word count data to list, word_data

df = pandas.DataFrame(word_data)  # create data frame from word_data
df.to_csv('testdata.csv')         # write it out
```
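Since the original goal was to loop through years, the same pattern extends naturally. A sketch, where `counts_for_year` is a hypothetical stand-in for whatever routine produces each year's word-frequency dict:

```python
import pandas

def counts_for_year(year):
    # hypothetical placeholder: swap in the real per-year frequency routine
    return {'1965': {'word1': 20, 'word2': 250, 'word3': 125},
            '1966': {'word1': 150, 'word4': 250, 'word2': 125}}[year]

word_data = []
for year in ['1965', '1966']:
    row = {'year': year, **counts_for_year(year)}  # merge year label with counts
    word_data.append(row)

pandas.DataFrame(word_data).to_csv('testdata.csv')  # words missing in a year become NaN
```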
python|pandas|dictionary
1
2,009
53,899,752
How can I create a custom connection between two different keras layers in LeNet5 architecture?
I am working on the [LeNet5](https://engmrk.com/lenet-5-a-classic-cnn-architecture/) architecture. I want to implement a custom connection between layers C3 and S2, as explained [here](https://i.stack.imgur.com/UBhya.png). How do I have to define my model in "**CODE-1**" and "**CODE-2**" if I want to implement the custom connections as explained [here](https://i.stack.imgur.com/UBhya.png)? How many filters should I take in "**CODE-2**"? Any help will be appreciated.

The output of S2 is 14×14×6, and 16 filters need to be applied to these 6 feature maps. However, instead of connecting all 6 S2 maps to the 16 neurons of C3, most neurons in the C3 maps are connected to neurons in only three or four S2 maps. More details can be found in images 2 and 3. The C3 layer has 16 feature maps of size 5×5 and a stride of 1. In this layer, only 10 out of 16 feature maps are connected to the 6 feature maps of S2.

If you look at image 3, neuron 0 of C3 is connected to feature maps 0, 1, and 2 of S2. How do I implement this kind of connection?

My code is something similar to the below:

```
from keras.models import Model
from tensorflow.keras.layers import Conv2D, Input, Concatenate, Lambda
```

**CODE-1**

```
inputTensor = Input(shape=(14, 14, 6))

group0 = Lambda(lambda x: x[:,:,:3], output_shape=((10, 10, 1)))(inputTensor)
group1 = Lambda(lambda x: x[:,:,1:4], output_shape=((10, 10, 1)))(inputTensor)
group2 = Lambda(lambda x: x[:,:,2:5], output_shape=((10, 10, 1)))(inputTensor)
group3 = Lambda(lambda x: x[:,:,3:6], output_shape=((10, 10, 1)))(inputTensor)
# all 16 layers of c3 (of the Custom Connections image)
```

**CODE-2**

```
conv_group0 = Conv2D(1, kernel_size=[5,5], strides=(stride,stride), padding="valid", activation = 'tanh')(group0)
conv_group1 = Conv2D(1, kernel_size=[5,5], strides=(stride,stride), padding="valid", activation = 'tanh')(group1)
conv_group2 = Conv2D(1, kernel_size=[5,5], strides=(stride,stride), padding="valid", activation = 'tanh')(group2)
# all 16 layers convolution

output_layer = Concatenate()([conv_group0,conv_group1,conv_group2,conv_group3,conv_group4,
                              conv_group5,conv_group6,conv_group7,conv_group8,conv_group9,
                              conv_group10,conv_group11,conv_group12,conv_group13,conv_group14,
                              conv_group15])

Mymodel = Model(inputTensor,output_layer)
```
**You can create the following custom layer class:**

```
import numpy as np
import tensorflow as tf

class CustomLayer(tf.keras.layers.Layer):
    """Custom layer whose weight mask is initialized from connect_matrix."""

    def __init__(self, activation, units, connect_matrix):
        super(CustomLayer, self).__init__()
        self.units = units
        self.connect_matrix = np.asarray(connect_matrix, dtype='float32')
        self.activation = tf.keras.activations.get(activation)

    def build(self, input_shape):
        # trainable weights, initialized from the connection matrix
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer=tf.constant_initializer(value=self.connect_matrix),
            trainable=True)
        self.b = self.add_weight(shape=(self.units,),
                                 initializer="zeros", trainable=True)
        # fixed 0/1 mask that switches individual connections on or off
        self.m = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer=tf.constant_initializer(value=self.connect_matrix),
            trainable=False)

    def call(self, inputs):
        wm = tf.math.multiply(self.w, self.m)  # masked weights
        out_lin = tf.matmul(inputs, wm) + self.b
        return self.activation(out_lin)

    def get_config(self):
        base_config = super().get_config()
        return {**base_config, "units": self.units,
                "activation": tf.keras.activations.serialize(self.activation)}
```

Save your connection matrix as a 2D numpy array, `A`, and use `CustomLayer(units, activation, connect_matrix=A)` like a usual `tf.keras` layer. You can find more at the following link: [CustomTF2KerasObjects](https://github.com/yboryshchak/CustomTfKerasObjects/blob/main/CustomTFKerasObjects.py)
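A minimal usage sketch; the 6×16 matrix below is only illustrative, with its first column encoding the example from the question (C3 map 0 fed by S2 maps 0, 1 and 2), and the remaining columns would be filled in from the LeNet-5 connection table:

```python
import numpy as np

A = np.zeros((6, 16), dtype='float32')  # rows: S2 maps, columns: C3 maps
A[0:3, 0] = 1                           # C3 map 0 <- S2 maps 0, 1, 2

layer = CustomLayer(activation='tanh', units=16, connect_matrix=A)
```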
tensorflow|machine-learning|keras|neural-network|data-science
0
2,010
53,858,902
How to save Tensorflow encoder decoder model?
I followed [this tutorial](https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb#) on building an encoder-decoder language translation model and built one for my native language.

Now I want to save it, deploy it on Cloud ML Engine, and make predictions with an HTTP request.

I couldn't find a clear example of how to save this model. I am new to ML and found the [TF save guide](https://www.tensorflow.org/guide/saved_model) very confusing.

Is there a way to save this model using something like [tf.keras.models.save_model](https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model)?
You can save a Keras model in Keras's HDF5 format; see:

https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model

You will want to do something like:

```
import tensorflow as tf

model = tf.keras.Model(...)  # build your model here
model.save('my_model.h5')
```

If you migrate to TF 2.0, it's more straightforward to build a model in tf.keras and deploy using the TF SavedModel format. This 2.0 tutorial shows using a pretrained tf.keras model, saving the model in SavedModel format, deploying to the cloud, and then doing an HTTP request for a prediction:

https://www.tensorflow.org/beta/guide/saved_model
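If you do take the SavedModel route, the export itself is equally short. A sketch, assuming a TF 2.x tf.keras model:

```python
import tensorflow as tf

# a path with no .h5 suffix writes a SavedModel directory
model.save("export/my_model")
reloaded = tf.keras.models.load_model("export/my_model")
```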
tensorflow|keras|google-cloud-ml|encoder-decoder
0
2,011
54,221,484
Select data based on multiple criteria using Pandas
I am new to using pandas. I want to select rows from a dataframe where multiple columns match in value, along the lines of:

if column A equals column AB and column B equals column BC, then I want those values.

I haven't actually used an if statement; I read that iteration is not good to use with pandas.

I've tried to find a solution, and I'm not sure if it is my syntax, or if pandas is unhappy with the different data types of the columns.

My code is a little long, so I'll provide just the line where I attempt the selection, but I can post the entire code if that is helpful.

```
dfequal=dfMerged.loc[(dfMerged['MetCode']==dfMerged['GCD_METCODE']) &
                     (dfMerged[dfMerged['Zone Code']==dfMerged['GCD_Senior_ZONE']]) &
                     (dfMerged[dfMerged['Municipality Code']==dfMerged['GCD_CSDUID']])]
```

Edit:

The expected output would be a dataframe where only rows where the statement is true would exist.

This is the error:
ValueError: operands could not be broadcast together with shapes (84778,) (4462,)

This is my data table I'm pulling from:

[Sample Data](https://i.stack.imgur.com/h4Gdu.png)

```
FileID,MetCode,Municipality Code,Zone Code,GCD_Senior_ZONE,GCD_METCODE,GCD_CSDUID
A100101,7175,1005018,303006,303006,7175,1005018
A100102,7175,1005018,303006,303006,7175,1005018
A100103,7175,1005018,303006,303006,7175,1005018
A100104,7280,1006009,202003,202003,7280,1006009
A100105,7300,1006017,202003,202003,7300,1006017
A100108,7300,1006017,202003,202003,7300,1006017
A100109,7300,1006017,202003,202003,7300,1006017
A100110,1640,1001485,101001,101001,1640,1001485
A100111,1640,1001517,101001,101001,1640,1001517
A100114,9000,1008011,202003,202003,0,1008011
A100115,9000,1001370,101002,101002,0,1001370
A100119,9000,1003034,202003,202003,0,1003034
```
You simply need to wrap each condition in parentheses inside your `.loc`, and not repeat a DataFrame filter inside the DataFrame filter.

First, creating a crude data sample, as you didn't provide one besides the image:

```
# creating the values; the first one will be the ID, the next 4 the values to compare
check_values = [
    [1, 5, 10, 20, 30],
    [2, 5, 11, 32, 11],
    [3, 10, 10, 20, 20],
    [4, 9, 9, 11, 11],
    [5, 11, 23, 41, 11]
]
# creating column names
check_cols = ['id', 'A', 'B', 'C', 'D']

# making the DataFrame
dfcheck = pd.DataFrame(check_values, columns=check_cols)

# Setting the id column, just because
dfcheck.set_index('id', inplace=True)
```

**The solution**, where each condition is nested inside parentheses:

```
dfcheck.loc[(dfcheck['A'] == dfcheck['B']) & (dfcheck['C'] == dfcheck['D'])]
```

**EDIT: What you missed/did wrong:**

Looking at your filter, you're adding unnecessary `dfMerged[...]` wrappers. Your code broken into lines (delete everything marked with `** CODE **`):

```
dfequal = dfMerged.loc[(dfMerged['MetCode']==dfMerged['GCD_METCODE']) &
          (**dfMerged[**dfMerged['Zone Code']==dfMerged['GCD_Senior_ZONE']**]**) &
          (**dfMerged[**dfMerged['Municipality Code']==dfMerged['GCD_CSDUID']**]**)]
```

So you see that you're searching inside a search that isn't needed. It should be:

```
dfequal = dfMerged.loc[(dfMerged['MetCode']==dfMerged['GCD_METCODE']) &
                       (dfMerged['Zone Code']==dfMerged['GCD_Senior_ZONE']) &
                       (dfMerged['Municipality Code']==dfMerged['GCD_CSDUID'])]
```
python|pandas|select
3
2,012
54,057,338
Weight decay loss
I need to write code that gradually decays the weight of my loss function, computing lambda for a given step, but I don't have any idea how. Any help will be appreciated.

This is my loss function:

```
loss_A = criterion(recov_A, real_A)
loss_Final = lambda_A * loss_A + ...  # lambda_A is a fixed number: 10
```

I don't want lambda_A to be fixed. I need it to gradually decay after passing a specified number of steps:

```
# write function that computes lambda given the steps
cur_lambda = compute_lambda(step, decay_params, initial_lambda)

Loss_Final = cur_lambda * loss_A
```
To decay the fixed value based on the number of steps (or even the number of epochs), you can use the following code, or wrap it in a function and call it whenever you want:

```
final_value = 1e-3  # small number, so we don't end up with 0
initial_value = 20
starting_step = 25
total_step = 100

for i in range(total_step):
    if i <= starting_step:
        print(i, initial_value)
    else:
        print(i, initial_value + i * (final_value - initial_value) / total_step)
```
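Wrapped into the `compute_lambda` signature from the question, a minimal sketch, assuming `decay_params` is a `(starting_step, total_step, final_value)` tuple and the same linear schedule as above:

```python
def compute_lambda(step, decay_params, initial_lambda):
    starting_step, total_step, final_value = decay_params
    if step <= starting_step:
        return initial_lambda  # hold the initial weight during the warm-up steps
    # then decay linearly towards final_value
    return initial_lambda + step * (final_value - initial_lambda) / total_step

cur_lambda = compute_lambda(step=50, decay_params=(25, 100, 1e-3), initial_lambda=10)
```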
python|python-3.x|pytorch
1
2,013
53,970,733
I want to compute the distance between two numpy histograms
I'm creating an image processing program, and I want to measure the Wasserstein distance between two numpy histograms. The two histograms are created with the function [numpy.histogram](https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html).

I tried `wasserstein_distance` from the `scipy.stats` package like this:

```
from scipy.stats import wasserstein_distance

wasserstein_distance(histogram1,histogram2)
```

but it gives me this error:

> ValueError: setting an array element with a sequence.

The complete code:

First, the function that calculates the distance:

```
def f_dist( histogram1 ,histogram2):
    return wasserstein_distance(histogram1,histogram2)
```

Then, the function that calculates the mask for the histogram creation:

```
def prepare_mask(polygon, image,value):
    """Returns binary mask based on input polygon presented as list of coordinates of vertices
    Params:
        polygon (list) - coordinates of polygon's vertices. Ex: [(x1,y1),(x2,y2),...] or [x1,y1,x2,y2,...]
        image (numpy array) - original image. Will be used to create mask of the same size. Shape (H, W, C).
    Output:
        mask (numpy array) - boolean mask. Shape (H, W).
    """
    # create an "empty" pre-mask with the same size as original image
    width = image.shape[1]
    height = image.shape[0]
    mask = Image.new('L', (width, height), value)
    # Draw your mask based on polygon
    ImageDraw.Draw(mask).polygon(polygon, outline=1, fill=abs(value-1))
    # Covert to np array
    mask = np.array(mask).astype(bool)
    return mask
```

Then the function that creates the histogram:

```
def compute_histogram(mask, image):
    """Returns histogram for image region defined by mask for each channel
    Params:
        image (numpy array) - original image. Shape (H, W, C).
        mask (numpy array) - boolean mask. Shape (H, W).
    Output:
        list of tuples, each tuple (each channel) contains 2 arrays: first - computed histogram, the second - bins.
    """
    # Apply binary mask to your array, you will get array with shape (N, C)
    region = image[mask]
    hist = np.histogram(region.ravel(), bins=256, range=[0, 255])
    return hist
```

And now the main function:

```
points = [(633, 312), (630, 351), (623, 389), (611, 426), (594, 462), (573, 495), (548, 525), (519, 552), (488, 575), (453, 594), (417, 608), (379, 618), (340, 623), (301, 623), (262, 618), (224, 608), (188, 594), (153, 575), (122, 552), (93, 525), (68, 495), (47, 462), (30, 426), (18, 389), (11, 351), (9, 311), (11, 272), (18, 234), (30, 197), (47, 161), (68, 128), (93, 98), (122, 71), (153, 48), (188, 29), (224, 15), (262, 5), (301, 0), (340, 0), (379, 5), (417, 15), (453, 29), (488, 48), (519, 71), (548, 98), (573, 128), (594, 161), (611, 197), (623, 234), (630, 272)]

mask1 = prepare_mask(points, image_gray, 0)
mask2 = prepare_mask(points, image_gray, 1)

histogram1 = compute_histogram(mask1, image_gray)
histogram2 = compute_histogram(mask2, image_gray)

dist = f_dist(histogram1, histogram2)
```
Thanks to SpghttCd, the solution was simple: since `np.histogram` returns a `(counts, bin_edges)` tuple, I just had to pass the counts by replacing

```
wasserstein_distance(histogram1, histogram2)
```

with

```
wasserstein_distance(histogram1[0], histogram2[0])
```
python|numpy|opencv|histogram|distance
0
2,014
53,803,676
Binary Markov-K Random Generator
Hello Stack Overflow community,

I'm currently working on an entropy encoder (MQ-coder) implementation (a Cython wrapper and internal C source code). To create a test setting, I want to use a binary Markov-k random generator that outputs numpy arrays as input for the encoder. What would be the easiest way to implement such a generator in Python, numpy, scipy, or tensorflow? It should be possible to specify the transition probabilities p(x|x1,...,xk) and the order k.

Thanks in advance,
meridius
FYI: this generates a table[256] of conditional one-bit probabilities, based on the bits of its (ASCII) input.

Usage: `cat *.c | ./a.out`

;-)

---

```
#include <stdio.h>

struct cell {
    unsigned nhit;
    unsigned ones;
} cells[256] = {{0, 0},};

int main(void)
{
    unsigned state, newstate, bit, val;
    int ch;

    for (state = 0; 1;) {
        ch = getc(stdin);
        if (ch == EOF) break;
        for (bit = 8; bit--; state = newstate & 0xff) {
            val = (ch & (1u << bit)) ? 1 : 0;  /* test one bit, MSB first */
            if (val) cells[state].ones += 1;
            cells[state].nhit += 1;
            newstate = (state << 1) | val;
        }
    }

    for (state = 0; state < 256; state++) {
        if (cells[state].nhit < 1) continue;
        fprintf(stdout, "%2x: %u / %u (%lf)\n"
            , state
            , cells[state].ones
            , cells[state].nhit
            , (0.0 + cells[state].ones) / cells[state].nhit
        );
    }
    return 0;
}
```

---

It is funny to see that most of the values are 0 or 1, or close to it, or near 0.5.
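The question asked for Python/NumPy, so here is a minimal pure-NumPy sketch of the generator itself; `p` is assumed to be an array of length 2**k giving P(next bit = 1 | previous k bits encoded as an integer state):

```python
import numpy as np

def markov_bits(p, k, n, seed=None):
    """Sample n bits from a binary order-k Markov chain."""
    p = np.asarray(p, dtype=float)
    assert p.shape == (2 ** k,)
    rng = np.random.default_rng(seed)
    out = np.empty(n, dtype=np.uint8)
    state, mask = 0, (1 << k) - 1          # initial history: all zeros
    for i in range(n):
        bit = rng.random() < p[state]      # draw next bit given the last k bits
        out[i] = bit
        state = ((state << 1) | int(bit)) & mask
    return out

# order-2 chain that tends to repeat its previous bit
bits = markov_bits(p=[0.1, 0.5, 0.5, 0.9], k=2, n=20, seed=0)
print(bits)
```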
python|numpy|random|scipy|generator
0
2,015
38,179,248
Absolute difference of two NumPy arrays
Is there an efficient way/function to subtract one matrix from another and write the absolute values into a new matrix? I can do it entry by entry, but for big matrices this will be fairly slow...

For example:

```
X = [[12,7,3],
    [4 ,5,6],
    [7 ,8,9]]

Y = [[5,8,1],
    [6,7,3],
    [4,5,9]]

for i in range(len(r_0)):
    for j in range(len(r)):
        delta_r[i][j] = sqrt((r[i][j])**2 - (r_0[i][j])**2)
```
If you want the absolute element-wise difference between both matrices, you can easily subtract them with NumPy and use [numpy.absolute](http://docs.scipy.org/doc/numpy/reference/generated/numpy.absolute.html) on the resulting matrix:

```
import numpy as np

X = [[12,7,3],
    [4 ,5,6],
    [7 ,8,9]]

Y = [[5,8,1],
    [6,7,3],
    [4,5,9]]

result = np.absolute(np.array(X) - np.array(Y))
```

**Outputs**:

```
[[7 1 2]
 [2 2 3]
 [3 3 0]]
```

Alternatively (*although unnecessary*), if you were required to do so in native Python, you could zip the dimensions together in a nested list comprehension:

```
result = [[abs(a-b) for a, b in zip(xrow, yrow)]
          for xrow, yrow in zip(X,Y)]
```

**Outputs**:

```
[[7, 1, 2], [2, 2, 3], [3, 3, 0]]
```
python|arrays|numpy
27
2,016
38,368,500
What's the most efficient way to sum up an ndarray in numpy while minimizing floating point inaccuracy?
I have a big matrix with values that vary greatly in order of magnitude. To calculate the sum as accurately as possible, my approach would be to reshape the ndarray into a 1-dimensional array, sort it, and then add it up, starting with the smallest entries. Is there a better / more efficient way to do this?
I think that, given floating point precision problems, the best known algorithm for your task is [Kahan summation](https://en.wikipedia.org/wiki/Kahan_summation_algorithm). For practical purposes, Kahan summation has an error bound that is independent of the number of summands, while naive summation has an error bound that grows linearly with the number of summands.

NumPy does not use Kahan summation, and there is no easy way of implementing it without a big performance tradeoff. But it uses the next best thing, [pairwise summation](https://en.wikipedia.org/wiki/Pairwise_summation), where error grows, under some reasonable assumptions, like the square root of the logarithm of the number of summands.

So it is very likely that NumPy on its own is already able to provide sufficiently good precision for your problem. To validate this, I would actually run a few sample cases through Kahan summation (the pseudocode in the Wikipedia link above can be trivially converted to Python), take this as the golden, best possible result, and compare it against:

1. Calling `np.sum` on your matrix as is.
2. Calling `np.sum` on your matrix after reshaping to 1D, which may give better results if your matrix is not contiguous in memory.
3. Calling `np.sum` on a sorted version of the 1D array.

For most cases these last three options should behave similarly, but the only way to know is to actually test it.
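For convenience, a direct transcription of that Wikipedia pseudocode; a reference sketch for generating the "golden" value, not something to run on huge arrays, since it is a pure-Python loop:

```python
import numpy as np

def kahan_sum(values):
    total = 0.0
    c = 0.0                    # running compensation for lost low-order bits
    for x in values:
        y = x - c
        t = total + y
        c = (t - total) - y    # recovers the part of y that t could not hold
        total = t
    return total

a = np.random.rand(10**5).astype(np.float32)
print(kahan_sum(a), np.sum(a), np.sum(np.sort(a.ravel())))
```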
python|numpy|precision
6
2,017
38,324,603
How to keep a slice of a numpy array and clear the rest from memory?
I have a list which contains several large `numpy` arrays.

I want to keep only a slice of each of those arrays and clear the rest from my system memory. I have tried using the keywords `del` and `None`, but those do not seem to have any effect (I use the Fedora system monitor to monitor RAM usage).

The issue is that I want to save my slices using `numpy.save()`, but I run out of memory; hence my question.

For example I have:

```
my_list = [arr0, arr1, arr2]
```

And I would like to end up with:

```
my_list = [arr0[10:100], arr1[10:100], arr2[10:100]]
```

So I have tried to do

```
arr_tmp = np.copy(arr0[10:100])
my_list[0] = arr_tmp
arr0 = None
```

and

```
arr_tmp = np.copy(arr0[10:100])
my_list[0] = arr_tmp
del arr0
```

but neither of those seems to work.

EDIT: I run out of memory when using the `numpy.save()` function, not when slicing my array. I want to free some memory before calling `numpy.save()` so the process does not get killed by the system.
You can add zero to the slice:

`smallSlice = bigArray[...,::10]`

`del bigArray`

will leave the big buffer in memory, because the slice is a view that still points into it.

`smallSlice = bigArray[...,::10] + 0`

`del bigArray`

will create a new array, and bigArray will be deleted.
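A more explicit spelling of the same idea is to ask for a copy directly, which makes the intent obvious to readers:

```python
small_slice = big_array[..., ::10].copy()  # owns its own memory
del big_array                              # the original buffer can now be freed
```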
python|python-3.x|numpy
0
2,018
66,091,666
Move one column to another dataframe pandas
I have a DataFrame `df1` that looks like this:

```
userId  movie1  movie2  movie3
0       4.1     0.0     1.0
1       3.1     1.1     3.4
2       2.8     0.0     1.7
3       0.0     5.0     0.0
4       0.0     0.0     0.0
5       2.3     0.0     2.0
```

and another DataFrame, `df2`, that looks like this:

```
userId  movie4  movie5  movie6
0       4.1     0.0     1.0
1       3.1     1.1     3.4
2       2.8     0.0     1.7
3       0.0     5.0     0.0
4       0.0     0.0     0.0
5       2.3     0.0     2.0
```

How do I select one column from `df2` and add it to `df1`? For example, adding `movie6` to `df1` would result in:

```
userId  movie1  movie2  movie3  movie6
0       4.1     0.0     1.0     1.0
1       3.1     1.1     3.4     3.4
2       2.8     0.0     1.7     1.7
3       0.0     5.0     0.0     0.0
4       0.0     0.0     0.0     0.0
5       2.3     0.0     2.0     2.0
```
```
df1 = pd.concat([df1, df2['movie6']], axis=1)
```
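Note that `axis=1` is what appends the series as a new column; with `axis=0` it would be stacked underneath as extra rows. Since both frames share the same `userId` index, a plain column assignment is an equally short alternative:

```python
df1['movie6'] = df2['movie6']  # aligns on the shared index
```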
python|pandas|dataframe
0
2,019
66,311,611
When and How Keras calculate metrics for each batch of samples?
I was looking at how Keras custom metrics work, and the calculation doesn't match between `tf.print` inside the metric function and the callback print of `model.fit`:

```py
import tensorflow as tf  # tf2.4.1
import numpy as np

model = tf.keras.models.Sequential(
    tf.keras.layers.Dense(1, input_shape=(1,))
)

def my_metric_fn(y_true, y_pred):
    squared_difference = tf.square(y_true - y_pred)
    loss = tf.reduce_mean(squared_difference, axis=-1)
    tf.print(y_true.shape, y_pred.shape, loss, tf.reduce_mean(squared_difference))
    return loss

model.compile(optimizer='adam', loss='mean_squared_error', metrics=[my_metric_fn])

x = np.random.rand(4,1)
y = x ** 2

history = model.fit(x=x, y=y, batch_size=2, epochs=2)
print(history.history)
```

Output (formatted for better readability):

```
Epoch 1/2
TensorShape([2, 1]) TensorShape([2, 1]) [9.79962078e-06 0.0534314588] 0.02672063
1/2 [==============>...............] - ETA: 0s - loss: 0.0267 - my_metric_fn: 0.0267
TensorShape([2, 1]) TensorShape([2, 1]) [0.0397406667 0.179955378] 0.109848022
2/2 [==============================] - 0s 7ms/step - loss: 0.0544 - my_metric_fn: 0.0544
Epoch 2/2
TensorShape([2, 1]) TensorShape([2, 1]) [0.0392204635 0.0521505736] 0.0456855185
1/2 [==============>...............] - ETA: 0s - loss: 0.0457 - my_metric_fn: 0.0457
TensorShape([2, 1]) TensorShape([2, 1]) [0.177408844 2.45939535e-08] 0.088704437
2/2 [==============================] - 0s 5ms/step - loss: 0.0600 - my_metric_fn: 0.0600

{'loss': [0.06828432530164719, 0.06719497591257095], 'my_metric_fn': [0.06828432530164719, 0.06719497591257095]}
```

See the printed loss of each batch in the above output.

Epoch 1/2, batch 1/2: tf.print gives 0.02672063 and model.fit shows 0.0267. OK.
Epoch 1/2, batch 2/2: tf.print gives 0.109848022, but model.fit shows 0.0544. Not OK.

How can I understand these matches and mismatches? Where did 0.0544 come from?
In Keras, the training loss/metric is calculated at the end of each epoch as the mean of the loss/metric over the batches. So in your case:

```
EPOCH 1: (0.02672063 + 0.109848022) / 2 = 0.068284326
EPOCH 2: (0.0456855185 + 0.088704437) / 2 = 0.06719497775
```

which corresponds to:

```
history.history['loss'] ==> [0.06828432530164719, 0.06719497591257095]
```
python|tensorflow|machine-learning|keras|deep-learning
1
2,020
65,910,850
How can I get a value from another dataframe's column based on another index?
Take this dataframe fragment `df`:

```
  col_1 col_2 col_3
0   aaa   !!!   sss
1   bbb   @@@   jjj
2   ccc   !!!   NaN
3   ddd   $$$   nnn
4   eee   %%%   xxx
```

I need to run a `fillna()` on `col_3` that fills each missing value with the value of `col_1` from the first occurrence of the corresponding `col_2` value.

To keep it simple: this `NaN` value should be filled with `aaa`. It needs to be dynamic for the whole dataframe and run over all of `col_3`.
Here's how to get this done:

- Step 1: Do a groupby on `col_2` and find the values of `col_1`, but pick only the first entry of each value.
- Step 2: Convert this into a dictionary. Both of these steps can be accomplished with:

  `df.groupby('col_2')['col_1'].first().to_dict()`

- Step 3: Now do a fillna for `col_3` using a lookup of the value in `col_2`, mapped through the dictionary. The value in `col_2` is checked against the dictionary, the key returns a value, and this value is assigned back to `col_3`.

Putting all this together, the full code is shown below:

```
import pandas as pd
import numpy as np

c = ['col_1','col_2','col_3']
d = [['aaa','!!!','sss'],
     ['bbb','@@@','jjj'],
     ['ccc','!!!',np.NaN],
     ['ddd','$$$','nnn'],
     ['eee','%%%','xxx'],
     ['fff','@@@',np.NaN],
     ['ggg','$$$',np.NaN],
     ['hhh','%%%',np.NaN]]

df = pd.DataFrame(d,columns=c)
print (df)

dx = df.groupby('col_2')['col_1'].first().to_dict()
df['col_3'] = df.col_3.fillna(df.col_2.map(dx))
print (df)
```

Output of this will be:

**Original DataFrame:**

```
  col_1 col_2 col_3
0   aaa   !!!   sss
1   bbb   @@@   jjj
2   ccc   !!!   NaN
3   ddd   $$$   nnn
4   eee   %%%   xxx
5   fff   @@@   NaN
6   ggg   $$$   NaN
7   hhh   %%%   NaN
```

**Updated DataFrame:**

```
  col_1 col_2 col_3
0   aaa   !!!   sss
1   bbb   @@@   jjj
2   ccc   !!!   aaa
3   ddd   $$$   nnn
4   eee   %%%   xxx
5   fff   @@@   bbb
6   ggg   $$$   ddd
7   hhh   %%%   eee
```

Added more records and tested:

Original:

```
   col_1 col_2 col_3
0    aaa   !!!   sss
1    bbb   @@@   jjj
2    ccc   !!!   NaN
3    ddd   $$$   nnn
4    eee   %%%   xxx
5    fff   @@@   NaN
6    ggg   $$$   NaN
7    hhh   %%%   NaN
8    iii   !!!   NaN
9    jjj   $$$   NaN
10   kkk   &&&   ttt
```

Updated:

```
   col_1 col_2 col_3
0    aaa   !!!   sss
1    bbb   @@@   jjj
2    ccc   !!!   aaa
3    ddd   $$$   nnn
4    eee   %%%   xxx
5    fff   @@@   bbb
6    ggg   $$$   ddd
7    hhh   %%%   eee
8    iii   !!!   aaa
9    jjj   $$$   ddd
10   kkk   &&&   ttt
```
python|pandas|dataframe
1
2,021
66,316,981
How to build a custom accuracy metric with tolerance in TF2?
<p>I want to build a custom accuracy metric with tolerance. Instead of counting elements exactly equal in <code>y_true</code> and <code>y_pred</code>, this accuracy regards the two elements are consistent if their difference within a given tolerance value. For example, if the differences between predicted degrees and true degrees are smaller than 5 degree, we can think the results are correct and calculate the accuracy based on this rule. I want to use this metric in <code>model.compile</code> so it should be a callable function.</p> <p>I wrote a function as follows.</p> <pre><code>def accuracy_with_tolerence(y_true,y_pred): &quot;&quot;&quot; y_true/y_pred: batch of samples; (BatchSize, 1) &quot;&quot;&quot; threshold = 5 differnece = tf.abs(tf.subtract(y_true,y_pred)) - threshold boolean_results = [True if i &lt; 0 else False for i in differnece] return K.mean(math_ops.cast(boolean_results, K.floatx())) </code></pre> <p>It can return the correct accuracy value.</p> <pre><code>x = tf.constant([1, 2, 3], dtype=tf.float32) y = tf.constant([5, 8, 10], dtype=tf.float32) acc = accuracy_with_tolerence(x,y) print(acc) </code></pre> <pre><code>tf.Tensor(0.33333334, shape=(), dtype=float32) </code></pre> <p>But when I want to use it in compile, there is an error:</p> <pre><code># Initialize ResNet50 model = resnet50() model.compile(optimizer='adam',loss='mse',metrics=[accuracy_with_tolerence]) model.load_weights(checkpoint_filepath_0) model.evaluate(x_test,y_test) </code></pre> <pre><code>OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature. </code></pre> <p>It seems I cannot iterate the Tensor. So how can I get element-wise boolean comparison results in the metric function? How can I realize this accuracy function?</p> <p>Thank you in advance.</p>
<p>You can't make a list comprehension with a tensor. The operation you're looking for is <a href="https://www.tensorflow.org/api_docs/python/tf/where" rel="nofollow noreferrer"><code>tf.where</code></a> and you can use it as follows (note the condition stays <code>&lt; 0</code>, i.e. &quot;within tolerance&quot;, just like your list comprehension):</p>
<pre><code>def accuracy_with_tolerence(y_true, y_pred):
    threshold = 5
    differnece = tf.abs(tf.subtract(y_true, y_pred)) - threshold
    boolean_results = tf.where(differnece&lt;0, True, False)
    return K.mean(math_ops.cast(boolean_results, K.floatx()))
</code></pre>
<p>Note that you can simplify the code further:</p>
<pre><code>    ...
    boolean_results = tf.where(tf.abs(tf.subtract(y_true, y_pred)) - threshold&lt;0, 1., 0.)
    return K.mean(boolean_results)
</code></pre>
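<p>You can sanity-check the fixed function against the example values from the question (assuming eager execution, as in the original snippet):</p>
<pre><code>x = tf.constant([1, 2, 3], dtype=tf.float32)
y = tf.constant([5, 8, 10], dtype=tf.float32)
print(accuracy_with_tolerence(x, y))
# tf.Tensor(0.33333334, shape=(), dtype=float32)
</code></pre>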
python|tensorflow|keras|customization|metrics
1
2,022
66,306,546
Append array as column to dataframe (or create new dataframe according to other dataframe's date)
<p>First of all, I want to say that there are a lot of similar questions, and I have spent almost 2 days trying to solve my problem using all of those functions, but I couldn't find what I need, even though I believe there's a very simple solution.</p>
<p>Complete code</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

mt = pd.Series([34.678714, 34.087302, 33.857141, 33.250000, 33.124999,
                31.818181, 31.082676, 29.107807, 30.144405],
               index=['2019-12-31', '2020-01-02', '2020-01-03', '2020-01-06',
                      '2020-01-07', '2020-01-08', '2020-01-09', '2020-01-10',
                      '2020-01-13'])

mn = np.array([ 7.76179772, 16.68166719, 23.3037243, 27.30909839, 29.68638615,
               30.56226802, 30.77646665, 30.08922891, 30.11195783])

plt.figure(figsize=(10,6))
print(mt)
print(mn)
mt.plot()
plt.show()
</code></pre>
<p>I get this <img src="https://i.stack.imgur.com/cOGcs.png" alt="plot(1)" /></p>
<p>And my printed results are:</p>
<pre><code>print(mt)
Date
2019-12-31    34.678714
2020-01-02    34.087302
2020-01-03    33.857141
2020-01-06    33.250000
2020-01-07    33.124999
2020-01-08    31.818181
2020-01-09    31.082676
2020-01-10    29.107807
2020-01-13    30.144405

print(mn)
[ 7.76179772 16.68166719 23.3037243  27.30909839 29.68638615 30.56226802
 30.77646665 30.08922891 30.11195783 ... ]
</code></pre>
<p>I need to add the <code>mn</code> array to the <code>mt</code> series and plot them all together with the Date indices of <code>mt</code>, so it can look like this: <strong>(First question: how to merge the series and the array above to make it look like below)</strong></p>
<pre><code> print(mt)
Date          actual      est
2019-12-31    34.678714   7.76179772
2020-01-02    34.087302   16.68166719
2020-01-03    33.857141   23.3037243
2020-01-06    33.250000   27.30909839
2020-01-07    33.124999   29.68638615
2020-01-08    31.818181   30.56226802
2020-01-09    31.082676   30.77646665
2020-01-10    29.107807   30.08922891
2020-01-13    30.144405   30.11195783
</code></pre>
<p><strong>Finally, and more importantly: how can I plot mt (with jumping dates) and mn (without date indices) together into something like <img src="https://i.stack.imgur.com/mt4gc.png" alt="this plot(2)" /> (with the x-axis as dates)</strong></p>
<p>I have used hstack, join, append, insert, add, asarray and a lot of others that I can't even remember. Maybe I used them wrong; I'm open to all kinds of answers, really.</p>
<p>The easiest is to use <code>pd.concat</code> like this:</p> <pre class="lang-py prettyprint-override"><code>mt = pd.Series( [34.678714, 34.087302, 33.857141, 33.250000, 33.124999, 31.818181, 31.082676, 29.107807, 30.144405], index=['2019-12-31', '2020-01-02', '2020-01-03', '2020-01-06', '2020-01-07', '2020-01-08', '2020-01-09', '2020-01-10', '2020-01-13'], name='mt' # Added for next step ) mn = np.array([ 7.76179772, 16.68166719, 23.3037243, 27.30909839, 29.68638615, 30.56226802, 30.77646665, 30.08922891, 30.11195783]) # Combine data: combined_data = pd.concat([ mt, pd.Series(data=mn, index=mt.index, name='mn') ], axis=1) # mt mn # 2019-12-31 34.678714 7.761798 # 2020-01-02 34.087302 16.681667 # 2020-01-03 33.857141 23.303724 # 2020-01-06 33.250000 27.309098 # 2020-01-07 33.124999 29.686386 # 2020-01-08 31.818181 30.562268 # 2020-01-09 31.082676 30.776467 # 2020-01-10 29.107807 30.089229 # 2020-01-13 30.144405 30.111958 # Plot data: combined_data.plot(marker='o', figsize=(12, 4.8)) </code></pre> <p>Which generates this graph:<a href="https://i.stack.imgur.com/yK76A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yK76A.png" alt="enter image description here" /></a></p> <p>An extra tip, you're currently using a string index, even though they represent dates. You can convert it to a <code>pd.DatetimeIndex</code> like this:</p> <pre class="lang-py prettyprint-override"><code>$ combined_data.index = pd.to_datetime(combined_data.index) $ combined_data.index DatetimeIndex(['2019-12-31', '2020-01-02', '2020-01-03', '2020-01-06', '2020-01-07', '2020-01-08', '2020-01-09', '2020-01-10', '2020-01-13'], dtype='datetime64[ns]', freq=None) </code></pre> <p>This function is also extremely helpful <code>pd.date_range</code>:</p> <pre class="lang-py prettyprint-override"><code>$ pd.date_range('2019-12-31', '2020-01-13', freq='1B') # 1B = 1 business day DatetimeIndex(['2019-12-31', '2020-01-01', '2020-01-02', '2020-01-03', '2020-01-06', '2020-01-07', '2020-01-08', '2020-01-09', '2020-01-10', '2020-01-13'], dtype='datetime64[ns]', freq='B') </code></pre>
python|pandas|dataframe|matplotlib|series
0
2,023
52,665,131
How can we create a similarity matrix from a dictionary?
<p>I have a dict as follows:</p>
<pre><code>dic = {a1: [a,b,c], b1: [b,k,l]}
</code></pre>
<p>I want to create a similarity matrix for each key's value list. For example, for key <code>a1</code>, I want to compute similarities between <code>(a,b), (a,c) and (b,c)</code> using some method <code>f</code>, with <code>f((a,a)) = 1</code>. We could do it by creating a vector and indexing its elements by the similarity values of <code>(a,b), (a,c) and (b,c)</code>, then repeating the same procedure for b, i.e. <code>(b,a), (b,b), and (b,c)</code>, and so on. But that is not necessary, since the similarity of <code>(b,a)</code> equals that of <code>(a,b)</code>. So how can I avoid the redundant computations? How can I create such a matrix? The same procedure would then be applied to each key of the dict (i.e. <code>b1</code>, etc.)</p>
<p>If <code>f</code> is expensive and not vectorizable, you could use <code>np.tri</code> and friends along the lines of</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; from operator import itemgetter as iget &gt;&gt;&gt; # set up an example &gt;&gt;&gt; a1, b1 = 'a1', 'b1' &gt;&gt;&gt; a, b, c, k, l = np.random.randint(0, 10, (5, 3)) &gt;&gt;&gt; dic = {a1: [a,b,c], b1: [b,k,l]} &gt;&gt;&gt; f = np.dot &gt;&gt;&gt; # do the computation &gt;&gt;&gt; RES = {} &gt;&gt;&gt; for k, v in dic.items(): ... N = len(v) ... res = np.ones((N, N)) ... I, J = np.triu_indices_from(res, 1) ... res[I, J] = np.fromiter(map(f, iget(*I.tolist())(v), iget(*J.tolist())(v)), float, N*(N-1)//2) ... np.copyto(res, res.T, where=np.tri(*res.shape, -1, bool)) ... RES[k] = res ... # check &gt;&gt;&gt; RES {'a1': array([[ 1., 108., 122.], [108., 1., 120.], [122., 120., 1.]]), 'b1': array([[ 1., 42., 66.], [42., 1., 20.], [66., 20., 1.]])} </code></pre> <p>Instead of <code>map(f, iget(...</code> you could also use <code>itertools.starmap(f, itertools.combinations(v, 2))</code>.</p>
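<p>For the <code>starmap</code> route mentioned above, a plainer sketch of the same loop (assuming, as stated in the question, that <code>f</code> is symmetric; <code>combinations(v, 2)</code> yields pairs in the same order as <code>triu_indices</code>):</p>
<pre><code>import itertools
import numpy as np

RES = {}
for key, v in dic.items():
    N = len(v)
    res = np.ones((N, N))
    I, J = np.triu_indices(N, 1)
    # compute each unordered pair once and mirror it across the diagonal
    vals = list(itertools.starmap(f, itertools.combinations(v, 2)))
    res[I, J] = res[J, I] = vals
    RES[key] = res
</code></pre>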
python|numpy|scipy
1
2,024
46,353,749
How to Union Intersecting Geometries in Same Geopandas Dataframe
<p>I have a dataframe with circles, some of which intersect others. I want to merge those intersecting regions to be new rows in the dataframe, adding the attributes from the intersecting regions. I only see how to use sjoin between two dataframes.</p>
<p><strong>Setup</strong> (the unused <code>urbansim</code> import from the original has been dropped)</p>
<pre><code>import geopandas as gpd, pandas as pd
from shapely.geometry import Point
%matplotlib inline

c1 = Point(1, 0).buffer(1)
c2 = Point(.5, 0).buffer(1)

gdf = gpd.GeoDataFrame(dict(A=[1, 2], B=[3, 4]), geometry=[c1, c2])

gdf.plot()
</code></pre>
<p><a href="https://i.stack.imgur.com/CQNTr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CQNTr.png" alt="enter image description here"></a></p>
<p><strong>Solution</strong><br>
Using <code>reduce</code> from <code>functools</code> </p>
<pre><code>from functools import reduce

intersection = reduce(Point.intersection, gdf.geometry)

summed = gpd.GeoDataFrame(
    gdf.sum().to_frame().T,
    geometry=[intersection]
)

gdf.set_geometry(
    gdf.difference(intersection)
).append(summed, ignore_index=True).plot()
</code></pre>
<p><a href="https://i.stack.imgur.com/fbcda.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fbcda.png" alt="enter image description here"></a></p>
pandas|union|intersect|geopandas
1
2,025
46,627,610
Successfully installed SciPy, but "from scipy.misc import imread" gives ImportError: cannot import name 'imread'
<p>I have successfully installed scipy, numpy, and pillow, however I get error as below</p> <blockquote> <p>ImportError: cannot import name 'imread'</p> </blockquote>
<p><code>imread</code> and <code>imsave</code> are deprecated in scipy.misc (and removed entirely in recent SciPy releases).</p>
<p>Use <code>imageio.imread</code> instead, after <code>import imageio</code>.</p>
<p>For saving, use <code>imageio.imsave</code> or <code>imageio.imwrite</code> instead.</p>
<p>For resizing, use <code>skimage.transform.resize</code> instead, after <code>import skimage</code>.</p>
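<p>A minimal sketch of the replacements (assuming <code>imageio</code> is installed, e.g. via <code>pip install imageio</code>):</p>
<pre><code>import imageio

img = imageio.imread('input.png')    # replaces scipy.misc.imread
imageio.imwrite('output.png', img)   # replaces scipy.misc.imsave
</code></pre>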
tensorflow|scipy|ubuntu-16.04|python-import
0
2,026
58,173,241
text classification with machine learning
<p>I have a data set, with news headlines and the category of that news. I wish I could predict the category of the news by entering only its headline. I need to be able to classify text. Thank you</p>
<p>Your question cannot be answered completely, but I can give you some starting points; you will need to do some research of your own. This tutorial is a good start: <a href="https://machinelearningmastery.com/tutorial-first-neural-network-python-keras/" rel="nofollow noreferrer">link</a></p>
<p>For local development I would suggest Anaconda (for libraries etc.) and Jupyter notebooks. Or you can use Google Colab or a Microsoft Azure notebook for it.</p>
<ul>
<li>Load required libraries</li>
<li>Load data, check and clean your data</li>
<li>Split the dataset into train and test</li>
<li>Convert text to vectors</li>
<li>Train and test the model and make predictions</li>
</ul>
<p>And some code to help (imports added so the snippet runs on its own):</p>
<pre><code>from sklearn import model_selection
from keras.preprocessing.text import Tokenizer

# Split-out validation dataset
X = df_row['tweets'].values
Y = df_row['label'].values

validation_size = 0.20
seed = 7
X_train, X_test, Y_train, Y_test = model_selection.train_test_split(X, Y, test_size=validation_size, random_state=seed)

vocab_size = 1000 # define Tokenizer with Vocab Size

tokenizer = Tokenizer(num_words=vocab_size)
tokenizer.fit_on_texts(X_train)  # X_test and X_train are tweet data (text columns)

X_train = tokenizer.texts_to_matrix(X_train, mode='tfidf')  # X_train is now in vectorized form
</code></pre>
python|tensorflow|machine-learning
0
2,027
69,038,101
Pandas groupBy multiple columns and aggregation
<p>In my dataframe I have 4 columns: col_A, col_B, col_C, col_D. I need to group by the columns (col_A, col_B, col_C) and aggregate the mean of col_D. Below is the code snippet I tried, and it worked:</p>
<p><code>df.groupby(['col_A','col_B','col_C']).agg({'col_D':'mean'}).reset_index()</code></p>
<p>But in addition to the above result, I also require the group-by count of ('col_A','col_B','col_C') along with the aggregation. Any help on this, please?</p>
<p>Using <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#named-aggregation" rel="nofollow noreferrer">Named Aggregation</a>:</p> <pre class="lang-py prettyprint-override"><code>result = ( df.groupby(['col_A', 'col_B', 'col_C'], as_index=False) .agg(mean=('col_D', 'mean'), count=('col_D', 'count')) ) </code></pre> <p>For the <code>count</code> columns, you have 2 choices in choosing the aggregate function:</p> <ul> <li><code>count=('col_D', 'count')</code> will ignore any NaN value in <code>col_D</code></li> <li><code>count=('col_D', 'size')</code> will include NaN values in <code>col_D</code></li> </ul>
python|pandas
2
2,028
44,590,646
Iterating through a dataframe to create PDF documents
<p>I have a worksheet that I have imported as a Pandas dataframe which looks something like this:</p>
<pre><code>FileName  FilePath  Date   PageStart  PageEnd
file1     path1     date1  5          10
file2     path2     date2  20         100
</code></pre>
<p>My goal here is to iterate through the dataframe and create a PDF for each row based on the specified page range. The first row should create a new PDF by pulling pages 5-10 from file1, the second row should build a new PDF by pulling pages 20-100 from file2.</p>
<p>I am having trouble finding a good way to first, iterate through a dataframe and second, create the PDF based on the page range. Is there a way to iterate through a dataframe pretty easily? Is there a module that will create PDFs where I can specify a page range (I have used PyPDF2 in the past with .getPage(), but I don't think that allows a page range, only a single value)?</p>
<p>Edit: I think I found a good way to iterate through the dataframe, but am still searching for a way to build the PDF. Here is my iteration:</p>
<pre><code>i = 0
for row in df.iterrows():
    iteration = df.iloc[i]
    i +=1
</code></pre>
<p><code>itertuples()</code> gives you one row at a time, and <code>PdfFileMerger.append</code> accepts a <code>pages=(start, stop)</code> range, so there is no need for a manual counter:</p>
<pre><code>import os
from PyPDF2 import PdfFileMerger

for row in df.itertuples():
    page_start, page_end = row.PageStart, row.PageEnd
    filename = os.path.join(row.FilePath, row.FileName)
    # substitute your own output-naming scheme here
    output_filename = '{}_{}-{}.pdf'.format(row.FileName, page_start, page_end)
    merger = PdfFileMerger()
    # pages=(start, stop) pulls that page range from the source PDF
    merger.append(filename, pages=(page_start, page_end))
    merger.write(output_filename)
    merger.close()
</code></pre>
python|python-3.x|pandas|pdf|dataframe
1
2,029
44,434,416
Plot basic example of neural network
<p>I am studying a neural network tutorial and made simple perceptron code like the one below.</p>
<p>The purpose is </p>
<ul>
<li>Splitting 20 points into two groups.</li>
</ul>
<p>perceptron.py</p>
<pre><code>import numpy as np
from pprint import pprint
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt

plt.style.use('ggplot')
font = {'family' : 'meiryo'}
matplotlib.rc('font', **font)

rng = np.random.RandomState(123)
d = 2 # dimension
N = 10 # each group items

mean = 5

x1 = rng.randn(N,d) + np.array([0,0]) # group 0
x2 = rng.randn(N,d) + np.array([mean,mean]) # group 1

x = np.concatenate((x1,x2),axis = 0)

##### Plot points
allDf = pd.DataFrame(columns=['x','y'])
k = 0
for i in x:
    print(i[0])
    temp = pd.DataFrame({'x' : i[0], 'y' : i[1]},index=[k])
    k = k + 1
    allDf = pd.concat([allDf,temp])
pprint(allDf)
allDf.plot(kind='scatter',x = 'x',y='y')
#########

#initialize w b
w = np.zeros(d)
b = 0

def y(x):
    return step(np.dot(w,x) + b)

def step(x):
    return 1 * (x &gt; 0)

def t(i):
    if i &lt; N:
        return 0
    else:
        return 1

while True:
    classified = True
    for i in range(N * 2):
        delta_w = (t(i) - y(x[i])) * x[i]
        delta_b = (t(i) - y(x[i]))
        w += delta_w
        b += delta_b
        classified *= all(delta_w == 0 ) * (delta_b == 0)
    if classified:
        print("Final answer")
        pprint(w)
        pprint(b) # I get the answer here but how can I plot this w and b

        X = np.linspace(-2,6,100) # it's wrong!!
        Y = (w[0] * X + w[1] * X) - b # it's wrong!!
        plt.plot(X,Y)
        plt.show()
        break
</code></pre>
<p>This source code gives me the final answer like this </p>
<pre><code>w = array([ 2.14037745,  1.2763927 ])
b = -9
</code></pre>
<p>But how can I plot this?</p>
<p>I want to draw the line between the two groups.</p>
<p>The final graph (line) is supposed to look like this <a href="https://i.stack.imgur.com/UVtmm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UVtmm.png" alt="enter image description here"></a></p>
<p>You can plot the data with <code>scatter</code> and the decision boundary with <code>contour</code>:</p>
<pre><code>xx = np.linspace(-2,10)
yy = np.linspace(-2,10)
[X1,X2] = np.meshgrid(xx,yy)
Y = [t(i) for i in range(len(x))]
Z = (w[0] * X1.ravel() + w[1] * X2.ravel()) + b
plt.scatter(x[:,0], x[:,1], s=20, c=Y, cmap=None, vmin=0, vmax=2)
plt.contour(X1,X2,Z.reshape(X1.shape), levels=[0], cmap='gray')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/TZDnS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TZDnS.png" alt="enter image description here"></a></p>
python|pandas|neural-network|deep-learning|artificial-intelligence
0
2,030
61,113,087
TypeError: 'DataFrame' object cannot be interpreted as an integer in python 3.7
<p>I have a simple question: I am creating a new column in each DataFrame of a list, inside a function. I got this error:</p>
<pre><code>data['datenum'] = np.zeros((data))
TypeError: 'DataFrame' object cannot be interpreted as an integer
</code></pre>
<p>Your argument to np.zeros needs to be an integer. Right now you have data, which you say is a DataFrame. Perhaps you're looking for: </p> <pre><code>data['datenum'] = np.zeros(data.shape[0]) </code></pre> <p>If you have multiple dataframes, you can do the following: </p> <pre><code>for data in dataframes: data['datenum'] = np.zeros(data.shape[0]) </code></pre>
python|python-3.x|pandas|python-2.7
1
2,031
69,821,979
Stop Tensorflow trying to load cuda temporarily
<p>I have this code to disable GPU usage:</p>
<pre><code>import numpy as np
import os
os.environ[&quot;CUDA_VISIBLE_DEVICES&quot;] = &quot;-1&quot;
import tensorflow as tf

w = tf.Variable(
    [
     [1.],
     [2.]
    ])
</code></pre>
<p>I still get this output, and I'm not sure why:</p>
<pre><code>E:\MyTFProject\venv\Scripts\python.exe E:/MyTFProject/tfvariable.py
2021-11-03 14:09:16.971644: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-11-03 14:09:16.971644: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-11-03 14:09:19.563793: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2021-11-03 14:09:19.566793: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: newtonpc
2021-11-03 14:09:19.567793: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: mypc
2021-11-03 14:09:19.567793: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
</code></pre>
<p>TF Version: '2.6.1'.
I am not able to stop it from loading the CUDA DLLs. I don't want to set up CUDA right now; maybe later.</p>
<p>I am using the latest PyCharm and installed tensorflow with pip, as given on the site.</p>
<p>You can try to reinstall tensorflow with the CPU-only version. The links are available here, depending on your OS and your Python version: <a href="https://www.tensorflow.org/install/pip?hl=fr#windows_1" rel="nofollow noreferrer">https://www.tensorflow.org/install/pip?hl=fr#windows_1</a></p>
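<p>A minimal sketch of the swap, assuming a pip-managed environment (<code>tensorflow-cpu</code> is the official CPU-only package name):</p>
<pre><code>pip uninstall tensorflow
pip install tensorflow-cpu
</code></pre>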
tensorflow
0
2,032
69,778,354
Pandas mistake when reading date from excel file
<p>Pandas misreads dates from an Excel file. I am creating a dataframe using the following command:</p>
<pre><code>df = pd.read_excel(&quot;report_file.xls&quot;, parse_dates=['operation_date'])

df.dtypes
operation_date            datetime64[ns]
</code></pre>
<p>Everything looks good. But when analyzing the dataframe, a problem was found: whenever the day number could also be a valid month number, pandas swaps the day and the month. For example, in the October data it looks like this:</p>
<pre><code>45  2021-10-13 11:50:34  ...    329.97
46  2021-10-13 11:41:56  ...    323.50
47  2021-10-13 11:41:55  ...   2600.00
48  2021-10-10 02:05:13  ...   1479.45
49  2021-09-10 20:22:01  ...     40.00
50  2021-09-10 19:39:39  ...     42.64
51  2021-09-10 19:39:39  ...    350.00
52  2021-06-10 20:11:48  ...     20.00
53  2021-06-10 13:34:25  ...      1.96
</code></pre>
<p>You can see that after 2021-10-10, the day number appears in place of the month.</p>
<p>Try passing the date format explicitly, something like this (note <code>%H</code> for the 24-hour clock, since the sample times go past 12:59):</p>
<pre><code>pd.read_excel(
    &quot;report_file.xls&quot;,
    parse_dates=['operation_date'],
    date_parser=lambda x: pd.to_datetime(x, format='%Y-%m-%d %H:%M:%S')
)
</code></pre>
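<p>If the column actually arrives as day-first strings (which the swapped values suggest), an alternative sketch is to read the column as-is and convert it explicitly with <code>dayfirst</code>:</p>
<pre><code>df = pd.read_excel(&quot;report_file.xls&quot;)
df['operation_date'] = pd.to_datetime(df['operation_date'], dayfirst=True)
</code></pre>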
python|excel|pandas|datetime64
0
2,033
69,933,833
Calculate the mean of a list inside of a Nested Dictionary
<p>I have a nested dictionary that I transformed into a pickle file. The pickle file can be found <a href="https://github.com/joaodavidfreitas/sistemas_inteligentes/blob/main/map_results_LSTM_acoes_variandotest.pickle" rel="nofollow noreferrer">here</a>. Opening the pickle file looks like this:</p>
<pre><code>import pickle

resultados_acoes_testvar = pickle.load(open('map_results_modelo_acoes_variandotest.pickle', 'rb'))
</code></pre>
<p>The file is a dictionary with this structure:</p>
<pre><code>{'amat': {'Test_Size_100': {'raw_0': array([1.39838652e+02, 1.42292998e+02, 1.45314363e+02, 1.49162546e+02....)]}}}
</code></pre>
<p>Here &quot;amat&quot; is the name of the dataset (there are 9 datasets in the dict), test_size is the length of my prediction (a time-series prediction), raw is the model (there are 6 models in the dict), and the _0 is the run index; I ran each model 10 times (0 to 9).</p>
<p>I would like to get, for each test_size and each model, one time series containing the element-wise mean over the ten runs of that model.</p>
<p>I'm trying it this way:</p>
<pre><code>resultado = {}
lista_modelos = ['raw','difference', 'logaritmica', 'box_cox', 'mas', 'pct']

for acao in resultados_acoes_testvar_transformadas.keys():
    resultado[acao] = {}
    for testsize in resultados_acoes_testvar_transformadas[acao].keys():
        resultado[acao][testsize] = {}
        for values in resultados_acoes_testvar_transformadas[acao][testsize].keys():
            for prefix in lista_modelos:
                resultado[acao][testsize][prefix] = []
                for a, b, c, d, e, f, g, h, i, j in zip(values[prefix + '_0'],values[prefix+'_1'],values[prefix+'_2'],values[prefix+'_3'],values[prefix+'_4'],values[prefix+'_5'],values[prefix+'_6'],values[prefix+'_7'],values[prefix+'_8'],values[prefix+'_9']):
                    mean = float((a+b+c+d+e+f+g+h+i+j)/10)
                    resultado[acao][testsize][prefix].append(mean)
</code></pre>
<p>But I'm getting an error:</p>
<pre><code>TypeError                                 Traceback (most recent call last)
&lt;ipython-input-59-80ef15c9251e&gt; in &lt;module&gt;
     19
     20
---&gt; 21             for a, b, c, d, e, f, g, h, i, j in zip(values[prefix + '_0'],values[prefix+'_1'],values[prefix+'_2'],values[prefix+'_3'],values[prefix+'_4'],values[prefix+'_5'],values[prefix+'_6'],values[prefix+'_7'],values[prefix+'_8'],values[prefix+'_9']):
     22
     23                 mean = float((a+b+c+d+e+f+g+h+i+j)/10)

TypeError: string indices must be integers
</code></pre>
<p>Thanks for your help.</p>
<p>The problem is that <code>values</code> is a string, not the dictionary you are trying to use it as. <code>keys()</code> returns the keys, which are strings, so <code>values[prefix + '_0']</code> ends up indexing a string with a string. I suggest you use <code>items()</code> instead, to get the key/value pairs of the dictionary you are iterating. This will also let you avoid the long indexing syntax from the root data structure.</p>
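<p>A minimal sketch of the fixed loops using <code>items()</code> (assuming, per the question, that each inner dict maps names like <code>'raw_0'</code> … <code>'raw_9'</code> to equally long arrays):</p>
<pre><code>import numpy as np

resultado = {}
lista_modelos = ['raw', 'difference', 'logaritmica', 'box_cox', 'mas', 'pct']

for acao, by_testsize in resultados_acoes_testvar_transformadas.items():
    resultado[acao] = {}
    for testsize, runs in by_testsize.items():
        resultado[acao][testsize] = {}
        for prefix in lista_modelos:
            # stack the ten runs row-wise and average element-wise
            stacked = np.vstack([runs['%s_%d' % (prefix, i)] for i in range(10)])
            resultado[acao][testsize][prefix] = stacked.mean(axis=0).tolist()
</code></pre>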
python|numpy|dictionary|time-series|mean
1
2,034
43,132,792
matplotlib unexpected results polar plot
<p>I am trying to plot the simple function r = 3*sin(2*theta) using matplotlib:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt

theta = np.arange(0,2*np.pi,0.01)
r = 3.0*np.sin(2.0*theta)

ax = plt.subplot(111, projection='polar')
ax.plot(theta, r)
plt.show()
</code></pre>
<p>This is the result I get (it is not correct):</p>
<p><img src="https://i.stack.imgur.com/6Kp6D.png" alt="matplotlib"></p>
<p>This is what I expect to see (Wolfram Alpha): <img src="https://i.stack.imgur.com/vhqDo.png" alt="wolfram alpha"></p>
<p>Am I missing something? Thanks!</p>
<p>This patches the polar plot for negative r:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

theta = np.arange(0,2*np.pi,0.01)
r = 3.0*np.sin(2.0*theta)

theta = theta + (1 - np.sign(r))*np.pi/2  # add pi to points with negative r values
r = np.abs(r)  # make all r values positive to fake out matplotlib

ax = plt.subplot(111, projection='polar')
ax.plot(theta, r)
plt.show()
</code></pre>
python|numpy|matplotlib
1
2,035
72,256,427
Determine if geopandas point is in generic polygon with holes
<p>This thread <a href="https://stackoverflow.com/questions/48097742/geopandas-point-in-polygon">here</a> gave a solution for determining whether a <code>geopandas</code> <code>POINT</code> is in a solid <code>POLYGON</code>.</p>
<p>What would be a generic solution to determine this for a <code>POLYGON</code> with holes, or a <code>MULTIPOLYGON</code>?</p>
<p>For e.g., using <code>foo</code> below:</p>
<pre><code>from shapely.geometry import Point, Polygon
import geopandas

polys = geopandas.GeoSeries({
    'foo': Polygon([(5, 5), (5, 13), (13, 13), (13, 5)],
                   [[(7, 7), (7, 11), (11, 11), (11, 7)]]),
    'bar': Polygon([(10, 10), (10, 15), (15, 15), (15, 10)]),
})

_pnts = [Point(3, 3), Point(8, 8), Point(11, 11)]
pnts = geopandas.GeoDataFrame(geometry=_pnts, index=['A', 'B', 'C'])
</code></pre>
<p>Strictly speaking, the samples you have provided are polygons; the geometry of <code>foo</code> simply contains a hole.</p>
<p>It's pretty straightforward to test: just use <strong>convex_hull</strong> for the filled shape. The code below does both tests.</p>
<pre><code>pnts.assign(
    **{
        **{key: pnts.within(geom) for key, geom in polys.items()},
        **{key+&quot;_filled&quot;: pnts.within(geom.convex_hull) for key, geom in polys.items()},
    }
)
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: left;">geometry</th>
<th style="text-align: left;">foo</th>
<th style="text-align: left;">bar</th>
<th style="text-align: left;">foo_filled</th>
<th style="text-align: left;">bar_filled</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">A</td>
<td style="text-align: left;">POINT (3 3)</td>
<td style="text-align: left;">False</td>
<td style="text-align: left;">False</td>
<td style="text-align: left;">False</td>
<td style="text-align: left;">False</td>
</tr>
<tr>
<td style="text-align: left;">B</td>
<td style="text-align: left;">POINT (8 8)</td>
<td style="text-align: left;">False</td>
<td style="text-align: left;">False</td>
<td style="text-align: left;">True</td>
<td style="text-align: left;">False</td>
</tr>
<tr>
<td style="text-align: left;">C</td>
<td style="text-align: left;">POINT (11 11)</td>
<td style="text-align: left;">False</td>
<td style="text-align: left;">True</td>
<td style="text-align: left;">True</td>
<td style="text-align: left;">True</td>
</tr>
</tbody>
</table>
</div>
python|pandas|geopandas|point-in-polygon
0
2,036
72,360,913
Is there a non-looping way to perform text searching in a data frame
<p>I have a huge list of ngrams to search. I want to know how frequent they are in my historical dataframe, as well as the mean of a numeric variable in that historical data. I have a really, really ugly way of doing it (that works), but as the list of ngrams is huge, it's really slow.</p>
<p>I am trying to avoid the loop, which I guess is the main reason for my speed problem, but I don't see how I can do it.</p>
<p>Any idea?</p>
<pre><code>output = pd.DataFrame()
ngrams = ['ngram1', 'ngram2', 'ngram3', ..., 'ngram350000']

for i in list(ngrams):
    temp = pd.DataFrame(data={'ngram' : [i],
                              'count' : historic_df['text_variable'].str.contains(i, na=False).sum(),
                              'mean' : historic_df[historic_df['text_variable'].str.contains(i, na=False)]['numeric_variable'].mean()})
    output = pd.concat([output, temp], axis=0)
</code></pre>
<p>Try <code>Series.apply()</code>, which builds the result in one pass instead of concatenating inside the loop:</p>
<pre class="lang-python prettyprint-override"><code>def func(ngram):
    # one boolean mask per ngram, reused for both the count and the mean
    mask = historic_df['text_variable'].str.contains(ngram, na=False)
    return pd.Series({'ngram': ngram,
                      'count': mask.sum(),
                      'mean': historic_df.loc[mask, 'numeric_variable'].mean()})

ngrams = pd.Series(['ngram1', 'ngram2', 'ngram3', ..., 'ngram350000'])
output = ngrams.apply(func)
</code></pre>
python|pandas|n-gram
0
2,037
72,146,783
Groupby id and change values for all rows for the earliest date to NaN
<p>I have the following df. I would like to group by ID and then, for each ID, replace the value of <code>X</code> with <code>NaN</code> on all rows with the earliest date. My current df:</p>
<pre><code> ID   Date     X      other variables..
  1   1/1/18   0.118758835
  1   1/1/18   0.148103273
  1   1/1/18   0.365541214
  1   1/2/18   0.405002687
  1   1/2/18   0.130580643
  1   1/2/18   0.395113106
  2   1/1/18   0.425580038
  2   1/1/18   0.889677796
  2   1/1/18   0.835311629
  2   1/2/18   0.8375818
  2   1/2/18   0.648162611
  2   1/2/18   0.639060695
</code></pre>
<p>Desired output:</p>
<pre><code> ID   Date     X      other variables..
  1   1/1/18   NaN
  1   1/1/18   NaN
  1   1/1/18   NaN
  1   1/2/18   0.405002687
  1   1/2/18   0.130580643
  1   1/2/18   0.395113106
  2   1/1/18   NaN
  2   1/1/18   NaN
  2   1/1/18   NaN
  2   1/2/18   0.8375818
  2   1/2/18   0.648162611
  2   1/2/18   0.639060695
</code></pre>
<p>You can call <code>min</code> in <code>groupby.transform</code> to get the earliest dates for each ID; then compare it with &quot;Date&quot; to get a boolean mask; finally use the mask to <code>mask</code> earliest &quot;X&quot;s:</p> <pre class="lang-py prettyprint-override"><code>df['X'] = df['X'].mask(df.groupby('ID')['Date'].transform('min').eq(df['Date'])) </code></pre> <p>Output:</p> <pre class="lang-py prettyprint-override"><code> ID Date X 0 1 1/1/18 NaN 1 1 1/1/18 NaN 2 1 1/1/18 NaN 3 1 1/2/18 0.405003 4 1 1/2/18 0.130581 5 1 1/2/18 0.395113 6 2 1/1/18 NaN 7 2 1/1/18 NaN 8 2 1/1/18 NaN 9 2 1/2/18 0.837582 10 2 1/2/18 0.648163 11 2 1/2/18 0.639061 </code></pre>
python|pandas|dataframe|pandas-groupby
1
2,038
50,291,083
how to parse selected values from nested json using pandas
<p>I am trying to parse only selected elements from nested JSON.</p>
<p>Below is my JSON file:</p>
<pre><code>{
  "creation-date": "Fri Mar 23 07:03:31 UTC 2018",
  "scan-with-high-privileges": true,
  "system-infos": {
    "hostname": "vmDiscovery",
    "domain": "aw4gb5ukuefulow5njy3bfktkc.rx.internal.cloudapp.net",
    "os": "",
    "os-details": {
      "kernel-version": "Linux vmDiscovery 3.10.0-693.17.1.el7.x86_64 #1 SMP Sun Jan 14 10:36:03 EST 2018 x86_64 x86_64 x86_64 GNU/Linux",
      "lsb-id": "",
      "lsb-version-compliance": "",
      "lsb-description": "",
      "lsb-release": "",
      "lsb-codename": ""
    }
  }
}
</code></pre>
<p>I am trying to access only hostname and domain from system-infos. I only read the JSON file from my local machine; I do not want to insert the complete file.</p>
<p>Below is the code I tried:</p>
<pre><code>import pandas as pd
import json
from pandas.io.json import json_normalize

with open("C:\\Users\\esrilka\\Documents\\jsonFiles\\jsonFiles\\Mynew.json") as fi:
    d = json.load(fi)

works_data3=pd.DataFrame(data=d['system-infos'],columns=['hostname','domain'])
</code></pre>
<p>I get an error telling me to pass an index as well.</p>
<p>Expected output is <a href="https://i.stack.imgur.com/y7f3K.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>I'm not really sure why you want to get this into pandas. You only have two columns with one value for each of them.</p> <p>Also, the pandas json loader isn't really designed to get data from your ad hoc JSON files, but to load more regular ones.</p> <p>I would extract the data I wanted and load that into pandas if you really need to. Basically this just makes a new dictionary using a dictionary comprehension to load into pandas with the two fields you are interested in, then loads that.</p> <pre><code>import json import pandas as pd with open('main.json', 'rb') as f: s = f.read() d = json.loads(s) d = { k: [d['system-infos'][k]] for k in ['hostname', 'domain'] } df = pd.DataFrame(d) print(df) </code></pre>
python|pandas
0
2,039
50,457,074
How to polyfit() an OpenCV calcHist() histogram?
<p>I have something like this:</p> <pre><code>import numpy as np import cv2 as cv from matplotlib import pyplot as plt import numpy.polynomial.polynomial as poly img = cv.imread('SomeImage.jpg') color = ('b','g','r') for i,col in enumerate(color): histr = cv.calcHist([img],[i],None,[32],[0,256]) plt.plot(histr,color = col) plt.xlim([0,32]) x = np.linspace(0,histr.shape[0],1); # &lt;== ERROR HERE poly.polyfit(x, histr, 4) </code></pre> <p>I get the following error:</p> <p><code>File "/Users/case/anaconda2/lib/python2.7/site-packages/numpy/polynomial/polynomial.py", line 1438, in polyfit</code> <code>raise TypeError("expected x and y to have same length")</code> <code>TypeError: expected x and y to have same length</code></p> <p>I'm pretty new to this, but seems I'm missing something simple?</p>
<p>It looks like a minor mistake in the arguments when calling <code>np.linspace</code>. The correct signature is</p>
<pre><code>x = np.linspace(interval_start, interval_end, number_of_points)
</code></pre>
<p>so in your case, that would be</p>
<pre><code>x = np.linspace(0, 1, histr.shape[0])
</code></pre>
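<p>Putting it together inside the loop, a hedged sketch using bin indices as x (any equally spaced x, such as the linspace above, works the same way; <code>calcHist</code> returns a 32×1 column vector, hence the <code>ravel()</code> so x and y have the same length):</p>
<pre><code>histr = cv.calcHist([img], [i], None, [32], [0, 256]).ravel()
x = np.arange(histr.shape[0])        # one x per histogram bin
coefs = poly.polyfit(x, histr, 4)    # degree-4 fit
plt.plot(x, poly.polyval(x, coefs), color=col, linestyle='--')
</code></pre>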
python|numpy|opencv
1
2,040
50,653,562
Why does "tf.constant(tf.random_normal((10, 4)))" cause an error?
<p>In the following code, "a" works perfectly fine, and "c" also works. But "b" causes an error. Could someone explain the reason?</p> <pre><code>#!/usr/bin/python import tensorflow as tf import numpy as np a = tf.Variable(tf.random_normal((10, 4))) b = tf.constant(tf.random_normal((10, 4))) c = tf.constant(np.random.randn(10, 4)) </code></pre>
<p>I am also new to TensorFlow. I believe there is something wrong with your argument type. According to the TensorFlow API, you should feed a constant value or a list of values to <code>tf.constant()</code>. However, in your code, before you initialize the variables and run the session, <code>tf.random_normal()</code> is something like a placeholder without a concrete value yet. You can try running the code below. I am not sure if I fully understand this problem, and I would like to discuss it with you.</p>
<pre><code>import tensorflow as tf
a = tf.random_normal((10, 4))
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    b = tf.constant(sess.run(a))
    print(sess.run(b))
</code></pre>
python|tensorflow|initialization
1
2,041
45,374,905
higher precision in python
<p>I am running some <code>python</code> v3.3.2 scripts that use <code>numpy</code>, <code>scipy</code> and <code>math</code>. I suspect that there is an issue of numerical precision in my computation, and I would like to increase the precision in some particular modules that I have written and see if it makes a difference in the final result. In my module I am using some algebraic manipulations and <code>numpy.sqrt</code>.</p>
<p>How do I manipulate the precision of the computations in a module that is already written by me? (I can modify it.) I have seen that there are several modules available, like <code>decimal</code>, <code>mpmath</code> and <code>bigfloat</code>, and I was trying to figure out from the documentation which one would be more suitable for my task. The first one is already installed, and I would have to install the other two. Ideally I would like to write a command at the top of the module specifying the precision that I need in that module; does something like that exist?</p>
<p>EDIT ---------</p>
<p>I think the problem may come from the computation of the second derivative:</p>
<pre><code>def secondderivative(x,y):
    xl = np.roll(x,1)  # x1
    xr = np.roll(x,-1) # x3
    yl = np.roll(y,1)
    yr = np.roll(y,-1)
    ind = np.where(np.isnan(y) | np.isnan(yl) | np.isnan(yr) )[0]
    deriv2 = (2 * yl / ((x - xl) * (xr - xl)) -
              2 * y  / ((xr - x) * (x - xl)) +
              2 * yr / ((xr - xl) * (xr - x)))
    for i in ind:
        deriv2[i] = np.nan
    deriv2[0] = np.nan
    deriv2[len(deriv2)-1] = np.nan
    return deriv2
</code></pre>
<p>I have seen that the result from gradient is completely different:</p>
<pre><code>np.gradient(np.gradient(y,x),x)
</code></pre>
<p>When your code is based on numpy/scipy and co., you can only use the types supported by these libs. Here is the <a href="https://docs.scipy.org/doc/numpy/user/basics.types.html" rel="nofollow noreferrer">overview</a>.</p>
<p>The paragraph <a href="https://docs.scipy.org/doc/numpy/user/basics.types.html#extended-precision" rel="nofollow noreferrer">Extended precision</a> will be relevant for you.</p>
<p>Combining numpy/scipy with decimal, mpmath and co. would need a lot of work (in the general case)!</p>
<p>It would have been much wiser to show some code, so that one could guess what's going on. Even with limited precision, there are techniques which make a difference: e.g. <a href="https://en.wikipedia.org/wiki/Iterative_refinement" rel="nofollow noreferrer">iterative refinement in solving linear systems</a>.</p>
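<p>For the parts of your module that are plain scalar algebra (not numpy array code), a minimal <code>mpmath</code> sketch looks like this; <code>mp.dps</code> sets the working precision in decimal digits:</p>
<pre><code>from mpmath import mp, mpf, sqrt

mp.dps = 50           # 50 significant decimal digits
print(sqrt(mpf(2)))   # 1.4142135623730950488016887242096980785696718753769
</code></pre>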
python-3.x|numpy|scipy|precision|scientific-computing
2
2,042
62,605,998
fill column with value of a column from another dataframe, depending on conditions
<p>I have a dataframe that looks like this (my input database on COVID cases):</p>
<p>data:</p>
<pre><code>        date state  cases
0   20200625    NY    300
1   20200625    CA    250
2   20200625    TX    200
3   20200625    FL    100
5   20200624    NY    290
6   20200624    CA    240
7   20200624    TX    100
8   20200624    FL     80
...
</code></pre>
<p>It is worth noting that the &quot;date&quot; column in the above data is a number (not datetime).</p>
<p>I want to make it a timeseries like this (desired output), with dates as index and each state's COVID cases as columns:</p>
<pre><code>              NY    CA    TX   FL
20200625     300   250   200  100
20200626     290   240   100   80
...
</code></pre>
<p>So far I have managed to create only the skeleton of the output, with the following code:</p>
<pre><code>states = ['NY', 'CA', 'TX', 'FL']
days = [20200625, 20200626]

columns = states
positives = pd.DataFrame(columns = columns)

i = 0
for day in days:
    positives.loc[i, &quot;date&quot;] = day
    i = i +1

positives.set_index('date', inplace=True)
positives= positives.rename_axis(None)
print(positives)
</code></pre>
<p>which returns:</p>
<pre><code>             NY   CA   TX   FL
20200625.0  NaN  NaN  NaN  NaN
20200626.0  NaN  NaN  NaN  NaN
</code></pre>
<p>How can I get from the &quot;data&quot; dataframe the value of column &quot;cases&quot; when:</p>
<p>(i) the value in data[&quot;state&quot;] = column header of &quot;positives&quot;,</p>
<p>(ii) the value in data[&quot;date&quot;] = row index of &quot;positives&quot;</p>
<p>You can do:</p>
<pre><code>df = df.set_index(['date', 'state']).unstack().reset_index()

# fix column names
df.columns = df.columns.get_level_values(1)

state        CA     FL     NY     TX
0  20200624  240.0    NaN  290.0    NaN
1  20200625  250.0  100.0  300.0  200.0
</code></pre>
<p>Later, to set the index again, we need to set its name explicitly:</p>
<pre><code>df = df.set_index(&quot;&quot;)
df.index.name = &quot;date&quot;
</code></pre>
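<p>For reference, a sketch of an equivalent one-liner that performs the same reshape with pandas' <code>pivot</code>:</p>
<pre><code>df.pivot(index='date', columns='state', values='cases')
</code></pre>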
python|pandas|numpy
4
2,043
62,488,554
Pandas shows data in a wrong diagram
<p>I have two functions which both create a diagram. But when I run those two functions, the second one contains data which should only be in the first one. Here are the diagrams:<a href="https://i.stack.imgur.com/u3oII.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u3oII.jpg" alt="enter image description here" /></a></p>
<p>This diagram shows the temperature <a href="https://i.stack.imgur.com/k7Sun.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k7Sun.jpg" alt="enter image description here" /></a></p>
<p>And this one should only show the humidity data, not both the humidity and the temperature data. Here is my source code:</p>
<pre><code>from pandas import DataFrame
import sqlite3
import matplotlib.pyplot as plt
import pandas as pd
from datetime import date, datetime

datum = str(date.today())
date = [datum]

con = sqlite3.connect(&quot;/home/pi/test2.db&quot;)

sql = &quot;SELECT * from data4 WHERE date in (?)&quot;

df3 = pd.read_sql_query(sql,con, params=[datum])

def daily_hum():
    df3 = pd.read_sql_query(sql,con, params=[datum])
    df3['datetime'] = pd.to_datetime((df3.date + ' ' + df3.time))
    df3.groupby([df3.datetime]).hum.mean().plot()
    plt.savefig('/home/pi/flask/static/daily_hum.jpg')

def daily_temp1():
    df4 = pd.read_sql_query(sql,con, params=[datum])
    df4['datetime'] = pd.to_datetime((df4.date + ' ' + df4.time))
    df4.groupby([df4.datetime]).temp.mean().plot()
    plt.savefig('/home/pi/flask/static/daily_temp.jpg')

daily_temp1()
daily_hum()
</code></pre>
<p>The database / the DataFrame looks like this:</p>
<pre><code>id,hum,temp,zeit,date
721,60,21,11:04:23,2020-06-21
722,64,22,11:04:24,2020-06-21
723,68,22,11:04:27,2020-06-21
724,70,22,11:07:20,2020-06-21
725,63,22,11:08:20,2020-06-21
726,63,22,11:09:21,2020-06-21
727,63,22,11:10:22,2020-06-21
728,63,22,11:11:22,2020-06-21
729,69,22,11:12:24,2020-06-21
730,64,22,11:13:29,2020-06-21
731,70,22,11:14:32,2020-06-21
732,64,22,11:15:33,2020-06-21
733,64,22,11:16:34,2020-06-21
734,64,22,11:17:34,2020-06-21
735,64,22,11:18:35,2020-06-21
736,64,22,11:19:35,2020-06-21
737,64,22,11:20:36,2020-06-21
738,64,22,11:21:37,2020-06-21
739,64,22,11:22:37,2020-06-21
740,64,22,11:23:38,2020-06-21
741,65,22,11:24:38,2020-06-21
742,65,22,11:25:39,2020-06-21
743,65,22,11:26:40,2020-06-21
744,65,22,11:27:40,2020-06-21
</code></pre>
<p>I hope you can help me</p>
<p>You could try this. Matplotlib needs to know whether you want a <em>new figure</em> for each plot or not.</p>
<pre class="lang-py prettyprint-override"><code>from pandas import DataFrame
import sqlite3
import matplotlib.pyplot as plt
import pandas as pd
from datetime import date, datetime

datum = str(date.today())
date = [datum]

con = sqlite3.connect(&quot;/home/pi/test2.db&quot;)

sql = &quot;SELECT * from data4 WHERE date in (?)&quot;

df3 = pd.read_sql_query(sql,con, params=[datum])
df3['datetime'] = pd.to_datetime((df3.date + ' ' + df3.time))

# new figure
fig, ax = plt.subplots()
# Some figure modifying code
fig.suptitle('Titel of Figure')
ax.set_xlabel('X-Label')
ax.set_ylabel('Y-Label')

df3.groupby([df3.datetime]).hum.mean().plot(ax=ax)
plt.savefig('/home/pi/flask/static/daily_hum.jpg')

# new figure
fig, ax = plt.subplots()
# Some figure modifying code
fig.suptitle('Titel of Figure')
ax.set_xlabel('X-Label')
ax.set_ylabel('Y-Label')

df3.groupby([df3.datetime]).temp.mean().plot(ax=ax)
plt.savefig('/home/pi/flask/static/daily_temp.jpg')
</code></pre>
python|pandas|matplotlib
1
2,044
54,291,617
Vectorizing array access from indices matrix
<p>Consider the following:</p> <pre><code>In [51]: arr = np.arange(6, 10) In [52]: idx = np.random.randint(4, size=(3, 4)) In [53]: idx Out[53]: array([[0, 3, 3, 1], [1, 3, 3, 2], [1, 1, 1, 1]]) In [54]: result = np.empty_like(idx) In [55]: for i in range(idx.shape[0]): ...: result[i] = arr[idx[i]] ...: In [56]: result Out[56]: array([[6, 9, 9, 7], [7, 9, 9, 8], [7, 7, 7, 7]]) </code></pre> <p>How can I vectorize the <code>for</code> loop? I couldn't find a way accessing a 1D array "multiple times" via indices matrix where each row is an index array.</p>
<p>As noted in the comments, you can simply index into the array <code>arr</code> using the <code>idx</code> array.</p> <pre><code>In [47]: arr Out[47]: array([6, 7, 8, 9]) In [48]: idx Out[48]: array([[3, 2, 2, 0], [0, 3, 2, 3], [3, 2, 2, 3]]) In [49]: arr[idx] Out[49]: array([[9, 8, 8, 6], [6, 9, 8, 9], [9, 8, 8, 9]]) </code></pre> <hr> <p>If you want an approach that is less magical and more enlightening, then the below one would be more helpful.</p> <pre><code># flatten the `idx` array; index into `arr`; then reshape to `idx's` shape. In [50]: arr[idx.ravel()].reshape(idx.shape) Out[50]: array([[9, 8, 8, 6], [6, 9, 8, 9], [9, 8, 8, 9]]) </code></pre>
python|numpy|multidimensional-array|vectorization|matrix-indexing
0
2,045
54,362,961
Concatenating, sorting, and re-partitioning xyz data
<p>I have a situation where I have two lists of [x, y, z] data. I want to concatenate these lists, sort them, and then extract a matrix of the z values, with x increasing along the columns and y increasing along the rows. </p>
<p>To give an example:</p>
<pre><code>list1 = np.linspace(-2,2,3)
list2 = np.linspace(-1,1,3)

dat1 = []
for x in list1:
    for y in list1:
        z = x * y
        dat1 += [[x,y,z]]
dat1 = np.array(dat1)

dat2 = []
for x in list2:
    for y in list2:
        z = x * y
        dat2 += [[x,y,z]]
dat2 = np.array(dat2)
</code></pre>
<p>I can build an array from the z values for each of these lists individually using:</p>
<pre><code>dat1[:, 2].reshape((list1.shape[0],list1.shape[0]))
</code></pre>
<p>but I want an (ordered) array for all values from both lists, i.e. I want to do the same thing with the full sorted data set:</p>
<pre><code>dat_full=np.vstack((dat1, dat2))
dat_index = np.lexsort((dat_full[:,1], dat_full[:,0]))
dat_sorted = dat_full[dat_index]
</code></pre>
<p>The problem is that this is not a square array anymore, so I can't use the simple reshape trick I used previously. Is there a good way to do this?</p>
<p><strong>Edit:</strong></p>
<p>I should clarify that I am only interested in the unique rows of the concatenated array, which can be found using:</p>
<pre><code>dat_full=np.unique(np.vstack((dat1, dat2)), axis=0)
dat_index = np.lexsort((dat_full[:,1], dat_full[:,0]))
dat_sorted = dat_full[dat_index]
</code></pre>
<p>My approach would be </p>
<pre><code>result = []
_, occurences = np.unique(dat_sorted[:,0], return_inverse=True)
for i in range(np.max(occurences) + 1):
    result.append(dat_sorted[occurences == i, 2])
</code></pre>
<p>This will give you an x-value-ordered list of y-value-ordered arrays of z values. This is not a matrix, because some x values occur more often than others, resulting in different-sized arrays.</p>
python|numpy|sorting|multidimensional-array|data-structures
2
2,046
54,413,499
How to use the black/white image as the input to tensorflow
<p>When implementing the reinforcement learning with tensorflow, the inputs are black/white images. Each pixel can be represented as a bit 1/0.</p> <p>Can I give the data directly to tensorflow, with each bit as a feature? Or I had to expand the bits to bytes before sending to tensorflow? I'm new to tensorflow, so some code example would be nice.</p> <p>Thanks</p>
<p>You can directly load the image data as you normally would; the image being binary has no effect other than the input channel width becoming 1.</p>
<p>Whenever you put an image through a convnet, each output filter generally learns features across all input channels, so in the case of a binary image there is a separate 2D kernel defined for each input channel / output channel combination (here just one input channel) in the first layer.</p>
<p>Each layer is defined by its <code>number of filters</code>, and there exists a 2D kernel for each input channel / output channel pair, so you will have weights/parameters equal to <code>input_channels * number_of_filters * filter_dims</code>; for the first layer here, <code>input_channels</code> is one.</p>
<p>Since you asked for some sample code: let your image batch be in a tensor X, then simply use</p>
<p><code>X_out = tf.layers.conv2d(X, filters=6, kernel_size=[height, width])</code></p>
<p>After that you can apply an activation; this will make your output have 6 channels. If you face any problem or have some doubts, feel free to comment. For theoretical clarification, check out <a href="https://www.coursera.org/learn/convolutional-neural-networks/lecture/nsiuW/one-layer-of-a-convolutional-network" rel="nofollow noreferrer">https://www.coursera.org/learn/convolutional-neural-networks/lecture/nsiuW/one-layer-of-a-convolutional-network</a></p>
<p><strong>Edit</strong> </p>
<p>Since the question was about a simple neural net, not a conv net, here is the code for that.</p>
<p><code>X_train_orig</code> is the variable in which the images are stored at (n_x, n_x) resolution; n_x is used later.</p>
<p>You will need to flatten the input.</p>
<p><code>X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T</code> This first flattens each image and then transposes, so that each column is one example.</p>
<p>Then you will create a placeholder tensor <code>X</code> as: </p>
<p><code>X = tf.placeholder(tf.float32, [n_x*n_x, None])</code> # the first dimension matches your input layer; cast the bits to floats before feeding, since matmul does not work on booleans</p>
<p>Let <code>W, b</code> be weight and bias respectively.</p>
<pre><code>Z1 = tf.add(tf.matmul(W1,X),b1)  # Linear transformation step
A1 = tf.nn.relu(Z1)              # Activation step
</code></pre>
<p>And you keep on building your graph from there. I think that answers your question; if not, let me know.</p>
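<p>A minimal runnable sketch of the first idea, in the era-appropriate TF1-style API (the 28×28 shape and the random binary image are assumptions for illustration):</p>
<pre><code>import numpy as np
import tensorflow as tf

# a batch of one binary image, bits cast to floats, channel width 1
img = (np.random.rand(1, 28, 28, 1) &gt; 0.5).astype(np.float32)

X = tf.placeholder(tf.float32, [None, 28, 28, 1])
X_out = tf.layers.conv2d(X, filters=6, kernel_size=[3, 3], activation=tf.nn.relu)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(X_out, feed_dict={X: img}).shape)  # (1, 26, 26, 6)
</code></pre>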
tensorflow
0
2,047
73,638,057
count number of elements in a list inside a dataframe
<p>Assume that we have a dataframe, and in one of its columns we have lists. How can I count the number of elements per list? For example:</p>
<pre><code>A         B
(1,2,3)   (1,2,3,4)
(1)       (1,2,3)
</code></pre>
<p>I would like to create 2 new columns with the count for each column, something like the following:</p>
<pre><code>A         B           C  D
(1,2,3)   (1,2,3,4)   3  4
(1)       (1,2,3)     1  3
</code></pre>
<p>where C corresponds to the number of elements in the list in column A for that row, and D to the number of elements in the list in column B for that row.</p>
<p>I cannot just do</p>
<pre><code>df['A'] = len(df['A'])
</code></pre>
<p>because that returns the length of my dataframe.</p>
<p>You can use the <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.apply.html" rel="nofollow noreferrer"><code>.apply</code></a> method on the Series for the column <code>df['A']</code>.</p>
<pre><code>&gt;&gt;&gt; import pandas as pd
&gt;&gt;&gt; df = pd.DataFrame({&quot;column&quot;: [[1, 2], [1], [1, 2, 3]]})
&gt;&gt;&gt; df
      column
0     [1, 2]
1        [1]
2  [1, 2, 3]
&gt;&gt;&gt; df[&quot;column&quot;].apply(len)
0    2
1    1
2    3
Name: column, dtype: int64
&gt;&gt;&gt; df[&quot;column&quot;] = df[&quot;column&quot;].apply(len)
&gt;&gt;&gt;
</code></pre>
<p>See <a href="https://stackoverflow.com/questions/43481039/python-pandas-apply-function">Python Pandas, apply function</a> for a more general discussion of apply.</p>
python|pandas
2
2,048
73,776,270
Try catch condition while combining CSV files in pandas
<p>I am combining multiple csv files into a single dataframe using this line -</p>
<pre><code>df = pd.concat(map(pd.read_csv, files), ignore_index=True)
</code></pre>
<p>I was earlier using a <code>for</code> loop where I combined two dataframes at a time. This allowed me to use <code>try-catch</code> statements to catch any errors arising from empty/badly formatted csv files. But with the single-line command presented at the top, how do I add a similar <code>try-catch</code> condition?</p>
<p>Try wrapping the read in a small helper:</p>
<pre><code>files = ['file1.csv', 'file2.csv', 'file3.csv']

def readcsv(path):
    try:
        dff = pd.read_csv(path)
    except (pd.errors.EmptyDataError, pd.errors.ParserError):
        print('error reading', path)
        # an empty DataFrame keeps the concat from failing;
        # decide what behaviour you want when an error happens
        dff = pd.DataFrame([])
    return dff

df = pd.concat(map(readcsv, files), ignore_index=True)
</code></pre>
python-3.x|pandas
1
2,049
73,743,698
Pandas UDF with dictionary lookup and conditionals
<p>I want to use pandas_udf in Pyspark for certain transformations and calculations of a column. And it seems that pandas udfs can't be written exactly like normal UDFs.</p>
<p>An example function looks something like below:</p>
<pre><code>def modify_some_column(example_column_1, example_column_2):
    lookup_dict = {'a' : 1, 'b' : 2, 'c' : 3,'d': 4, 'e' : 5} #can be anything
    if example_column_1 in lookup_dict:
        if(example_column_1 == 'a' and example_column_2 == &quot;something&quot;):
            return lookup_dict[example_column_1]
        elif(example_column_1 == 'a' and example_column_2 == &quot;something else&quot;):
            return &quot;something else&quot;
        else:
            return lookup_dict[example_column_1]
    else:
        return &quot;&quot;
</code></pre>
<p>Basically, it takes in two column values from a spark dataframe and returns a value which I intend to use with <code>withColumn</code>:</p>
<pre><code>modify_some_column_udf = pandas_udf(modify_some_column, returnType= StringType())
df = df.withColumn('new_col',modify_some_column_udf(df.col_1,df.col_2))
</code></pre>
<p>But this does not work. How should I modify the above to be able to use it as a pandas udf?</p>
<p>Edit: It is clear to me that the above conditions can easily and efficiently be implemented using native PySpark functions. But I am looking to write the above logic using a Pandas UDF.</p>
<p>With this simple if/else logic, you don't have to use a UDF. In fact, you should avoid using UDFs as much as possible.</p>
<p>Assuming you have the dataframe as follows:</p>
<pre><code>df = spark.createDataFrame([
    ('a', 'something'),
    ('a', 'something else'),
    ('c', None),
    ('c', ''),
    ('c', 'something'),
    ('c', 'something else'),
    ('c', 'blah'),
    ('f', 'blah'),
], ['c1', 'c2'])

df.show()
+---+--------------+
| c1|            c2|
+---+--------------+
|  a|     something|
|  a|something else|
|  c|          null|
|  c|              |
|  c|     something|
|  c|something else|
|  c|          blah|
|  f|          blah|
+---+--------------+
</code></pre>
<p>You can create a temporary lookup column and use it to check against the other columns:</p>
<pre><code>import json
your_lookup_dict = {'a' : 1, 'b' : 2, 'c' : 3,'d': 4, 'e' : 5}

import pyspark.sql.functions as F
(df
    .withColumn('lookup', F.from_json(F.lit(json.dumps(your_lookup_dict)), 'map&lt;string, string&gt;'))
    .withColumn('mod', F
        .when((F.col('c1') == 'a') &amp; (F.col('c2') == 'something'), F.col('lookup')[F.col('c1')])
        .when((F.col('c1') == 'a') &amp; (F.col('c2') == 'something else'), F.lit('something else'))
        .otherwise(F.col('lookup')[F.col('c1')])
    )
    .show(10, False)
)
+---+--------------+----------------------------------------+--------------+
|c1 |c2            |lookup                                  |mod           |
+---+--------------+----------------------------------------+--------------+
|a  |something     |{a -&gt; 1, b -&gt; 2, c -&gt; 3, d -&gt; 4, e -&gt; 5}|1             |
|a  |something else|{a -&gt; 1, b -&gt; 2, c -&gt; 3, d -&gt; 4, e -&gt; 5}|something else|
|c  |null          |{a -&gt; 1, b -&gt; 2, c -&gt; 3, d -&gt; 4, e -&gt; 5}|3             |
|c  |              |{a -&gt; 1, b -&gt; 2, c -&gt; 3, d -&gt; 4, e -&gt; 5}|3             |
|c  |something     |{a -&gt; 1, b -&gt; 2, c -&gt; 3, d -&gt; 4, e -&gt; 5}|3             |
|c  |something else|{a -&gt; 1, b -&gt; 2, c -&gt; 3, d -&gt; 4, e -&gt; 5}|3             |
|c  |blah          |{a -&gt; 1, b -&gt; 2, c -&gt; 3, d -&gt; 4, e -&gt; 5}|3             |
|f  |blah          |{a -&gt; 1, b -&gt; 2, c -&gt; 3, d -&gt; 4, e -&gt; 5}|null          |
+---+--------------+----------------------------------------+--------------+
</code></pre>
<h3>EDIT</h3>
<p>Since you insist on using a Pandas UDF, you have to understand that pandas processes your dataframe in batches, so you'll have to wrap your function in something like this:</p>
<pre><code>def wrapper(iterator):
    def modify_some_column(example_column_1, example_column_2):
        lookup_dict = {'a' : 1, 'b' : 2, 'c' : 3,'d': 4, 'e' : 5} #can be anything
        if example_column_1 in lookup_dict:
            if(example_column_1 == 'a' and example_column_2 == &quot;something&quot;):
                return str(lookup_dict[example_column_1])
            elif(example_column_1 == 'a' and example_column_2 == &quot;something else&quot;):
                return &quot;something else&quot;
            else:
                return str(lookup_dict[example_column_1])
        else:
            return &quot;&quot;
    for pdf in iterator:
        pdf['mod'] = pdf.apply(lambda r: modify_some_column(r['c1'], r['c2']), axis=1)
        yield pdf

df = df.withColumn('mod', F.lit('temp'))
df.mapInPandas(wrapper, df.schema).show()
+---+--------------+--------------+
| c1|            c2|           mod|
+---+--------------+--------------+
|  a|     something|             1|
|  a|something else|something else|
|  c|          null|             3|
|  c|              |             3|
|  c|     something|             3|
|  c|something else|             3|
|  c|          blah|             3|
|  f|          blah|              |
+---+--------------+--------------+
</code></pre>
apache-spark|pyspark|pyspark-pandas|pandas-udf
1
2,050
71,371,204
How can I use row index values as columns for a dataframe?
<p>So, I collected data from 21 participants with 16 EEG channels and I extracted the Gamma band. My current dataframe looks like this ([336 rows x 2 columns]):</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Channels</th> <th>Gamma</th> </tr> </thead> <tbody> <tr> <td>Fp1</td> <td>0.345908</td> </tr> <tr> <td>Fp2</td> <td>0.121232</td> </tr> <tr> <td>F3</td> <td>0.213212</td> </tr> <tr> <td>.....</td> <td>....</td> </tr> </tbody> </table> </div> <p>Now I want to reshape it in such a way that I have the gamma values for each channel in one column, like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Fp1</th> <th>Fp2</th> <th>F3</th> <th>....</th> <th>Oz</th> </tr> </thead> <tbody> <tr> <td>0.067005</td> <td>0.345908</td> <td>0.207540</td> <td>....</td> <td>0.013512</td> </tr> <tr> <td>0.137292</td> <td>0.121232</td> <td>0.121210</td> <td>....</td> <td>0.121111</td> </tr> <tr> <td>0.112121</td> <td>0.213212</td> <td>0.123443</td> <td>....</td> <td>0.432233</td> </tr> </tbody> </table> </div> <p>When I just transpose the dataframe, I get one row with all channels next to each other:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Fp1</th> <th>Fp1</th> <th>Fp1</th> <th>....</th> <th>Oz</th> <th>Oz</th> <th>Oz</th> </tr> </thead> <tbody> <tr> <td>0.067005</td> <td>0.345908</td> <td>0.207540</td> <td>....</td> <td>0.013512</td> <td>0.12123</td> <td>0.112423</td> </tr> </tbody> </table> </div> <p>I looked at pd.melt but I can't figure it out. Can someone help?</p> <p>Thank you in advance!</p>
<p>One approach is to group by the Channels and then set these groups as columns of your new dataframe. Assuming the following dataframe:</p> <pre><code> Channels Gamma 0 Fp1 0.345908 1 Fp2 0.121232 2 Fp1 0.455908 3 Fp2 0.213212 </code></pre> <p>Then apply this code to the dataframe:</p> <pre><code>pd.concat( {k: g.reset_index(drop=True) for k, g in df.groupby('Channels')['Gamma']}, axis=1) </code></pre> <p>and receive the following output:</p> <pre><code> Fp1 Fp2 0 0.345908 0.121232 1 0.455908 0.213212 </code></pre>
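<p>A pivot-based alternative (a sketch assuming the same <code>Channels</code>/<code>Gamma</code> column names): number the rows within each channel with <code>cumcount</code>, then pivot that counter into the row index.</p> <pre><code>out = (df.assign(row=df.groupby('Channels').cumcount())
         .pivot(index='row', columns='Channels', values='Gamma'))
</code></pre>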
python|pandas|dataframe|transpose|melt
2
2,051
52,296,757
Pandas DataFrame equivalent of Laravel's 'pluck' on collections
<p>I am using pandas on Python 3.6.5, and I want to achieve on a DataFrame instance a result similar to the Collection's "pluck" method in Laravel. For example:</p> <p>DataFrame</p> <pre><code> one two 0 beer wine 1 beer tomato </code></pre> <p>PHP Laravel code:</p> <pre><code>$plucked = $collection-&gt;pluck('two')-&gt;toArray(); print_r($plucked); &gt;&gt; ['wine', 'tomato'] </code></pre> <p>Desired solution (Python equivalent):</p> <pre><code>plucked = df.pluck('two') </code></pre> <p>How do I achieve this?</p>
<p>You can select all entries in a DataFrame column by simply doing:</p> <pre><code>storage_variable = df['Column Name'] </code></pre> <p>So, in your case that would be:</p> <pre><code>plucked = df['two'] </code></pre>
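<p>If you want a plain Python list (the closest equivalent of Laravel's <code>toArray()</code>), convert the resulting Series explicitly:</p> <pre><code>plucked = df['two'].tolist()
# ['wine', 'tomato']
</code></pre>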
php|python|laravel|pandas|dataframe
0
2,052
60,665,717
module 'tensorflow_core._api.v2.data' has no attribute 'Iterator'
<p>I can't figure out what to use instead of Iterator.</p> <p>I tried tf.compat.v1.data.Iterator instead but got another error - <code>AttributeError: 'PrefetchDataset' object has no attribute 'output_types'</code></p> <p>code:</p> <pre><code>train_ds = prepare_for_train(labeled_ds) val_ds = tf.data.Dataset.from_tensor_slices(test_data) #create an iterator with the right shape and type iter = tf.data.Iterator.from_structure(train_ds.output_types, train_ds.output_shapes) &quot;&quot;&quot;iter= tf.compat.v1.data.Iterator.from_structure(train_ds.output_types, train_ds.output_shapes)&quot;&quot;&quot; print(iter) *AttributeError: module 'tensorflow_core._api.v2.data' has no attribute 'Iterator'* </code></pre> <p>My TF version 2.2.0-dev20200212</p> <p>Thank you!</p>
<p>I was able to reproduce your error. Here is how you can fix it in <code>Tensorflow Version 2.x</code>.</p> <p>You need to define <code>iter</code> as below -</p> <pre><code>iter = tf.compat.v1.data.Iterator.from_structure(tf.compat.v1.data.get_output_types(train_dataset), tf.compat.v1.data.get_output_shapes(train_dataset)) </code></pre> <p>Below is an example -</p> <p><strong>Code -</strong></p> <pre><code>%tensorflow_version 2.x import tensorflow as tf print(tf.__version__) import numpy as np # Reinitializable iterator to switch between Datasets EPOCHS = 10 # making fake data using numpy train_data = (np.random.sample((100,2)), np.random.sample((100,1))) # create two datasets, one for training and one for test train_dataset = tf.data.Dataset.from_tensor_slices(train_data) # create a iterator of the correct shape and type iter = tf.compat.v1.data.Iterator.from_structure(tf.compat.v1.data.get_output_types(train_dataset), tf.compat.v1.data.get_output_shapes(train_dataset)) # create the initialisation operations train_init_op = iter.make_initializer(train_dataset) features, labels = iter.get_next() for _ in range(EPOCHS): print([features, labels]) </code></pre> <p><strong>Output -</strong></p> <pre><code>2.1.0 WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/data/ops/iterator_ops.py:347: Iterator.output_types (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.compat.v1.data.get_output_types(iterator)`. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/data/ops/iterator_ops.py:348: Iterator.output_shapes (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.compat.v1.data.get_output_shapes(iterator)`. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/data/ops/iterator_ops.py:350: Iterator.output_classes (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.compat.v1.data.get_output_classes(iterator)`. 
[&lt;tf.Tensor: shape=(2,), dtype=float64, numpy=array([0.35431711, 0.07564416])&gt;, &lt;tf.Tensor: shape=(1,), dtype=float64, numpy=array([0.38728039])&gt;] [&lt;tf.Tensor: shape=(2,), dtype=float64, numpy=array([0.35431711, 0.07564416])&gt;, &lt;tf.Tensor: shape=(1,), dtype=float64, numpy=array([0.38728039])&gt;] [&lt;tf.Tensor: shape=(2,), dtype=float64, numpy=array([0.35431711, 0.07564416])&gt;, &lt;tf.Tensor: shape=(1,), dtype=float64, numpy=array([0.38728039])&gt;] [&lt;tf.Tensor: shape=(2,), dtype=float64, numpy=array([0.35431711, 0.07564416])&gt;, &lt;tf.Tensor: shape=(1,), dtype=float64, numpy=array([0.38728039])&gt;] [&lt;tf.Tensor: shape=(2,), dtype=float64, numpy=array([0.35431711, 0.07564416])&gt;, &lt;tf.Tensor: shape=(1,), dtype=float64, numpy=array([0.38728039])&gt;] [&lt;tf.Tensor: shape=(2,), dtype=float64, numpy=array([0.35431711, 0.07564416])&gt;, &lt;tf.Tensor: shape=(1,), dtype=float64, numpy=array([0.38728039])&gt;] [&lt;tf.Tensor: shape=(2,), dtype=float64, numpy=array([0.35431711, 0.07564416])&gt;, &lt;tf.Tensor: shape=(1,), dtype=float64, numpy=array([0.38728039])&gt;] [&lt;tf.Tensor: shape=(2,), dtype=float64, numpy=array([0.35431711, 0.07564416])&gt;, &lt;tf.Tensor: shape=(1,), dtype=float64, numpy=array([0.38728039])&gt;] [&lt;tf.Tensor: shape=(2,), dtype=float64, numpy=array([0.35431711, 0.07564416])&gt;, &lt;tf.Tensor: shape=(1,), dtype=float64, numpy=array([0.38728039])&gt;] [&lt;tf.Tensor: shape=(2,), dtype=float64, numpy=array([0.35431711, 0.07564416])&gt;, &lt;tf.Tensor: shape=(1,), dtype=float64, numpy=array([0.38728039])&gt;] </code></pre> <p>Hope this answers your question. Happy Learning.</p>
tensorflow2.0|tensorflow-datasets
2
2,053
60,380,852
How to find a string match in df col based on list of strings?
<p>I have a list of 1000 corporate companies and a df of all previous transactions for the year. For every match, I would like to create a new row value (True) in the new column (df$Covered).</p> <p>I am not sure why I keep getting the errors below. I tried researching these questions but no luck so far.</p> <p><a href="https://stackoverflow.com/questions/16287681/match-string-to-list-of-defined-strings">Match string to list of defined strings</a></p> <p><a href="https://stackoverflow.com/questions/57160968/pandas-extract-rows-from-df-where-dfcol-values-match-df2col-values">Pandas extract rows from df where df[&#39;col&#39;] values match df2[&#39;col&#39;] values</a></p> <p><strong>Code Example: when I set regex=False</strong></p> <pre><code>Customer_List = ['3M', 'Cargill', &quot;Chili's&quot;, ...] df['Covered'] = df[df['End Customer Name'].str.contains('|'.join(Customer_List),case=False, na=False, regex=False)] </code></pre> <blockquote> <p>ValueError: Wrong number of items passed 32, placement implies 1</p> </blockquote> <p><strong>Code Example: when I set regex=True</strong></p> <blockquote> <p>error: bad character range H-D at position 177825</p> </blockquote> <pre><code> ~/opt/anaconda3/lib/python3.7/sre_parse.py in parse(str, flags, pattern) 928 929 try: --&gt; 930 p = _parse_sub(source, pattern, flags &amp; SRE_FLAG_VERBOSE, 0) 931 except Verbose: 932 # the VERBOSE flag was switched on inside the pattern ~/opt/anaconda3/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 424 while True: 425 itemsappend(_parse(source, state, verbose, nested + 1, --&gt; 426 not nested and not items)) 427 if not sourcematch("|"): 428 break </code></pre>
<p>Thanks everyone, it has to do with my Customer_List containing special regex characters, so I needed to escape them with <code>map(re.escape, Customer_List)</code> before joining the pattern.</p> <p>This link helped me: <a href="https://stackoverflow.com/questions/28539253/python-regex-bad-character-range">Python regex bad character range</a>.</p>
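<p>For reference, a minimal sketch of that fix (assuming the same <code>Customer_List</code> and column name as in the question): <code>re.escape</code> neutralizes regex metacharacters in the company names before they are joined into one pattern.</p> <pre><code>import re

pattern = '|'.join(map(re.escape, Customer_List))
df['Covered'] = df['End Customer Name'].str.contains(pattern, case=False, na=False)
</code></pre>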
python|pandas
0
2,054
72,744,383
How can I input space separated integers in pyhton numpy array. (Like the list(map(int,input().spli(" ")) function does for a list.)
<p>I have tried to find alternatives, but they are only available for lists, not for numpy arrays.</p> <p>I tried this, but it didn't work:</p> <pre><code>5 1 2 3 4 5 Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 6, in &lt;module&gt; File &quot;/usr/local/lib/python3.8/dist-packages/numpy/core/numeric.py&quot;, line 204, in ones a = empty(shape, dtype, order) TypeError: expected sequence object with len &gt;= 0 or a single integer </code></pre> <p>I need a version of <code>list(map(int,input().split(&quot; &quot;)))</code> for numpy arrays.</p>
<p>You can just convert to a numpy array:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np numbers = input('Enter some numbers: ').split() x = np.array(list(map(int, numbers))) print(x) </code></pre> <p>Output:</p> <pre><code>Enter some numbers: 1 2 3 4 5 [1 2 3 4 5] </code></pre>
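<p>An equivalent without the intermediate <code>map</code>, letting numpy do the conversion itself (a sketch; it assumes the input is whitespace-separated integers):</p> <pre><code>import numpy as np

x = np.array(input('Enter some numbers: ').split(), dtype=int)
</code></pre>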
python|arrays|numpy|dictionary|input
1
2,055
72,559,010
In Pandas, how can I apply the .diff() method to numerical values only in a column that also contains NaNs?
<p>I have a Pandas dataset and I would like to calculate the difference of a column element compared with another element of the same column. In order to do so, the most intuitive method to apply is <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.diff.html" rel="nofollow noreferrer">.diff()</a></p> <p>So far, so good. The problem is that my column contains <code>nan</code> values without a specific order pattern, like the following example with a column named <code>col</code>:</p> <pre class="lang-py prettyprint-override"><code> | col | |-----| 0 | 1 | 1 | NaN | 2 | 3 | 3 | 4 | 4 | NaN | 5 | NaN | 6 | 10 | 7 | NaN | 8 | 13 | </code></pre> <p>What I would like to do is to apply the <code>.diff()</code> method <strong>only to the preceding numerical values of the column</strong>, such that the expected answer is:</p> <pre class="lang-py prettyprint-override"><code> | col | |-----| 0 | NaN | 1 | NaN | 2 | 2 | 3 | 1 | 4 | NaN | 5 | NaN | 6 | 6 | 7 | NaN | 8 | 3 | </code></pre> <p>Had it been a periodic order of the <code>nan</code> values, I could have used the <code>periods</code> parameter of the <code>.diff()</code> method, as explained <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.diff.html" rel="nofollow noreferrer">here</a>. However, given that the <code>nan</code> values appear in a random order, I was wondering how this could be done?</p>
<p>You'll need to <code>dropna</code> and set up a temporary variable, and <code>reindex</code> like this:</p> <pre><code>import numpy as np df = pd.DataFrame({&quot;col&quot;: [1, np.nan, 3, 4, np.nan, np.nan, 10, np.nan, 13]}) idx = df.index # create index from original data tmp = df.dropna() # drop nan rows tmp.diff().reindex(idx) # reindex to original index &gt;&gt;&gt; | col | |-----| 0 | NaN | 1 | NaN | 2 | 2 | 3 | 1 | 4 | NaN | 5 | NaN | 6 | 6 | 7 | NaN | 8 | 3 | </code></pre>
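<p>The same idea works as a one-liner on the column itself: <code>dropna</code> removes the NaN rows, <code>diff</code> runs on the remaining values, and <code>reindex</code> puts the NaNs back in their original positions.</p> <pre><code>df['col'].dropna().diff().reindex(df.index)
</code></pre>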
python|pandas|dataframe|nan
2
2,056
72,776,999
How to apply a point transformation to many points?
<p>I have a gridded temperature dataset and a list of weather stations across the country and their latitudes and longitudes. I want to find the grid points that are nearest to the weather stations. My gridded data has coordinates x,y which latitude and longitude are a function of. <a href="https://i.stack.imgur.com/ClELk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ClELk.png" alt="enter image description here" /></a></p> <p>I found that the simplest way of finding the nearest grid point is to first transform the latitude and longitude (<code>Lat</code>, <code>Lon</code>) of the weather stations to x and y values and then find the nearest grid point. I did that for one station (lat= , lon= ) by doing the following:</p> <pre><code>import matplotlib.pyplot as plt from netCDF4 import Dataset as netcdf_dataset import numpy as np from cartopy import config import cartopy.crs as ccrs import cartopy.feature as cfeature import xarray as xr import pandas as pd import netCDF4 as nc #open gridded data df=xr.open_dataset('/home/mmartin/LauNath/air.2m.2015.nc') #open weather station data CMStations=pd.read_csv('Slope95.csv') import cartopy.crs as ccrs # Example - your x and y coordinates are in a Lambert Conformal projection data_crs = ccrs.LambertConformal(central_longitude=-107.0,central_latitude=50.0,standard_parallels = (50, 50.000001),false_easting=5632642.22547,false_northing=4612545.65137) # Transform the point - src_crs is always Plate Carree for lat/lon grid x, y = data_crs.transform_point(-94.5786,39.0997, src_crs=ccrs.PlateCarree()) # Now you can select data ks=df.sel(x=x, y=y, method='nearest') </code></pre> <p>How would I apply this to all of the weather stations' latitudes and longitudes (<code>Lat</code>, <code>Lon</code>)?</p>
<p>There is no need to use geopandas here... just use <code>crs.transform_points()</code> instead of <code>crs.transform_point()</code> and pass the coordinates as arrays!</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import cartopy.crs as ccrs data_crs = ccrs.LambertConformal(central_longitude=-107.0,central_latitude=50.0,standard_parallels = (50, 50.000001),false_easting=5632642.22547,false_northing=4612545.65137) lon, lat = np.array([1,2,3]), np.array([1,2,3]) data_crs.transform_points(ccrs.PlateCarree(), lon, lat) </code></pre> <p>which will return an array of the projected coordinates:</p> <pre><code>array([[16972983.1673108 , 8528848.37931063, 0. ], [16841398.80456616, 8697676.02704447, 0. ], [16709244.32834945, 8862533.81411212, 0. ]]) </code></pre> <hr /> <p>... also... if you really have a lot of points to transform (or maybe need a CRS not yet supported by cartopy), you might want to have a look at <a href="https://pyproj4.github.io/pyproj/stable/examples.html#transformations-from-crs-to-crs" rel="nofollow noreferrer">PyProj</a> directly, since it provides a lot more functionality and also some tricks to speed up transformations. (It's used under the hood by cartopy as well, so you should already have it installed!)</p>
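<p>To then pick the nearest grid point for every station at once, you can feed the transformed coordinates back into xarray as vectorized indexers (a sketch assuming the station table has <code>Lon</code>/<code>Lat</code> columns; adjust the names to your CSV):</p> <pre><code>import xarray as xr

pts = data_crs.transform_points(ccrs.PlateCarree(),
                                CMStations['Lon'].values,
                                CMStations['Lat'].values)
ks = df.sel(x=xr.DataArray(pts[:, 0], dims='station'),
            y=xr.DataArray(pts[:, 1], dims='station'),
            method='nearest')
</code></pre>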
python|pandas|numpy|python-xarray|cartopy
0
2,057
72,808,258
Jupyter not showing visual representation
<p>I am researching how to make a visual representation of clusters, and I have found the following source: <a href="https://plotly.com/python/v3/3d-point-clustering/#3d-clustering-with-alpha-shapes" rel="nofollow noreferrer">https://plotly.com/python/v3/3d-point-clustering/#3d-clustering-with-alpha-shapes</a></p> <p>There you can find the code of a clustering that has been done, and I wish to recreate the same in my Jupyter Notebook. However, I do not get any visual representation.</p> <p>My code is as follows:</p> <pre><code>import plotly as py !pip install plotly==3.10.0 from chart_studio import plotly import plotly.plotly as py import pandas as pd !jupyter labextension install jupyterlab-plotly df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/alpha_shape.csv') df.head() scatter = dict( mode = &quot;markers&quot;, name = &quot;y&quot;, type = &quot;scatter3d&quot;, x = df['x'], y = df['y'], z = df['z'], marker = dict( size=2, color=&quot;rgb(23, 190, 207)&quot; ) ) clusters = dict( alphahull = 7, name = &quot;y&quot;, opacity = 0.1, type = &quot;mesh3d&quot;, x = df['x'], y = df['y'], z = df['z'] ) layout = dict( title = '3d point clustering', scene = dict( xaxis = dict( zeroline=False ), yaxis = dict( zeroline=False ), zaxis = dict( zeroline=False ), ) ) fig = dict( data=[scatter, clusters], layout=layout ) # Use py.iplot() for IPython notebook py.iplot(fig, filename='3d point clustering') </code></pre> <p>Does anyone know what the error is?</p>
<p>I got it to display in a Jupyter notebook. The error was due to a plotly authentication error, so import iplot from plotly.offline instead.</p> <pre><code>from plotly.offline import iplot iplot(fig, filename='3d point clustering') </code></pre> <p>Here is the complete code:</p> <pre><code>import chart_studio.plotly as py from plotly.offline import iplot import pandas as pd df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/alpha_shape.csv') df.head() scatter = dict( mode = &quot;markers&quot;, name = &quot;y&quot;, type = &quot;scatter3d&quot;, x = df['x'], y = df['y'], z = df['z'], marker = dict( size=2, color=&quot;rgb(23, 190, 207)&quot; ) ) clusters = dict( alphahull = 7, name = &quot;y&quot;, opacity = 0.1, type = &quot;mesh3d&quot;, x = df['x'], y = df['y'], z = df['z'] ) layout = dict( title = '3d point clustering', scene = dict( xaxis = dict( zeroline=False ), yaxis = dict( zeroline=False ), zaxis = dict( zeroline=False ), ) ) fig = dict( data=[scatter, clusters], layout=layout ) # Use py.iplot() for IPython notebook iplot(fig, filename='3d point clustering') </code></pre>
python|pandas|plotly|cluster-analysis
0
2,058
72,601,457
change string into datetime in pandas
<p>How can I change the following datetime strings into datetimes in Python? Here's my dataframe:</p> <pre><code>IN OUT 2022/6/10 10:20:30.00000000000000000000000000 2022/6/17 13:25:30 2022/6/5 12:48:10.0 2022/6/11 10:15 2022/6/9 08:25:30 2022/6/13 10:25:30 2022-06-08 17:18:37.00000000000000000000 0 0 0 2022-06-08 17:18:37 2022/06/08 19:38 </code></pre> <p><a href="https://i.stack.imgur.com/qRwwE.png" rel="nofollow noreferrer">[image of df]</a></p> <p>I want to delete the rows containing the value 0 and convert the strings into datetimes of format <code>'%Y-%m-%d %H:%M:%S'</code>.</p> <p>Here is my code:</p> <pre><code>import pandas as pd from datetime import datetime as dt def string_to_date(my_string): if '-' and '.' in my_string: data=dt.strftime(dt.strptime(my_string[:26],'%Y-%m-%d %H:%M:%S.%f'),'%Y-%m-%d %H:%M:%S') return data elif '/' and '.' in my_string: data=dt.strftime(dt.strptime(my_string[:26],'%Y/%m/%d %H:%M:%S.%f'),'%Y-%m-%d %H:%M:%S') return data elif '/' in my_string: data=dt.strftime(dt.strptime(my_string[:26],'%Y/%m/%d %H:%M:%S.%f'),'%Y-%m-%d %H:%M:%S') return data elif '-' in my_string: data=dt.strftime(dt.strptime(my_string[:26],'%Y-%m-%d %H:%M:%S.%f'),'%Y-%m-%d %H:%M:%S') return data else: data=dt.strftime(dt.strptime(my_string[:26],'%Y-%m-%d %H:%M:%S.%f'),'%Y-%m-%d %H:%M:%S') return data </code></pre> <pre><code>if __name__ == '__main__': df=pd.read_excel('data.xlsx') col=df.columns[0:] df=df.loc[~(df=='0').all(axis=1)] print(df) i=0 for n in col: df[col[i]]=pd.to_datetime(df[col[i]]) df[col[i]]=df[col[i]].apply(lambda x:string_to_date(x)) i+=1 print(df) </code></pre>
<p>Letting pandas infer the format should get you started. You can parse to datetime data type like</p> <pre class="lang-py prettyprint-override"><code>df['IN'] = pd.to_datetime(df['IN'], errors='coerce') df['IN'] 0 2022-06-10 10:20:30 1 2022-06-05 12:48:10 2 2022-06-09 08:25:30 3 2022-06-08 17:18:37 4 NaT 5 2022-06-08 17:18:37 Name: IN, dtype: datetime64[ns] </code></pre> <p>Note that setting keyword <code>errors='coerce'</code> leaves <code>NaT</code> (not-a-time) for all elements that pandas considers to be not a datetime, e.g. <code>&quot;0&quot;</code></p> <p>Now you can drop rows that have NaT, e.g.</p> <pre class="lang-py prettyprint-override"><code>df['OUT'] = pd.to_datetime(df['OUT'], errors='coerce') df IN OUT 0 2022-06-10 10:20:30 2022-06-17 13:25:30 1 2022-06-05 12:48:10 2022-06-11 10:15:00 2 2022-06-09 08:25:30 2022-06-13 10:25:30 3 2022-06-08 17:18:37 NaT 4 NaT NaT 5 2022-06-08 17:18:37 2022-06-08 19:38:00 df = df.dropna(axis=0, how='all') df IN OUT 0 2022-06-10 10:20:30 2022-06-17 13:25:30 1 2022-06-05 12:48:10 2022-06-11 10:15:00 2 2022-06-09 08:25:30 2022-06-13 10:25:30 3 2022-06-08 17:18:37 NaT 5 2022-06-08 17:18:37 2022-06-08 19:38:00 </code></pre> <hr /> <p>docs: <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html" rel="nofollow noreferrer">pd.to_datetime</a>, <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.dropna.html" rel="nofollow noreferrer">pd.DataFrame.dropna</a>, related: <a href="https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes" rel="nofollow noreferrer">parsing/formatting directives</a></p>
python|pandas|string|date|datetime
0
2,059
59,553,742
Error with custom-period resampling using resample('W').sum() in Python
<p>I have the data frame (frame_combined_DF), which looks like this. I need to do custom resampling based on the Time_W weeks provided for each SKU.</p> <pre><code>frame_combined_DF SKU Qty Time Time_W WY 2011-10-17 ABC 12.0 11.0 2 2012-01-16 ABC 20.0 11.0 2 2013-04-08 ABC 6.0 11.0 2 2013-12-02 ABC 2.0 11.0 2 2014-10-27 XYZ 1.0 21.0 3 </code></pre> <p>Below is my code</p> <pre><code>for i in ids: subset = frame_combined_DF.loc[frame_combined_DF.SKU==i] subset.index=subset.WY subset.sort_index(inplace=True) period=subset.Time_W.unique().astype('int64')[0] per=str(period)+'W' df = subset.Qty.resample(per).sum() new_df = {'WY':df.index, 'Qty':df.values,'SKU':i} newdf = pd.DataFrame(new_df) new_series=new_series.append(newdf) </code></pre> <p>I am getting the following error while running this code</p> <pre><code> ValueError: Offset &lt;0 * Weeks: weekday=6&gt; did not increment date </code></pre> <p>The expected output is shown below. The example is only for 1 SKU. This SKU needs to be resampled at a frequency of 2 weeks, whereas SKU XYZ is to be resampled every three weeks</p> <pre><code> WY Qty SKU 2011-10-17 12.0 ABC 2011-10-31 0.0 ABC 2011-11-14 0.0 ABC 2011-11-28 0.0 ABC 2011-12-12 0.0 ABC ......................... ......................... 2012-01-09 20.0 ABC 2012-01-23 0.0 ABC .......................... </code></pre>
<p>From your sample data I see that <em>WY</em> is the index column.</p> <p>But check whether this column is of <em>datetime</em> type (not <em>string</em>). If it is not, run <code>frame_combined_DF.index = pd.to_datetime(frame_combined_DF.index)</code>.</p> <p>Another point to note is that <em>newdf</em> is a <strong>DataFrame</strong>, not a <em>Series</em>, so you should append it to a <em>DataFrame</em>.</p> <p>The third remark is that <em>subset.index = subset.WY</em> is not needed, because <em>WY</em> is already the index.</p> <p>And the last thing: Your sample did not define <em>new_series</em> (in my solution I changed it to <em>result</em>).</p> <p>So change your code to:</p> <pre><code>result = pd.DataFrame() for i in frame_combined_DF.SKU.unique(): subset = frame_combined_DF.loc[frame_combined_DF.SKU==i] subset.sort_index(inplace=True) period = subset.Time_W.unique().astype('int64')[0] per = str(period) + 'W' df = subset.Qty.resample(per).sum() new_df = {'WY': df.index, 'Qty': df.values, 'SKU': i} newdf = pd.DataFrame(new_df) result = result.append(newdf, ignore_index=True) </code></pre> <p>and it should run, at least on my computer it gives no error.</p>
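<p>If you prefer to avoid the explicit loop, roughly the same thing can be written with <code>groupby</code> (a sketch under the same assumptions: <em>WY</em> is a datetime index and <em>Time_W</em> is constant within each SKU):</p> <pre><code>def resample_sku(g):
    per = '{}W'.format(int(g.Time_W.iloc[0]))
    return g.Qty.resample(per).sum()

result = frame_combined_DF.groupby('SKU').apply(resample_sku)
</code></pre>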
python|pandas|numpy
1
2,060
59,697,708
Get date from list with `numpy.datetime64`-objects
<p>I have a list with quite a few dates. Unfortunately they all appear as <code>numpy.datetime64</code> objects. Does anyone have an idea how I could extract the actual date? The list looks like this: </p> <pre><code>[numpy.datetime64('2016-01-04T00:00:00.000000000'), numpy.datetime64('2016-01-14T00:00:00.000000000'), numpy.datetime64('2016-01-17T00:00:00.000000000'), numpy.datetime64('2016-01-24T00:00:00.000000000'), ... </code></pre>
<p>Here's a way to do it using <code>.astype</code>:</p> <pre><code>dates = [str(x.astype('datetime64[D]')) for x in dates_list] ['2016-01-04', '2016-01-14', '2016-01-17', '2016-01-24'] </code></pre>
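<p>numpy also has a vectorized helper for this, which avoids the Python-level loop:</p> <pre><code>import numpy as np

np.datetime_as_string(np.array(dates_list), unit='D')
# array(['2016-01-04', '2016-01-14', '2016-01-17', '2016-01-24'], ...)
</code></pre>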
python|pandas|numpy|datetime|type-conversion
2
2,061
59,503,069
Need to add dataframes to an Excel file iteratively
<p>I have created a pandas dataframe from a dictionary, and I need to copy the unique column data to an Excel file in the same sheet. But it just writes one dataframe and doesn't write anything after that. Help! Below is the code:</p> <pre><code>import pandas import csv import os act_dict = {'bmc': [], 'adc': [], 'volume': []} with open('/home/laxmi/Downloads/chadaLookuptablevalueWithAdcreferenceAndBmcid.csv','r') as fp: contents=csv.reader(fp) total_count=0 i=0 for row in contents: if contents.line_num == 1: continue data=row act_dict['bmc'].append(data[0]) act_dict['adc'].append(data[1]) act_dict['volume'].append(data[2]) total_count+=1 os.chdir("/home/laxmi/Documents/volume_analyser_project") bmc_data = pandas.DataFrame(act_dict) count=len(bmc_data.bmc.unique()) l1 = bmc_data.bmc.unique() writer = pandas.ExcelWriter('pandas_multiple.xlsx', engine='xlsxwriter') count1=(len(l1)) startrow=startcol=0 for i in range(count1): df_i=bmc_data[bmc_data.bmc==l1[i]] df_i.to_excel(writer,startrow=0,startcol=startcol+2,sheet_name='Sheet1') writer.save() </code></pre>
<p>You need to advance <code>startcol</code> after writing each dataframe and call <code>writer.save()</code> once, after the loop, rather than inside it:</p> <pre><code>l1 = bmc_data.bmc.unique() print(l1) startcol=startrow = 0 file_name='/home/laxmi/Documents/volume_analyser_project/idmc.xlsx' writer = pandas.ExcelWriter('idmc.xlsx', engine='xlsxwriter') count1=(len(l1)) print(count1) for i in range(count1): df_i=bmc_data[bmc_data.bmc==l1[i]] df_i = df_i.sort_values(&quot;adc&quot;, axis = 0, ascending = True) df_i.to_excel(writer,sheet_name='Sheet1',startrow=0,startcol=startcol,index=False) startcol=startcol+4 print(startrow,startcol) writer.save() </code></pre>
python|pandas
0
2,062
32,325,410
Label regions with unique combinations of values in two numpy arrays?
<p>I have two labelled 2D numpy arrays <code>a</code> and <code>b</code> with identical shapes. I would like to re-label the array <code>b</code> by something similar to a <a href="http://resources.arcgis.com/EN/HELP/MAIN/10.1/index.html#//00080000000s000000" rel="nofollow noreferrer">GIS geometric union</a> of the two arrays, such that <strong>cells with unique combination of values in array <code>a</code> and <code>b</code> are assigned new unique IDs:</strong></p> <p><a href="https://i.stack.imgur.com/0fIQZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0fIQZ.png" alt="enter image description here"></a></p> <p>I'm not concerned with the specific numbering of the regions in the output, so long as the values are all unique. I have attached sample arrays and desired outputs below: my real datasets are much larger, with both arrays having integer labels which range from "1" to "200000". So far I've experimented with concatenating the array IDs to form unique combinations of values, but ideally I would like to output a simple set of new IDs in the form of 1, 2, 3..., etc.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt # Example labelled arrays a and b input_a = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 0], [0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 0], [0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 0], [0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 0], [0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 0], [0, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 0], [0, 0, 3, 3, 3, 3, 2, 2, 2, 2, 0, 0], [0, 0, 3, 3, 3, 3, 2, 2, 2, 2, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) input_b = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 1, 1, 1, 3, 3, 3, 3, 3, 0, 0], [0, 0, 1, 1, 1, 3, 3, 3, 3, 3, 0, 0], [0, 0, 1, 1, 1, 2, 2, 2, 2, 2, 0, 0], [0, 0, 1, 1, 1, 2, 2, 2, 2, 2, 0, 0], [0, 0, 1, 1, 1, 2, 2, 2, 2, 2, 0, 0], [0, 0, 1, 1, 1, 2, 2, 2, 2, 2, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) # Plot inputs plt.imshow(input_a, cmap="spectral", interpolation='nearest') plt.imshow(input_b, cmap="spectral", interpolation='nearest') # Desired output, union of a and b output = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 1, 1, 1, 2, 3, 3, 3, 3, 0, 0], [0, 0, 1, 1, 1, 2, 3, 3, 3, 3, 0, 0], [0, 0, 1, 1, 1, 4, 7, 7, 7, 7, 0, 0], [0, 0, 5, 5, 5, 6, 7, 7, 7, 7, 0, 0], [0, 0, 5, 5, 5, 6, 7, 7, 7, 7, 0, 0], [0, 0, 5, 5, 5, 6, 7, 7, 7, 7, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) # Plot desired output plt.imshow(output, cmap="spectral", interpolation='nearest') </code></pre>
<p>If I understood the circumstances correctly, you are looking to have unique pairings from <code>a</code> and <code>b</code>. So, <code>1</code> from <code>a</code> and <code>1</code> from <code>b</code> would have one unique tag in the output; <code>1</code> from <code>a</code> and <code>3</code> from <code>b</code> would have another unique tag in the output. Also looking at the desired output in the question, it seems that there is an additional conditional situation here that if <code>b</code> is zero, the output is to be zero as well irrespective of the unique pairings.</p> <p>The following implementation tries to solve all of that -</p> <pre><code>c = a*(b.max()+1) + b c[b==0] = 0 _,idx = np.unique(c,return_inverse= True) out = idx.reshape(b.shape) </code></pre> <p>Sample run -</p> <pre><code>In [21]: a Out[21]: array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 0], [0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 0], [0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 0], [0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 0], [0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 0], [0, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 0], [0, 0, 3, 3, 3, 3, 2, 2, 2, 2, 0, 0], [0, 0, 3, 3, 3, 3, 2, 2, 2, 2, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) In [22]: b Out[22]: array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 1, 1, 1, 3, 3, 3, 3, 3, 0, 0], [0, 0, 1, 1, 1, 3, 3, 3, 3, 3, 0, 0], [0, 0, 1, 1, 1, 2, 2, 2, 2, 2, 0, 0], [0, 0, 1, 1, 1, 2, 2, 2, 2, 2, 0, 0], [0, 0, 1, 1, 1, 2, 2, 2, 2, 2, 0, 0], [0, 0, 1, 1, 1, 2, 2, 2, 2, 2, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) In [23]: out Out[23]: array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 1, 1, 1, 3, 5, 5, 5, 5, 0, 0], [0, 0, 1, 1, 1, 3, 5, 5, 5, 5, 0, 0], [0, 0, 1, 1, 1, 2, 4, 4, 4, 4, 0, 0], [0, 0, 6, 6, 6, 7, 4, 4, 4, 4, 0, 0], [0, 0, 6, 6, 6, 7, 4, 4, 4, 4, 0, 0], [0, 0, 6, 6, 6, 7, 4, 4, 4, 4, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) </code></pre> <p>Sample plot -</p> <pre><code># Plot inputs plt.figure() plt.imshow(a, cmap="spectral", interpolation='nearest') plt.figure() plt.imshow(b, cmap="spectral", interpolation='nearest') # Plot output plt.figure() plt.imshow(out, cmap="spectral", interpolation='nearest') </code></pre> <p><a href="https://i.stack.imgur.com/Lm1K4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Lm1K4.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/eAg4w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eAg4w.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/JdQ2G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JdQ2G.png" alt="enter image description here"></a></p>
python|arrays|numpy|scipy|python-2.6
5
2,063
40,600,308
Python: track job progress using tqdm
<p>I am using the following code to track a job's progress:</p> <pre><code>from tqdm import tqdm, tqdm_pandas tqdm.pandas(tqdm()) my_df['target'] = my_df.progress_apply(lambda x: my_fun(x), axis = 1) </code></pre> <p>Then the code provides progress tracking like below:</p> <pre><code> 0%|          | 0/5 [00:00&lt;?, ?it/s] 20%|██        | 1/5 [00:06&lt;00:26, 6.62s/it] 215it [00:06, 4.63s/it] 1062it [00:06, 3.24s/it] 1976it [00:06, 2.27s/it] 2893it [00:07, 1.59s/it] 3811it [00:07, 1.11s/it] 4720it [00:07, 1.28it/s] 5650it [00:07, 1.83it/s] 6585it [00:07, 2.62it/s] 7520it [00:07, 3.74it/s] 8444it [00:07, 5.35it/s] 9378it [00:07, 7.64it/s] 10311it [00:07, 10.90it/s] 11218it [00:07, 15.57it/s] 12111it [00:08, 22.22it/s] 13004it [00:08, 31.70it/s] 13832it [00:08, 45.20it/s] 14618it [00:08, 64.36it/s] 15404it [00:08, 91.62it/s] 16149it [00:08, 129.91it/s] 16870it [00:08, 184.16it/s] 17560it [00:08, 259.28it/s] 18315it [00:08, 365.02it/s] 19162it [00:09, 512.00it/s] 19891it [00:09, 706.09it/s] : : </code></pre> <hr> <p>I am wondering: is it possible to print out info every 5 seconds instead of about 10 outputs per second? Thanks a lot!</p>
<p>Yes, just use the <code>mininterval</code> argument, which is specified in seconds:</p> <pre><code>tqdm.pandas(tqdm, mininterval=5) </code></pre>
python|pandas|progress
1
2,064
18,700,620
printing sub-array in numpy as Matlab does
<p>How can you print sub-arrays in numpy the same way Matlab does? I have a 3 by 10000 array and I want to view the first 20 columns. In Matlab you can write</p> <pre><code>a=zeros(3,10000); a(:,1:20) Columns 1 through 15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Columns 16 through 20 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 </code></pre> <p>However in Numpy</p> <pre><code>import numpy as np np.set_printoptions(threshold=np.nan) a=np.zeros((3,10000)) print a[:,0:20] [[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]] </code></pre> <p>As you can see, numpy prints the first row, then the second row, then the third row. I would like it to maintain the column structure and not the row structure.</p> <p>Thank you very much</p> <p>PS: One solution would be for example</p> <pre><code>print a[:,0:20].T [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]] </code></pre> <p>but it would consume a lot more space on screen than desired. It would be great if numpy had this option.</p>
<p>Does this give what you want?</p> <pre><code>&gt;&gt;&gt; for item in a[:,0:20].T: print '\t'.join(map(str,item.tolist())) </code></pre> <p>Or this?</p> <pre><code>&gt;&gt;&gt; for item in a[:,0:20]: print '\t'.join(map(str,item.tolist())) </code></pre>
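<p>If you specifically want Matlab's column-chunked layout, a small helper gets close (a sketch; pick the chunk width to taste, and it uses the question's <code>a</code>):</p> <pre><code>def print_matlab_style(a, width=15):
    n = a.shape[1]
    for j in range(0, n, width):
        hi = min(j + width, n)
        print('Columns %d through %d' % (j + 1, hi))
        print(a[:, j:hi])

print_matlab_style(a[:, 0:20])
</code></pre>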
python|matlab|numpy
1
2,065
61,949,905
Getting Invalid argument: shape of all inputs must match:values[0].shape = [401408] != values[1].shape = [24485888] when using IoU metric in keras
<p>I'm using UNet to train on the TACO dataset, which is in COCO format. I tried training my model with the accuracy metric, only to end up with validation accuracy and accuracy reaching 1.000, which is honestly too good to be true. I was told that accuracy isn't exactly a fitting metric for segmentation problems, which is why I tried using IoU. Unfortunately, I get the following errors:</p> <pre><code>InvalidArgumentError: 2 root error(s) found. (0) Invalid argument: Shapes of all inputs must match: values[0].shape = [401408] != values[1].shape = [24485888] [[node confusion_matrix/stack_1 (defined at &lt;ipython-input-7-4238ab807505&gt;:96) ]] (1) Invalid argument: Shapes of all inputs must match: values[0].shape = [401408] != values[1].shape = [24485888] [[node confusion_matrix/stack_1 (defined at &lt;ipython-input-7-4238ab807505&gt;:96) ]] [[confusion_matrix/stack_1/_96]] </code></pre> <p>I don't know what I'm doing wrong, since my images are being resized as they are passed into my data generator function by this function:</p> <pre><code>def getImage(imageObj, img_folder, input_image_size): # Read and normalize an image train_img = io.imread(img_folder + '/' + imageObj['file_name'])/255.0 # Resize train_img = cv2.resize(train_img, input_image_size) if (len(train_img.shape)==3 and train_img.shape[2]==3): # If it is a RGB 3 channel image return train_img else: # To handle a black and white image, increase dimensions to 3 stacked_img = np.stack((train_img,)*3, axis=-1) return stacked_img </code></pre> <p>where input_image_size = (224,224). In my UNet model, the input layer is as follows:</p> <pre><code>##Input Layer inputs = Input((IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS)) </code></pre> <p>where IMG_WIDTH and IMG_HEIGHT are both 224, and IMG_CHANNELS = 3. I'm also using sparse_categorical_crossentropy for my loss function. I'm pretty new to this, and I don't know what I'm doing wrong. Any help would be much appreciated. Cheers!</p>
<p>I was facing a similar issue. The problem was with the output layer: the number of filters I had was 1, but the mask had 3 channels, so the filter count was supposed to be 3. Maybe for you too the number of filters in the output layer doesn't match the mask dimensions; try changing it.</p>
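<p>In other words, the last layer's filter count must match the number of classes/channels in the masks. A minimal Keras sketch (assuming a hypothetical <code>num_classes</code> and that <code>x</code> is the last decoder layer of the UNet):</p> <pre><code>from tensorflow.keras.layers import Conv2D

# one filter per class; softmax over the class axis
outputs = Conv2D(num_classes, (1, 1), activation='softmax')(x)
</code></pre>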
python|tensorflow|machine-learning|keras
0
2,066
61,861,814
list reshape as similar to dictionary type
<p>I'm dealing with patent data using pandas and numpy. The steps I've done and the data I've got from the raw data are below.</p> <p>code</p> <pre><code>title = df['title'].tolist() cpc = df_cpu['cpc'].tolist() z = zip(title, cpc) </code></pre> <p>result</p> <pre><code> ('(real-time information transmission system)', 'A61B-0005/0002, A61B-0005/0001, A61B-0005/0021'), ('(skincare counselling system)', 'G06Q-0050/0010'), ('(apparatus for monitoring posture)', 'A61B-0005/1116, A61B-0005/0002'),,.... ) </code></pre> <p>It's basically a list (or tuple) with patent titles and their own CPC codes defining which sub-technology each patent belongs to. In this case, I'd like to split (or should I say reshape) the data I've got as written below. I guess it is not just splitting the data but reshaping it with specific rules.</p> <pre><code>('(real-time information transmission system)', 'A61B-0005/0002'), '(real-time information transmission system)', 'A61B-0005/0001') '(real-time information transmission system)', 'A61B-0005/0021') ('(skincare counselling system)', 'G06Q-0050/0010'), ('(apparatus for monitoring posture)', 'A61B-0005/1116') ('(apparatus for monitoring posture)', 'A61B-0005/0002'),,.... ) </code></pre> <p>I thought about counting the commas and copying titles by the number of commas, but I guess there should be an easier way to do it, and I don't even know how to implement the way I thought of.</p>
<p>If I understood the end goal correctly, you want to use <code>split()</code> to split the cpc codes string, using <code>','</code> as the separator. This will generate a list, which you can then iterate through to create a new list/tuple.</p> <p>Here is a snippet that I think accomplishes what you want:</p> <pre><code>from pprint import pprint z = (('(real-time information transmission system)', 'A61B-0005/0002, A61B-0005/0001, A61B-0005/0021'), ('(skincare counselling system)', 'G06Q-0050/0010'), ('(apparatus for monitoring posture)', 'A61B-0005/1116, A61B-0005/0002')) new_z = [] for title, cpc_codes_str in z: cpc_codes = cpc_codes_str.split(',') for code in cpc_codes: new_z.append((title, code)) pprint(tuple(new_z)) </code></pre> <p>and this is what is printed:</p> <pre class="lang-py prettyprint-override"><code>(('(real-time information transmission system)', 'A61B-0005/0002'), ('(real-time information transmission system)', ' A61B-0005/0001'), ('(real-time information transmission system)', ' A61B-0005/0021'), ('(skincare counselling system)', 'G06Q-0050/0010'), ('(apparatus for monitoring posture)', 'A61B-0005/1116'), ('(apparatus for monitoring posture)', ' A61B-0005/0002')) </code></pre> <p>Hope this helps.</p>
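<p>One small caveat visible in the output above: splitting on <code>','</code> leaves a leading space on all but the first code (e.g. <code>' A61B-0005/0001'</code>). Stripping each piece inside the loop fixes that:</p> <pre><code>for code in cpc_codes:
    new_z.append((title, code.strip()))
</code></pre>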
python|pandas|numpy
1
2,067
58,150,686
How to find the eigenvalues in Python with a matrix different to the identity matrix
<p>I am trying to find the eigenvalues of a characteristic equation in Python. The problem is that in the equation |A - lambda I| = 0, the matrix that multiplies lambda isn't the identity matrix; I should make clear that this matrix, while different from the identity matrix, is a diagonal matrix.</p>
<p>The problem you're facing is known as the generalized eigenvalue problem. An example solution with <code>numpy</code> is given in <a href="https://stackoverflow.com/questions/24752393/solve-generalized-eigenvalue-problem-in-numpy">this</a> question.</p>
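<p>For reference, scipy solves the generalized problem directly. A minimal sketch, assuming <code>A</code> is your matrix and <code>D</code> the diagonal matrix in |A - lambda D| = 0:</p> <pre><code>from scipy.linalg import eig

eigvals, eigvecs = eig(A, D)  # solves A v = lambda D v
</code></pre>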
python|numpy|matrix|linear-algebra|eigenvalue
0
2,068
58,142,567
Streaming NumPy data as input to Tensorflow
<p>I was reading <a href="https://www.tensorflow.org/guide/datasets" rel="nofollow noreferrer">https://www.tensorflow.org/guide/datasets</a> to look for a solution to stream NumPy arrays stored in npz files, which may be too large to fit in memory. This snippet is provided in the documentation:</p> <pre><code># Load the training data into two NumPy arrays, for example using `np.load()`. with np.load("/var/data/training_data.npy") as data: features = data["features"] labels = data["labels"] # Assume that each row of `features` corresponds to the same row as `labels`. assert features.shape[0] == labels.shape[0] features_placeholder = tf.placeholder(features.dtype, features.shape) labels_placeholder = tf.placeholder(labels.dtype, labels.shape) dataset = tf.data.Dataset.from_tensor_slices((features_placeholder, labels_placeholder)) # [Other transformations on `dataset`...] dataset = ... iterator = dataset.make_initializable_iterator() sess.run(iterator.initializer, feed_dict={features_placeholder: features, labels_placeholder: labels}) </code></pre> <p>Does this method really allow you to stream NumPy data? Doesn't <code>features = data["features"]</code> load the data entirely into memory?</p>
<p>The utilities for <code>.npy</code> files indeed allocate the whole array into memory. </p> <p>If all of your input data fit in memory, the simplest way to create a Dataset from them is to convert them to <code>tf.Tensor</code> objects and use <code>Dataset.from_tensor_slices()</code> like you are doing above. </p> <p>In the case that the file doesn't fit into memory, it seems like the only recommended approach is to first convert the <code>npy</code> data into a <code>TFRecord</code> format, and then use the <code>TFRecord</code> data set format, which can be streamed without fully loading into memory. </p> <p>Below is the example: </p> <p><strong>Convert to TFRecords:</strong> </p> <pre><code>def array_to_tfrecords(X, y, output_file): feature = { 'X': tf.train.Feature(float_list=tf.train.FloatList(value=X.flatten())), 'y': tf.train.Feature(float_list=tf.train.FloatList(value=y.flatten())) } example = tf.train.Example(features=tf.train.Features(feature=feature)) serialized = example.SerializeToString() writer = tf.python_io.TFRecordWriter(output_file) writer.write(serialized) writer.close() </code></pre> <p><strong>Read TFRecordDataset:</strong> </p> <pre><code>def parse_proto(example_proto): features = { 'X': tf.FixedLenFeature((345,), tf.float32), 'y': tf.FixedLenFeature((5,), tf.float32), } parsed_features = tf.parse_single_example(example_proto, features) return parsed_features['X'], parsed_features['y'] def read_tfrecords(file_names=("file1.tfrecord", "file2.tfrecord", "file3.tfrecord"), buffer_size=10000, batch_size=100): dataset = tf.contrib.data.TFRecordDataset(file_names) dataset = dataset.map(parse_proto) dataset = dataset.shuffle(buffer_size) dataset = dataset.repeat() dataset = dataset.batch(batch_size) return tf.contrib.data.Iterator.from_structure(dataset.output_types, dataset.output_shapes) </code></pre>
python|numpy|tensorflow
1
2,069
58,066,558
Pandas Group-By and Sum not creating a new Data Frame
<p>I have a dataframe - </p> <pre><code> TransactionDT TransactionAmt TransactionHour 0 86400 68.5 0 1 86401 29.0 1 2 86469 59.0 1 3 86499 50.0 2 4 86506 50.0 3 </code></pre> <p>I want to create a new data frame that sums <code>TransactionAmt</code> grouping by <code>TransactionHour</code>, like - </p> <pre><code> Sum(TransactionAmt) TransactionHour 0 68.5 0 1 88.0 1 (sum of those with TransactionHour == 1) 2 50.0 2 3 50.0 3 </code></pre> <p>The code I wrote was - </p> <pre><code>sliced_data2 = data.groupby(['TransactionHour'])['TransactionAmt'].sum() </code></pre> <p>But it only gives me the <code>Sum(TransactionHour)</code></p>
<p>Pass <code>as_index = False</code> so that <code>TransactionHour</code> comes back as a regular column instead of the index:</p> <pre><code>sliced_data2 = data.groupby('TransactionHour',as_index = False).agg({&quot;TransactionAmt&quot; : &quot;sum&quot;}) </code></pre>
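<p>An equivalent that keeps your original <code>groupby</code> and simply turns the index back into a column:</p> <pre><code>sliced_data2 = (data.groupby('TransactionHour')['TransactionAmt']
                    .sum()
                    .reset_index())
</code></pre>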
python|pandas
1
2,070
34,095,310
Pandas: How to get the column name where a row contains the date?
<p>I have a dataframe named <code>DateUnique</code> made of all unique dates (format datetime or string) that are present in my other dataframe named <code>A</code>.</p> <pre><code>&gt;&gt;&gt; print(A) 'dateLivraisonDemande' 'abscisse' 'BaseASDébut' 'BaseATDébut' 0 2015-05-27 2004-01-10 05:00:00 05:00:00 1 2015-05-27 2004-02-10 18:30:00 22:30:00 2 2015-05-27 2004-01-20 23:40:00 19:30:00 3 2015-05-27 2004-03-10 12:05:00 06:00:00 4 2015-05-27 2004-01-10 23:15:00 13:10:00 5 2015-05-27 2004-02-10 18:00:00 13:45:00 6 2015-05-27 2004-01-20 02:05:00 19:15:00 7 2015-05-27 2004-03-20 08:00:00 07:45:00 8 2015-05-29 2004-01-01 18:45:00 21:00:00 9 2015-05-27 2004-02-15 04:20:00 07:30:00 10 2015-04-10 2004-01-20 13:50:00 15:30:00 </code></pre> <p>And:</p> <pre><code>&gt;&gt;&gt; print(DateUnique) 1 1899-12-30 2 1900-01-01 3 2004-03-10 4 2004-03-20 5 2004-01-20 6 2015-05-29 7 2015-04-10 8 2015-05-27 9 2004-02-15 10 2004-02-10 </code></pre> <ul> <li>How can I get the name of the columns that contain each date? </li> <li><p>Maybe with something similar to this:</p> <pre><code># input: If row == '2015-04-10': print(df.name_Of_Column([0])) # output: 'dateLivraisonDemande' </code></pre></li> </ul>
<p>You can make a function that returns the appropriate column. Use the vectorized <code>isin</code> function, and then check if <code>any</code> value is <code>True</code>.</p> <pre><code>df = pd.DataFrame({'dateLivraisonDemande': ['2015-05-27']*7 + ['2015-05-27', '2015-05-29', '2015-04-10'], 'abscisse': ['2004-02-10', '2004-01-20', '2004-03-10', '2004-01-10', '2004-02-10', '2004-01-20', '2004-03-10', '2004-01-10', '2004-02-15', '2004-01-20']}) DateUnique = pd.Series(['1899-12-30', '1900-01-01', '2004-03-10', '2004-03-20', '2004-01-20', '2015-05-29', '2015-04-10', '2015-05-27', '2004-02-15', '2004-02-10']) def return_date_columns(date_input): if df["dateLivraisonDemande"].isin([date_input]).any(): return "dateLivraisonDemande" if df["abscisse"].isin([date_input]).any(): return "abscisse" &gt;&gt;&gt; DateUnique.apply(return_date_columns) 0 None 1 None 2 abscisse 3 None 4 abscisse 5 dateLivraisonDemande 6 dateLivraisonDemande 7 dateLivraisonDemande 8 abscisse 9 abscisse dtype: object </code></pre>
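<p>If you don't want to hard-code the two column names, the same check can loop over whatever columns the frame has (first match wins):</p> <pre><code>def return_date_columns(date_input):
    for col in df.columns:
        if df[col].isin([date_input]).any():
            return col
    return None

DateUnique.apply(return_date_columns)
</code></pre>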
python|pandas
1
2,071
37,006,897
scipy optimize SLSQP only takes last ineq constraint into account
<p>Let's say I have a portfolio with weights, sum = 1.<br> Then I want to define pockets (0, 1, 2) with some assets included in those pockets, and sum(weights_pocket_assets) &lt; pocket_max_weight.<br> On my UI, I have 3 columns, one per pocket, filled with 1 if the asset is in the pocket, 0 otherwise (this array is called 'pockets').</p> <pre><code>mask = list(map(int, pockets[0])) print(pocket_max[0], mask) constr0 = {'type': 'ineq', 'fun': lambda x: pocket_max[0] - np.sum(np.ma.array(x, mask=np.logical_not(mask)))} mask = list(map(int, pockets[1])) print(pocket_max[1], mask) constr1 = {'type': 'ineq', 'fun': lambda x: pocket_max[1] - np.sum(np.ma.array(x, mask=np.logical_not(mask)))} mask = list(map(int, pockets[2])) print(pocket_max[2], mask) constr2 = {'type': 'ineq', 'fun': lambda x: pocket_max[2] - np.sum(np.ma.array(x, mask=np.logical_not(mask)))} constr = [{'type': 'eq', 'fun': lambda x: np.sum(x) - 1}, constr0, constr1, constr2] print(constr) </code></pre> <p>gives as output:</p> <pre><code>0.04 [1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] 0.08 [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] 0.05 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0] [{'fun': &lt;function Book.optimize.&lt;locals&gt;.&lt;lambda&gt; at 0x00000060CCD9AD90&gt;, 'type': 'eq'}, {'fun': &lt;function Book.optimize.&lt;locals&gt;.&lt;lambda&gt; at 0x00000060CCD9AD08&gt;, 'type': 'ineq'}, {'fun': &lt;function Book.optimize.&lt;locals&gt;.&lt;lambda&gt; at 0x00000060CCD9ABF8&gt;, 'type': 'ineq'}, {'fun': &lt;function Book.optimize.&lt;locals&gt;.&lt;lambda&gt; at 0x00000060CCD9AAE8&gt;, 'type': 'ineq'}] </code></pre> <p>which seems correct.<br> The problem is that the optimization only honors the eq constraint and the last ineq (i.e. sum(w) = 1 and sum(w_pocket_2) &lt;= 0.05).<br> Also, if I have only 2 pockets, it optimizes with the last one only.<br> In short, only the last ineq is taken into account in the optimization... I don't know what's wrong.</p> <p>EDIT:<br> If the pockets are equal (i.e. same components for each pocket, i.e. same mask), then all 3 'ineq' constraints are taken into account (the most constraining one in fact, and it doesn't matter whether it's the first, second, or third 'ineq' that is the most constraining).<br> As soon as the pockets don't have the same components (e.g. we add a component for the second 'ineq'), only the last 'ineq' is used by the optimization.</p>
<p>As is often the case, it was a problem with variable scope: the lambdas captured the loop variables (in particular <code>mask</code>) by reference, not by value, so when the optimizer evaluated them, every 'ineq' constraint saw the values from the last assignment. Passing the values explicitly through <code>'args'</code> fixes it; the following code works as expected (all pocket constraints taken into account):</p> <pre><code>constr += ({'type': 'ineq', 'fun': lambda x, mask, pocket_max, i: pocket_max[i] - np.sum(np.ma.array(x, mask=np.logical_not(mask))), 'args': (mask, pocket_max, i )}, ) </code></pre>
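<p>An equivalent fix without the <code>'args'</code> mechanism is to bind the loop variables as default arguments, which freezes their current values at the moment each lambda is defined:</p> <pre><code>constr += ({'type': 'ineq',
            'fun': lambda x, m=mask, p=pocket_max[i]:
                p - np.sum(np.ma.array(x, mask=np.logical_not(m)))}, )
</code></pre>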
python|python-3.x|numpy|scipy
0
2,072
36,940,253
TensorFlow: change tanh of BasicRNNCell to another op?
<p>I want to try some other transfer functions besides the default <code>tanh</code> in TensorFlow's <code>BasicRNNCell</code>.</p> <p>The original implementation is like:</p> <pre><code>class BasicRNNCell(RNNCell): (...) def __call__(self, inputs, state, scope=None): """Most basic RNN: output = new_state = tanh(W * input + U * state + B).""" with vs.variable_scope(scope or type(self).__name__): # "BasicRNNCell" output = tanh(linear([inputs, state], self._num_units, True)) return output, output </code></pre> <p>...and I changed it to:</p> <pre><code>class MyRNNCell(BasicRNNCell): (...) def __call__(self, inputs, state, scope=None): """Most basic RNN: output = new_state = tanh(W * input + U * state + B).""" with tf.variable_scope(scope or type(self).__name__): # "BasicRNNCell" output = my_transfer_function(linear([inputs, state], self._num_units, True)) return output, output </code></pre> <p>Changing <code>vs.variable_scope</code> to <code>tf.variable_scope</code> was successful, but <code>linear</code> is an implementation in &gt; <strong><a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/rnn_cell.py" rel="nofollow">rnn_cell.py</a></strong> &lt; and not available in <code>tf</code> itself.</p> <p>How can I get this to work?</p> <p>Do I have to re-implement <code>linear</code> completely? (I already checked the code; I think I would run into dependency problems there, too...)</p>
<p>You do not need to change the TensorFlow implementation for this.</p> <p><code>BasicRNNCell</code> has a constructor parameter called <code>activation</code>. You can simply change it from the default <code>tf.tanh</code> to whatever activation function you want.</p>
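<p>A minimal sketch (the module path varied across early TF releases, so adjust the import to your version):</p> <pre><code>cell = tf.nn.rnn_cell.BasicRNNCell(num_units, activation=my_transfer_function)
</code></pre>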
python|inheritance|neural-network|tensorflow|recurrent-neural-network
2
2,073
54,747,845
Bokeh graph doesn't plot properly
<p>The following code doesn't generate a graph:</p> <pre><code>import pandas import numpy as np from bokeh.plotting import figure, show, output_file from bokeh.io import output_notebook from datetime import datetime output_notebook() TOOLS="hover,crosshair,pan,wheel_zoom,zoom_in,zoom_out,box_zoom,undo,redo,reset,\ tap,save,box_select,poly_select,lasso_select," df = pandas.read_csv('./logs.csv') df['datetime'] = pd.to_datetime(df['datetime']) xvals = df['datetime'].dt.strftime('%Y-%m-%d') yvals = df['datetime'].dt.strftime('%H:%M:%S') p = figure(title="Test Title", width=500, height=500, \ x_axis_type="datetime", y_axis_type="datetime", \ x_range=(df.iloc[-1]['datetime'].strftime('%Y/%m/%d'),\ df.iloc[0]['datetime'].strftime('%Y/%m/%d')),\ y_range=('00:00:00','23:59:59'),\ tools=TOOLS) p.scatter(xvals, yvals, alpha=0.5) show(p) </code></pre> <p>The graph produced is blank. What is the problem?</p> <p>EDIT:</p> <p>I updated the code with</p> <pre><code>xvals = df['datetime'].dt.date yvals = df['datetime'].dt.time p = figure(title="Activity history", width=800, height=500, \ x_axis_type='datetime', y_axis_type='datetime',\ x_axis_label="Date", y_axis_label="Time",\ tools=TOOLS) p.scatter(xvals, yvals, alpha=0.3) show(p) </code></pre> <p>And this produces a graph.</p>
<p>OK, as far as I can tell, this is what you want (using some project sample data, since you did not provide anything to run your code with):</p> <pre><code>from bokeh.plotting import figure, show from bokeh.sampledata.commits import data p = figure(x_axis_type="datetime", y_axis_type="datetime") p.circle(x=data.index, y=data.index.time) show(p) </code></pre> <p><a href="https://i.stack.imgur.com/cgl84.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cgl84.png" alt="enter image description here"></a></p> <p>The <code>datetime</code> axis type, as the name suggests, treats the timestamps as datetimes. That is, the times are interpreted as hours of the day on the first day of the first year of the Epoch. That's why the axis starts and ends with <em>1/01</em> and <em>1/02</em>. You might want to <a href="https://bokeh.pydata.org/en/1.0.4/docs/user_guide/styling.html#tick-label-formats" rel="nofollow noreferrer">customize the tick formatter</a> to display just the hours. </p> <p>For reference, <code>data</code> looks like this:</p> <p><a href="https://i.stack.imgur.com/fVqsy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fVqsy.png" alt="enter image description here"></a></p>
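<p>For the hour-only labels, something along these lines can work (a sketch; the exact formatter properties have changed between Bokeh versions):</p> <pre><code>from bokeh.models import DatetimeTickFormatter

p.yaxis.formatter = DatetimeTickFormatter(hours=['%H:%M'])
</code></pre>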
python|pandas|bokeh
1
2,074
49,710,174
Is there a way to recognise an object in an image?
<p>I am looking for a pre-trained deep learning model which can recognise an object in an image. Usually the images are of the type used on shopping websites for products. I want to recognise what the product in the image is. I have come across some pre-trained models like VGG and Inception, but they seem to be trained on a fairly small set of general objects, around 1000 classes. I am looking for something which is trained on more, like 10000 classes or more.</p>
<p>I think the best way to do this is to build your own training set with the labels that you need to predict, then take an existing pre-trained model like VGG, remove the last fully connected layers and train the model with your data, a process called transfer learning. Some more info <a href="https://www.tensorflow.org/tutorials/image_retraining" rel="nofollow noreferrer">here</a>.</p>
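<p>A minimal Keras sketch of that transfer-learning setup (a hypothetical <code>num_classes</code> and your own labelled images are assumed; this is an outline, not a tuned model):</p> <pre><code>from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained convolutional layers

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
</code></pre>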
tensorflow|deep-learning|keras|image-recognition
0
2,075
49,556,135
Load one-line-json formatted data into Pandas DataFrame
<p>I have a JSON doc with 7 columns and only 1 row. I am not able to load this JSON into a DataFrame with read_json.</p> <pre><code>url_global = 'https://api.coinmarketcap.com/v1/global/' df_global = pd.read_json(url_global) ValueError: If using all scalar values, you must pass an index </code></pre>
<p>The parameters of this function are somewhat complicated and non-orthogonal. I find it helpful to use</p> <pre><code>pd.read_json("https://api.coinmarketcap.com/v1/global/", typ='series') </code></pre> <p>The output would be (the type is 'pandas.core.series.Series'):</p> <pre><code>active_assets 6.770000e+02 active_currencies 9.170000e+02 active_markets 9.790000e+03 bitcoin_percentage_of_market_cap 4.565000e+01 last_updated 1.522332e+09 total_24h_volume_usd 1.386394e+10 total_market_cap_usd 2.790481e+11 dtype: float64 </code></pre> <p>The reason is that your JSON is not really a <code>DataFrame</code> (matrix) and is more like a <code>Series</code> (row). To find out more about its functionality (what counts as a valid DataFrame and how the axes are arranged), refer to <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_json.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_json.html</a>.</p>
python|pandas|dataframe
0
2,076
27,913,806
How to keep rows where at least one column satisfies a condition in Pandas
<p>I have the following DF:</p> <pre><code>In [1]: import pandas as pd In [2]: mydict = {'foo':[0, 0.3,5], 'bar':[1,0.55,0.1], 'qux': [0.3,4.1,4]} In [3]: df = pd.DataFrame.from_dict(mydict, orient='index') In [4]: df Out[4]: 0 1 2 qux 0.3 4.10 4.0 foo 0.0 0.30 5.0 bar 1.0 0.55 0.1 </code></pre> <p>What I want to do is to keep rows if at least one of the column is > 2. The final output looks like this:</p> <pre><code> 0 1 2 qux 0.3 4.10 4.0 foo 0.0 0.30 5.0 </code></pre> <p>What's the way to do it in Pandas?</p>
<pre><code>In [201]: df.loc[(df &gt; 2).any(axis=1)] Out[201]: 0 1 2 qux 0.3 4.1 4 foo 0.0 0.3 5 </code></pre>
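<p>For readers new to this idiom: the comparison builds a boolean frame and <code>any(axis=1)</code> collapses it to one flag per row, so an equivalent spelled-out version is:</p> <pre><code>mask = (df &gt; 2).any(axis=1)  # True for rows with at least one value &gt; 2
df.loc[mask]
</code></pre>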
python|pandas
10
2,077
73,294,933
Pytorch: How to generate random vectors with length in a certain range?
<p>I want a <code>k</code> by <code>3</code> by <code>n</code> tensor representing <code>k</code> batches of <code>n</code> random 3d vectors, each vector has a magnitude (Euclidean norm) between <code>a</code> and <code>b</code>. Other than rescaling the entries of a random <code>kx3xn</code> tensor to <code>n</code> random lengths in a for loop, is there a better/more idiomatic way to do this?</p>
<p>Assuming <code>a &lt; b</code>, you now have a constraint on the 3rd random number due to the norm, i.e. <code>sqrt(a^2 - x^2 - y^2) &lt; z &lt; sqrt(b^2 - x^2 - y^2)</code></p> <p>Now <code>a^2 - x^2 - y^2 &gt; 0</code> which implies that <code>x^2 + y^2 &lt; a^2</code></p> <p>We need to generate <code>x</code> and <code>y</code> such that <code>x^2 + y^2 &lt; a^2</code>:</p> <pre><code>import numpy as np def rand_generator(a,b,n,k): req_array = np.zeros((n,k,3)) # first generate random numbers for x i.e 0&lt;x&lt;a req_array[:,:,0] = np.random.rand(n,k)*a # now generate random numbers for y such that 0 &lt; y &lt; sqrt(a^2 - x^2) req_array[:,:,1] = np.random.rand( n,k) * np.sqrt(a**2 - req_array[:,:,0]**2) norm_temp = np.linalg.norm(req_array,axis=2) a1 = np.sqrt(a**2 - norm_temp**2) b1 = np.sqrt(b**2 - norm_temp**2) # generate numbers for z such that they are between a1 and b1 req_array[:,:,2] = a1 + np.random.rand(n,k)*(b1-a1) return req_array ll = rand_generator(2,5,10,12) lp = np.linalg.norm(ll,axis=2) print(np.all(lp&gt;2) and np.all(lp&lt;5)) ##output: True </code></pre> <hr /> <p>You can also use spherical coordinates for this (which is exactly the same as above): <code>x = rsin(theta)cos(phi)</code>, <code>y = rsin(theta)sin(phi)</code>, <code>z = rcos(theta)</code> with <code>a &lt; r &lt; b</code>, <code>0 &lt; theta &lt; pi/2</code> and <code>0 &lt; phi &lt; pi/2</code></p> <pre><code>import numpy as np def rand_generator(a,b,n,k): req_array = np.zeros((n,k,3)) # first generate random numbers for r in [a,b) r = a + np.random.rand(n,k)*(b-a) # now generate random numbers for theta in [0,pi/2) theta = np.random.rand( n,k) * np.pi/2 # now generate random numbers for phi in [0,pi/2) phi = np.random.rand( n,k) * np.pi/2 req_array[:,:,0] = r*np.sin(theta)*np.cos(phi) req_array[:,:,1] = r*np.sin(theta)*np.sin(phi) req_array[:,:,2] = r*np.cos(theta) return req_array ll = rand_generator(2,5,10,12) lp = np.linalg.norm(ll,axis=2) print(np.all(lp&gt;2) and np.all(lp&lt;5)) ##output: True </code></pre>
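<p>Since the question asks for PyTorch, here is a minimal torch sketch of the same idea: sample directions by normalising Gaussian draws (which, unlike the octant-restricted construction above, covers the whole sphere) and scale by a radius drawn from [a, b):</p> <pre><code>import torch

def rand_vectors(a, b, k, n):
    # Gaussian draws normalised along dim 1 give uniformly distributed directions
    v = torch.randn(k, 3, n)
    v = v / v.norm(dim=1, keepdim=True)
    # radii drawn uniformly from [a, b)
    r = a + (b - a) * torch.rand(k, 1, n)
    return v * r

t = rand_vectors(2, 5, 10, 12)
norms = t.norm(dim=1)
print(bool((norms &gt; 2).all() and (norms &lt; 5).all()))  # True
</code></pre>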
python|numpy|random|pytorch
2
2,078
73,354,949
Reformatting a dataframe to replace repeating similar rows with a new column
<p>Input.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Name</th> <th>Phrase number</th> <th>Words said</th> </tr> </thead> <tbody> <tr> <td>John</td> <td>Phrase 1</td> <td>Hi!</td> </tr> <tr> <td>John</td> <td>Phrase 2</td> <td>How are you?</td> </tr> <tr> <td>John</td> <td>Phrase 3</td> <td>Is everything okay?</td> </tr> <tr> <td>Brad</td> <td>Phrase 1</td> <td>Hello!</td> </tr> <tr> <td>Brad</td> <td>Phrase 2</td> <td>I am good!</td> </tr> <tr> <td>Brad</td> <td>Phrase 3</td> <td>How are you?</td> </tr> </tbody> </table> </div> <p>Desired output.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Name</th> <th>Phrase 1</th> <th>Phrase 2</th> <th>Phrase 3</th> </tr> </thead> <tbody> <tr> <td>John</td> <td>Hi!</td> <td>How are you?</td> <td>Is everything okay?</td> </tr> <tr> <td>Brad</td> <td>Hello!</td> <td>I am good!</td> <td>How are you?</td> </tr> </tbody> </table> </div> <p>How would you solve this with Pandas?</p>
<p>You can use <code>pivot</code>, but you have to use a few other methods to clean up the index and column names (in order to exactly match the desired output):</p> <pre><code>df = (df.pivot(index='Name', columns='Phrase number') .droplevel(0, axis=1) .reset_index() .rename_axis('', axis=1)) df Out[1]: Name Phrase 1 Phrase 2 Phrase 3 0 Brad Hello! I am good! How are you? 1 John Hi! How are you? Is everything okay? </code></pre>
python|pandas
0
2,079
73,320,055
How to apply a low pass filter to a dicom image in python?
<p>I am trying to apply some blur using a low pass filter to a dicom image, however my resulting dicom image is not correct (see image below) (all data below is publicly available)</p> <pre><code>from scipy import fftpack import numpy as np import imageio from PIL import Image, ImageDraw import numpy as np import pydicom def test(matrix): image1_np = matrix #read_xray2(&quot;./CT000000.dcm&quot;) #fft of image fft1 = fftpack.fftshift(fftpack.fft2(image1_np)) #Create a low pass filter image x,y = image1_np.shape[0],image1_np.shape[1] #size of circle e_x,e_y=50,50 #create a box bbox=((x/2)-(e_x/2),(y/2)-(e_y/2),(x/2)+(e_x/2),(y/2)+(e_y/2)) low_pass=Image.new(&quot;L&quot;,(image1_np.shape[0],image1_np.shape[1]),color=0) draw1=ImageDraw.Draw(low_pass) draw1.ellipse(bbox, fill=1) low_pass_np=np.array(low_pass) #multiply both the images filtered=np.multiply(fft1,low_pass_np) #inverse fft ifft2 = np.real(fftpack.ifft2(fftpack.ifftshift(filtered))) ifft2 = np.maximum(0, np.minimum(ifft2, 255)) return ifft2 dicom = pydicom.dcmread(&quot;./CT000000.dcm&quot;) dicom.PixelData = test(dicom.pixel_array) dicom.save_as(r&quot;./result.dcm&quot;) </code></pre> <p>Original Image <a href="https://i.stack.imgur.com/c3Uwe.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c3Uwe.jpg" alt="original image" /></a></p> <p>Resulting Image <a href="https://i.stack.imgur.com/oBbjX.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oBbjX.jpg" alt="resulting image" /></a></p>
<p>I fixed the code using <code>GaussianBlur</code> from the cv2 library:</p> <pre><code>import cv2 import pydicom dicom = pydicom.dcmread(&quot;./CT000000.dcm&quot;) # GaussianBlur preserves the input dtype; PixelData expects raw bytes blurred = cv2.GaussianBlur(dicom.pixel_array, (7, 7), 0) dicom.PixelData = blurred.tobytes() #save the image dicom.save_as(r&quot;./result.dcm&quot;) </code></pre>
python|numpy|opencv|image-processing|pydicom
0
2,080
34,917,727
Stacked bar plot by grouped data with pandas
<p>Let's assume I have a <code>pandas</code> dataframe which has many features and I am interested in two. I'll call them <code>feature1</code> and <code>feature2</code>.</p> <p><code>feature1</code> can have three possible values. <code>feature2</code> can have two possible values.</p> <p>I need a bar plot grouped by <code>feature1</code> and stacked by the count of rows with each value of <code>feature2</code>. (So there will be three stacks, each with two bars.)</p> <p>How can I achieve this?</p> <p>At the moment I have</p> <pre><code>import pandas as pd df = pd.read_csv('data.csv') df['feature1'][df['feature2'] == 0].value_counts().plot(kind='bar',label='0') df['feature1'][df['feature2'] == 1].value_counts().plot(kind='bar',label='1') </code></pre> <p>but that is not what I actually want because it doesn't stack them.</p>
<p>Also, I have found another way to do this (with pandas):</p> <p><code>df.groupby(['feature1', 'feature2']).size().unstack().plot(kind='bar', stacked=True)</code></p> <p>Source: <a href="https://stackoverflow.com/questions/26683654/making-a-stacked-barchart-in-pandas">making a stacked barchart in pandas</a></p>
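<p>To see what that one-liner feeds to <code>plot</code>, it can help to split it up; adding <code>fill_value=0</code> is my own tweak so missing combinations count as zero rather than NaN:</p> <pre><code>counts = df.groupby(['feature1', 'feature2']).size().unstack(fill_value=0)
# counts now has one row per feature1 value and one column per feature2 value
counts.plot(kind='bar', stacked=True)
</code></pre>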
python|pandas|plot
24
2,081
67,495,100
Problem with freezing pytorch model - requires_grad is always true
<p>I have tried to freeze part of my model but it does not work. Gradient computation is still enabled for each layer. Is that some sort of bug or am I doing something wrong? :)</p> <pre class="lang-py prettyprint-override"><code>model = models.resnet18(pretrained=True) # To freeze the residual layers for param in model.parameters(): param.require_grad = False for param in model.fc.parameters(): param.require_grad = True # Replace last layer num_features = model.fc.in_features model.fc = nn.Linear(num_features, 2) model.fc = nn.Dropout(0.5) # Find total parameters and trainable parameters total_params = sum(p.numel() for p in model.parameters()) print(f'{total_params:,} total parameters.') # &gt;&gt;&gt; 21,284,672 total parameters. total_trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad) print(f'{total_trainable_params:,} training parameters.') # &gt;&gt;&gt; 21,284,672 training parameters. </code></pre>
<p>This is just a typo (<code>require_grad</code> must be <code>requires_grad</code>):</p> <pre class="lang-py prettyprint-override"><code># To freeze the residual layers for param in model.parameters(): param.requires_grad = False # it was require_grad for param in model.fc.parameters(): param.requires_grad = True # it was require_grad </code></pre>
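<p>The typo fails silently because PyTorch tensors accept arbitrary Python attributes, so <code>p.require_grad = False</code> just attaches a new, unused attribute. A quick sanity check after the fix:</p> <pre><code>trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'{trainable:,} trainable parameters')  # with the fix, only un-frozen layers are counted
</code></pre>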
python|pytorch
2
2,082
67,286,133
Creating new column based on other column values with condition
<p>I have a column with values:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">brand</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">Brand1</td> </tr> <tr> <td style="text-align: left;">Brand2</td> </tr> <tr> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">Brand3</td> </tr> </tbody> </table> </div> <pre><code>data.brand = data.brand.astype(str) data.brand = data.brand.replace(r'^\s*$', np.nan, regex=True) data['branded'] = np.where(data['brand']!= 'nan', True, False) </code></pre> <p>after first init of the code I get results:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">brand</th> <th style="text-align: left;">branded</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">Brand1</td> <td style="text-align: left;">TRUE</td> </tr> <tr> <td style="text-align: left;">Brand2</td> <td style="text-align: left;">TRUE</td> </tr> <tr> <td style="text-align: left;">nan</td> <td style="text-align: left;">TRUE</td> </tr> <tr> <td style="text-align: left;">Brand3</td> <td style="text-align: left;">TRUE</td> </tr> </tbody> </table> </div> <p>after second init of the same code I get desired results:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">brand</th> <th style="text-align: left;">branded</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">Brand1</td> <td style="text-align: left;">TRUE</td> </tr> <tr> <td style="text-align: left;">Brand2</td> <td style="text-align: left;">TRUE</td> </tr> <tr> <td style="text-align: left;">nan</td> <td style="text-align: left;">FALSE</td> </tr> <tr> <td style="text-align: left;">Brand3</td> <td style="text-align: left;">TRUE</td> </tr> </tbody> </table> </div> <p>What could be the smarter way to face/avoid this problem?</p>
<p>This answer just focuses on <em>why the first iteration did not work</em>.</p> <p>In your code, when you replace values in <code>data.brand</code> with the <code>regex</code>, you replace them with <code>np.nan</code>, which is not the string <code>'nan'</code>, so on the first run the condition in the next line, <code>np.where(data['brand']!= 'nan', True, False)</code>, never matches. However, on the second run the row already holds <code>np.nan</code>, and the <code>.astype(str)</code> in the first line converts <code>np.nan</code> to the string <code>'nan'</code>, hence the third line works.</p> <p>Solution:</p> <p>Replace:</p> <pre><code>data.brand = data.brand.replace(r'^\s*$', np.nan, regex=True) </code></pre> <p>With:</p> <pre><code>data.brand = data.brand.replace(r'^\s*$', 'nan', regex=True) </code></pre> <p>This will set the replacement value to <code>'nan'</code> from the get-go, so the third line will run fine in the first iteration.</p>
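<p>The distinction is easy to demonstrate in isolation:</p> <pre><code>import numpy as np

print(np.nan == 'nan')  # False: the float NaN is not the string 'nan'
print(str(np.nan))      # 'nan': astype(str) makes the comparison match on run two
</code></pre>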
python|pandas|dataframe|numpy
2
2,083
34,670,464
How can I add coordinate system / reference frame information to my data to avoid errors?
<p>Often times when dealing with vectors, reference frames are implicitly enforced through documentation, comments, or worse, (human) memory. For example, I want to compute the torque acting on a body moving with a given velocity from a plane due to drag (using a simple drag model):</p> <pre><code>torque = velocity.dot(normal) * position.cross(normal) </code></pre> <p>Here, a plane that is <code>position</code> away from the center of the body has normal <code>normal</code>. The body is moving at velocity <code>velocity</code>. The torque calculated will only be correct if all three quantities are w.r.t the same reference frame or coordinate system. If the quantities are obtained from different frames, then they will have to be converted before computing the torque:</p> <pre><code>velocity_A = B_to_A * velocity_source # velocity comes in frame B position_A = C_to_A * position_source # position comes in frame C torque_A = velocity_A.dot(normal_source) + ... </code></pre> <p>This is tedious and error prone. I would like this information to explicitly tracked so that errors cannot occur:</p> <pre><code>A, B, C = Frame() B.conversion_to(B_to_A) # etc. do this ONCE velocity = Quantity(velocity_source, B) position = Quantity(position_source, C) normal = Quantity(normal_source, A) torque = velocity.dot(normal) * position.cross(normal) total_torque = torque + some_other_torque # do other computations similarly external_thing.send_data(total_torque.to(D)) # This expects torque in the D reference frame </code></pre> <p>Essentially, all of the conversions are gone and all the programmer needs to do is implement the math and proper computations. Internally, the framework will have freedom to choose how to compute most efficiently (using the least number of transforms). It can even avoid any computation unless values are needed outside of the framework to find optimizations, but the internals are not important.</p> <p>How can one achieve such an interface? I am familiar with Python's pint (<a href="https://pint.readthedocs.org/en/0.6/" rel="nofollow">https://pint.readthedocs.org/en/0.6/</a>) but it doesn't seem general enough to handle coordinate frames. C++ has Boost::units, but that also does not seem general enough. Ideally, the system would work with numpy arrays. I would like to avoid rewriting a vector library.</p> <p>I have attempted to implement something like this in Python, but it looks like this:</p> <pre><code>vel = Quantity(velocity_source, B) pos = Quantity(position_source, C) normal = Quantity(normal_source, A) computation = lambda vel, pos, normal: vel.dot(normal) * pos.cross(normal) torque = compute(computation, vel=vel, pos=pos, normal=normal) </code></pre> <p>This is not ideal because everything needs to be done using functions or lambdas. Ideally, the system would get out of your way, e.g. you can add two <code>Quantity</code>s together without knowing they are <code>Quantity</code>s.</p> <p>How can one best achieve such a framework? If the motivation isn't clear please let me know and I will clarify. This seems like something that would be very useful in any graphics or simulation engine, yet hours of searching have turned up nothing. Language doesn't particularly matter, I'm mostly looking for general ideas.</p>
<p>If I understand correctly you are looking to implement what Sympy provides in the vector module. Have a <a href="http://docs.sympy.org/0.7.2/modules/physics/mechanics/vectors.html" rel="nofollow noreferrer">look</a> at the <code>ReferenceFrame</code> class. </p> <pre><code>from sympy.physics.vector import ReferenceFrame, express from mpmath import radians A = ReferenceFrame('A') B = ReferenceFrame('B') # define some relationship between the two systems like a rotation* B.orient(A, 'Axis', [radians(90), A.z]) # define a vector in frame A vector = [1, 0, 0] vector_inA = vector[0]*A.x + vector[1]*A.y + vector[2]*A.z print vector_inA # &gt;A.x # determine the vector coordinates in frame B vector_inB = express(vector_inA, B) print vector_inB # &gt;6.12323399573677e-17*B.x - B.y </code></pre> <p>*Rotation is defined as anticlockwise around the z axis as the viewer looks in the direction of z for a right-handed Cartesian system</p>
python|math|numpy|coordinates|physics
2
2,084
60,068,277
free up the memory allocation cuda pytorch?
<blockquote> <p>RuntimeError: CUDA out of memory. Tried to allocate 12.00 MiB (GPU 1; 11.91 GiB total capacity; 10.12 GiB already allocated; 21.75 MiB free; 56.79 MiB cached)</p> </blockquote> <p>I encountered the preceding error during pytorch training. <br/> I'm using pytorch on jupyter notebook. Is there a way to free up the gpu memory in jupyter notebook?</p>
<p>I had the same issue some time back. There are generally two ways I go about it.</p> <ol> <li>Decrease the batch size</li> </ol> <p>Sometimes, even after decreasing the batch size to 1, the issue persisted. Then I changed my approach as follows.</p> <ol start="2"> <li>Decrease the image size (or patch size, depending upon your implementation). Decreasing the image size also frees up space for you to increase your batch size.</li> </ol> <p>But the second approach is less recommended, because we want the network to learn different features of the image in relation to each other, and decreasing the image size reduces the network's ability to learn finer details. (Depending upon your needs, you may have to make this trade-off.)</p>
gpu|pytorch
2
2,085
59,965,978
Making computation more efficient using a binary file
<p>I am solving N coupled differential equations (u1(t),v1(t),u2(t),v2(t),...) iteratively. I have a ring of N oscillators, and each oscillator is connected to P neighbours. I am trying to improve the efficiency by not saving all of my iteration steps into lists, but instead by exporting my results for every 10th time step into a binary file which I later import so that I can plot the results over time. The following is my old code where I haven't used a binary file. The results are good, but it's inefficient:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt dt = 0.001 ts = np.arange(0, 30, dt) N, P = 4, 2 u = np.array([np.zeros(len(ts)) for i in range(N)]) v = np.array([np.zeros(len(ts)) for i in range(N)]) def a_u(j,t,P,u,v): del_li = [] for k in range(j-P,j+P): del_li.append(u[k][t-1] - u[j][t-1]) return (u[j][t-1] - ((u[j][t-1])**3)/3 - v[j][t-1] + (1/(4*P))*sum(del_li)) for t in range(len(ts)): for j in range(-P,P): u[j][t] = u[j][t-1] + a_u(j,t,P,u,v)*dt v[j][t] = v[j][t-1] + (u[j][t-1] + 1.05)*dt + np.random.normal(scale=np.sqrt(dt)) </code></pre> <p>My attempt to make the above code faster using a binary file looks as follows:</p> <pre><code>u, v = np.array(np.zeros(N)), np.array(np.zeros(N)) def a_u(j,t,P,u,v): del_li = [] for k in range(j-P,j+P): del_li.append(u[k] - u[j]) return (u[j] - ((u[j])**3)/3 - v[j] + (1/(4*P))*sum(del_li)) with open('oscillators.bin', 'wb') as f: # write binary file for t in range(len(ts)): osc_list = [] for j in range(-P,P): u[j] += a_u(j,t,P,u,v)*dt v[j] += (u[j] + 1.05)*dt + np.random.normal(scale=np.sqrt(dt)) if not t % 10: osc_list.append(u[j]) osc_list.append(v[j]) if j==P: np.save(f, osc_list) fp = open("oscillators.bin", 'rb') # read binary file a = [] for i in range(int(len(ts)/10)): a.append(np.load(fp)) A = np.array(a) # u[1], v[1], ... = A[:,0], A[:,1], ... </code></pre> <p>Am I taking the correct steps to improve the efficiency of my code? My real code is much more complicated than this, and the parameters that I am using are much larger, so efficiency is important. </p>
<p>In your second computation, in the first line, you are allocating the <code>u</code> and <code>v</code> arrays to the same memory location. That is, when you assign to <code>u[j]</code> and <code>v[j]</code>, you assign to the same place, overwriting the previous content. This will give a completely different computation.</p> <p>Using loops in the way you do will only be efficient if you compile the code with cython or similar, where the usual overhead of the type-ambiguous python language gets reduced and you get the advantage of avoiding the allocation and garbage collection of the <code>u,v</code> arrays in every step. Otherwise the numpy mechanism of vectorized operations</p> <pre><code>u, v = u + (u - (u**3)/3 - v)*dt, v + (u + 1.05)*dt + np.random.normal(scale=np.sqrt(dt), size=len(v)) </code></pre> <p>is faster. But still, this form does not include any coupling to neighboring nodes.</p>
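<p>For completeness, the neighbour coupling itself can also be vectorised on the ring with <code>np.roll</code>; a sketch, assuming the periodic wrap-around that the negative indices in the question imply:</p> <pre><code>import numpy as np

def coupling(u, P):
    # sum of (u[j+s] - u[j]) over offsets s = -P, ..., P-1, wrapping around the ring
    return sum(np.roll(u, -s) - u for s in range(-P, P)) / (4 * P)

# one vectorised Euler step including the coupling term
u, v = (u + (u - (u**3)/3 - v + coupling(u, P))*dt,
        v + (u + 1.05)*dt + np.random.normal(scale=np.sqrt(dt), size=len(v)))
</code></pre>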
python|python-3.x|numpy|binary|differential-equations
1
2,086
63,798,869
Replace dot product for loop Numpy
<p>I am trying to replace the dot product for loop using something faster like NumPy</p> <p>I did research on dot product and kind of understand and can get it working with toy data in a few ways in but not 100% when it comes to implementing it for actual use with a data frame.</p> <p>I looked at these and other SO threads to no luck <a href="https://stackoverflow.com/questions/27006176/dot-product-of-subarrays-without-for-loop">avoide loop dot product, matlab</a> and <a href="https://stackoverflow.com/questions/27006176/dot-product-of-subarrays-without-for-loop">dot product subarrays without for loop</a> and <a href="https://stackoverflow.com/questions/24090889/multiple-numpy-dot-products-without-a-loop">multiple numpy dot products without a loop</a></p> <p><strong>looking to do something like this which works with toy numbers in np array</strong></p> <pre><code>u1 =np.array([1,2,3]) u2 =np.array([2,3,4]) v1.dot(v2) 20 </code></pre> <pre><code> u1 =np.array([1,2,3]) u2 =np.array([2,3,4]) (u1 * u2).sum() 20 </code></pre> <pre><code>u1 =np.array([1,2,3]) u2 =np.array([2,3,4]) sum([x1*x2 for x1, x2 in zip (u1, u2)]) 20 </code></pre> <p><strong>this is the current working get dot product</strong></p> <p>I would like to do this with out the for loop</p> <pre><code>def get_dot_product(self, courseid1, courseid2, unit_vectors): u1 = unit_vectors[courseid1] u2 = unit_vectors[courseid2] dot_product = 0.0 for dimension in u1: if dimension in u2: dot_product += u1[dimension] * u2[dimension] return dot_product </code></pre> <p>** code**</p> <pre><code> #!/usr/bin/env python # coding: utf-8 class SearchRecommendationSystem: def __init__(self): pass def get_bag_of_words(self, titles_lines): bag_of_words = {} for index, row in titles_lines.iterrows(): courseid, course_bag_of_words = self.get_course_bag_of_words(row) for word in course_bag_of_words: word = str(word).strip() # added if word not in bag_of_words: bag_of_words[word] = course_bag_of_words[word] else: bag_of_words[word] += course_bag_of_words[word] return bag_of_words def get_course_bag_of_words(self, line): course_bag_of_words = {} courseid = line['courseid'] title = line['title'].lower() description = line['description'].lower() wordlist = title.split() + description.split() if len(wordlist) &gt;= 10: for word in wordlist: word = str(word).strip() # added if word not in course_bag_of_words: course_bag_of_words[word] = 1 else: course_bag_of_words[word] += 1 return courseid, course_bag_of_words def get_sorted_results(self, d): kv_list = d.items() vk_list = [] for kv in kv_list: k, v = kv vk = v, k vk_list.append(vk) vk_list.sort() vk_list.reverse() k_list = [] for vk in vk_list[:10]: v, k = vk k_list.append(k) return k_list def get_keywords(self, titles_lines, bag_of_words): n = sum(bag_of_words.values()) keywords = {} for index, row in titles_lines.iterrows(): courseid, course_bag_of_words = self.get_course_bag_of_words(row) term_importance = {} for word in course_bag_of_words: word = str(word).strip() # extra tf_course = (float(course_bag_of_words[word]) / sum(course_bag_of_words.values())) tf_overall = float(bag_of_words[word]) / n term_importance[word] = tf_course / tf_overall keywords[str(courseid)] = self.get_sorted_results(term_importance) return keywords def get_inverted_index(self, keywords): inverted_index = {} for courseid in keywords: for keyword in keywords[courseid]: if keyword not in inverted_index: keyword = str(keyword).strip() # added inverted_index[keyword] = [] inverted_index[keyword].append(courseid) return inverted_index 
def get_search_results(self, query_terms, keywords, inverted_index): search_results = {} for term in query_terms: term = str(term).strip() if term in inverted_index: for courseid in inverted_index[term]: if courseid not in search_results: search_results[courseid] = 0.0 search_results[courseid] += ( 1 / float(keywords[courseid].index(term) + 1) * 1 / float(query_terms.index(term) + 1) ) sorted_results = self.get_sorted_results(search_results) return sorted_results def get_titles(self, titles_lines): titles = {} for index, row in titles_lines.iterrows(): titles[row['courseid']] = row['title'][:60] return titles def get_unit_vectors(self, keywords, categories_lines): norm = 1.884 cat = {} subcat = {} for line in categories_lines[1:]: courseid_, category, subcategory = line.split('\t') cat[courseid_] = category.strip() subcat[courseid_] = subcategory.strip() unit_vectors = {} for courseid in keywords: u = {} if courseid in cat: u[cat[courseid]] = 1 / norm u[subcat[courseid]] = 1 / norm for keyword in keywords[courseid]: u[keyword] = (1 / float(keywords[courseid].index(keyword) + 1) / norm) unit_vectors[courseid] = u return unit_vectors def get_dot_product(self, courseid1, courseid2, unit_vectors): u1 = unit_vectors[courseid1] u2 = unit_vectors[courseid2] dot_product = 0.0 for dimension in u1: if dimension in u2: dot_product += u1[dimension] * u2[dimension] return dot_product def get_recommendation_results(self, seed_courseid, keywords, inverted_index, unit_vectors): courseids = [] seed_courseid = str(seed_courseid).strip() for keyword in keywords[seed_courseid]: for courseid in inverted_index[keyword]: if courseid not in courseids and courseid != seed_courseid: courseids.append(courseid) dot_products = {} for courseid in courseids: dot_products[courseid] = self.get_dot_product(seed_courseid, courseid, unit_vectors) sorted_results = self.get_sorted_results(dot_products) return sorted_results def Final(self): print(&quot;Reading Title file.......&quot;) titles_lines = open('s2-titles.txt', encoding=&quot;utf8&quot;).readlines() print(&quot;Reading Category file.......&quot;) categories_lines = open('s2-categories.tsv', encoding = &quot;utf8&quot;).readlines() print(&quot;Getting Supported Functions Data&quot;) bag_of_words = self.get_bag_of_words(titles_lines) keywords = self.get_keywords(titles_lines, bag_of_words) inverted_index = self.get_inverted_index(keywords) titles = self.get_titles(titles_lines) print(&quot;Getting Unit Vectors&quot;) unit_vectors = self.get_unit_vectors(keywords=keywords, categories_lines=categories_lines) #Search Part print(&quot;\n ############# Started Search Query System ############# \n&quot;) query = input('Input your search query: ') while query != '': query_terms = query.split() search_sorted_results = self.get_search_results(query_terms, keywords, inverted_index) print(f&quot;==&gt; search results for query: {query.split()}&quot;) for search_result in search_sorted_results: print(f&quot;{search_result.strip()} - {str(titles[search_result]).strip()}&quot;) #ask again for query or quit the while loop if no query is given query = input('Input your search query [hit return to finish]: ') print(&quot;\n ############# Started Recommendation Algorithm System ############# \n&quot;) # Recommendation ALgorithm Part seed_courseid = (input('Input your seed courseid: ')) while seed_courseid != '': seed_courseid = str(seed_courseid).strip() recom_sorted_results = self.get_recommendation_results(seed_courseid, keywords, inverted_index, unit_vectors) print('==&gt; 
recommendation results:') for rec_result in recom_sorted_results: print(f&quot;{rec_result.strip()} - {str(titles[rec_result]).strip()}&quot;) get_dot_product_ = self.get_dot_product(seed_courseid, str(rec_result).strip(), unit_vectors) print(f&quot;Dot Product Value: {get_dot_product_}&quot;) seed_courseid = (input('Input seed courseid [hit return to finish]:')) if __name__ == '__main__': obj = SearchRecommendationSystem() obj.Final() </code></pre> <p><strong>s2-categories.tsv</strong></p> <pre><code> courseid category subcategory 21526 Design 3D &amp; Animation 153082 Marketing Advertising 225436 Marketing Affiliate Marketing 19482 Office Productivity Apple 33883 Office Productivity Apple 59526 IT &amp; Software Operating Systems 29219 Personal Development Career Development 35057 Personal Development Career Development 40751 Personal Development Career Development 65210 Personal Development Career Development 234414 Personal Development Career Development </code></pre> <p><strong>Example of how s2-titles.txt looks</strong></p> <pre><code>courseidXXXYYYZZZtitleXXXYYYZZZdescription 3586XXXYYYZZZLearning Tools for Mrs B's Science Classes This is a series of lessons that will introduce students to the learning tools that will be utilized throughout the schoXXXYYYZZZThis is a series of lessons that will introduce students to the learning tools that will be utilized throughout the school year The use of these tools serves multiple purposes 1 Allow the teacher to give immediate and meaningful feedback on work that is in progress 2 Allow students to have access to content and materials when outside the classroom 3 Provide a variety of methods for students to experience learning materials 4 Provide a variety of methods for students to demonstrate learning 5 Allow for more time sensitive correction grading and reflections on concepts that are assessed </code></pre>
<p>Evidently <code>unit_vectors</code> is a dictionary, from which you extract 2 values, <code>u1</code> and <code>u2</code>.</p> <p>But what are those? Evidently dicts as well (this iteration would not make sense with a list):</p> <pre><code>for dimension in u1: if dimension in u2: dot_product += u1[dimension] * u2[dimension] </code></pre> <p>But what is <code>u1[dimension]</code>? A list? An array?</p> <p>Normally <code>dict</code>s are accessed by <code>key</code>, as you do here. There isn't a numpy-style &quot;vectorization&quot;. <code>vals = list(u1.values())</code> gets a list of all values, and conceivably that could be made into an array (if the elements are right)</p> <pre><code> arr1 = np.array(list(u1.values())) </code></pre> <p>and then <code>np.dot(arr1, arr2)</code> might work.</p> <p>You'll get the best answers if you give small concrete examples - with real working data (and skip the complex generating code). Focus on the core of the problem, so we can grasp the issue with a 30 second read!</p> <p>===</p> <p>Looking more in depth at your <code>dot</code> function; this replicates the core (I think). Initially I missed the fact that you aren't iterating on <code>u2</code> keys, but rather seeking matching ones.</p> <pre><code>def foo(dd): x = 0 u1 = dd['u1'] u2 = dd['u2'] for k in u1: if k in u2: x += u1[k]*u2[k] return x </code></pre> <p>Then making a dictionary of dictionaries:</p> <pre><code>In [30]: keys=list('abcde'); values=[1,2,3,4,5] In [31]: adict = {k:v for k,v in zip(keys,values)} In [32]: dd = {'u1':adict, 'u2':adict} In [41]: dd Out[41]: {'u1': {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5}, 'u2': {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5}} In [42]: foo(dd) Out[42]: 55 </code></pre> <p>In this case the subdictionaries match, so we get the same value with a simple array <code>dot</code>:</p> <pre><code>In [43]: np.dot(values,values) Out[43]: 55 </code></pre> <p>But if <code>u2</code> were different, with different key/value pairs and possibly different keys, the result would be different. I don't see a way around the iterative access by keys. The sum-of-products part of the job is minor compared to the dictionary access.</p> <pre><code>In [44]: dd['u2'] = {'e':3, 'f':4, 'a':3} In [45]: foo(dd) Out[45]: 18 </code></pre> <p>We could construct other data structures that are more suitable for a fast <code>dot</code>-like calculation. But that's another topic.</p>
python|python-3.x|numpy|nlp|self
1
2,087
63,845,441
Key/Value Pairs in Pandas Dataframe
<p>I have a dataframe that I created by merging multiple MATLAB <code>.mat</code> files and then loading the merged list of dictionaries to pandas.</p> <pre><code> KEY_COLUMN VALUE_COLUMN 0 [[[KEY1]], [[KEY2]], [[KEY3]], [[KEY4]]] [[VALUE], [VALUE], [VALUE], [VALUE]] 1 [[[KEY2]], [[KEY3]], [[KEY1]], [[KEY4]]] [[VALUE], [VALUE], [VALUE], [VALUE]] 2 [[[KEY1]], [[KEY3]], [[KEY4]], [[KEY2]]] [[VALUE], [VALUE], [VALUE], [VALUE]] </code></pre> <pre><code>{'TYPE': {0: array([[array(['START'], dtype='&lt;U5')], [array(['DIST'], dtype='&lt;U6')], [array(['DISTFALSE'], dtype='&lt;U7')], [array(['DISTTRUE'], dtype='&lt;U7')], [array(['ENCFALSE'], dtype='&lt;U11')], [array(['ENCTRUE'], dtype='&lt;U12')]], dtype=object), 1: array([[array(['DISTFALSE'], dtype='&lt;U5')], [array(['START'], dtype='&lt;U10')], [array(['DIST'], dtype='&lt;U11')], [array(['DISTTRUE'], dtype='&lt;U11')], [array(['ENCTRUE'], dtype='&lt;U10')], [array(['ENCFALSE'], dtype='&lt;U11')]], dtype=object)}, 'TIME': {0: array([[ 24413], [ 27481], [ 29382], [ 31923], [ 31249], [ 34690]]), 1: array([[ 364582], [ 31234], [ 43123], [ 24444], [ 55551], [ 12355]])}} </code></pre> <p>now I would want to have the KEYS be columns and VALUES be rows of the dataframe like here:</p> <pre><code> KEY1 KEY2 KEY3 KEY4 0 VALUE VALUE VALUE VALUE 1 VALUE VALUE VALUE VALUE 2 VALUE VALUE VALUE VALUE </code></pre> <p>The issue is that the order of keys (and consecutively values) is not the same. It differs between the current rows.</p> <p>How to achieve that? Many thanks!</p>
<p>Let's create a new dataframe by mapping key value pairs inside a list comprehension and using <code>np.squeeze</code> to remove the single dimensions:</p> <pre><code>df1 = pd.DataFrame([dict(zip(*map(np.squeeze, v))) for v in df.to_numpy()]) </code></pre> <p>Result:</p> <pre><code># for sample data KEY1 KEY2 KEY3 KEY4 0 VALUE VALUE VALUE VALUE 1 VALUE VALUE VALUE VALUE 2 VALUE VALUE VALUE VALUE # for actual data START DIST DISTFAL DISTTRU ENCFALSE ENCTRUE DISTF DISTTRUE 0 24413 27481 29382.0 31923.0 31249 34690 NaN NaN 1 31234 43123 NaN NaN 12355 55551 364582.0 24444.0 </code></pre>
python|pandas|matlab|dataframe
0
2,088
46,716,472
Getting a " A nested call to gcloud failed" error when trying to create a datalab in gcloud
<p>Just starting to use Google Cloud Platform. Trying to familiarize myself with tensorflow and am following the Stack Skills tutorial Machine Learning and TensorFlow on the Google Cloud. I am using the gcloud console on firefox and following the tutorial I use the commands</p> <ul> <li>gcloud config set core/project my-first-project </li> <li>gcloud config set compute/zone us-central1-f</li> <li>datalab create --no-create-repository tensorflow</li> </ul> <p>but I keep getting this error and haven't been able to find a solution that fixes it on the web.</p> <blockquote> <p>ERROR: (gcloud.compute.networks.create) Could not fetch resource: - Failed to find project my-first-project</p> <p>A nested call to gcloud failed.</p> </blockquote> <p>Any ideas or solutions whats going on or what I'm doing wrong?</p>
<p>It is likely that "my-first-project" does not exist as a project that your account has access to. You need to create the project first either through the console, or via the command line:</p> <pre><code>gcloud projects create my-first-project </code></pre>
tensorflow|google-cloud-platform|google-cloud-datalab
0
2,089
46,849,831
Using the OR operator seems to only take the first of two conditions when used with np.where filter
<p>Here is a small sampling of my dataset:</p> <pre><code>Search_Term Exit_Page Unique_Searches Exit_Pages_actual nitrile gloves /store/catalog/product.jsp? 10 /store/catalog/product.jsp? zytek gloves /store/product/KT781010 20 /store/pro </code></pre> <p>So this should be pretty easy; not sure why I am not getting it to work. I am trying to pull the full Exit_Page string into the Exit_Pages_actual column when the first 10 characters of Exit_Page are "/store/pro" or "/store/cat". When that is not the case, I want it to pull in only the first 10 characters of Exit_Page. As you can see above, my code works fine for the catalog but not for the product (i.e. it works for the first condition in my OR but not the second, per the code below). What is wrong? There is no error message; it just does not give me the right result for product, only outputting the first 10 characters rather than the whole string:</p> <pre><code>Exit_Pages['Exit_Pages_actual'] = np.where(Exit_Pages['Exit_Page'].str[:10]==('/store/cat' or '/store/pro'),Exit_Pages['Exit_Page'].str[:],Exit_Pages['Exit_Page'].str[:10]) Exit_Pages </code></pre>
<p>@tw-uxtli51nus in the comments is basically correct.</p> <p>We can accomplish what you want by wrapping logical conditions with () and using '|' in place of 'or'.</p> <p>So np.where would look like:</p> <pre><code>df['new_col'] = np.where( ( (df['Exit_Page'].str[:10]=='/store/cat') | (df['Exit_Page'].str[:10]=='/store/pro') ) ,df['Exit_Page'] ,df['Exit_Page'].str[:10]) </code></pre> <p>(formatted to be more readable, since this stuff is ugly to look at).</p> <p>We can make our lives easier by instead using np.isin(), a technique similar to what the docs suggest: <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.where.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.where.html</a></p> <p>but I don't have the correct version of numpy to write out a real example, unfortunately.</p>
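<p>For reference, a hedged sketch of that <code>np.isin()</code> approach (available in NumPy 1.13+), using the two prefixes from the question:</p> <pre><code>mask = np.isin(df['Exit_Page'].str[:10], ['/store/cat', '/store/pro'])
df['new_col'] = np.where(mask, df['Exit_Page'], df['Exit_Page'].str[:10])
</code></pre>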
python|python-3.x|pandas
1
2,090
38,643,151
Importing structured data into python
<p>I have a text file with a set of arrays in it that looks like this:</p> <pre><code>[(0,1,3),(0,4,5),...(1,9,0)] [(9,8,7),(0,4,5),...(1,9,0)] </code></pre> <p>where the rows are not the same length. </p> <p>This is essentially a list of paths, where each set of points is a path, i.e.:</p> <pre><code>(0,1,3),(0,4,5),...(1,9,0) =path1 (9,8,7),(0,4,5),...(1,9,0) =path2 </code></pre> <p>I need to import this in a form where I can access all elements, e.g. for all points in path 1, determine the distance to all points in path 2. I am not sure where to start, given the delimiters (I don't want to handle both brackets and commas), or how to then build the arrays in a callable way.</p>
<p>The following code reads the data in (assuming one path per line, and no extra whitespace) into a list of numpy arrays, then demonstrates how to compute the distance between two points.</p> <pre><code>import numpy as np import numpy.linalg as la #replace with your datafile datafile = "../data/point_path.txt" paths = [] with open(datafile, "r") as f: for line in f: point_strs = line.strip().strip("[()]").split("),(") npoints = len(point_strs) path = np.empty((npoints, 3)) for i in xrange(npoints): path[i,:] = np.array(map(int, point_strs[i].split(","))) paths.append(path) print "First point of path 1:" print paths[0][0] print "Second point of path 2:" print paths[1][1] print "Euclidean Distance between these points:" print la.norm(paths[0][0]-paths[1][1]) </code></pre> <p>The output of this is:</p> <pre><code>First point of path 1: [ 0. 1. 3.] Second point of path 2: [ 0. 4. 5.] Euclidean Distance between these points: 3.60555127546 </code></pre> <p><b>Edit: How to format input file</b><br/> The code assumes that each list of points is on its own line (e.g. for line in f, parse list of points). So the following file:</p> <pre><code>[(0,2,3),(0,4,0)] [(1,4,5),(5,8,9),(3,4,0)] [(0,5,7),(0,6,8),(1,5,6),(5,8,10)] </code></pre> <p>will not work because all 3 lists are on the same line.</p> <p>This format:</p> <pre><code>[(0,2,3),(0,4,0)] [(1,4,5),(5,8,9),(3,4,0)] [(0,5,7),(0,6,8),(1,5,6),(5,8,10)] </code></pre> <p>will work, as each list of points is on a separate line. </p>
python|arrays|numpy|import
0
2,091
63,111,918
Unable to replace NaN value with a date in pandas
<p>Trying to replace a <code>NaN</code> in a <code>datetime</code> column with another <code>datetime</code> object from the same pandas dataframe. I have tried set_value, at, <code>loc</code>. They all result in <code>nan</code> being saved instead of the actual date.<br /> Here is the most recent code I tried; seeing that the <code>updated_date</code> was being saved as a Timestamp, I tried converting it to <code>datetime</code>. But even then it saves it as <code>nan</code>.<br /> Any idea?</p> <pre><code>updated_date = df[column_to_fix_with].iloc[index].to_pydatetime() df.set_value(col_w_dates_to_fix, index, updated_date) </code></pre>
<p>For example, to fill empty column <code>column_to_fill</code> with values from the same dataframe <code>df</code> with values from column <code>column_from</code> use:</p> <pre><code>df['column_to_fill'] = df['column_to_fill'].fillna(df['column_from']) </code></pre>
python|pandas|datetime|nan
0
2,092
63,214,018
How to prevent/avoid duplicate row insert in dataframe?
<p>here's my code snippet:</p> <pre><code>insert os insert sys insert pandas as pd data=[['2019-04-04',1105],['2019-04-05',1145],['2019-04-06',1125],['2019-04-07',1130],['2019-04-08',1122], ['2019-04-09',1105],['2019-04-10',1145],['2019-04-11',1125],['2019-04-12',1130],['2019-04-13',1122], ['2019-05-04',1105],['2019-05-05',1145],['2019-05-06',1125],['2019-05-07',1130],['2019-05-08',1122], ['2019-05-09',1105],['2019-05-10',1145],['2019-05-11',1125],['2019-05-12',1130],['2019-05-13',1122] ] pp=pd.DataFrame(data,columns=['Date','Price']) def clear_screen(): os.system('cls' if os.name=='nt' else 'clear') def print_menu(): clear_screen() print(&quot;-&quot;*15,&quot;Menu&quot;,&quot;-&quot;*15) print(&quot;1. Data display&quot;) print(&quot;2. Data insert&quot;) print(&quot;3. Data update&quot;) print(&quot;4. Data search&quot;) print(&quot;5. Data delete&quot;) print(&quot;6. Exit&quot;) print(&quot;-&quot;*36) def back2menu(): print() input(&quot;Press Enter to back.&quot;) loop=True while loop: print_menu() choices=int(input(&quot;Insert choice[1-6]: &quot;)) if choices==1: clear_screen() print(&quot;Display data&quot;) elif choices==2: dt=input(&quot;Insert date: &quot;) prc=int(input(&quot;Insert price: &quot;)) pp.loc[len(pp)]=[dt,prc] print(&quot;Data has been added.&quot;) back2menu() ..... ..... ..... elif choices==6: print(&quot;You've exited from the program&quot;) loop=False sys.exit() ..... </code></pre> <p>And here's the dataframe sample:</p> <pre><code> Date Price 0 2019-04-04 1105 1 2019-04-05 1145 2 2019-04-06 1125 3 2019-04-07 1130 4 2019-04-08 1122 5 2019-04-09 1105 6 2019-04-10 1145 7 2019-04-11 1125 8 2019-04-12 1130 9 2019-04-13 1122 10 2019-05-04 1105 11 2019-05-05 1145 12 2019-05-06 1125 13 2019-05-07 1130 14 2019-05-08 1122 15 2019-05-09 1105 16 2019-05-10 1145 17 2019-05-11 1125 18 2019-05-12 1130 19 2019-05-13 1122 </code></pre> <p>I want to implement some input condition like this:</p> <pre><code>if the date is already exist in the dataframe: print(&quot;Error, because the data on this date already exist.&quot;) else: #There you go, you can insert data </code></pre> <p>Is there any way to do it with pandas? Because i've been tried with <code>pp.drop_duplicates(subset='Date',keep='first')</code> it isn't work and the duplicated date still inputed to the dataframe like this:</p> <pre><code> Date Price 0 2019-04-04 1105 1 2019-04-05 1145 2 2019-04-06 1125 3 2019-04-07 1130 4 2019-04-08 1122 5 2019-04-09 1105 6 2019-04-10 1145 7 2019-04-11 1125 8 2019-04-12 1130 9 2019-04-13 1122 10 2019-05-04 1105 11 2019-05-05 1145 12 2019-05-06 1125 13 2019-05-07 1130 14 2019-05-08 1122 15 2019-05-09 1105 16 2019-05-10 1145 17 2019-05-11 1125 18 2019-05-12 1130 19 2019-05-13 1122 20 2019-05-13 555 </code></pre> <p>and not like the first dataframe i showed.</p> <p>Don't mind the price tho, i just want the date is not duplicated if i input the same date.</p>
<pre><code>date_str = '2019-05-14' price = 1200 if any(pp['Date'] == date_str): print('Date already exists.') else: pp.loc[len(pp)] = [date_str, price] print('New date added to dataframe.') print(pp) </code></pre>
python|pandas
0
2,093
62,986,296
TypeError: '<=' not supported between instances of 'str' and 'int' Duplicate
<p>I am using Python3 and I'm working on several files where some of my data (AYield &amp; BYield) is missing which is considered a NaN, however, when I'm running the last line of the code, I get an error. Both Ask and Bid data frames contain the same rows and columns. Thank you</p> <pre><code>Askyield = pd.read_excel(&quot;AYield.xlsx&quot;,na_values=[&quot;NaN&quot;]) Bidyield = pd.read_excel(&quot;BYield.xlsx&quot;,na_value=[&quot;NaN&quot;]) matchedbond_info = pd.read_excel(&quot;matched_bonds.xlsx&quot;) Askyield = pd.merge(matchedbond_info, Askyield, on = ['ISIN']) Bidyield = pd.merge(matchedbond_info, Bidyield, on = ['ISIN']) date_list = [] for i in range(len(Bidyield.columns)): if isinstance(Bidyield.columns[i], dt.datetime):date_list.append(Bidyield.columns[i]) matchedbond_info = Bidyield.drop(columns=date_list) bid_yield.info() bid_yield.info() &lt;class 'pandas.core.frame.DataFrame'&gt; Int64Index: 1236 entries, 0 to 1235 Columns: 1566 entries, 2019-12-31 00:00:00 to 2014-01-01 00:00:00 dtypes: float64(1566) bid_yield = Bidyield[date_list] ask_yield = Askyield[date_list] bid_yield.head() 2019-12-31 2019-12-30 2019-12-27 ... 2014-01-03 2014-01-02 2014-01-01 0 NaN NaN NaN ... NaN NaN NaN 1 3.119 3.084 3.081 ... NaN NaN NaN 2 NaN NaN NaN ... NaN NaN NaN 3 NaN NaN NaN ... NaN NaN NaN 4 NaN NaN NaN ... NaN NaN NaN [5 rows x 1566 columns] bid_yield = bid_yield.mask((bid_yield &gt;0) &amp; (ask_yield &lt;0)) </code></pre> <p>Then I get the following</p> <pre><code> TypeError: '&lt;' not supported between instances of 'str' and 'int' </code></pre>
<p>I can reproduce this error with this example:</p> <pre><code>import pandas as pd df = pd.DataFrame(dict(x=[&quot;5&quot;, &quot;10&quot;], y=[1, 4])) df.dtypes # x object # y int64 # dtype: object df[df.x &gt; df.y] # TypeError: '&gt;' not supported between instances of 'str' and 'int' </code></pre> <p>You probably need to convert one of the columns to float.</p> <p>In this example:</p> <pre><code>df['x'] = df.x.astype(&quot;float&quot;) </code></pre>
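<p>As a general tip (not something required by the question's data): if some of the strings might not be parseable numbers, <code>pd.to_numeric</code> with <code>errors='coerce'</code> is a more forgiving alternative to <code>astype</code>:</p> <pre><code>df['x'] = pd.to_numeric(df['x'], errors='coerce')  # unparseable entries become NaN
</code></pre>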
python|pandas|dataframe|nan
2
2,094
63,285,923
Why is i not incrementing in for loop?
<p>I'm new to programming. May I know why my <code>i</code> is not incrementing in the for loop? I want to update the plot title for each subplot. Thank you.<br /> <img src="https://i.stack.imgur.com/5Vbwd.png" alt="Code screenshot" /></p> <pre><code>from matplotlib import pyplot as plt fig= plt.figure() fig,axes = plt.subplots(nrows=1, ncols=3,squeeze=False) fig.tight_layout() i=0 for current_ax in axes: current_ax[i].set_title(f&quot;plot: {i}&quot;) i+=1 </code></pre>
<p>This is because your axes array looks like the one shown below</p> <pre><code>[[&lt;matplotlib.axes._subplots.AxesSubplot object at 0x000001DCA32BB2E0&gt; &lt;matplotlib.axes._subplots.AxesSubplot object at 0x000001DCA54476A0&gt; &lt;matplotlib.axes._subplots.AxesSubplot object at 0x000001DCA547D250&gt;]] </code></pre> <p>so your array has only one row containing three Axes objects. While running your code, the loop executes only once; there is no problem with the counter increment, which you can cross-check by printing <code>i</code> at the end of the loop. To make this code work your way, first pull out the first row from the array, which leaves an axes array with 3 objects, i.e. 3 plots.</p> <pre><code>from matplotlib import pyplot as plt fig= plt.figure() fig,axes = plt.subplots(nrows=1, ncols=3,squeeze=False) fig.tight_layout() i=0 print('figarray1',axes) axes=axes[0] print('figarray2',axes) for current_ax in axes: current_ax.set_title(f&quot;plot: {i}&quot;) i+=1 print(i) plt.show() </code></pre> <p><strong>output graph</strong></p> <p><a href="https://i.stack.imgur.com/VVw8T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VVw8T.png" alt="enter image description here" /></a></p> <p><strong>Terminal Output</strong></p> <pre><code>figarray1 [[&lt;matplotlib.axes._subplots.AxesSubplot object at 0x0000020AEB72B2E0&gt; &lt;matplotlib.axes._subplots.AxesSubplot object at 0x0000020AEF9266A0&gt; &lt;matplotlib.axes._subplots.AxesSubplot object at 0x0000020AEF95D250&gt;]] figarray2 [&lt;matplotlib.axes._subplots.AxesSubplot object at 0x0000020AEB72B2E0&gt; &lt;matplotlib.axes._subplots.AxesSubplot object at 0x0000020AEF9266A0&gt; &lt;matplotlib.axes._subplots.AxesSubplot object at 0x0000020AEF95D250&gt;] 1 2 3 </code></pre>
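<p>As an aside, an equivalent idiom that works for any grid shape without indexing into rows (a small sketch, not from the original answer):</p> <pre><code>for i, ax in enumerate(axes.flat):
    ax.set_title(f'plot: {i}')
</code></pre>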
python|numpy|for-loop|matplotlib|subplot
3
2,095
63,319,129
Index Error: Unable to print cost_function
<p>I am trying to run a for loop to print the cost functions for three different slopes and bias = 0 by defining a function. The dataset has 5 rows and cost function is to predict marks based on attendance. I am able to print cost function if I define three separate functions for each value of slope. Here is my code:</p> <pre><code>dataset = {&quot;Attendance&quot;:[100, 87, 15, 63, 47], &quot;Marks&quot;: [100, 95, 6, 73, 50]} Marks = pd.DataFrame(dataset, columns = [&quot;Attendance&quot;, &quot;Marks&quot;]) bias = 0 slope = {&quot;values&quot;: [-1, 0, 3]} slope = pd.DataFrame(slope) def error(): a = [] sum_of_squared_error = 0 for i in range(len(slope)): for j in range(0, len(Marks)): x = Marks.iloc[j, 0] y = Marks.iloc[j, 1] sum_of_squared_error += (y - (slope.iloc[0, i]*x + bias)) ** 2 cost_function = sum_of_squared_error / (2 * len(Marks)) a.append(cost_function) return a error() </code></pre> <p>I am getting this error.</p> <pre><code>--------------------------------------------------------------------------- IndexError Traceback (most recent call last) &lt;ipython-input-126-1bf54ede9ae1&gt; in &lt;module&gt;() 13 a.append(cost_function) 14 return a ---&gt; 15 error() 5 frames /usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py in _validate_integer(self, key, axis) 2061 len_axis = len(self.obj._get_axis(axis)) 2062 if key &gt;= len_axis or key &lt; -len_axis: -&gt; 2063 raise IndexError(&quot;single positional indexer is out-of-bounds&quot;) 2064 2065 def _getitem_tuple(self, tup: Tuple): IndexError: single positional indexer is out-of-bounds </code></pre>
<p>You aren't accessing the elements of your slope df correctly. <code>slope.shape</code> returns <code>(3, 1)</code> so you want to iterate through the row number, not the column number.</p> <p><code>sum_of_squared_error += (y - (slope.iloc[0, i]*x + bias)) ** 2</code> should be: <code>sum_of_squared_error += (y - (slope.iloc[i, 0]*x + bias)) ** 2</code></p> <p>In addition, you should have sum_of_squared_error reset to 0 between the inner and outer loop:</p> <pre><code>import pandas as pd dataset = {&quot;Attendance&quot;:[100, 87, 15, 63, 47], &quot;Marks&quot;: [100, 95, 6, 73, 50]} Marks = pd.DataFrame(dataset, columns = [&quot;Attendance&quot;, &quot;Marks&quot;]) bias = 0 slope = {&quot;values&quot;: [-1, 0, 3]} slope = pd.DataFrame(slope) def error(): a = [] sum_of_squared_error = 0 for i in range(len(slope)): for j in range(0, len(Marks)): x = Marks.iloc[j, 0] y = Marks.iloc[j, 1] sum_of_squared_error += (y - (slope.iloc[i, 0]*x + bias)) ** 2 cost_function = sum_of_squared_error / (2 * len(Marks)) sum_of_squared_error = 0 a.append(cost_function) return a error() </code></pre> <p>Output:</p> <pre><code>&gt;&gt;&gt; error() [10147.0, 2689.0, 9081.4] </code></pre>
python|pandas
1
2,096
67,636,342
python how to choose just special elements from df string
<p>Please, help. I need to choose just 'yellow', 'green', 'black' or combinations of these elements, if there are several of them in the string. df:</p> <pre><code>0 ['blue','green','white','yellow','orange','pink','black'] 1 ['green','yellow','orange','pink','pink'] 2 ['white','orange','black'] 3 ['green','white','yellow','orange'] 4 ['green'] </code></pre> <p>The result should look like:</p> <pre><code>0 ['green','yellow','black'] 1 ['green','yellow'] 2 ['black'] 3 ['green','yellow'] 4 ['green'] </code></pre>
<p>Your dataframe <code>df</code>:</p> <pre><code> val 0 ['blue','green','white','yellow','orange','pin... 1 ['green','yellow','orange','pink','pink'] 2 ['white','orange','black'] 3 ['green','white','yellow','orange'] 4 ['green'] </code></pre> <p>Try with <code>apply()</code> and a list comprehension:</p> <pre><code>df['val']=df['val'].apply(lambda x:eval(x)) #use this only when the data inside val is string </code></pre> <p><strong>Note:</strong> If the above line throws an error, simply skip it and move to the code below (it means the data inside val is already of type list).</p> <p>Finally:</p> <pre><code>df['val']=df['val'].apply(lambda x:[y for y in x if y=='yellow' or y=='green' or y=='black']) </code></pre> <p>OR (use either one):</p> <pre><code>df['val']=df['val'].apply(lambda x:[y for y in x if y in ['yellow','green','black']]) </code></pre> <p>Now if you print <code>df</code> you will get:</p> <pre><code> val 0 [green, yellow, black] 1 [green, yellow] 2 [black] 3 [green, yellow] 4 [green] </code></pre>
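<p>A small side note: if the column really does hold strings, <code>ast.literal_eval</code> is a safer drop-in for <code>eval</code>, since it only parses Python literals:</p> <pre><code>import ast

df['val'] = df['val'].apply(ast.literal_eval)  # parses &quot;['green', ...]&quot; safely
</code></pre>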
python|pandas
1
2,097
67,644,891
How do I create embeddings for every sentence in a list and not for the list as a whole?
<p>I need to generate embeddings for documents in lists, calculate the Cosine Similarity between every sentence of corpus 1 with every sentence of corpus2, rank them and give out the best fit:</p> <pre><code>embed = hub.load(&quot;https://tfhub.dev/google/universal-sentence-encoder/4&quot;) embeddings1 = [&quot;I'd like an apple juice&quot;, &quot;An apple a day keeps the doctor away&quot;, &quot;Eat apple every day&quot;, &quot;We buy apples every week&quot;, &quot;We use machine learning for text classification&quot;, &quot;Text classification is subfield of machine learning&quot;] embeddings1 = embed(embeddings1) embeddings2 = [&quot;I'd like an orange juice&quot;, &quot;An orange a day keeps the doctor away&quot;, &quot;Eat orange every day&quot;, &quot;We buy orange every week&quot;, &quot;We use machine learning for document classification&quot;, &quot;Text classification is some subfield of machine learning&quot;] embeddings2 = embed(embeddings2) print(cosine_similarity(embeddings1, embeddings2)) </code></pre> <p>The vectors seem to work fine (due to the shape of the array) and also the calculation of the cosine similarity. My problem is that the Universal Sentence Encoder does not give them out with the respective strings which is crucial. It always has to find the right fit and I must be able to order after the value of Cosine Similarity</p> <pre><code>array([[ 0.7882168 , 0.3366559 , 0.22973989, 0.15428472, -0.10180502, -0.04344492], [ 0.256085 , 0.7713026 , 0.32120776, 0.17834462, -0.10769081, -0.09398925], [ 0.23850328, 0.446203 , 0.62606746, 0.25242645, -0.03946173, -0.00908459], [ 0.24337521, 0.35571027, 0.32963073, 0.6373588 , 0.08571904, -0.01240187], [-0.07001016, -0.12002315, -0.02002328, 0.09045915, 0.9141338 , 0.8373743 ], [-0.04525191, -0.09421931, -0.00631144, -0.00199519, 0.75919366, 0.9686416 ]] </code></pre> <p>The goal is that the code finds out itself that the highest cosine similarity of &quot;I'd like an apple juice&quot; in the second corpus is &quot;I'd like an orange juice&quot; and matches them.</p> <p>I tried for loops, for instance:</p> <pre><code>for sentence in embeddings1: print(sentence, embed(sentence)) </code></pre> <p>resulting in this error:</p> <pre><code>tensorflow.python.framework.errors_impl.InvalidArgumentError: input must be a vector, got shape: [] [[{{node StatefulPartitionedCall/StatefulPartitionedCall/text_preprocessor/tokenize/StringSplit/StringSplit}}]] [Op:__inference_restored_function_body_5285] Function call stack: restored_function_body </code></pre>
<p>As I mentioned in the comment, you should write the for loop as follows:</p> <pre><code>for sentence in embeddings1: print(sentence, embed([sentence])) </code></pre> <p>The reason is simply that <code>embed</code> expects a list of strings as input. No more detailed explanation than that.</p>
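<p>To get from there to the stated goal (pairing each sentence with its best match), a minimal sketch, assuming the raw strings are kept in separate lists <code>sentences1</code> and <code>sentences2</code> rather than being overwritten by the embeddings:</p> <pre><code>import numpy as np

sim = cosine_similarity(embed(sentences1), embed(sentences2))
best = sim.argmax(axis=1)  # index of the closest corpus-2 sentence for each row
for i, j in enumerate(best):
    print(f'{sentences1[i]} -&gt; {sentences2[j]} ({sim[i, j]:.3f})')
</code></pre>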
python|tensorflow|nlp|cosine-similarity|sentence-similarity
0
2,098
31,795,045
How can I make pandas.to_excel() include the index but NOT on a separate line?
<p>When I save a Pandas DataFrame to Excel (with the index option left as its default: True), the resulting Excel file has a line beneath the row of headers. Said row contains the index name. How can I avoid that extra line and just have the index name(s) show up in the same row as the rest of the column headers?</p> <pre><code>df[field_list].to_excel(path) </code></pre> <p>I don't see this addressed in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_excel.html#pandas.DataFrame.to_excel" rel="nofollow noreferrer">the documentation</a>.</p> <p>In the context of <a href="https://stackoverflow.com/questions/30167246/how-can-i-preserve-a-pandas-multi-index-between-a-to-excel-and-a-read-excel">my related question</a>, I could understand how it would be useful to put the indices on their own line (so that pandas, in reading an Excel file back in, could identify index columns), but since that doesn't seem to work (unless something's changed or I'm mistaken), I don't know why it's useful.</p>
<p>It appears that setting <code>merge_cells</code> (which defaults to True) to False accomplishes the objective, but it's not immediately clear to me why that's the case.</p> <pre><code>df[field_list].to_excel(path, merge_cells=False) </code></pre>
python|excel|pandas
0
2,099
41,560,796
Numpy not found after installation
<p>I just installed numpy on my PC (running windows 10, running python 3.5.2) using WinPython, but when I try to import it in IDLE with: <code>import numpy</code> I get the ImportError: <code>Traceback (most recent call last): File "C:\Users\MY_USERNAME\Desktop\DATA\dataScience1.py", line 1, in &lt;module&gt; import numpy ImportError: No module named 'numpy'</code>.</p> <p>Did I possibly install it incorrectly, or do I need to do something else before it can be used?</p>
<p>On Linux and macOS systems you can install modules directly by running</p> <pre><code>pip install modulename (or) sudo pip install modulename </code></pre> <p>in a terminal or command prompt.</p> <p>On Windows, you should first change to the location of your Python folder on the C drive, such as c:\python3, and then run</p> <pre><code>pip install modulename </code></pre> <p>in the command prompt or terminal.</p> <p>Alternatively, go and check whether the numpy module is installed in the site-packages folder of the Python 3 installation on your C drive.</p>
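<p>Since the question uses WinPython, a likely cause is that IDLE is running a different interpreter than the one numpy was installed into. A quick, generic way to check which interpreter IDLE uses:</p> <pre><code>import sys
print(sys.executable)  # numpy must be installed for this interpreter
</code></pre> <p>Running <code>python -m pip install numpy</code> with that same interpreter ensures the package lands where IDLE can see it.</p>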
python|numpy|python-3.5
1