| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, lengths 15 to 150) | question (string, lengths 37 to 64.2k) | answer (string, lengths 37 to 44.1k) | tags (string, lengths 5 to 106) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
377,400 | 56,498,442 | concatenate two string columns - python | <p>I have this dataframe:</p>
<pre><code>df = pd.DataFrame({"X" : ["2017-12-17","2017-12-18","2017-12-19"],
"Y": ["F","W","Q"]})
</code></pre>
<p>And I'm looking for the <code>key</code> column:</p>
<pre><code> X Y key
0 2017-12-17 F 2017-12-17_F
1 2017-12-18 W 2... | <p>Looks like your <code>X</code> column is not a string as posted, but a <code>Timestamp</code>. Anyway, you can try:</p>
<pre><code>df['key'] = df.X.astype(str) + '_' + df.Y.astype(str)
</code></pre> | python|pandas|numpy|data-manipulation | 0 |
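The one-liner in the row above is easy to verify end-to-end; this sketch reuses the question's sample frame and the answer's `astype(str)` cast (which also covers the case where `X` holds `Timestamp` values):

```python
import pandas as pd

# Sample frame from the question
df = pd.DataFrame({"X": ["2017-12-17", "2017-12-18", "2017-12-19"],
                   "Y": ["F", "W", "Q"]})

# Cast both columns to str so this also works when X is a Timestamp column
df["key"] = df["X"].astype(str) + "_" + df["Y"].astype(str)
```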
377,401 | 56,546,251 | Custom range bins in Pandas (interval starting always from zero) | <p>I use:</p>
<pre><code>bins = pd.cut(data['R10rank'], list(np.arange(0.0, 1.1, 0.1)))
sum=data.groupby(bins)['Ret20d'].agg(['count', 'mean'])
</code></pre>
<p>to create stats like:</p>
<pre><code> count mean
R10rank
(0.0, 0.1] 1044 4.782833
(0.1, 0.2] 809 5.527745
(0.2, 0.3] 746 5.181306
(0.3, 0.4]... | <p>You can check with <code>expanding</code></p>
<pre><code>df['New']=df['count']*df['mean']
df.expanding(min_periods=1).sum().assign(mean=lambda x : x['New']/x['count'])
Out[105]:
count mean New
R10rank
(0.0,0.1] 1044.0 4.782833 4993.277652
(0.1,0.2] ... | pandas|data-binding | 0 |
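The truncated answer above builds a cumulative weighted mean with `expanding`; a minimal sketch using the first two bins from the question's stats (bin labels omitted for brevity):

```python
import pandas as pd

df = pd.DataFrame({"count": [1044.0, 809.0],
                   "mean": [4.782833, 5.527745]})

# count*mean recovers each bin's total; expanding sums give running totals,
# and dividing them back yields the cumulative (weighted) mean
df["New"] = df["count"] * df["mean"]
out = df.expanding(min_periods=1).sum().assign(mean=lambda x: x["New"] / x["count"])
```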
377,402 | 56,493,760 | How to split the column with same delimiter | <p>My dataframe is this and I want to split my data frame by colon (<code>:</code>)</p>
<pre><code>+------------------+
|Name:Roll_no:Class|
+------------------+
| #ab:cd#:23:C|
| #sd:ps#:34:A|
| #ra:kh#:14:H|
| #ku:pa#:36:S|
| #ra:sh#:50:P|
+------------------+
</code></pre>
<p>and I want my... | <p>If you need to split by the last 2 <code>:</code>, use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.rsplit.html" rel="nofollow noreferrer"><code>Series.str.rsplit</code></a>, then set the columns from the split column name, and finally remove the first and last <code>#</code> by indexing:</p>
<pre><code>c... | python|pandas|pyspark | 4 |
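The answer's code block is cut off; here is a hedged reconstruction of the `rsplit` approach it describes, using two sample rows from the question:

```python
import pandas as pd

s = pd.Series(["#ab:cd#:23:C", "#sd:ps#:34:A"])

# Split only on the last 2 colons so the '#...#' field (which contains ':') stays whole
df = s.str.rsplit(":", n=2, expand=True)
df.columns = ["Name", "Roll_no", "Class"]

# Remove the first and last '#' by indexing
df["Name"] = df["Name"].str[1:-1]
```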
377,403 | 56,690,524 | matplotlib geopandas plot chloropleth with set bins for colorscheme | <h1>How do I set a consistent colorscheme for three <code>axes</code> in the same figure?</h1>
<p>The following should be a wholly reproducible example to run the code and get the same figure I have posted below.</p>
<p>Get the shapefile data from the <a href="http://geoportal.statistics.gov.uk/datasets/8edafbe3276d4... | <p>From geopandas 0.5 onwards you can use a custom scheme defined as <code>scheme="User_Defined"</code> and supply the binning via <code>classification_kwds</code>.</p>
<pre><code>import geopandas as gpd
print(gpd.__version__) ## 0.5
import numpy as np; np.random.seed(42)
import matplotlib.pyplot as plt
gdf = gpd.... | python|python-3.x|matplotlib|geopandas | 13 |
377,404 | 56,512,810 | Categorizing a column in a pandas dataframe | <p>I have a dataframe with a column that has university ranks as follows: <code>1,2,3,4,5,...,99,100,101-150,151-200,201-300,301-400,401-500,>500</code> and looks like this:</p>
<pre><code> Uni_Rank
1
3
4
101-150
20
22
151-200
201-300
301-400
10
15
44
53
70
... | <p>you can do something like this (just add more bin boundaries and labels as you wish)</p>
<p>I am assuming the string ranks are your eventual categories that you want</p>
<pre class="lang-py prettyprint-override"><code>df_train_final['uni_original'] = [1,3,4,'101-150',20,22,'151-200']
bins = [0, 10, 20, 30]
names =... | python|pandas | 1 |
377,405 | 56,731,076 | Combine text of a column in dataframe with conditions in pandas/python | <p>I'm am testing a ML model and need to merge my text to cut my audio file and train the model. How can I merge the text using conditions ?</p>
<p>My goal is to merge the text in the 'Text' column until I reach an end punctuation to form a sentence. I want to continue to form sentences until I reach the end of the te... | <p>Or use:</p>
<pre><code>>>> df['Text'] = df.groupby(['Name', 'Speaker'])['Text'].transform(' '.join).str.split().str.join(' ')
>>> df2 = df.head(1)
>>> df2['EnTime'] = df['EnTime'].iloc[-1]
>>> df2
Name Speaker StTime Text EnTime
0 s1 tom 6.8... | python|pandas|pandas-groupby|data-cleaning|data-processing | 1 |
377,406 | 56,765,205 | A column name is 0, and it can not be renamed or selected | <p>I turned a Series <code>word</code> into a dataframe with the same name.
After reindexing, I now have the dataframe as such:</p>
<pre><code> index 0
0 a A
1 b B
2 c C
</code></pre>
<p>and after I renamed the dataframe:</p>
<pre><code>words.rename({'index':'word','0':'counts'},axis... | <p>Here DataFrame constructor is not necessary, rather use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.reset_index.html" rel="nofollow noreferrer"><code>Series.reset_index</code></a> with parameter <code>name</code> and for rename <code>index</code> add <a href="http://pandas.pydata... | python|pandas | 1 |
377,407 | 56,834,934 | How to replace tensorflow softmax with max for generating one hot vector at the output layer of Neural Network? | <p>For a classification problem, softmax function is used in the last layer of the Neural Network.<br>
I want to replace the softmax layer with the max layer that generates one hot vector with one set to the index where maximum value occurred and set all other entries to zero.</p>
<p>I can do it with tf.argmax as sugg... | <p>No, there is no differentiable solution, that is why we use the <code>softmax</code> activation, because it is a differentiable approximation to the max function.</p> | python|tensorflow|neural-network|one-hot-encoding|softmax | 0 |
377,408 | 56,632,417 | Convert header and values list to pandas dataframe | <p>I'm reading data from Google Sheets and was able to successfully get it in:</p>
<pre><code>header = result.get('values', [])[0] #First line is column names
values = result.get('values', [])[1:] #Everything else is data
</code></pre>
<p>Then after that I'm doing this:</p>
<pre><code>if not values:
print('No da... | <p>First you can make your dataframe with the present values, after that create a dataframe with the rest of the columns with <code>NaN</code> and <code>concat</code> them together:</p>
<pre><code>df = pd.DataFrame(data=values, columns=header[:14])
df2 = pd.DataFrame({head : [np.NaN]*len(df) for head in header[14:]})
... | python|pandas|dataframe|google-sheets-api | 1 |
377,409 | 56,649,369 | pandas: cannot reindex from a duplicate axis | <p>I have this code:</p>
<pre><code>missing_columns = list(set(model_header) - set(combined_data.columns))
if missing_columns:
combined_data = combined_data.reindex(columns=np.append(combined_data.columns.values, missing_columns))
</code></pre>
<p>which is sometimes generating this error</p>
<blockquote>
<p>ca... | <p>I guess there are duplicated column names in one or both <code>DataFrames</code>; the solution is to deduplicate them beforehand, either manually or with the code below:</p>
<pre><code>model_header = pd.DataFrame(columns=list('ABDB'))
combined_data = pd.DataFrame(columns=list('ABCA'))
print (model_header)
Empty DataFrame
Co... | pandas | 3 |
377,410 | 25,577,213 | contour plot with python loop and matplotlib | <p>I can't figure out what's preventing me from getting a contour plot of this cost function. After much trial and error, I'm getting:</p>
<pre><code>ValueError: zero-size array to reduction operation minimum which has no identity
</code></pre>
<p>If I print J it doesn't give me any values, just a 100x100 array full ... | <p>If <code>J</code> is just <code>nan</code>s, then the problem is in the way you're generating <code>J</code> and <strong>not</strong> the <code>contour()</code> call. </p> | python|numpy|matplotlib | 1 |
377,411 | 25,552,741 | Python numpy not saving array () | <p>I am getting a strange error when trying to (binary) save some arrays in python 2
I have isolated the error, in particular supposing </p>
<pre><code>p1 = [1, 5, 10, 20]
p2 = [1, 5, 10, 20, 30]
p3 =np.zeros( (5,10), dtype=float)
</code></pre>
<p>then</p>
<pre><code>np.save("foo1", (p1, p2))
np.save("foo2", (p1... | <p>The error is not due to <code>np.save</code>, but coming from trying to create an array from nested sequences. I get a similar but different error, probably because I am working on the development version, using any of the variants of <code>np.array</code>:</p>
<pre><code>>>> np.array((p2, p3))
Traceback ... | python-2.7|numpy|save | 5 |
377,412 | 25,825,720 | python: join group size to member rows in dataframe | <p>(Python 2.7) I wish to create a column in a python dataframe with the size of the group to which member rows belong (indexed by row ID number). Groups are based on rows with identical values in two columns, date and amount. I've attempted to use groubpy and size - which is suggested for similar problems - but I can'... | <p>Have you tried:</p>
<pre><code>df.groupby(['date','amount']).transform('count')
</code></pre> | python|pandas | 1 |
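The `transform('count')` answer broadcasts each group's size back to its member rows; a sketch with made-up values for the question's date/amount columns:

```python
import pandas as pd

df = pd.DataFrame({"date": ["2020-01-01", "2020-01-01", "2020-01-02"],
                   "amount": [10, 10, 5],
                   "id": [1, 2, 3]})

# Rows sharing (date, amount) form a group; transform repeats the group size per row
df["group_size"] = df.groupby(["date", "amount"])["id"].transform("count")
```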
377,413 | 25,819,172 | How to initialize empty pandas Panel | <p>I'm trying to fill pandas Panel in a <code>for</code> loop:</p>
<pre><code>dp = pd.Panel()
for i in range(x):
# read data in 2D numpy array as 'arr'
dp[i] = arr
</code></pre>
<p>which raises:</p>
<pre><code>ValueError: shape of value must be (0, 0), shape of given object was (309, 495)
</code></pre>
<p>T... | <p>It seems it is necessary to initialize Panel with the shape of expected data, so in my case this worked fine:</p>
<pre><code>dp = pd.Panel(major_axis=range(309), minor_axis=range(495))
for i in range(x):
# read data in 2D numpy array as 'arr'
dp[i] = arr
</code></pre>
<p>Same applies to DataFrame - if user... | python|pandas | 1 |
377,414 | 25,697,450 | Cleaner pandas apply with function that cannot use pandas.Series and non-unique index | <p>In the following, <code>func</code> represents a function that uses multiple columns (with coupling across the group) and cannot operate directly on <code>pandas.Series</code>. The <code>0*d['x']</code> syntax was the lightest I could think of to force the conversion, but I think it's awkward.</p>
<p>Additionally,... | <p>Instead of </p>
<pre><code>0*d['x'] + x + y
</code></pre>
<p>you could use</p>
<pre><code>pd.Series(x+y, index=d.index)
</code></pre>
<hr>
<p>When using <code>groupy-apply</code>, instead of dropping the group key index using:</p>
<pre><code>s = df.groupby(df.index).apply(func)
s = s.reset_index(level=0, drop=... | numpy|pandas | 3 |
377,415 | 25,749,285 | How to get rid of values from Numpy array without loop? | <p>I have a numpy array similar to the following that represents neighbors of each individual (This is first generated by igraph package then converted to numpy array</p>
<pre><code>import numpy as np
import igraph
Edges = 2
NumNodes = 30
DisGraph = igraph.GraphBase.Barabasi(NumNodes, Edges)
Neighbors = map(DisGraph.n... | <p>I don't know what you mean by "have these values in List removed" (what do you mean, "remove"?). Generally, though, you can select points within an array via: </p>
<pre><code>import numpy as np
a = np.random.random_integers(0,10,[10,10])
b = np.random.random_integers(0,10,5)
for r in b:
a[a==r] = -999
a
O... | python|arrays|numpy | 1 |
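The answer above loops over the values to replace; as a hedged alternative that matches the question's "without loop" request, `np.isin` builds the whole mask in one vectorized step (array values here are illustrative):

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
remove = np.array([1, 5, 10])

# One boolean mask covering every listed value, then a single fancy assignment
a[np.isin(a, remove)] = -999
```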
377,416 | 25,973,514 | Combining Series in Pandas | <p>I need to combine multiple Pandas <code>Series</code> that contain string values. The series are messages that result from multiple validation steps. I try to combine these messages into 1 <code>Series</code> to attach it to the <code>DataFrame</code>. The problem is that the result is empty.</p>
<p>This is an exam... | <p>When concatenating, the default is to use the existing indices; however, if they collide this will raise a <code>ValueError</code>, as you've found, so you need to set <code>ignore_index=True</code>:</p>
<pre><code>In [33]:
series = pd.concat([series1, series2, series3], ignore_index=True)
df['series'] = series
p... | python|string|pandas|series | 2 |
377,417 | 26,147,180 | Convert row to column header for Pandas DataFrame, | <p>The data I have to work with is a bit messy.. It has header names inside of its data. How can I choose a row from an existing pandas dataframe and make it (rename it to) a column header?</p>
<p>I want to do something like:</p>
<pre><code>header = df[df['old_header_name1'] == 'new_header_name1']
df.columns = heade... | <pre><code>In [21]: df = pd.DataFrame([(1,2,3), ('foo','bar','baz'), (4,5,6)])
In [22]: df
Out[22]:
0 1 2
0 1 2 3
1 foo bar baz
2 4 5 6
</code></pre>
<p>Set the column labels to equal the values in the 2nd row (index location 1):</p>
<pre><code>In [23]: df.columns = df.iloc[1]
</code... | python|pandas|rename|dataframe | 271 |
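The highest-scored row in this dump promotes a row to the header; the sketch below also adds the usual follow-up step of dropping the promoted row, which the truncated cell presumably continues with:

```python
import pandas as pd

df = pd.DataFrame([(1, 2, 3), ("foo", "bar", "baz"), (4, 5, 6)])

# Use the values in the 2nd row (index location 1) as column labels...
df.columns = df.iloc[1]

# ...then drop that row so it no longer appears in the data
df = df.drop(df.index[1])
```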
377,418 | 25,982,638 | H5py: fast way to save list of list of numpy.ndarray? | <p>How can I save and read efficiently a list of list of numpy.ndarray, with h5py? E.g. I want to save/read:</p>
<pre><code>Y = np.arange(3**3).reshape(3,3,3)
X = [[Y,Y],[Y,Y,Y],[Y]]
</code></pre>
<p>I am looking for the most efficient (no double loops etc) solution.</p> | <p>I'm going to assume Y is the same type, e.g., int32 or string. Generally, the most efficient way to handle this will be to emit Y as a single vector (flattened) whose length is:</p>
<pre><code>totalLen = sum(map(len, X))
offsets = np.cumsum(list(map(len, X)))
</code></pre>
<p>You can stick the offsets into the hdf5 file a... | python|numpy|h5py | 1 |
377,419 | 26,188,131 | Prepend values to Panda's dataframe based on index level of another dataframe | <p>Below I have two dataframes. The first dataframe (d1) has a 'Date' index, and the 2nd dataframe (d2) has a 'Date' and 'Name' index.<br>
You'll notice that d1 starts at 2014-04-30 and d2 starts at 2014-01-31.</p>
<p>d1:</p>
<pre><code> Value
Date
2014-04-30 1
2014-05-31 2
2014-06... | <p>This is a direct formulation of your problem, and it is quite fast already:</p>
<pre><code>In [126]: def direct(d1, d2):
dates2 = d2.index.get_level_values('Date')
dates1 = d1.index
return d1.reindex(dates2[dates2 < min(dates1)].append(dates1), method='bfill')
.....:
In [127]: direct... | python|numpy|pandas | 1 |
377,420 | 25,967,580 | how to convert series integer to datetime in pandas | <p>i want to convert integer type date to datetime.</p>
<p>ex) i : 20130601000011( 2013-6-1 00:00: 11 ) </p>
<p>i don't know exactly how to use pd.to_datetime </p>
<p>please any advice </p>
<p>thanks</p>
<p>ps. my script is below</p>
<pre><code>rent_date_raw = pd.Series(1, rent['RENT_DATE'])
return_date_raw = pd.... | <p>Pandas seems to deal with your format fine as long as you convert to string first:</p>
<pre><code>import pandas as pd
eg_date = 20130601000011
pd.to_datetime(str(eg_date))
Out[4]: Timestamp('2013-06-01 00:00:11')
</code></pre>
<p>Your data at the moment is really more of a string than an integer, since it doesn't ... | python|pandas | 3 |
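The answer converts the packed integer to a string before parsing; passing an explicit format string (an addition here, to keep the parse unambiguous across pandas versions) makes the intent clear:

```python
import pandas as pd

eg_date = 20130601000011  # YYYYMMDDHHMMSS packed into an integer

# str() first, then parse with the matching format string
ts = pd.to_datetime(str(eg_date), format="%Y%m%d%H%M%S")
```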
377,421 | 26,371,720 | How can I keep the intersection of a panel between dates using Pandas? | <p>I've got a panel of price data that has multiple IDs for each date. </p>
<pre><code>Date ID price
2012-06-08 1234 6.09
2345 5.08
3456 1.23
2012-06-09 1234 6.10
3456 1.25
</code></pre>
<p>I need to keep only the rows where the IDs are the ... | <p>One thing you could do is exploit the <code>DataFrame.shift()</code> method in order to find the differences. If you combine this with groupby, when grouping on the IDs then you will end up results as I see that you want them. The trick is though, you need a DataFrame that has a date/ID pair of every unique date and... | python|pandas|filter|panel|intersection | 1 |
377,422 | 26,371,509 | n-dimensional sliding window with Pandas or Numpy | <p>How do I do the R(xts) equivalent of rollapply(...., by.column=FALSE), using Numpy or Pandas? When given a dataframe, pandas rolling_apply seems only to work column by column instead of providing the option to provide a full (window-size) x (data-frame-width) matrix to the target function. </p>
<pre><code>import p... | <p>I indeed cannot find a way to compute "wide" rolling application in pandas
docs, so I'd use numpy to get a "windowing" view on the array and apply a ufunc
to it. Here's an example:</p>
<pre><code>In [40]: arr = np.arange(50).reshape(10, 5); arr
Out[40]:
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
... | python|arrays|r|numpy|pandas | 6 |
377,423 | 26,302,249 | Split a column value to multiple columns pandas /python | <p>I am new to Python/Pandas and have a data frame with two columns, one a series and another a string.
I am looking to split the contents of a column (Series) into multiple columns. I appreciate your input in this regard.
This is my current dataframe content </p>
<pre><code> Songdetails ... | <p>Thank you, I inserted a column into the new data frame and was able to achieve what I needed: <code>df2 = pd.DataFrame(series.apply(lambda x: pd.Series(x.split(','))))</code>
<code>df2.insert(3,'Density',finaldf['Density'])</code></p> | python|pandas | 0 |
377,424 | 26,184,977 | urllib2.URLError when using Quandl for Python behind a proxy | <p>I'm posting this because I tried searching for the answer myself and I was not able to find a solution. I was eventually able to figure out a way to get this to work & I hope this helps someone else in the future.</p>
<h2>Scenario:</h2>
<p>In Windows XP, I'm using Python with Pandas & Quandl to get data fo... | <p>You can set your <strong>user</strong> environment variable <code>HTTP_PROXY</code> if you can't or won't set the system environment variable:</p>
<pre><code>set HTTP_PROXY=10.11.123.456:8080
python yourscript.py
</code></pre>
<p>and to permanently set it (using setx from <a href="https://www.microsoft.com/en-us... | python|pandas|proxy|quantitative-finance|quandl | 0 |
377,425 | 26,002,474 | pandas, name of the column after a group by function | <p>I have a simple Pandas Dataframe named purchase_cat_df:</p>
<pre><code> email cat
0 email1@gmail.com Mobiles & Tablets
1 email2@gmail.com Mobiles & Tablets
2 email1@gmail.com Mobiles & Tablets
3 email3@gmail.com Mobiles & Tablets
4 email3@gmail.com Home &... | <p>If you want to keep your original index, you were probably looking for something like this:</p>
<pre><code>purchase_cat_df.groupby('email', as_index=False)
</code></pre>
<p>as_index=False keeps the original index. You can then continue to address the column by its name.</p> | python|pandas|group-by | 4 |
377,426 | 66,801,401 | how can i work with my GPU in python Visual Studio Code | <p>Hello I know that the key to analyzing data and working with artificial intelligence is to use the gpu and not the cpu. The problem is that I don't know how to use it with Python in the visual studio code, I use Ubuntu, I already have nvidia installed</p> | <p>You have to use libraries that are designed to work with GPUs.</p>
<p>You can use Numba to compile Python code directly to binary with CUDA/ROC support, but I don't really know how limiting it is.</p>
<p>Another way is to call APIs that are designed for parallel computing such as OpenCL, there is PyOpen... | python|python-3.x|pandas|dataframe|visual-studio-code | 1 |
377,427 | 66,851,208 | Passing multiple parameters with apply in Pandas | <p>How do I pass multiple parameters with apply in Pandas?</p>
<p><code>do_something</code> is a function:</p>
<pre><code>def do_something(x,test="testFoo")
</code></pre>
<p>This can be used with <code>dataframe.apply</code></p>
<pre><code>df2.apply(do_something, test="testBar",axis=1)
</code></pre>... | <p>For me a small change works: default <code>df=df</code> in the <code>def</code>:</p>
<pre><code>df = pd.DataFrame({'a':[1,2]})
df2 = pd.DataFrame({'g':[50,40]})
def do_something(x,test="testFoo",df=df):
print (df)
a
0 1
1 2
a
0 1
1 2
df2.apply(do_something, test="testBar&quo... | pandas | 1 |
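Beyond the `df=df` default shown in the answer, `DataFrame.apply` also forwards extra keyword arguments to the function directly; a small sketch (the `g` column mirrors the question's `df2`, the formatting is illustrative):

```python
import pandas as pd

df2 = pd.DataFrame({"g": [50, 40]})

def do_something(row, test="testFoo"):
    # keyword arguments given to apply() are passed through to this function
    return f"{row['g']}-{test}"

out = df2.apply(do_something, test="testBar", axis=1)
```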
377,428 | 66,879,986 | TensorFlow 2 Quantization Aware Training (QAT) with tf.GradientTape | <p>Can anyone point to references where one can learn how to perform Quantization Aware Training (QAT) with <code>tf.GradientTape</code> on TensorFlow 2?</p>
<p>I only see this done with the tf.keras API. I do not use <code>tf.keras</code>, I always build customized training with <code>tf.GradientTape</code> provides ... | <p>In the official examples <a href="https://www.tensorflow.org/model_optimization/guide/quantization/training_example" rel="nofollow noreferrer">here</a>, they showed QAT training with <code>model.fit</code>. Here is a demonstration of <strong>Quantization Aware Training</strong> using <code>tf.GradientTape()</code>....
377,429 | 67,174,506 | Pandas: Duplicated level name: <Column Name>, assigned to level 1, is already used for level 0." | <pre><code>Item AddDate COUNT(Number)
Item 3 2021-01-05 111
Item 3 2021-01-06 223
Item 3 2021-01-07 44
Item 3 2021-01-26 431
Item 3 2021-01-25 12
I... | <p>It seems the parameter <code>columns=['AddDate', 'Item']</code> is wrongly assigned, so <code>AddDate</code> is used both to generate new index values (<code>index='AddDate'</code>) and for the <code>MultiIndex</code> of <code>AddDate, Item</code> combinations (<code>columns=['AddDate', 'Item']</code>).</p>
<p>I thi... | python|pandas | 1 |
377,430 | 66,994,770 | Calculate sum of amount in a month before a certain date | <p>Initial df</p>
<pre><code>d = {'salesman': ['Andy', 'Brown','Charlie'],
'training_date': ['2020-04-16','2021-03-04','2021-03-08'],
'sales_in_training_month':['0','2634','2856.5']
}
df_initial = pd.DataFrame(data=d)
df_initial
</code></pre>
<p>Expected df</p>
<pre><code>d2 = {'salesman': ['Andy', 'Brow... | <ul>
<li>Convert the dates to <code>datetime</code>.</li>
<li><code>groupby</code> "salesman", then by month.</li>
<li>The "day" field of the training date gives you the number of days before training (after subtracting 1).</li>
<li>Divide this by days in the month (a function call on the month
numb... | python|python-3.x|pandas | 0 |
377,431 | 67,088,286 | pandas drop rows based on cell content and no headers | <p>I'm reading a csv file with pandas that has no headers.</p>
<pre><code>df = pd.read_csv('file.csv', header=0)
</code></pre>
<p>csv file containing 1 row with several users:</p>
<pre><code>admin
user
system
sysadmin
adm
administrator
</code></pre>
<p>I need to read the file to a df or a list except for example: <cod... | <p>Select the first column, filter by <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> and write to file:</p>
<pre><code>df = pd.read_csv('file.csv', header=0)
df[df.iloc[:, 0].ne('sysadmin')].to_csv(file, index=Fa... | python|pandas | 3 |
377,432 | 66,791,716 | Pass DataFrame to parse() in spider class | <p>Im making a database with products of an eShop, i want to save all the data as a JSON file.
I was wondering a way to pass values of the dataframe used to list the links the spider crawl</p>
<pre><code>import scrapy
import pandas as pd
class subcategoryExtractorSpider(scrapy.Spider):
name = 'subcategorySpider'
... | <p>If I've understood correctly, you need to pass categories from Pandas through to your final output?</p>
<p>Scrapy has this : <a href="https://docs.scrapy.org/en/latest/topics/request-response.html#passing-additional-data-to-callback-functions" rel="nofollow noreferrer">cb_kwargs</a> which allows you to pass values</... | pandas|scrapy|web-crawler | 0 |
377,433 | 67,082,891 | Pythonic way of replace values in one column from a two column table | <p>I have a df with the origin and destination between two points and I want to convert the strings to a numerical index, and I need to have a representation to back convert it for model interpretation.</p>
<pre><code>df1 = pd.DataFrame({"Origin": ["London", "Liverpool", "Paris",... | <p>You can use <code>.map()</code>:</p>
<pre><code>mapping = dict(zip(df2.Location, df2.Idx))
df1.Origin = df1.Origin.map(mapping)
df1.Destination = df1.Destination.map(mapping)
print(df1)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> Origin Destination
0 2 1
1 1 ... | python|pandas | 1 |
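The `.map()` answer can be checked end-to-end; the lookup table below is an assumed `df2` consistent with the question's city names:

```python
import pandas as pd

df1 = pd.DataFrame({"Origin": ["London", "Liverpool", "Paris"],
                    "Destination": ["Liverpool", "Paris", "London"]})
df2 = pd.DataFrame({"Location": ["London", "Liverpool", "Paris"],
                    "Idx": [0, 1, 2]})

# Build the Location -> Idx lookup once, then map both columns through it
mapping = dict(zip(df2["Location"], df2["Idx"]))
df1["Origin"] = df1["Origin"].map(mapping)
df1["Destination"] = df1["Destination"].map(mapping)
```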
377,434 | 66,883,277 | Add a column in second DataFrame based on 2 matched column in 2 DataFrame | <p>I have 2 DataFrame, I want to add a column in second DataFrame based on multiple (In my case 2) matched column of both DataFrame
I tried the code below, but I don't get the right answer. Can anyone help me please?</p>
<pre><code>
result = result.merge(out[out['COL1.']!=''].drop(['month'], axis=1), on=['COL1.'], how=... | <p>Use <code>merge()</code> method and chain <code>drop()</code> method to it:</p>
<pre><code>result=df2.merge(df1,right_on=['COL1.','month'],left_on=['station','month']).drop(columns=['COL1.'])
</code></pre>
<p>Now if you print <code>result</code> you will get your desired output:</p>
<pre><code> station month ... | python|pandas|dataframe | 1 |
377,435 | 66,877,406 | Python Dataframe Conditional If Statement Using pd.np.where Erroring Out | <p>I have the following dataframe:</p>
<pre><code>count country year age_group gender type
7 Albania 2006 014 f ep
1 Albania 2007 014 f ep
3 Albania 2008 014 f ep
2 Albania 2009 014 f ep
2 Albania 2010 014 ... | <p>You can use also <code>.replace()</code>:</p>
<pre><code>df["gender"] = df["gender"].replace({"f": "female", "m": "male"})
print(df)
</code></pre>
<p>Prints:</p>
<pre><code> count country year age_group gender type
0 7 Albania 2006 14 ... | python|pandas|if-statement|conditional-statements | 2 |
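A runnable version of the `.replace()` answer, using a minimal frame with just the `gender` column:

```python
import pandas as pd

df = pd.DataFrame({"gender": ["f", "m", "f"]})

# Dict-based replace maps whole cell values; anything not listed is left unchanged
df["gender"] = df["gender"].replace({"f": "female", "m": "male"})
```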
377,436 | 67,114,236 | Inserting thousands of JSONs into a dataframe | <p>I'm sure this is probably a fairly simple query for most but I'm quite new to Python and thus Pandas as well. Ultimately, I have thousands of JSON files in a folder that I would like to get into a dataframe. This is the code I'm currently using but unfortunately, it is brutally slow. My guess is because I'm opening ... | <p>You can use a <code>ThreadPoolExecutor</code> to speed things up:</p>
<pre class="lang-py prettyprint-override"><code>from concurrent.futures import ThreadPoolExecutor
def read_file(filename):
with open(filename) as f:
x = json.load(f)
return pd.json_normalize(x['data'])
with ThreadPoolExecutor()... | python|json|pandas | 0 |
377,437 | 67,003,865 | Pandas reading tall data into a DataFrame | <p>I have a text file which consists of tall data. I want to iterate through each line within the text file and create a Dataframe.</p>
<p>The text file looks like this, note that the same fields don't exist for all Users (e.g some might have an email field some might not), Also note that each User is separated by[User... | <p>Read in the single long column, and then form a group indicator by seeing where the value is '[User]'. Then separate the column labels and values, with a <code>str.split</code> and join back to your DataFrame. Finally pivot to your desired shape.</p>
<pre><code>df = pd.read_csv('test.txt', sep='\n', header=None)
df... | python|pandas|dataframe | 0 |
377,438 | 67,175,670 | how to split values in columns using dataframe? | <p>I have a dataframe that consists of 3 columns, where one of these columns can contain 2 values separated with <strong>-</strong> in a single record.</p>
<p>I want another column that includes all these values <strong>but with one value per record</strong></p>
<p>For this I created a function but when I run ... | <p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer"><code>DataFrame.explode</code></a> with the split column passed to the function:</p>
<pre><code>def splitting(df, r):
df[r] = df[r].str.split("-")
return df.explode(... | python|pandas|function | 3 |
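The split-then-explode function from the answer, completed into a runnable sketch (column names and values are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 2], "val": ["a-b", "c"]})

def splitting(df, col):
    # Turn 'a-b' into ['a', 'b'], then emit one row per list element
    df[col] = df[col].str.split("-")
    return df.explode(col)

out = splitting(df, "val")
```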
377,439 | 67,172,936 | iterating a numpy matrix, and assign its rows with information from other dataframe and numpy array | <p>I have a matrix, e.g., defined as <code>x_matrix = np.zeros(200,16)</code> Iterating over the rows, I need to assign each row of this matrix with two component vectors, <code>a1</code> is an array with 10 elements, <code>a2</code> is a corresponding row belonging to a pandas dataframe <code>y_dataframe</code> <co... | <p>You can do this without iteration if you wish using <code>np.repeat</code> and <code>np.hstack</code>:</p>
<pre class="lang-py prettyprint-override"><code># assuming `a1` is shaped (10,) i.e. 1D array
a1_repeated = np.repeat(a1[np.newaxis, :], 200, axis=0)
x_matrix = np.hstack((a1_repeated, y_dataframe))
</code></p... | python|pandas|numpy|scipy | 0 |
377,440 | 67,174,341 | Keras Lambda layer, how to use multiple arguments | <p>I have this function:</p>
<pre><code>def sampling(x):
zeros = x*0
samples = tf.random.categorical(tf.math.log(x), 1)
samples = tf.squeeze(tf.one_hot(samples, depth=2), axis=1)
return zeros+samples
</code></pre>
<p>That I call from this layer:</p>
<pre><code>x = layers.Lambda(sampling, name="lamb... | <p>Use a lambda function inside the Lambda layer...</p>
<pre><code>def sampling(x, depth):
zeros = x*0
samples = tf.random.categorical(tf.math.log(x), 1)
samples = tf.squeeze(tf.one_hot(samples, depth=depth), axis=1)
return zeros+samples
</code></pre>
<p>usage:</p>
<pre><code>Lambda(lambda t: sampling(t... | python|tensorflow|machine-learning|keras|deep-learning | 3 |
377,441 | 66,949,800 | ImageDataGenerator flow_from_directory with grandchild folder | <p>I have a k-fold train dataset but its structure has a grandchild folder for ex:</p>
<pre><code>/monkey
/ howler monkey
- img1
- img2
/ japanese macaque
- img1
- img2
/dog
/ bulldog
- img1
- img2
/ Rottweiler
- img1
- img2
</c... | <p>I have this question for a long time and I couldnt find a direct answer using <code>.flow_from_directory</code>. What I did instead, was use <code>.flow_from_dataframe</code> instead. Where first, I just created a dataframe with the images paths and their corresponding label (in your case howler monkey, japanese mac... | python|tensorflow|machine-learning|keras|dataset | 0 |
377,442 | 67,140,380 | How to list JSON non-list items together with list items with pandas.json_normalize with Python? | <pre><code>[{
"builtin_name": "custom_template",
"fields": [{
"id": 10012,
"field_type": "OBJECT_SET",
"tooltip_text": "",
"name_plural": "",
"name... | <p>I'd use <code>.json_normalize</code> on whole <code>data</code> list and <code>.explode()</code> the <code>fields</code> column. Then concat back to obtain desired DataFrame:</p>
<pre><code>df = pd.json_normalize(data, errors="ignore", record_prefix="")
df = pd.concat(
[df, df.explode("f... | python|pandas|json-normalize | 1 |
377,443 | 66,933,028 | How to use lower version of keras and tensorflow | <p>I'm running a code which requires keras version 1.2.0 and tensorflow version 1.1.0.
I'm using Jupyter notebook and I created an <a href="https://github.com/llSourcell/How_to_simulate_a_self_driving_car/blob/master/environments.yml" rel="nofollow noreferrer">environment</a> for all the dependencies.</p>
<p>However, I... | <p>It's probably because it is getting uninstalled on a different environment. Identify which python and pip executable you are using by running the following commands:</p>
<pre class="lang-sh prettyprint-override"><code>$ which pip
$ which python
</code></pre>
<p>These two commands will give out the path of the execut... | python|tensorflow|keras|libraries | 0 |
377,444 | 66,921,105 | How to edit running total column to restart with every new column value? | <p>I have the data frame pictured below. I need the 'Total #' column to restart every time there is a new value in the 'Item Number' column. For example, if Index 4 was the last occurrence of 104430-003 then 14 would be the last 'Total #' and it would start recounting the 'Total #' of VTHY-039 in the appropriate 'Bin L... | <pre><code>pv['cumsum'] = pv.groupby('Item Number')['Items'].transform(pd.Series.cumsum)
pv
Item Number Bin Loc. PV Pick Items cumsum
0 104430-003 A-P28-17B 4 2 2
1 104430-003 A-P39-20B 4 4 6
2 104430-003 A-P39-20C 4 1 7
3 104430-003 A-P39-26C 4 2 9
4 104430-003 A-P40-2... | python|pandas|group-by|cumsum | 0 |
377,445 | 66,856,539 | Python Selenium - Scraping a Table from a Dynamic Page | <p>I'm completely new to Python. I want to scrape data from a html table and put it into MS Excel. The website I'm scraping from is dynamic, so I have to select options from 3 drop down boxes to build the table.</p>
<p>Please note that the code below gets me to the website and selects the options I need to build the ta... | <p>Instead of copying and pasting the content, I used soup.find_all function from the BeautifulSoup library to find the table. Then, I used Pandas to create a dataframe from the table and send it to my Excel sheet.</p>
<p>Here is the code I used:</p>
<pre><code>html = driver.page_source
soup = BeautifulSoup(html)
table... | python|pandas|dataframe|selenium|web-scraping | 1 |
377,446 | 66,786,787 | pytorch multiple branches of a model | <p><a href="https://i.stack.imgur.com/na6Px.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/na6Px.png" alt="enter image description here" /></a></p>
<p>Hi I'm trying to make this model using pytorch.</p>
<p>Each input is consisted of 20 images of size 28 X 28, which is C1 ~ Cp in the image.
Each imag... | <p>Assuming each path has its own weights, maybe this could be done with grouped convolution, although the pre-fusion <code>Linear</code> can cause some trouble.</p>
<pre><code> P = 20
self.features = nn.Sequential(
nn.Conv2d(1*P,10*P, kernel_size = 3, padding = 1, groups = P ),
nn.ReLU(),
... | pytorch|concatenation|conv-neural-network | 3 |
377,447 | 66,906,652 | How to download hugging face sentiment-analysis pipeline to use it offline? | <p><strong>How to download hugging face sentiment-analysis pipeline to use it offline?</strong> I'm unable to use hugging face sentiment analysis pipeline without internet. How to download that pipeline?</p>
<p>The basic code for sentiment analysis using hugging face is</p>
<pre><code>from transformers import pipeline
... | <p>Use the <a href="https://huggingface.co/transformers/main_classes/pipelines.html#transformers.Pipeline.save_pretrained" rel="nofollow noreferrer">save_pretrained()</a> method to save the configs, model weights and vocabulary:</p>
<pre class="lang-py prettyprint-override"><code>classifier.save_pretrained('/some/direc... | deep-learning|nlp|huggingface-transformers|huggingface-tokenizers | 3 |
377,448 | 67,132,348 | Best way to debug or step over a sequential pytorch model | <p>I used to write the PyTorch model with <code>nn.Module</code> which included <code>__init__</code> and forward so that I can step over my model to check how the variable dimension changes along the network.
However I have since realized that you can also do it with <code>nn.Sequential</code> which only requires an <... | <p>You can iterate over the children of model like below and print sizes for debugging. This is similar to writing forward but you write a separate function instead of creating an <code>nn.Module</code> class.</p>
<pre><code>import torch
from torch import nn
model = nn.Sequential(
nn.Conv2d(1,20,5),
nn.ReLU(),... | python|debugging|deep-learning|pycharm|pytorch | 1 |
377,449 | 66,962,823 | Subset pandas dataframe using function applied to a column/series | <p>I have a pandas dataframe <code>df</code> that I would like to subset based on the result of running <code>Name</code> through a certain function <code>is_valid()</code></p>
<pre><code>import pandas as pd
data = [['foo', 10], ['baar', 15], ['baz', 14]]
df = pd.DataFrame(data, columns = ['name', 'age'])
df
name... | <p>Try:</p>
<pre><code>df[df['name'].str.len()==3]
</code></pre>
<p>Or use your code with <code>apply</code>:</p>
<pre><code>df[df['name'].apply(is_valid)]
</code></pre> | python|pandas|dataframe|lambda|subset | 4 |
377,450 | 66,814,443 | How to define a weighted loss function for TF2.0+ keras CNN for image classification? | <p>I would like to integrate the <a href="https://www.tensorflow.org/api_docs/python/tf/nn/weighted_cross_entropy_with_logits" rel="noreferrer">weighted_cross_entropy_with_logits</a> to deal with data imbalance. I am not sure how to do it. Class 0 has 10K images, while class 1 has 500 images. Here is my code.</p>
<pre>... | <p>You can simply wrap <code>tf.nn.weighted_cross_entropy_with_logits</code> inside a custom loss function.</p>
<p>Remember also that <code>tf.nn.weighted_cross_entropy_with_logits</code> expects logits so your network must produce it and not probabilities (remove <code>softmax</code> activation from the last layer)</p... | python|tensorflow|keras | 4 |
377,451 | 67,092,744 | How to replicate rows based on number of comma separated values in column | <p>I have a large dataframe containing changelog of the data. Each line represents 'product' and each product can have multiple 'action types'. It can be new, deleted, renamed, moved, updated, etc. or it can be combination of them separated by comma.</p>
<p>Examples:</p>
<ul>
<li>Move, Rename</li>
<li>Move, Rename, Upd... | <p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer"><code>explode</code></a> for this after you split your column by commas:</p>
<pre><code>df['Action Type'] = df['Action Type'].str.split(', ')
df.explode('Action Type')
Product Description Eff... | python|pandas|dataframe | 3 |
377,452 | 67,113,753 | Zipping List of Pandas DataFrames Yields Unexpected Results | <p>Can somebody explain the following code?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
a = pd.DataFrame({"col1": [1,2,3], "col2": [2,3,4]})
b = pd.DataFrame({"col3": [1,2,3], "col4": [2,3,4]})
list(zip(*[a,b]))
</code></pre>
<p>Output:</p>
<pre><code... | <p>a:</p>
<p><a href="https://i.stack.imgur.com/OF6NI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OF6NI.png" alt="enter image description here" /></a></p>
<p>b:</p>
<p><a href="https://i.stack.imgur.com/0GKoz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0GKoz.png" alt="enter... | python|pandas | 0 |
377,453 | 67,085,037 | Tensorflow import issues on the anaconda virtual environment | <p>I'm using OS X now and got a problem with importing tensorflow.
I'm using anaconda and made a new virtual environment with python 3.7 and tensorflow 1.15.0.
I've had no problems so far, but since yesterday I have been getting error messages like below</p>
<blockquote>
<p>import tensorflow as tf</p>
<p>Traceback (most recent call last):... | <p>Problem solved.
The main issue was the directory of the Python file that I tried to run using conda.
I moved that file out of the problematic folder and it worked.</p> | python|tensorflow | 0 |
377,454 | 67,084,739 | Creating Dataframe from Lists | <p>I am seeing 5 items in each list below, but it looks like I am running into issues with the shape of my data when creating my dataframe. Any idea why this error is happening?</p>
<p>Code:</p>
<pre><code>b = {'Link':links, 'Tax':taxes, 'Description':descrip}
bet = pd.DataFrame(b)
</code></pre>
<pre><code>['http://www... | <p>The code you wrote works for me</p>
<pre><code>links = ['http://www.redfin.com/IL/Chicago/195-N-Harbor-Dr-60601/unit-509/home/14093313', 'http://www.redfin.com/IL/Chicago/1235-N-Astor-St-60610/unit-3N/home/13054822', 'http://www.redfin.com/MO/St-Louis/2622-S-11th-St-63118/home/93686930', 'http://www.redfin.com/IL/Ch... | python|pandas | 0 |
377,455 | 66,853,776 | how to replace values in 2 dataframe columns based on multiple conditions? | <p>I'm having columns in pandas dataframe which look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Positive</th>
<th style="text-align: center;">Neutral</th>
<th style="text-align: right;">Negative</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align... | <p>You can just do it in two steps - set the mixed column - then set the pos/neg columns to 0.</p>
<pre><code>>>> df['Mixed'] = 0
>>> df
Positive Neutral Negative Mixed
0 1 0 1 0
1 0 1 0 0
>>> rows = (df.Positive == 1) & (df.Ne... | python|pandas|dataframe | 0 |
377,456 | 67,129,188 | logical evaluations of multiple boolean variables in python | <p>Assuming have m different boolean variables, c1, c2,..cm. How to evaluate whether all of them are true, or any one of these elements is not true, etc. It can be very efficient to check them one by one.</p> | <p>Use <a href="https://docs.python.org/library/functions.html#any" rel="nofollow noreferrer"><code>any</code></a> and <a href="https://docs.python.org/library/functions.html#all" rel="nofollow noreferrer"><code>all</code></a>:</p>
<pre><code>c = [True, False]
print(any(c))
# True
print(all(c))
# False
</code></pre> | python|python-3.x|pandas|numpy|boolean-logic | 0 |
377,457 | 67,081,802 | Keep first occurrence in one column according to certain conditions | <p>I'm not sure how to explain this so I'll do my best.</p>
<p>I have two datasets which I merged to obtain the following:</p>
<pre><code>ID | active_date | datestamp | code1 | code2 | code3 | payment
01 | 01/01/2020 | 10/06/2020 | AAA | . | . | 1
01 | 01/01/2020 | 11/06/2020 | AAA | . | . | 1
01... | <p>gnarly one. Compute multiple options independently and combine. Code below</p>
<pre><code>#meets greater than equal to 3 rule
df['m']=df['payment'].str.extract('(\d+)').astype(float).ge(3)#create temp ro
a=df[df['m']]
#meets BBB, CCC rule
b=df[df['code1'].isin(["BBB","CCC"])|df['code2'].isin([&... | python|pandas|filter|data-mining | 0 |
377,458 | 66,874,669 | Get Pytorch - tensor values as a integer in python | <p>I have my output of my torch tensor which looks like below</p>
<p>(coordinate of a bounding box in object detection)</p>
<pre><code>[tensor(299., device='cuda:0'), tensor(272., device='cuda:0'), tensor(327., device='cuda:0'), tensor(350., device='cuda:0')]
</code></pre>
<p>I wanted to extract each of the tensor valu... | <pre><code>minx, miny, maxx, maxy = [int(t.item()) for t in tensors]
</code></pre>
<p>where <code>tensors</code> is the list of tensors.</p> | python|pytorch|torch | 0 |
377,459 | 66,991,843 | ModuleNotFoundError: No module named 'pandas' in Jupyter Notebooks | <p>I am using a Jupyter Notebook in VSCode for a simple data science project. I've imported pandas in the past and had no problems, only now when I try to execute the code, "ModuleNotFoundError: No module named 'pandas'" is raised in the Notebook.</p>
<p>I installed pandas with pip, and when I type <code>pip ... | <p>After installing Jupyter Notebook, I re-ran the anaconda install. Seemed to fix it.</p> | python|pandas|pip|jupyter-notebook|modulenotfounderror | 0 |
377,460 | 66,809,521 | How can I change two dimensional grayscale image to one dimensional vector image? | <p>I have a grayscale image with size 28 by 28, and I plot it with plt.imshow(..., cmap='gray_r')
I'd like to plot a second figure that has pixel number as the xlabel and grayscale value as the ylabel.
But I don't know how to make it.
I tried to make it with the imshow function after reshaping a 2_d image vector to a 1_d vecto...
pixel_nums = range(28 * 28)
matplotlib.pyplot.scatter(pixel_nums, flat_image)
</code></pre>
<p>This will plot the pixel numbers and their grayscale values in a scatterplot. More information can be found <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.scatter.htm... | python|numpy|matplotlib|grayscale | 0 |
377,461 | 66,994,657 | Combining output in pandas? | <p>I have a movie recommender system I have been working on and currently it is printing two different sets of output because I have two different types of recommendation engines. Code is like this:</p>
<pre><code>while True:
user_input3 = input('Please enter movie title: ')
if user_input3 == 'done':
br... | <p>If the return type of <code>get_input_movie()</code> is a Pandas DataFrame or a Pandas Series, you can try:</p>
<p>Replace the following 2 lines:</p>
<pre><code>print(get_input_movie(user_input3))
print(get_input_movie(user_input3, cosine_sim1))
</code></pre>
<p>by using <a href="https://pandas.pydata.org/docs/refer... | python|pandas|dataframe|recommendation-system | 1 |
377,462 | 66,919,995 | how to execute a function more than once and concatenate its outputs into an array only in python | <p>I'm trying to create a function that calls a function that I already have, executes it N times, and then concatenates only its outputs into an array. This is the function that I want to perform more than once.</p>
<pre><code>def r_sample(m_base, m_external):
    n_linhas_m_base = m_base.shape[0] # matriz com as variáv... | <p>You can create a variable called <code>all_arrays</code>, which is an empty array. Where you call the function you could put a loop. Then you can get hold of the function's callback and add the <code>current_array</code> to <code>all_arrays</code> like this: <code>all_arrays += current_array</code>. The whole code can... | python|arrays|python-3.x|numpy | 0 |
377,463 | 67,181,917 | How should I convert numbers in a column to the same format style in Python? | <p>I have two columns Lat and Long in a data frame. I try to convert these strings into float but I have the following error:</p>
<pre><code>ValueError: Unable to parse string "" at position 61754
</code></pre>
<p>I've noticed that in my data frame I have numbers written in different styles, even in bold text... | <p>Suppose you have this DataFrame:</p>
<pre><code> funny_numbers
0
1
2
</code></pre>
<p>You can try <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.normalize.html#pandas-series-str-normalize" rel="nofollow noreferrer"><code>.str.normalize</co... | python|pandas|valueerror | 3 |
377,464 | 66,910,315 | AttributeError: 'float' object has no attribute 'isnumeric' | <p>I have a pandas df and I want to remove non-numeric values of <code>col1</code>.</p>
<p>If I use <code>df[df.col1.apply(lambda x: x.isnumeric())]</code>, I get the following error:</p>
<p><code>AttributeError: 'float' object has no attribute 'isnumeric'</code></p>
<p>any suggestion on doing this efficiently in panda... | <p>You could use standard method of strings <code>isnumeric</code> and apply it to each value in your <code>id</code> column:<br />
<a href="https://stackoverflow.com/questions/33961028/remove-non-numeric-rows-in-one-column-with-pandas">Remove non-numeric rows in one column with pandas</a><br />
<a href="https://stacko... | pandas|filtering|numeric | 1 |
377,465 | 67,112,369 | how to concatenate numpy array of different shape | <p>I have two <code>NumPy</code> arrays of shape: (Batch, H, W, Canal). I would like to concat these arrays in one array, but they have different shapes <code>[209, 450, 450, 24]</code> and <code>[209, 112, 112]</code>.</p>
<p>Does anyone know how to do it?</p> | <p>A simple way to concatenate vectors with different sizes is to fill the smaller one with zeros, so that it is the same shape as the bigger.</p>
<pre class="lang-py prettyprint-override"><code>bigger = np.random.uniform(size=(209, 450, 450, 24)) # should be your input
smaller = np.random.uniform(size=(209, 112, 112))... | python|numpy | 0 |
377,466 | 67,177,319 | Importing csv files based on list names | <p>I'm looking to create a function that will import csv files based on a user input of file names that were created as a list. This is for some data analysis where i will then use pandas to resample the data etc and calculate the percentages of missing data. So far I have:</p>
<pre><code>parser = lambda x: pd.datetime... | <p>To read all files, you can do something like -</p>
<pre><code>list_of_dfs = [pd.read_csv(f) for f in list_of_stations_name_number]
</code></pre>
<p><code>list_of_dfs[0]</code> will correspond to the csv file <code>list_of_stations_name_number[0]</code></p>
<p>If your files are not in the current directory, you can p... | python|pandas|csv | 0 |
377,467 | 66,988,685 | Break ties using rank function (OR other function) PYTHON | <p>I have the following dataframe:</p>
<pre><code>ID Name Weight Score
1 Amazon 2 11
1 Apple 4 10
1 Netflix 1 10
2 Amazon 2 8
2 Apple 4 8
2 Netflix 1 5
</code></pre>
<p>Currently I have a code which looks like this</p>
<pre><code>#add weight... | <p>You can use <code>rank</code> with <code>method='first'</code> with some presorting first:</p>
<pre><code>df['Score_Rank'] = (df.sort_values('Weight', ascending=False)
.groupby(['ID'])['Score']
.rank(method='first', ascending=False)
)
</code></pre>
<p>Ou... | python|pandas|dataframe|numpy|rank | 3 |
377,468 | 67,054,993 | Find thresholds of bins by sum of column values in pandas | <p>I need to find thresholds of bins (for ex. 0-999, 1000-1999 etc.), so that on each bin there was approximately an equal amount (1/n of total value, for ex 1/3 if we split into 3 bins).</p>
<pre><code>d = {'amount': [600,400,250,340,200,500,710]}
df = pd.DataFrame(data=d)
df
amount
600
400
250
340
200
500
710
</code... | <p>If need approximately an equal amount aggregate <code>sum</code> with <code>pd.cut</code>:</p>
<pre><code>df = df.groupby(pd.cut(df.amount, 3)).sum()
print (df)
amount
amount
(199.49, 370.0] 790
(370.0, 540.0] 900
(540.0, 710.0] 1310
</code></pre> | python|pandas|split | 1 |
377,469 | 66,980,011 | File operation using numpy | <p>I am trying to delete a phrase from a text file using numpy. I have tried
num = [] and num1.append(num1), and
'a' instead of 'w' to write the file back.
While append doesn't delete the phrase,
write's first run deletes the phrase,
the second run deletes the second line, which is not the phrase,
and the third run empties the file</p>
<pre><code>impor... | <p>I think you can still use <code>nums.append(num1)</code> with <code>w</code> mode, the issue I think you're getting is that you used the <code>enumerate</code> function for <code>myFile</code>'s lines using 1-index instead of 0-index as expected in numpy array. Changing it from <code>enumerate(myFile, 1)</code> to <... | python-3.x|numpy | 0 |
377,470 | 67,181,773 | Create numeric variable with condition | <p>I would like to create a dummy variable for % Change PMI. If % Change PMI is positive is 1 and if negative a 0.</p>
<pre><code>print(Overview3)
Adjusted Close % Change PMI % Change IP
Date
1970-02-27 0.052693 -0.026694 -0.0007
1970-03-31... | <pre><code>df['dummy'] = 0
for i in range(0,len(df)):
if df["% Change PMI"][i] > 0:
df['dummy'][i] = 1
else:
df['dummy'][i] = 0
</code></pre> | python|pandas|dataframe|dummy-variable | -1 |
377,471 | 67,099,881 | The pandas value error still shows, but the code is totally correct and it loads normally the visualization | <p>I really wanted to use <code>pd.options.mode.chained_assignment = None</code>, but I wanted a code clean of error.</p>
<p>My start code:</p>
<pre class="lang-py prettyprint-override"><code>import datetime
import altair as alt
import operator
import pandas as pd
s = pd.read_csv('../../data/aparecida-small-sample.csv'... | <p>This <code>SettingWithCopyWarning</code> is a <code>warning</code> and not an <code>error</code>. The importance in this distinction is that <code>pandas</code> isn't sure whether your code will produce the intended output so is letting the programmer make this decision where as a <code>error</code> means that somet... | python|python-3.x|pandas|dataframe|jupyter-notebook | 1 |
377,472 | 66,799,694 | Image Resolution in Deep Learning | <p>Let me summarize my problem as follows,</p>
<p>I have been facing a memory error whenever I try to train a model with my custom dataset. Later, I noticed that some of the images are very high resolution compared to other images in the same dataset. However, their size was not that much greater.</p>
<p>There is an image res... | <p>Yes, it could be a potential source of the memory error. Usually, a memory error happens for two reasons: the first is a high image size (maybe even the resized dimensions are high) and the second is batch size.</p>
<p>These are kind of related in some sense. If you use a higher image size, you must use a lower batch size to co... | tensorflow|deep-learning|gpu|object-detection | 1 |
377,473 | 66,927,880 | Pandas: Extracting data from sorted dataframe | <p>Consider I have a dataframe with 2 columns: the first column is 'Name' in the form of a string and the second is 'score' in type int. There are many duplicate Names and they are sorted such that the all 'Name1's will be in consecutive rows, followed by 'Name2', and so on. Each row may contain a different score.The n... | <p>Firstly make use of <code>groupby()</code> method as mentioned by @QuangHong:</p>
<pre><code>result=df.groupby('Name', as_index=False)['Score'].mean()
</code></pre>
<p>Finally make use of <code>rename()</code> method:</p>
<pre><code>result=result.rename(columns={'Score':'Avg Score'})
</code></pre> | pandas|dataframe | 1 |
377,474 | 67,000,060 | loading model failed in torchserving | <p>I am learning to serve a model using PyTorch serving and I am new to this.
This is the handler file I created for serving the VGG16 model.
I am using the model from Kaggle</p>
<p>Myhandler.py file</p>
<pre><code>
import io
import os
import logging
import torch
import numpy as np
import torch.nn.functional as F
f... | <blockquote>
<p>i am using the model from kaggle</p>
</blockquote>
<p>I presume you got the model from <a href="https://www.kaggle.com/pytorch/vgg16" rel="nofollow noreferrer">https://www.kaggle.com/pytorch/vgg16</a></p>
<p>I think you are loading the model incorrectly.
You are loading a checkpoint, which would work if... | python|pytorch|torch | 1 |
377,475 | 67,123,411 | Pandas replacing decimal separator in string columns | <p>I have a dataframe with many columns >= 50. Some of them have a comma as decimal separator and some have commas and a few even have a little bit of both. A few are supposed to be string.</p>
<pre class="lang-none prettyprint-override"><code>| colA | colB | colC | colD |
| 12.4 | 9,4 | 17.8 | eaui |
| 12.4 | 17,3... | <p>You have to perform <code>str.replace</code> on a <code>pd.Series</code> object, i.e. a single column. You can first select the columns that are not numeric and then use <code>apply</code> on this sub-frame to replace the comma in each column:</p>
<pre><code>string_columns = df.select_dtypes(include='object').column... | python|pandas | 2 |
377,476 | 67,063,852 | Performing SQL updates based on row number and using previous row for calculations | <p>I have a python/pandas code that I was using to perform some calculations, but I was having performance issues with it. I'm trying to write everything on SQL, updating the table with BigQuery.</p>
<p>The problem that I am facing is to update an existing table based on row number and using previous rows for calculati... | <p>Consider below example</p>
<pre><code>#standardSQL
with `project.dataset.table` as (
select 1.2 DEPTH_M, 2 A, 0 B union all
select 1.4, 3, 0 union all
select 1.6, 6, 0 union all
select 1.8, 2, 0 union all
select 2.0, 1, 0 union all
select 2.2, 6, 0 union all
select 2.4, 7, 0 union all
select 2.6, 6, ... | sql|pandas|google-bigquery | 0 |
377,477 | 66,867,203 | How to fill a pandas dataframe column using a value from another dataframe column | <p>Firstly we can import some packages which might be useful</p>
<pre><code>import pandas as pd
import datetime
</code></pre>
<p>Say I now have a dataframe which has a date, name and age column.</p>
<pre><code>df1 = pd.DataFrame({'date': ['10-04-2020', '04-07-2019', '12-05-2015' ], 'name': ['john', 'tim', 'sam'], 'age'... | <p>Try:</p>
<pre class="lang-py prettyprint-override"><code>y = datetime.datetime(2015, 5, 12).strftime('%d-%m-%Y')
df2.loc[:, 'new'] = df1.loc[df1['date'] == y, "age"].item()
# Output
a b new
0 1 4 27
1 2 5 27
2 3 6 27
</code></pre> | python|pandas|dataframe | 1 |
377,478 | 66,997,596 | Nested loop problem in python while working with pandas | <p>I am trying to create a nested loop to load multiple files in an S3 bucket and concatenate them into a single dataframe. I am having trouble arranging the nested loops to do this.
Here is my code:</p>
<pre><code>import json
import pandas as pd
import boto3
import io
client = boto3.client('s3')
var ... | <p>You get that error when your dataframe has a non-unique index or (repeated) values. Since it doesn't look like you're using the index, you could create a new one by using the following command:</p>
<p><code>df.reset_index(inplace=True)</code></p>
<p>or</p>
<p>If you want to remove the previous index.</p>
<p><code>df... | python|pandas|list|loops|boto3 | 1 |
377,479 | 67,050,913 | Google colab cache google drive content | <p>I have a dataset on google drive that's about 20GB big.
I use a generator to pull in the dataset to my keras/TF models, and the overhead of loading the files (for every batch) is insane.</p>
<p>I want to prefetch the content as one operation and then simply fetch it from the local VM disk</p>
<p>I tried this:</p>
<pr... | <p>There's a good example on Google Codelabs for doing this; they write the dataset to local TFRecords:</p>
<p><a href="https://codelabs.developers.google.com/codelabs/keras-flowers-tpu/#0" rel="nofollow noreferrer">https://codelabs.developers.google.com/codelabs/keras-flowers-tpu/#0</a></p>
<p>you can find more info h... | python|tensorflow|keras|google-colaboratory | 0 |
377,480 | 67,142,631 | comparing two arrays of boolean | <pre><code>a = np.array(['numeric','string','numeric'])
b = np.array(['numeric','string','numeric','numeric','string'])
</code></pre>
<p>I am trying to compare two arrays a and b.</p>
<p>I want to get something like : <code>array([ True, True, True])</code> because the first 3 elements are identical.</p>
<p>I know i... | <p>Built in function for this, I'm not sure, but here's a quick solution:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
a = np.array(['numeric','string','numeric'])
b = np.array(['numeric','string','numeric','numeric','string'])
c = np.array([i == j for i, j in zip(a, b)])
print(c)
</code></pre... | python|numpy | 1 |
377,481 | 66,836,559 | Drop a column and its corresponding row in a data frame | <p>I tried to drop a column and its corresponding row in a data frame in pandas using .drop(). But its dropping only the column and not the corresponding row that has the column value. For ex. I have unknown genre as a column and corresponding to it I have a movie as a row. When i drop the unknown column only the colum... | <p>Try the other way around. First retain only the rows of your dataframe for which value in "unknown" column is not equal to 1 and then get rid of the column:</p>
<pre class="lang-py prettyprint-override"><code>new_df = df.loc[~(df["unknown"] == 1), :].drop(columns="unknown")
</code></pre... | pandas | 0 |
377,482 | 67,141,529 | I want to scrape the reviews through selenium webscraping and make a csv file of it but finding hard to remove the error. How to remove error? | <pre><code>import pandas as pd
from bs4 import BeautifulSoup
from time import sleep
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
import sqlite3 as sql
urls = []
product... | <p>try to change the code scraping the href to be as following:</p>
<pre><code># Scraping each product's urls | 16,064 products
for url in urls:
    driver = webdriver.Chrome(executable_path=r"C:\Users\dell\Downloads\chromedriver_win32\chromedriver.exe")
driver.get(url)
sleep(5)
for i in ... | python|pandas|selenium|selenium-webdriver|nlp | 1 |
377,483 | 66,767,157 | How to Append Unique Rows and Take Its Values and put in a Dataframe | <p>This is my initial df:</p>
<pre><code> Date Items Stocks Sold
11/07/2020 Item1 10 40
11/08/2020 Item1 20 50
11/09/2020 Item1 30 90
11/10/2020 Item1 30 30
11/07/2020 Item2 10 10
11/08/2020 Item2 20 100
11/09/2020 Item2 30 70
11/10/2020 Item2 40 80
</cod... | <p>Try this:</p>
<pre><code>df.groupby('Items', as_index=False)[['Stocks', 'Sold']].sum()
</code></pre>
<p>Output:</p>
<pre><code> Items Stocks Sold
0 Item1 90 210
1 Item2 100 260
</code></pre>
<p>And if you want different agg functions for different columns:</p>
<pre><code>df.groupby('Items', as... | python|pandas | 0 |
377,484 | 47,337,195 | selecting not None value from a dataframe column | <p>I would like to use the <code>fillna</code> function to fill None value of a column with its own first most frequent value that is not None or nan. </p>
<p>Input DF:</p>
<pre><code>Col_A
a
None
None
c
c
d
d
</code></pre>
<p>The output Dataframe could be either:</p>
<pre><code>Col_A
a
c
c
c
c
d
d
</code></pre>
<... | <p>Prelude: If your <code>None</code> is actually a <em>string</em>, you can simplify any headaches by getting rid of them first-up. Use <code>replace</code>:</p>
<pre><code>df = df.replace('None', np.nan)
</code></pre>
<hr>
<p>I believe you could use <code>fillna</code> + <code>value_counts</code>:</p>
<pre><code>... | python|pandas|dataframe | 6 |
377,485 | 47,473,996 | Variable substitution without duplicating the tensor (or having the graph accepting two different input) | <p>I think it is easier to clarify what I need with a MWE (question is in the comment).</p>
<pre><code>import tensorflow as tf
import numpy as np
class MLP:
def __init__(self, sizes, activations):
self.input = last_out = tf.placeholder(dtype=tf.float32, shape=[None, sizes[0]])
self.layers = []
... | <p>Here's one approach (I assume there is a typo and what you want is <code>x3 * (mynet(x2) - mynet(x1))</code>?):</p>
<pre><code>import tensorflow as tf
import numpy as np
class MLP:
def __init__(self, x1, x2, sizes, activations):
x_sizes = [tf.shape(x1)[0], tf.shape(x2)[0]]
last_out = tf.concat(... | tensorflow | 1 |
377,486 | 47,251,621 | Python pandas/numpy fill 1 to cells with value, and zero to cells of nan | <p>I have an array with cells of different types of data (String, float, Integer, ...) . </p>
<p>e.g. </p>
<pre><code>[[18 '1/4/11' 73.0 'Male' 4.0]
[18 nan 73.0 'Male' nan]
[18 '7/5/11' 73.0 'Male' 7.0]]
</code></pre>
<p>And I want to assign 0 to cells with value <code>nan</code>, and 1 to all others</p>
<p... | <p>Whether it's a dataframe or an ndarray, you can use <code>pd.notnull</code>:</p>
<pre><code>>>> arr = np.array([[18, '1/4/11', 73.0, 'Male', 4.0],
... [18, np.nan, 73.0, 'Male', np.nan],
... [18, '7/5/11', 73.0, 'Male', 7.0]], dtype=object)
>>> pd.notnull(arr)
... | python|arrays|pandas|numpy|dataframe | 1 |
377,487 | 47,405,628 | Bokeh 'utf8' codec can't decode byte 0xe9 : unexpected end of data | <p>Im using Bokeh to plot a pandas Dataframe. Following is the code:</p>
<pre><code>map_options = GMapOptions(lat=19.075984, lng=72.877656, map_type="roadmap", zoom=11)
plot = GMapPlot(x_range=DataRange1d(), y_range=DataRange1d(), map_options=map_options)
plot.api_key = "xxxxx"
source = ColumnDataSource(
    data=di... | <p>The byte 0xe9 is not pure ASCII, because it is 233 (in the decimal system) and ASCII has only 128 symbols (0-127). In UTF-8 it is a special byte, which introduces a character taking the next two bytes. Thus the string is probably in another encoding. For example, in latin1 and latin2 the byte 0xe9 represents the letter é.</p>
... | python|pandas|encoding|bokeh | 9 |
377,488 | 47,295,566 | How to use pandas to shift the last row to the first | <p>So I have a dataframe that looks like this:</p>
<pre><code> #1 #2
1980-01-01 11.6985 126.0
1980-01-02 43.6431 134.0
1980-01-03 54.9089 130.0
1980-01-04 63.1225 ... | <p>Not sure about the performance, but you could try <code>numpy.roll</code>:</p>
<pre><code>import numpy as np
print(df.apply(np.roll, shift=1))
# #1 #2
#1980-01-01 72.4399 120.0
#1980-01-02 11.6985 126.0
#1980-01-03 43.6431 134.0
#1980-01-04 54.9089 130.0
#1980-01-05 63.1225 126.0
</co... | python|python-3.x|pandas | 12 |
377,489 | 47,192,221 | Numpy list of matrixes as dataset in TensorFlow | <p>there!</p>
<p>I have a list of n 30x7 matrixes (so I guess it's a 3D array) and another list of n labels (basically TRUE or FALSE). How can I use this as a dataset for TensorFlow? Most tutorials use images, so I can't find how to do it for my case.</p>
<p>Thanks a lot!</p> | <p>That's effectively an image, any of the image based tutorials should help you out. If you want to follow tutorials for images, e.g. convolutional neural networks you'll want to <code>reshape</code> your 30x7 matrix to include 1 "color" channel. For example: <code>tf.reshape(data_matrix, shape=[30, 7, 1])</code> will... | numpy|tensorflow | 0 |
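The reshape the answer suggests can be sketched with NumPy before feeding the arrays to TensorFlow; `n` and the random data below are placeholders, and `tf.reshape(data, [-1, 30, 7, 1])` performs the same operation on a tensor:

```python
import numpy as np

n = 5                                # number of samples (hypothetical)
data = np.random.rand(n, 30, 7)      # n matrices of shape 30x7, stacked
labels = np.random.rand(n) > 0.5     # n boolean labels

# Add a trailing "color" channel so image-style (conv) models accept the input
data_4d = data.reshape(n, 30, 7, 1)
print(data_4d.shape)  # (5, 30, 7, 1)
```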
377,490 | 47,394,761 | checkpoint file for different instances of tensorflow program | <p>I'm tweaking parameters in my tensorflow script to determine the best performance.</p>
<p>Basically I'm running different instances of the same script at the same time with different parameters, but saving the models under different names </p>
<p>I thought that changing the name in the <code>saver</code> would be enough.... | <p>I found the answer in the <a href="https://www.tensorflow.org/api_docs/python/tf/train/Saver" rel="nofollow noreferrer">tensorflow</a> documentation</p>
<p>In particular this part:</p>
<blockquote>
<p>That protocol buffer is stored in a file named 'checkpoint' next to the checkpoint files.</p>
<p>If you create sever... | python|tensorflow|artificial-intelligence | 0 |
377,491 | 47,368,382 | Representations of the binary string in the tree graph paths | <p>I'm trying to write an algorithm in which graphs are searched for possible node paths representing binary strings, where even-numbered nodes correspond to the digit '0' and odd-numbered nodes to '1'. The following code is for the time being inelegant and not optimized. In the code comments I put some explanations ... | <p>All the code can be reduced to a few lines, i.e. it can be vectorized, so you can get rid of the for loops: </p>
<pre><code>a = pd.Series([0] + df['nodes_2'][df['nodes_2']%2==0].values.tolist())
# Creating series to make use of apply
b = a.apply(g.neighbors)
n1e ,n2e = df['nodes_1'] % 2 == 0, df['nodes_2'] % 2 == 0
... | python|algorithm|list|pandas|networkx | 2 |
377,492 | 47,430,467 | Dimensionality error after applying a dense layer | <p>I am trying to add a dense layer after applying dropout to the max-pooled convolutional layer output.</p>
<p>I have the following TensorFlow code written in Python. The number of filters is 128 and len(filter_sizes) is 3</p>
<pre><code>pooled_outputs = []
for i, filter_size in enumerate(filter_sizes):
with... | <p>The error was with the indices of the matrices. I was using the xw_plus_b function provided by tensorflow and using the dimensions of the matrices for multiplication wrong.</p> | python|tensorflow|deep-learning|conv-neural-network | 0 |
377,493 | 47,431,777 | Reinitializable iterators for simultaneous training and validation | <p>I want to use <code>Dataset</code> and <code>Iterator</code>s to evaluate on a validation set during training. I want to evaluate on one (or a few) validation batches every now and then — that every now and then is typically <em>not</em> an epoch.</p>
<p>However reinitializable iterators start all over again when r... | <p><a href="https://www.tensorflow.org/programmers_guide/datasets" rel="nofollow noreferrer">Feedable iterators</a> should help, but they're tough to work with. You need to create a placeholder and string handles:</p>
<pre><code>dataset_trn = tf.data.Dataset.range(10)
dataset_tst = tf.data.Dataset.range(10).map(lambda... | python|tensorflow | 3 |
377,494 | 47,364,379 | I want to use variable length input with dynamic RNN of tensorflow, but I don't know how to padding | <p>As an example, the input tensor has batch size 3, a maximum sequence length of 4, and a feature dimension of 2</p>
<p>[ [ [1, 2], [3, 4], [5, 6], [7, 8] ],
[ [9, 10] ] ]</p>
<p>, And in order to make it an input to RNN</p>
<p>[ [ [1, 2], [3, 4], [5, 6], [7, 8] ],
[ [9, 10], [0,0], [0,0], [0,0] ] ]</p>
<p>I ... | <p>I think may need <strong>tf.train.batch</strong> (<a href="https://www.tensorflow.org/api_docs/python/tf/train/batch" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/train/batch</a>) and
use <strong>dynamic_pad=True</strong>. </p> | python|tensorflow|neural-network|deep-learning|rnn | 0 |
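The zero-padding itself can also be done by hand with NumPy before anything touches TensorFlow; the `lengths` array is what `tf.nn.dynamic_rnn`'s `sequence_length` argument expects (a sketch using the question's example batch):

```python
import numpy as np

# Variable-length batch: first sequence has 4 steps, second only 1
batch = [[[1, 2], [3, 4], [5, 6], [7, 8]],
         [[9, 10]]]

max_len, dim = max(len(s) for s in batch), 2
padded = np.zeros((len(batch), max_len, dim), dtype=int)
lengths = np.array([len(s) for s in batch])
for i, seq in enumerate(batch):
    padded[i, :len(seq)] = seq   # copy real steps, leave the rest as zeros

print(padded)
print(lengths)   # pass as sequence_length so the RNN ignores the padding
```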
377,495 | 47,403,696 | Apply Feature Hashing to specific columns from a DataFrame | <p>I'm a bit lost with the use of Feature Hashing in Python pandas.</p>
<p>I have a DataFrame with multiple columns, holding information of different types. There is one column that represents a class for the data. </p>
<p>Example:</p>
<pre><code> col1 col2 colType
1 1 2 'A'
... | <p>I know this answer comes in late, but I stumbled upon the same problem and found this works:</p>
<pre><code>fh = FeatureHasher(n_features=8, input_type='string')
sp = fh.fit_transform(df['colType'])
df = pd.DataFrame(sp.toarray(), columns=['fh1', 'fh2', 'fh3', 'fh4', 'fh5', 'fh6', 'fh7', 'fh8'])
pd.concat([df1, df]... | python|pandas|scikit-learn|data-science | 3 |
377,496 | 47,525,425 | Convert numpy array of Datetime objects to UTC strings | <p>I have a large array of datetime objects in a numpy array. However, I am trying to export them as a JSON object attribute and need them represented as UTC strings.</p>
<p>Here is my array ( a small chunk of it )</p>
<pre><code>datetimes = [datetime.datetime(2015, 7, 12, 18, 33, 14, tzinfo=<UTC>), datetim... | <p>I think you can create <code>DataFrame</code>, convert to <code>iso</code> format and save to <code>dict</code>, because <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_json.html" rel="nofollow noreferrer"><code>DataFrame.to_json</code></a> with <code>orint='list'</code> is <a href... | pandas|numpy | 1 |
377,497 | 47,445,099 | Inserting data to impala table using Ibis python | <p>I'm trying to insert a df into an ibis-created Impala table with partitions. I am running this on a remote kernel using Spyder 3.2.4 on a Windows 10 machine and Python 3.6.2 on an edge-node machine running CentOS. </p>
<p>I get following error:</p>
<pre><code>Writing DataFrame to temporary file
Writing CSV to: /tmp/ibis/pand... | <p>instead of editing the config_init.py mentioned</p>
<blockquote>
<p><a href="https://stackoverflow.com/a/47543691/5485370">https://stackoverflow.com/a/47543691/5485370</a></p>
</blockquote>
<p>It is easier to assign the temp db and path using the ibis.options:</p>
<pre><code>ibis.options.impala.temp_db = 'your_temp_... | python|pandas|impala|ibis | 0 |
377,498 | 47,377,372 | Python numpy how to join numbers from array | <p>I am a newbie in Python. I think I'm looking for something easy, but can't find it.
I have a numpy binary array, e.g.:</p>
<pre><code> [1,0,1,1,0,0,0,1,1,1,1,0]
</code></pre>
<p>And I want to do 2 things:</p>
<ol>
<li><p>Join (?) all elements into one number, so result will be:</p>
<pre><code>x=101100011110
</code><... | <pre><code>>>> int(''.join(map(str, [1,0,1,1,0,0,0,1,1,1,1,0])))
101100011110
</code></pre>
<p>Or with a little numpy:</p>
<pre><code>>>> int(''.join(np.array([1,0,1,1,0,0,0,1,1,1,1,0]).astype('|S1')))
101100011110
</code></pre> | python|arrays|numpy | 2 |
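If the goal is the number rather than a string, the digits can also be folded arithmetically, with no string round-trip (a sketch using the question's array):

```python
import numpy as np

bits = np.array([1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0])

# Fold left-to-right: each step shifts the running number one
# decimal place and appends the next bit as a digit
x = 0
for b in bits:
    x = x * 10 + int(b)
print(x)  # 101100011110
```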
377,499 | 47,362,481 | TensorFlow boolean indexing | <p>Let me get straight to the point: I want to index a tensor and change each non-zero element to -1 and each zero element to 1, but I don't know how to do it in TensorFlow.</p>
<p>Here is my code:</p>
<pre><code>y_[y_ != 0].assign(-1)
y_[y_ == 0].assign(1)
</code></pre>
<p>The reason is TensorFlow doesn't seem to support boolean i... | <p>You can use <strong>tf.cond()</strong> for the conditional assignment. I have given example code below:</p>
<pre><code>import tensorflow as tf
x_= tf.Variable(5) #non-zero variable
y_= tf.Variable(0) #variable euqals to 0
y_ =tf.cond(tf.equal(y_,0),lambda :y_.assign(1),lambda :y_.assign(-1)) #assign 1 if var... | python|tensorflow|tensor | 0 |
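Note that `tf.cond` chooses between whole branches based on a scalar predicate, not per element; the element-wise version of this mapping is what `np.where` does in NumPy (and `tf.where` on tensors). A sketch with made-up values:

```python
import numpy as np

y = np.array([0, 3, 0, -2, 5])

# Element-wise select: 1 where y == 0, -1 everywhere else
result = np.where(y == 0, 1, -1)
print(result)  # [ 1 -1  1 -1 -1]
```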