column       type           min     max
Unnamed: 0   int64          0       378k
id           int64          49.9k   73.8M
title        stringlengths  15      150
question     stringlengths  37      64.2k
answer       stringlengths  37      44.1k
tags         stringlengths  5       106
score        int64          -10     5.87k
376,300
55,081,741
Tensorflow, Horovod, and NVLINK NotFoundError
<p>I'm trying to run a TensorFlow neural network on GPUs using <strong>Uber's Horovod library</strong>. At the same time I am trying to run a measurement script that measures the <strong>NVLink</strong> connections between the multiple GPUs. Alas, whenever I run the file I get the following error:</p> <blockquote> <p>tensorflow.python.framework.errors_impl.NotFoundError: /home/pat/.virtualenvs/venv/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN10tensorflow14kernel_factory17OpKernelRegistrar12InitInternalEPKNS_9KernelDefEN4absl11string_viewESt10unique_ptrINS0_15OpKernelFactoryESt14default_deleteIS8_EE</p> </blockquote> <p>Does anyone have any idea how to fix this issue?</p> <p>Thank you.</p>
<p>Please take a look at this issue raised on the repo:</p> <p><a href="https://github.com/horovod/horovod/issues/656" rel="nofollow noreferrer">https://github.com/horovod/horovod/issues/656</a></p>
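<p>For reference, the resolution discussed in that issue is an ABI mismatch: the Horovod extension was compiled against a different TensorFlow version than the one installed. A sketch of the commonly suggested fix, assuming a pip-based install, is to rebuild Horovod against the current TensorFlow:</p> <pre><code># rebuild horovod against the installed tensorflow (skip any cached wheel)
pip uninstall horovod
pip install --no-cache-dir horovod
</code></pre>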
python|tensorflow|horovod
-1
376,301
54,749,818
How to turn tabular and horizontal data into tabular data in Pandas/Python
<p>Quick question regarding reshaping data in python/pandas</p> <p>The reports that I have to work with in excel are sometimes organised like figure 1. below (horizontal &amp; vertical)</p> <pre><code>Make Model Volume Yr. 1 Volume Yr. 2 Gadget 1 Model 1 1254 1549 Gadget 2 Model 2 897 1108 Gadget 3 Model 3 1598 1974 Gadget 4 Model 4 5897 7283 Gadget 5 Model 5 9008 11125 Gadget 6 Model 6 2456 3033 Gadget 7 Model 7 700 865 Gadget 8 Model 8 367 453 </code></pre> <p>I believe it would be best to work on the information in a tabular format, like that in figure 2. below;</p> <pre><code>Make Model Product Type Specification Date Volume Gadget 1 Model 1 Product Type 1 Specification 1 Volume Yr. 1 1254 Gadget 1 Model 1 Product Type 1 Specification 1 Volume Yr. 2 1549 Gadget 1 Model 1 Product Type 1 Specification 1 Volume Yr. 3 1913 Gadget 1 Model 1 Product Type 1 Specification 1 Volume Yr. 4 2362 Gadget 1 Model 1 Product Type 1 Specification 1 Volume Yr. 5 2917 Gadget 2 Model 2 Product Type 2 Specification 2 Volume Yr. 1 897 Gadget 2 Model 2 Product Type 2 Specification 2 Volume Yr. 2 1108 Gadget 2 Model 2 Product Type 2 Specification 2 Volume Yr. 3 1368 Gadget 2 Model 2 Product Type 2 Specification 2 Volume Yr. 4 1690 Gadget 2 Model 2 Product Type 2 Specification 2 Volume Yr. 5 2087 Gadget 3 Model 3 Product Type 3 Specification 3 Volume Yr. 1 1598 Gadget 3 Model 3 Product Type 3 Specification 3 Volume Yr. 2 1974 Gadget 3 Model 3 Product Type 3 Specification 3 Volume Yr. 3 2437 Gadget 3 Model 3 Product Type 3 Specification 3 Volume Yr. 4 3010 Gadget 3 Model 3 Product Type 3 Specification 3 Volume Yr. 5 3717 Gadget 4 Model 4 Product Type 4 Specification 4 Volume Yr. 1 5897 Gadget 4 Model 4 Product Type 4 Specification 4 Volume Yr. 2 7283 Gadget 4 Model 4 Product Type 4 Specification 4 Volume Yr. 3 8994 Gadget 4 Model 4 Product Type 4 Specification 4 Volume Yr. 4 11108 Gadget 4 Model 4 Product Type 4 Specification 4 Volume Yr. 5 13718 Gadget 5 Model 5 Product Type 5 Specification 5 Volume Yr. 1 9008 Gadget 5 Model 5 Product Type 5 Specification 5 Volume Yr. 2 11125 Gadget 5 Model 5 Product Type 5 Specification 5 Volume Yr. 3 13739 Gadget 5 Model 5 Product Type 5 Specification 5 Volume Yr. 4 16968 Gadget 5 Model 5 Product Type 5 Specification 5 Volume Yr. 5 20955 Gadget 6 Model 6 Product Type 6 Specification 6 Volume Yr. 1 2456 Gadget 6 Model 6 Product Type 6 Specification 6 Volume Yr. 2 3033 Gadget 6 Model 6 Product Type 6 Specification 6 Volume Yr. 3 3746 Gadget 6 Model 6 Product Type 6 Specification 6 Volume Yr. 4 4626 Gadget 6 Model 6 Product Type 6 Specification 6 Volume Yr. 5 5713 Gadget 7 Model 7 Product Type 7 Specification 7 Volume Yr. 1 700 Gadget 7 Model 7 Product Type 7 Specification 7 Volume Yr. 2 865 Gadget 7 Model 7 Product Type 7 Specification 7 Volume Yr. 3 1068 Gadget 7 Model 7 Product Type 7 Specification 7 Volume Yr. 4 1319 Gadget 7 Model 7 Product Type 7 Specification 7 Volume Yr. 5 1628 Gadget 8 Model 8 Product Type 8 Specification 8 Volume Yr. 1 367 Gadget 8 Model 8 Product Type 8 Specification 8 Volume Yr. 2 453 Gadget 8 Model 8 Product Type 8 Specification 8 Volume Yr. 3 560 Gadget 8 Model 8 Product Type 8 Specification 8 Volume Yr. 4 691 Gadget 8 Model 8 Product Type 8 Specification 8 Volume Yr. 5 854 </code></pre> <p><a href="https://i.stack.imgur.com/3e331.png" rel="nofollow noreferrer">Tabular</a></p> <p>Would you be able to advise on the best way to get the unorganized, horizontal &amp; vertical data tabular in pandas/python?</p> <p>Many thanks in advance.</p>
<p>Considering the data looks like:</p> <pre><code> Make Model Volume Yr. 1 Volume Yr. 2 0 Gadget 1 Model 1 1254 1549 1 Gadget 2 Model 2 897 1108 2 Gadget 3 Model 3 1598 1974 3 Gadget 4 Model 4 5897 7283 4 Gadget 5 Model 5 9008 11125 5 Gadget 6 Model 6 2456 3033 6 Gadget 7 Model 7 700 865 7 Gadget 8 Model 8 367 453 </code></pre> <p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.melt.html" rel="nofollow noreferrer"><code>pd.melt()</code></a></p> <pre><code>df_new=df.melt(id_vars=['Make','Model'],var_name='Date',value_name='Value') print(df_new) Make Model Date Value 0 Gadget 1 Model 1 Volume Yr. 1 1254 1 Gadget 2 Model 2 Volume Yr. 1 897 2 Gadget 3 Model 3 Volume Yr. 1 1598 3 Gadget 4 Model 4 Volume Yr. 1 5897 4 Gadget 5 Model 5 Volume Yr. 1 9008 5 Gadget 6 Model 6 Volume Yr. 1 2456 6 Gadget 7 Model 7 Volume Yr. 1 700 7 Gadget 8 Model 8 Volume Yr. 1 367 8 Gadget 1 Model 1 Volume Yr. 2 1549 9 Gadget 2 Model 2 Volume Yr. 2 1108 10 Gadget 3 Model 3 Volume Yr. 2 1974 11 Gadget 4 Model 4 Volume Yr. 2 7283 12 Gadget 5 Model 5 Volume Yr. 2 11125 13 Gadget 6 Model 6 Volume Yr. 2 3033 14 Gadget 7 Model 7 Volume Yr. 2 865 15 Gadget 8 Model 8 Volume Yr. 2 453 </code></pre> <p>Similarly, you can pass the list of all columns that should not be unpivoted as <code>id_vars</code>, for example:</p> <pre><code>df.melt(id_vars=['Make','Model','Product Type','Specification'],\ var_name='Date',value_name='Value') </code></pre>
python|pandas
1
376,302
54,881,627
in tensorflow, how do I convert a list of indices to an indicator vector?
<p>My inputs are lists of indices like</p> <pre><code>[1,3], [0,1,2] </code></pre> <p>How can I convert them into fixed-length indicator vectors?</p> <pre><code>[0, 1, 0, 1], [1, 1, 1, 0] </code></pre>
<pre><code>import tensorflow as tf indices = [[1, 3, 0], [0, 1, 2]] many_hot = tf.one_hot(indices, depth=4) many_hot = tf.reduce_sum(many_hot, axis=1) with tf.Session() as sess: print(sess.run(many_hot)) </code></pre> <p>This prints</p> <pre><code>[[1. 1. 0. 1.] [1. 1. 1. 0.]] </code></pre> <p>Note that this only works if every entry of the list contains the same number of indices. If this is not the case, you could do it with a loop:</p> <pre><code>import tensorflow as tf indices = [[1, 3], [0, 1, 2]] many_hots = [] for idx in indices: many_hot = tf.one_hot(idx, depth=4) many_hot = tf.reduce_sum(many_hot, axis=0) many_hots.append(many_hot) many_hot = tf.stack(many_hots) with tf.Session() as sess: print(sess.run(many_hot)) </code></pre> <p>This prints</p> <pre><code>[[0. 1. 0. 1.] [1. 1. 1. 0.]] </code></pre>
python|tensorflow
5
376,303
54,840,036
Connecting S3 - Lambda - EC2 - Elasticsearch
<p>In my project users upload images into an S3 bucket. I have created a TensorFlow ResNet model to interpret the contents of the image. Based on the TensorFlow interpretation, the data is to be stored in an Elasticsearch instance.</p> <p>For this, I have created an S3 bucket, a Lambda function that gets triggered when an image is loaded, and an AWS Elasticsearch instance. Since my TF models are large, I have zipped them, put them in an S3 bucket and uploaded the S3 URL to Lambda.</p> <p>Issue: Since my unzipped files were larger than 266 MB, I could not complete the Lambda function.</p> <p>Alternative approach: Instead of an S3 bucket, I am thinking of creating an EC2 instance with a larger volume size to store images and receive the images directly into the EC2 instance instead of S3. However, since I will be receiving millions of images within a year, I am not sure if this will be scalable.</p>
<p>I can think of two approaches here:</p> <ol> <li><p>You side-load the app. The Lambda can be a small bootstrap script that downloads your app from S3 and unzips it. This is a popular pattern in serverless frameworks. You pay for this during a cold start of the Lambda, so you will need to keep it warm in a production environment. A sketch of this bootstrap is shown below.</p></li> <li><p>You can store images in S3 itself and create an event on image upload with SQS as the destination. Then you can use an EC2 instance to periodically poll SQS for new messages and process them using your TF models.</p></li> </ol>
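<p>A minimal sketch of the bootstrap in approach 1, assuming hypothetical bucket and key names. Note that Lambda's writable <code>/tmp</code> space is capped at 512 MB, so the zip plus the extracted model must fit:</p> <pre><code>import os
import zipfile

import boto3

MODEL_BUCKET = 'my-model-bucket'   # assumption: your bucket name
MODEL_KEY = 'models/resnet.zip'    # assumption: your zip key
MODEL_DIR = '/tmp/model'

def load_model():
    # download and unzip once per container; warm invocations reuse it
    if not os.path.isdir(MODEL_DIR):
        s3 = boto3.client('s3')
        s3.download_file(MODEL_BUCKET, MODEL_KEY, '/tmp/model.zip')
        with zipfile.ZipFile('/tmp/model.zip') as zf:
            zf.extractall(MODEL_DIR)
    # ...load the TF model from MODEL_DIR here
</code></pre>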
amazon-web-services|tensorflow|elasticsearch|amazon-s3
0
376,304
54,809,825
How do I reproduce results from webgraphviz with python graphviz using 2 column pandas dataframe
<p>I have a two-column pandas dataframe with parent and child process ids that looks like the following:</p> <pre><code> ChildID ParentID 0 460 580 1 580 716 2 460 724 3 716 840 4 716 812 5 724 884 6 716 800 7 1424 2028 8 2280 2368 9 2368 2480 10 2948 2916 11 3312 3896 12 3312 3468 13 3312 3996 16 4 460 17 460 480 18 3244 4168 19 1324 4796 20 5888 5048 21 2504 4424 22 1324 7584 23 2040 1400 24 1224 2452 .. ... ... </code></pre> <p>I have downloaded the graphviz python library, but in the meantime I headed over to <a href="http://www.webgraphviz.com/" rel="nofollow noreferrer">http://www.webgraphviz.com/</a> to see what could be done. I used the same dataset and it looks pretty good.</p> <p><a href="https://i.stack.imgur.com/OuPhT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OuPhT.png" alt="ParentChild-WebGraphViz"></a></p> <p>I have searched a bit but am having trouble finding a good way to replicate this using the python library graphviz. Can anyone point me in the right direction, just using the 2 columns, with possibly a small example?</p>
<p>Here is my solution:</p> <pre><code>from graphviz import Graph g = Graph('process', filename='process.gv', engine='sfdp') # run over all the rows and for each row add a new edge to the graph for index, row in df.iterrows(): g.edge(str(row['ChildID']), str(row['ParentID'])) g.view() </code></pre> <p>If you have problems running graphviz on Windows, you probably need to add graphviz's bin folder to the Windows PATH; to do so you can use:</p> <pre><code>import os os.environ["PATH"] += os.pathsep + &lt;path to the bin folder&gt; </code></pre> <p>Enjoy! </p>
python|pandas|graphviz|pygraphviz
2
376,305
55,123,657
Combine duplicate rows on specific column
<p>I am trying to combine rows of a dataframe in the event that there is a duplicate in one column. The dataframe looks like the following.</p> <pre><code>Name Code X Y A 123 10 11 B 456 12 13 C 123 15 16 </code></pre> <p>I want to combine on Code. So if the Code is the same, combine the other data separated by a comma. The resulting df would look like this:</p> <pre><code>Name Code X Y A,C 123 10,15 11,16 B 456 12 13 </code></pre> <p>My approach was the following:</p> <pre><code> df = df.groupby(['Name','Code','Y'])['X'].astype(str).apply(', '.join).reset_index() df = df.groupby(['Name','Code','X'])['Y'].astype(str).apply(', '.join).reset_index() </code></pre> <p>I get the following error:</p> <pre><code>"Cannot access callable attribute 'astype' of 'SeriesGroupBy' objects, try using the 'apply' method" </code></pre> <p>I have been unable to figure out how to use apply to cast as type str. Any tips?</p>
<p>Create the index from the <code>Code</code> column to avoid casting it to string, then cast all remaining columns to string and aggregate by the index with <code>join</code>:</p> <pre><code>df = df.set_index('Code').astype(str).groupby(level=0).agg(', '.join).reset_index() #pandas 0.24+ #df = df.set_index('Code').astype(str).groupby('Code').agg(', '.join).reset_index() print (df) Code Name X Y 0 123 A, C 10, 15 11, 16 1 456 B 12 13 </code></pre>
python|pandas
5
376,306
54,808,153
use tf.nn.dynamic_rnn but final state doesn't have c and h
<p>I'm working with TensorFlow. I want to run a program with an RNN, but I got the following error:</p> <pre><code>a=self._encoder_final_state[0].c AttributeError: 'Tensor' object has no attribute 'c' </code></pre> <p>The program is like this:</p> <pre><code>self._encoder_cells = build_rnn_layers( cell_type=self._hparams.cell_type, num_units_per_layer=self._num_units_per_layer, use_dropout=self._hparams.use_dropout, dropout_probability=self._hparams.dropout_probability, mode=self._mode, residual_connections=self._hparams.residual_encoder, highway_connections=self._hparams.highway_encoder, dtype=self._hparams.dtype, ) self._encoder_outputs, self._encoder_final_state = tf.nn.dynamic_rnn( cell=self._encoder_cells, inputs=encoder_inputs, sequence_length=self._inputs_len, parallel_iterations=self._hparams.batch_size[0 if self._mode == 'train' else 1], swap_memory=False, dtype=self._hparams.dtype, scope=scope, ) a=self._encoder_final_state[0].c </code></pre>
<p>From the <a href="https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn" rel="nofollow noreferrer">docs</a> of <code>dynamic_rnn</code>:</p> <blockquote> <p>If cells are <code>LSTMCells</code> <code>state</code> will be a tuple containing a <code>LSTMStateTuple</code> for each cell.</p> </blockquote> <p>And <a href="https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/rnn_cell/LSTMStateTuple?hl=en" rel="nofollow noreferrer">here</a> you can see that indeed <code>LSTMStateTuple</code> is the type that has the desired <code>c</code> and <code>h</code> properties.</p> <p>Unfortunately, your code doesn't give me any clue what kind of cells you are using, but apparently they are not <code>LSTMCells</code>. So I cannot give you any better advice than to switch to <code>LSTMCells</code>.</p>
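<p>A minimal sketch of the <code>LSTMCell</code> case, where the indexing from the question works as intended (TF 1.x API):</p> <pre><code>import tensorflow as tf

# two stacked LSTM layers, mirroring a multi-layer encoder
cells = tf.nn.rnn_cell.MultiRNNCell(
    [tf.nn.rnn_cell.LSTMCell(8) for _ in range(2)])
inputs = tf.placeholder(tf.float32, [None, 10, 4])
outputs, final_state = tf.nn.dynamic_rnn(cells, inputs, dtype=tf.float32)

# final_state is a tuple of LSTMStateTuple, one per layer,
# so both .c and .h are available:
c = final_state[0].c
h = final_state[0].h
</code></pre>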
tensorflow|recurrent-neural-network|tensor
1
376,307
54,930,722
to_sql not updating table in ms server
<p>Hey, so I have the code below, where I'm pulling a table from an MS SQL Server database, running a bit of code and then trying to reimport it into another table within the same database. I'm running this in Spyder.</p> <p>It runs all the way through, but when I run</p> <pre><code>select * from pythontest </code></pre> <p>on the SQL Server, the table comes out blank. Is there anything that stands out as not working?</p> <pre><code>## From SQL to DataFrame Pandas import pandas as pd import pyodbc import mysql.connector from sqlalchemy import create_engine sql_conn = pyodbc.connect("Driver={SQL Server Native Client 11.0};" "Server=njsrvnav1;" "Database=cornerstone;" "Trusted_Connection=yes;") query = "SELECT [c1], [c2], [c3] from projectmaster" df = pd.read_sql(query, sql_conn) df = df[:100] con = create_engine('mssql+pyodbc://username:pword@serverName:1433/cornerstone?driver=SQL+Server+Native+Client+11.0') df.to_sql('dbo.pythontest', con, if_exists='replace') con.dispose() </code></pre>
<p>Try with <code>pymysql</code>:</p> <pre><code>conn = pymysql.connect( host='', port=3306, user='', passwd='', db='', charset='utf8mb4') df = pd.read_sql_query("SELECT * FROM table ", conn) df.head(2) </code></pre>
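<p>Separately, if you stay with the original SQLAlchemy/pyodbc approach, note that <code>to_sql</code> treats its first argument as the bare table name, so <code>'dbo.pythontest'</code> creates a table literally named <em>dbo.pythontest</em> rather than <code>pythontest</code> in the <code>dbo</code> schema; the schema goes in its own parameter. A sketch, not tested against your server:</p> <pre><code># pass the schema separately so SELECT * FROM pythontest finds the table
df.to_sql('pythontest', con, schema='dbo', if_exists='replace', index=False)
</code></pre>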
sql-server|python-3.x|pandas|spyder
1
376,308
55,037,285
How do I use a tensorflow frozen graph for visualizing its feature maps?
<p><strong>I have a TensorFlow frozen graph (.pb). I want to visualize the hidden layer outputs (feature maps) produced by that graph for an image. Is there any way to do it?</strong></p>
<p>Yes. Probably the best place to start is TensorFlow's native way to visualize network information, <code>tensorboard</code>.</p>
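<p>A minimal sketch for TF 1.x, assuming hypothetical file, input and layer names (list <code>graph.get_operations()</code> to find the real names in your graph):</p> <pre><code>import numpy as np
import tensorflow as tf

# load the frozen graph
with tf.gfile.GFile('frozen_graph.pb', 'rb') as f:   # assumption: your .pb path
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

# write the graph so TensorBoard can render it: tensorboard --logdir ./logs
tf.summary.FileWriter('./logs', graph).close()

# to get an actual feature map, run the graph up to an intermediate tensor
with tf.Session(graph=graph) as sess:
    fmap = sess.run('conv1/Relu:0',                  # assumption: layer name
                    feed_dict={'input:0':            # assumption: input name/shape
                               np.zeros((1, 224, 224, 3), np.float32)})
</code></pre>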
tensorflow|deep-learning|conv-neural-network
0
376,309
54,761,314
Pandas Dataframe show Count with Group by and Aggregate
<p>I have this data</p> <pre><code>ID Value1 Value2 Type Type2 1 3 1 A X 2 2 2 A X 3 5 3 B Y 4 2 4 B Z 5 6 8 C Z 6 7 9 C Z 7 8 0 C L 8 3 2 D M 9 4 3 D M 10 6 5 D M 11 8 7 D M </code></pre> <p>Right now I am able to generate this output using this code:</p> <pre><code>pandabook.groupby(['Type','Type2'],as_index=False)['Value1', 'Value2'].agg({'Value1': 'sum','Value2': 'sum'}) ID Value 1 Value2 Type Type2 1 5 3 A X 2 5 3 B Y 3 2 5 B Z 4 13 17 C Z 5 8 0 C L 6 21 17 D M </code></pre> <p>I want to show the aggregated count as well, as shown in this example</p> <p><a href="https://i.stack.imgur.com/wCMSh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wCMSh.png" alt="enter image description here"></a></p> <p>How can I achieve this output?</p>
<p>Add a new key to the dictionary with the <code>size</code> function, and remove <code>as_index=False</code> to prevent:</p> <blockquote> <p>ValueError: cannot insert Type, already exists</p> </blockquote> <p>and finally <code>rename</code> and <code>reset_index</code>:</p> <pre><code>df = pandabook.groupby(['Type','Type2']).agg({'Value1': 'sum','Value2': 'sum', 'Type':'size'}) df = df.rename(columns={'Type':'Count'}).reset_index() print (df) Type Type2 Value1 Value2 Count 0 A X 5 3 2 1 B Y 5 3 1 2 B Z 2 4 1 3 C L 8 0 1 4 C Z 13 17 2 5 D M 21 17 4 </code></pre>
python-3.x|pandas|dataframe
1
376,310
55,015,329
Add In Plot Labels to Seaborn Lineplot
<p>I have a dataframe that shows monthly revenue. There is an additional column that shows the number of locations opened in that month.</p> <pre><code> Date Order Amount Locations Opened 16 2016-05-31 126443.17 2.0 17 2016-06-30 178144.27 0.0 18 2016-07-31 230331.96 1.0 19 2016-08-31 231960.04 0.0 20 2016-09-30 208445.26 0.0 </code></pre> <p>I'm using seaborn to plot the revenue by month</p> <pre><code> sns.lineplot(x="Date", y="Order Amount", data=total_monthly_rev).set_title("Total Monthly Revenue") </code></pre> <p><a href="https://i.stack.imgur.com/YoV2M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YoV2M.png" alt="enter image description here"></a></p> <p>I've been trying, unsuccessfully, to use the third column, Locations Opened, to add supporting text to the lineplot so I can show the number of locations opened in a month, where Locations Opened &gt; 0.</p>
<p>IIUC, use <code>text</code>:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import seaborn as sns plt.figure(figsize=(12, 5)) sns.lineplot(x="Date", y="Order Amount", data=total_monthly_rev).set_title("Total Monthly Revenue") # Using a variable to manage how far above the point the text appears slider = 1000 for i in range(total_monthly_rev.shape[0]): if total_monthly_rev['Locations Opened'].iloc[i] &gt; 0: plt.text(total_monthly_rev.Date.iloc[i], total_monthly_rev['Order Amount'].iloc[i] + slider, total_monthly_rev['Locations Opened'].iloc[i]) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/POzJb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/POzJb.png" alt="plt"></a></p>
python|pandas|matplotlib|seaborn
5
376,311
55,000,316
Include file name to be part of xml to csv conversion in Python
<p>I am trying to convert an XML file into CSV. The code below works for that. However, I am also trying to include the file name as part of the extract, but I have not been able to work that into this code.</p> <pre><code>import pandas as pd import xml.etree.ElementTree as ET df = pd.DataFrame() for file in allFiles: def iter_docs(cis): for docall in cis: doc_dict = {} for doc in docall: tag = [elem.tag for elem in doc] txt = [elem.text for elem in doc] if len(tag) &gt; 0: doc_dict.update(dict(zip(tag, txt))) else: doc_dict[doc.tag] = doc.text yield doc_dict etree = ET.parse(file) df = df.append(pd.DataFrame(list(iter_docs(etree.getroot())))) </code></pre>
<p>Try</p> <pre><code>df = df.append(pd.DataFrame([file] + list(iter_docs(etree.getroot())))) </code></pre> <p>to get a column with the filename added.</p> <p>By the way, this approach will give you bad performance: appending to a dataframe inside a loop copies the data on every iteration.</p> <p>A better approach is to collect the dataframes in a list and concatenate them into one big dataframe at the end.</p> <pre><code>list_of_df = [] for file in allFiles: def iter_docs(cis): # your code list_of_df.append(pd.DataFrame([file] + list(iter_docs(etree.getroot())))) # at the end df = pd.concat(list_of_df) </code></pre>
python|xml|pandas
0
376,312
55,120,395
keras kernel initializers are called incorrectly when using load_model
<p>Keras version 2.2.4, tensorflow version 1.13.1, I'm using colab notebooks</p> <p>I'm trying to make a custom initializer and save the model using model.save() but when I load the model again I get the following error:</p> <blockquote> <p>TypeError: myInit() missing 1 required positional argument: 'input_shape'</p> </blockquote> <p>I have the following code:</p> <pre><code>import numpy as np import tensorflow as tf import keras from google.colab import drive from keras.models import Sequential, load_model from keras.layers import Dense, Dropout, Flatten, Lambda, Reshape, Activation from keras.layers.convolutional import Conv2D, MaxPooling2D from keras import backend as K K.set_image_data_format('channels_first') K.backend() # the output should be 'tensorflow' </code></pre> <blockquote> <p>'tensorflow'</p> </blockquote> <pre><code>def myInit( input_shape, dtype=None): weights = np.full( input_shape, 2019 ) return K.variable( weights, dtype=dtype ) </code></pre> <p>This initializer is given an input_shape and returns a keras tensor like in the docs: <a href="https://keras.io/initializers/" rel="nofollow noreferrer">https://keras.io/initializers/</a></p> <pre><code>model = Sequential() model.add( Dense( 40, input_shape=(784,) ) ) model.add( Dense( 30, kernel_initializer=myInit ) ) model.add( Dense( 5 ) ) model.build() </code></pre> <p>The weights are initialized correctly because when I call <code>model.layers[1].get_weights()</code> I get an array full of 2019. I save the model using model.save:</p> <pre><code>model.save(somepath) </code></pre> <p>In a different notebook I then call </p> <pre><code>model = load_model(somepath, custom_objects={ 'tf' : tf, 'myInit' : myInit } ) </code></pre> <p>In this notebook <code>myInit</code> and all the imports are defined as well. When I call <code>load_model</code> I get the following error:</p> <blockquote> <p>TypeError: myInit() missing 1 required positional argument: 'input_shape'</p> </blockquote> <p>So it seems when the model is loaded, the input_shape is not passed to myInit. 
Does anyone have any idea?</p> <p>Full trace: </p> <pre><code>TypeError Traceback (most recent call last) &lt;ipython-input-25-544d137de03f&gt; in &lt;module&gt;() 2 custom_objects={ 3 'tf' : tf, ----&gt; 4 'myInit' : myInit 5 } 6 ) /usr/local/lib/python3.6/dist-packages/keras/engine/saving.py in load_model(filepath, custom_objects, compile) 417 f = h5dict(filepath, 'r') 418 try: --&gt; 419 model = _deserialize_model(f, custom_objects, compile) 420 finally: 421 if opened_new_file: /usr/local/lib/python3.6/dist-packages/keras/engine/saving.py in _deserialize_model(f, custom_objects, compile) 223 raise ValueError('No model found in config.') 224 model_config = json.loads(model_config.decode('utf-8')) --&gt; 225 model = model_from_config(model_config, custom_objects=custom_objects) 226 model_weights_group = f['model_weights'] 227 /usr/local/lib/python3.6/dist-packages/keras/engine/saving.py in model_from_config(config, custom_objects) 456 '`Sequential.from_config(config)`?') 457 from ..layers import deserialize --&gt; 458 return deserialize(config, custom_objects=custom_objects) 459 460 /usr/local/lib/python3.6/dist-packages/keras/layers/__init__.py in deserialize(config, custom_objects) 53 module_objects=globs, 54 custom_objects=custom_objects, ---&gt; 55 printable_module_name='layer') /usr/local/lib/python3.6/dist-packages/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name) 143 config['config'], 144 custom_objects=dict(list(_GLOBAL_CUSTOM_OBJECTS.items()) + --&gt; 145 list(custom_objects.items()))) 146 with CustomObjectScope(custom_objects): 147 return cls.from_config(config['config']) /usr/local/lib/python3.6/dist-packages/keras/engine/sequential.py in from_config(cls, config, custom_objects) 298 for conf in layer_configs: 299 layer = layer_module.deserialize(conf, --&gt; 300 custom_objects=custom_objects) 301 model.add(layer) 302 if not model.inputs and build_input_shape: /usr/local/lib/python3.6/dist-packages/keras/layers/__init__.py in deserialize(config, custom_objects) 53 module_objects=globs, 54 custom_objects=custom_objects, ---&gt; 55 printable_module_name='layer') /usr/local/lib/python3.6/dist-packages/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name) 145 list(custom_objects.items()))) 146 with CustomObjectScope(custom_objects): --&gt; 147 return cls.from_config(config['config']) 148 else: 149 # Then `cls` may be a function returning a class. /usr/local/lib/python3.6/dist-packages/keras/engine/base_layer.py in from_config(cls, config) 1107 A layer instance. 
1108 """ -&gt; 1109 return cls(**config) 1110 1111 def count_params(self): /usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs) 89 warnings.warn('Update your `' + object_name + '` call to the ' + 90 'Keras 2 API: ' + signature, stacklevel=2) ---&gt; 91 return func(*args, **kwargs) 92 wrapper._original_function = func 93 return wrapper /usr/local/lib/python3.6/dist-packages/keras/layers/core.py in __init__(self, units, activation, use_bias, kernel_initializer, bias_initializer, kernel_regularizer, bias_regularizer, activity_regularizer, kernel_constraint, bias_constraint, **kwargs) 846 self.activation = activations.get(activation) 847 self.use_bias = use_bias --&gt; 848 self.kernel_initializer = initializers.get(kernel_initializer) 849 self.bias_initializer = initializers.get(bias_initializer) 850 self.kernel_regularizer = regularizers.get(kernel_regularizer) /usr/local/lib/python3.6/dist-packages/keras/initializers.py in get(identifier) 509 elif isinstance(identifier, six.string_types): 510 config = {'class_name': str(identifier), 'config': {}} --&gt; 511 return deserialize(config) 512 elif callable(identifier): 513 return identifier /usr/local/lib/python3.6/dist-packages/keras/initializers.py in deserialize(config, custom_objects) 501 module_objects=globals(), 502 custom_objects=custom_objects, --&gt; 503 printable_module_name='initializer') 504 505 /usr/local/lib/python3.6/dist-packages/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name) 152 custom_objects = custom_objects or {} 153 with CustomObjectScope(custom_objects): --&gt; 154 return cls(**config['config']) 155 elif isinstance(identifier, six.string_types): 156 function_name = identifier TypeError: myInit() missing 1 required positional argument: 'input_shape' </code></pre> <p>Note I also posted this on <a href="https://github.com/keras-team/keras/issues/12452" rel="nofollow noreferrer">https://github.com/keras-team/keras/issues/12452</a> but I figured this would be a better place for this.</p>
<p>After viewing the source code I got the following working code, which should be the proper way to define an initializer (especially when loading a model with load_model):</p> <pre><code>import numpy as np import tensorflow as tf import keras from google.colab import drive from keras.models import Sequential, load_model from keras.layers import Dense from keras import backend as K from keras.initializers import Initializer K.backend() # the output should be 'tensorflow' </code></pre> <pre><code>class myInit( Initializer ): def __init__(self, myParameter): self.myParameter = myParameter def __call__(self, shape, dtype=None): # array filled entirely with 'myParameter' weights = np.full( shape, self.myParameter ) return K.variable( weights, dtype=dtype ) def get_config(self): return { 'myParameter' : self.myParameter } </code></pre> <p>Building the model:</p> <pre><code>model = Sequential() model.add( Dense( 2, input_shape=(784,) ) ) model.add( Dense( 3, kernel_initializer=myInit( 2019 ) ) ) model.add( Dense( 5 ) ) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) </code></pre> <p>Save the model:</p> <pre><code>model.save( somepath ) </code></pre> <p>Now we can load the model in a different notebook. The imports from the other notebook should be imported here as well and <code>myInit</code> should also be defined in this notebook.</p> <pre><code>model = load_model( somepath, custom_objects={ 'tf' : tf, 'myInit' : myInit } ) </code></pre>
tensorflow|keras|google-colaboratory
4
376,313
54,892,806
Pandas - DataFrame aggregate behaving oddly
<p>Related to <a href="https://stackoverflow.com/questions/54892437/dataframe-aggregate-method-passing-list-problem">Dataframe aggregate method passing list problem</a> and <a href="https://stackoverflow.com/questions/54890646/pandas-fails-to-aggregate-with-a-list-of-aggregation-functions">Pandas fails to aggregate with a list of aggregation functions</a></p> <p>Consider this dataframe</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame(index=range(10)) df['a'] = [ 3 * x for x in range(10) ] df['b'] = [ 1 -2 * x for x in range(10) ] </code></pre> <p>According to the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.aggregate.html" rel="nofollow noreferrer">documentation</a> for <code>aggregate</code> you should be able to specify which columns to aggregate using a <code>dict</code> like this:</p> <pre><code>df.agg({'a' : 'mean'}) </code></pre> <p>Which returns</p> <pre><code>a 13.5 </code></pre> <p>But if you try to <code>aggregate</code> with a user-defined function like this one</p> <pre><code>def nok_mean(x): return np.mean(x) df.agg({'a' : nok_mean}) </code></pre> <p>It returns the mean for each row rather than the column</p> <pre><code> a 0 0.0 1 3.0 2 6.0 3 9.0 4 12.0 5 15.0 6 18.0 7 21.0 8 24.0 9 27.0 </code></pre> <p>Why does the user-defined function not return the same as aggregating with <code>np.mean</code> or <code>'mean'</code>?</p> <p>This is using <code>pandas</code> version <code>0.23.4</code>, <code>numpy</code> version <code>1.15.4</code>, <code>python</code> version <code>3.7.1</code></p>
<p>The issue has to do with applying <code>np.mean</code> to a series. Let's look at a few examples:</p> <pre><code>def nok_mean(x): return x.mean() df.agg({'a': nok_mean}) a 13.5 dtype: float64 </code></pre> <p>this works as expected because you are using pandas version of mean, which can be applied to a series or a dataframe:</p> <pre><code>df['a'].agg(nok_mean) df.apply(nok_mean) </code></pre> <p>Let's see what happens when <code>np.mean</code> is applied to a series:</p> <pre><code>def nok_mean1(x): return np.mean(x) df['a'].agg(nok_mean1) df.agg({'a':nok_mean1}) df['a'].apply(nok_mean1) df['a'].apply(np.mean) </code></pre> <p>all return</p> <pre><code>0 0.0 1 3.0 2 6.0 3 9.0 4 12.0 5 15.0 6 18.0 7 21.0 8 24.0 9 27.0 Name: a, dtype: float64 </code></pre> <p>when you apply <code>np.mean</code> to a dataframe it works as expected:</p> <pre><code>df.agg(nok_mean1) df.apply(nok_mean1) a 13.5 b -8.0 dtype: float64 </code></pre> <p>in order to get <code>np.mean</code> to work as expected with a function pass an ndarray for x:</p> <pre><code>def nok_mean2(x): return np.mean(x.values) df.agg({'a':nok_mean2}) a 13.5 dtype: float64 </code></pre> <p>I am guessing all of this has to do with <code>apply</code>, which is why <code>df['a'].apply(nok_mean2)</code> returns an attribute error.</p> <p>I am guessing <a href="https://github.com/pandas-dev/pandas/blob/v0.24.1/pandas/core/frame.py#L6287-L6289" rel="nofollow noreferrer">here</a> in the source code</p>
pandas|numpy|dataframe|aggregate|series
3
376,314
54,790,000
Calculate segment ids needed in tf.math.segment_sum by length of segments in Tensorflow
<p>I'm working with sequential data of variable size. Let's consider data like</p> <pre><code>Y = [ [.01,.02], [.03,.04], [.05,.06], [.07,.08], [.09,.1] ] l = [ 3, 2 ] </code></pre> <p>where <code>Y</code> is the result of some auxiliary calculation performed on my data and <code>l</code> stores the lengths of the original sequences. In this example <code>[.01,.02], [.03,.04], [.05,.06]</code> is thus the result of a calculation performed on the first sequence of the batch and <code>[.07,.08], [.09,.1]</code> is the result of a calculation performed on the second sequence of the batch, of lengths <code>3</code> and <code>2</code> respectively. Now I would like to do some further calculations on the entries of <code>Y</code>, but grouped by sequences. In Tensorflow there are functions such as <code>tf.math.segment_sum</code> which can be performed on a per-group basis.</p> <p>Let's say I would like to sum using <code>tf.math.segment_sum</code>. I would be interested in</p> <pre><code>seq_ids = [ 0, 0, 0, 1, 1 ] tf.math.segment_sum(Y, segment_ids=seq_ids) #returns [ [0.09 0.12], [0.16 0.18] ] </code></pre> <p>The problem that I now face is to get <code>seq_ids</code> from <code>l</code>. In numpy one would easily retrieve this by</p> <pre><code>seq_ids = np.digitize( np.arange(np.sum(l)), np.cumsum(l) ) </code></pre> <p>It seems that there is a hidden (from the python api) equivalent of <code>digitize</code> named <code>bucketize</code>, as mentioned in <a href="https://stackoverflow.com/a/48914476/10316642">this search</a> for a <code>digitize</code> in Tensorflow. But it seems that the referred <code>hidden_ops.txt</code> has been removed from Tensorflow and it is unclear to me if there still is (and will be) support for the function <code>tensorflow::ops::Bucketize</code> in the python api. Another idea I had to get a similar result was to use the <code>tf.train.piecewise_constant</code> function. But this attempt failed, as</p> <pre><code>seq_ids = tf.train.piecewise_constant(tf.range(tf.math.reduce_sum(l)), tf.math.cumsum(l), tf.range(BATCH_SIZE-1)) </code></pre> <p>failed with <code>object of type 'Tensor' has no len()</code>. It seems that <code>tf.train.piecewise_constant</code> isn't implemented in the most general way, as the parameters <code>boundaries</code> and <code>values</code> need to be lists instead of tensors, while <code>l</code> in my case is a 1-D tensor gathered from a minibatch of my <code>tf.data.Dataset</code>.</p>
<p>This is one way to do that:</p> <pre><code>import tensorflow as tf def make_seq_ids(lens): # Get accumulated sums (e.g. [2, 3, 1] -&gt; [2, 5, 6]) c = tf.cumsum(lens) # Take all but the last accumulated sum value as indices idx = c[:-1] # Put ones on every index s = tf.scatter_nd(tf.expand_dims(idx, 1), tf.ones_like(idx), [c[-1]]) # Use accumulated sums to generate ids for every segment return tf.cumsum(s) with tf.Graph().as_default(), tf.Session() as sess: print(sess.run(make_seq_ids([2, 3, 1]))) # [0 0 1 1 1 2] </code></pre> <p>EDIT:</p> <p>You can also implement the same thing using <a href="https://www.tensorflow.org/api_docs/python/tf/searchsorted" rel="nofollow noreferrer"><code>tf.searchsorted</code></a>, in a more similar way to what you proposed for NumPy:</p> <pre><code>import tensorflow as tf def make_seq_ids(lens): c = tf.cumsum(lens) return tf.searchsorted(c, tf.range(c[-1]), side='right') </code></pre> <p>Neither of these implementations should be a bottleneck in a TensorFlow model, so for about any practical purpose it won't matter which one you choose. However, it is interesting to note that, in my particular machine (Win 10, TF 1.12, Core i7 7700K, Titan V), the second implementation is ~1.5x slower when running on CPU and ~3.5x faster when running on GPU.</p>
python|python-3.x|tensorflow|tensorflow-datasets
2
376,315
55,031,850
Multiple parameters in MySQL "IN" query
<p>I'm clearly doing something wrong in the parameterization, but not sure what the proper syntax is. </p> <p><strong>Desired, but doesn't work: multiple conditions in where IN</strong></p> <pre><code>data = ['lol', 'hi'] query = """ select word, count(1) from table where word in (%(ids)s) group by 1""" pandas.read_sql_query(sql=query, con=db_engine, params={'ids':data}) </code></pre> <p><em>Output:</em></p> <pre><code>InternalError: (pymysql.err.InternalError) (1241, 'Operand should contain 1 column(s)') [SQL: "select word, count(1) from table where word in (%(ids)s) group by 1 "] [parameters: {'ids': ('lol', 'hi')}] </code></pre> <p><strong>Not desired, but works: single condition in where IN</strong> (it's fine with a list of length 1)</p> <pre><code>data = ['lol'] query = """ select word, count(1) from table where word in (%(ids)s) group by 1""" pandas.read_sql_query(sql=query, con=db_engine, params={'ids':data}) </code></pre>
<p>Remove the brackets around the placeholder. As it is, the query compares <code>word</code> against <code>(('lol', 'hi'))</code> after the parameter substitution done by pymysql, or in other words a scalar against a tuple. A list of length 1 is fine because the result is <code>(('hi'))</code> and SQL actually treats the scalar <code>word</code> as a 1-column row when comparing, which the perhaps slightly non-obvious error hints at.</p> <p>So the query should look like:</p> <pre><code>query = """ select word, count(1) from table where word in %(ids)s group by 1 """ </code></pre>
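<p>For completeness, the call then stays as before; pandas/pymysql render the sequence as a parenthesized list themselves (a sketch):</p> <pre><code>data = ['lol', 'hi']
counts = pandas.read_sql_query(sql=query, con=db_engine,
                               params={'ids': tuple(data)})
</code></pre>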
python|mysql|pandas|sqlalchemy|pymysql
1
376,316
55,131,799
How to preserve datatype in DataFrame from an sklearn Transform (Imputer)
<p>I have the data below.</p> <pre><code>+----+-------------+----------+--------+------+-------+-------+---------+ | ID | PassengerId | Survived | Pclass | Age | SibSp | Parch | Fare | +----+-------------+----------+--------+------+-------+-------+---------+ | 0 | 1 | 0 | 3 | 22.0 | 1 | 0 | 7.2500 | | 1 | 2 | 1 | 1 | 38.0 | 1 | 0 | 71.2833 | | 2 | 3 | 1 | 3 | 26.0 | 0 | 0 | 7.9250 | | 3 | 4 | 1 | 1 | 35.0 | 1 | 0 | 53.1000 | | 4 | 5 | 0 | 3 | 35.0 | 0 | 0 | 8.0500 | | 5 | 6 | 0 | 3 | NaN | 0 | 0 | 8.4583 | +----+-------------+----------+--------+------+-------+-------+---------+ </code></pre> <p>After the transformation (via imputation), the datatypes presumably change from int/bool into floats.</p> <pre><code>+----+-------------+----------+--------+-----------+-------+-------+---------+ | ID | PassengerId | Survived | Pclass | Age | SibSp | Parch | Fare | +----+-------------+----------+--------+-----------+-------+-------+---------+ | 0 | 1.0 | 0.0 | 3.0 | 22.000000 | 1.0 | 0.0 | 7.2500 | | 1 | 2.0 | 1.0 | 1.0 | 38.000000 | 1.0 | 0.0 | 71.2833 | | 2 | 3.0 | 1.0 | 3.0 | 26.000000 | 0.0 | 0.0 | 7.9250 | | 3 | 4.0 | 1.0 | 1.0 | 35.000000 | 1.0 | 0.0 | 53.1000 | | 4 | 5.0 | 0.0 | 3.0 | 35.000000 | 0.0 | 0.0 | 8.0500 | | 5 | 6.0 | 0.0 | 3.0 | 28.000000 | 0.0 | 0.0 | 8.4583 | +----+-------------+----------+--------+-----------+-------+-------+---------+ </code></pre> <p>My code is below:</p> <pre><code>import pandas as pd import numpy as np #https://www.kaggle.com/shivamp629/traincsv/downloads/traincsv.zip/1 data = pd.read_csv("train.csv") data2 = data[['PassengerId', 'Survived','Pclass','Age','SibSp','Parch','Fare']].copy() from sklearn.preprocessing import Imputer fill_NaN = Imputer(missing_values=np.nan, strategy='median', axis=0) data2_im = pd.DataFrame(fill_NaN.fit_transform(data2), columns = data2.columns) data2_im </code></pre> <p>Is there a way to preserve the datatypes? Thanks for any help.</p>
<p>The dtypes cannot be preserved, because <code>sklearn</code> extracts the underlying data from <code>data2</code> before transforming and homogenises the dtypes to float for performance reasons.</p> <p>You can always reinstate the initial dtypes using <code>astype</code>:</p> <pre><code>v = fill_NaN.fit_transform(data2) df = pd.DataFrame(v, columns=data2.columns).astype(data2.dtypes.to_dict()) df PassengerId Survived Pclass Age SibSp Parch Fare 0 1 0 3 22.0 1 0 7.2500 1 2 1 1 38.0 1 0 71.2833 2 3 1 3 26.0 0 0 7.9250 3 4 1 1 35.0 1 0 53.1000 4 5 0 3 35.0 0 0 8.0500 5 6 0 3 35.0 0 0 8.4583 </code></pre>
python|pandas|numpy|scikit-learn
1
376,317
54,800,236
correct use of lambda function with pandas
<p>I want to apply a function to a DataFrame to create a new dataframe of mean values, using a lambda, and I'm getting this error:</p> <p>TypeError: ("Argument 'real' has incorrect type (expected numpy.ndarray, got Series)", u'occurred at index 2018-01-02 00:00:00')</p> <p>Here is my data:</p> <pre><code> AA AAPL FB GOOG TSLA Date 2018-01-02 55.169998 168.987320 181.419998 1065.000000 320.529999 2018-01-03 54.500000 168.957886 184.669998 1082.479980 317.250000 2018-01-04 54.700001 169.742706 184.330002 1086.400024 314.619995 2018-01-05 54.090000 171.675278 186.850006 1102.229980 316.579987 2018-01-08 55.000000 171.037628 188.279999 1106.939941 336.410004 2018-01-09 54.200001 171.018005 187.869995 1106.260010 333.690002 </code></pre> <p>and this is what I'm trying so far:</p> <pre><code>data = pd.read_csv('help.csv', parse_dates=True, index_col=0) sma20 = data.apply(lambda x: ta.SMA(x, 20), axis=0) print(sma20.tail()) </code></pre>
<p>In more recent versions of pandas, you can supply a <code>raw=True</code> argument if you want <code>apply</code> to pass an <code>ndarray</code> to your function.</p> <pre><code># data.apply(lambda x: ta.SMA(x, 20), axis=0, raw=True) # Same as, data.apply(ta.SMA, axis=0, raw=True, args=(20, )) </code></pre> <p>PS: you don't need the <code>lambda</code>.</p>
python|pandas|lambda|ta-lib
2
376,318
54,910,321
Allow overflow for numpy types
<p>I'm trying to get the "normal" overflow/underflow behavior of C-type languages in Python. To my surprise, a <code>RuntimeWarning</code> is raised when I'm trying to get this behavior. Example:</p> <pre class="lang-py prettyprint-override"><code>np.uint8(255) + np.uint8(1) &gt;&gt;&gt; RuntimeWarning: overflow encountered in ubyte_scalars </code></pre> <p>Is there any way to simulate the desired behavior, i.e., that 255+1 gives 0?</p> <p>I tried the docs but cannot find this behavior documented.</p>
<p>I believe numpy does give you the correct behavior.</p> <pre><code>In [1]: np.uint8(255) + np.uint8(1) /usr/bin/ipython:1: RuntimeWarning: overflow encountered in ubyte_scalars #!/usr/bin/python2 Out[1]: 0 </code></pre> <p>You can suppress the warning by running:</p> <pre><code>In [1]: np.seterr(over='ignore') Out[1]: {'divide': 'warn', 'invalid': 'warn', 'over': 'warn', 'under': 'ignore'} In [2]: np.uint8(255) + np.uint8(1) Out[2]: 0 </code></pre>
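<p>A scoped alternative, if you don't want to change the error state globally, is the <code>np.errstate</code> context manager (a small sketch):</p> <pre><code>import numpy as np

# the warning is silenced only inside the block
with np.errstate(over='ignore'):
    print(np.uint8(255) + np.uint8(1))   # 0
</code></pre>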
python|numpy|overflow
2
376,319
55,093,574
Fill NaN's within 1 column of a df via lookup to another df via pandas
<p>I've seen various versions of this question but none of them seem to fit with what I am attempting to do; here's my data:</p> <p>Here's the df with the <code>NaN</code>s:</p> <pre><code>df = pd.DataFrame({"A": ["10023", "10040", np.nan, "12345", np.nan, np.nan, "10033", np.nan, np.nan], "B": [",", "17,-6", "19,-2", "17,-5", "37,-5", ",", "9,-10", "19,-2", "2,-5"], "C": ["small", "large", "large", "small", "small", "large", "small", "small", "large"]}) A B C 0 10023 , small 1 10040 17,-6 large 2 NaN 19,-2 large 3 12345 17,-5 small 4 NaN 37,-5 small 5 NaN , large 6 10033 9,-10 small 7 NaN 19,-2 small 8 NaN 2,-5 large </code></pre> <p>Next I have the lookup df called <code>df2</code>:</p> <pre><code>df2 = pd.DataFrame({"B": ['17,-5', '19,-2', '37,-5', '9,-10'], "A": ["10040", "54321", "12345", "10033"]}) B A 0 17,-5 10040 1 19,-2 54321 2 37,-5 12345 3 9,-10 10033 </code></pre> <p>I would like to fill in the <code>NaN</code>s of column <code>A</code> on <code>df</code> by looking up column <code>df2.B</code> and returning <code>df2.A</code>, such that the resulting <code>df</code> looks like this:</p> <pre><code> A B C 0 10023 , small 1 10040 17,-6 large 2 54321 19,-2 large 3 10040 17,-5 small 4 12345 37,-5 small 5 NaN , large 6 10033 9,-10 small 7 54321 19,-2 small 8 NaN 2,-5 large </code></pre> <p>Important caveats:</p> <ol> <li>The <code>df</code>s do not have matching indexes</li> <li>The contents of <code>df.A</code> and <code>df2.A</code> are not unique</li> <li>The rows of <code>df2</code> do make up unique pairs.</li> <li>Assume that there are more columns, not shown, with <code>NaN</code>s.</li> </ol> <p>Using pandas, the rows of interest on <code>df</code> would be found (I think) via: <code>df.loc[df['A'].isnull(),]</code>. <a href="https://stackoverflow.com/questions/29293369/pandas-fillna-with-a-lookup-table">This</a> answer seemed promising but I'm unclear where <code>df1</code> in that example comes from. My actual data set is much, much larger than this and I'll have to replace several columns in this way.</p>
<p>The <code>map</code> method from Wen-Ben will be faster, but here's another way you can solve this problem, just for your convenience and knowledge.</p> <p>You can use <code>pd.merge</code>, because this is basically a <code>join</code> problem. After the merge, we <code>fillna</code> and drop the columns we don't need.</p> <pre><code>df_final = pd.merge(df, df2, on='B', how='left', suffixes=['_1','_2']) df_final['A'] = df_final.A_1.fillna(df_final.A_2) df_final.drop(['A_1', 'A_2'], axis=1, inplace=True) print(df_final) B C A 0 , small 10023 1 17,-6 large 10040 2 19,-2 large 54321 3 17,-5 small 12345 4 37,-5 small 12345 5 , large NaN 6 9,-10 small 10033 7 19,-2 small 54321 8 2,-5 large NaN </code></pre>
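<p>For reference, the <code>Series.map</code> approach mentioned above looks roughly like this (a sketch; it relies on <code>df2.B</code> being unique, per caveat 3):</p> <pre><code># build a B -&gt; A lookup from df2, then fill only the missing values
lookup = df2.set_index('B')['A']
df['A'] = df['A'].fillna(df['B'].map(lookup))
</code></pre>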
pandas|numpy|dataframe|missing-data|fillna
1
376,320
54,904,954
Best method to identify and replace outlier for Salary column in python
<p>What is the best method to identify and replace outliers for the ApplicantIncome, CoapplicantIncome, LoanAmount and Loan_Amount_Term columns in pandas/Python?</p> <p>I tried IQR with a seaborn boxplot: I identified the outliers, replaced them with NaN, and after that filled the NaN records with the mean of ApplicantIncome.</p> <p>I also tried grouping by a combination of the columns below, e.g. Gender, Education, Self_Employed, Property_Area.</p> <p>My dataframe has these columns:</p> <pre><code>Loan_ID LP001357 Gender Male Married NaN Dependents NaN Education Graduate Self_Employed No ApplicantIncome 3816 CoapplicantIncome 754 LoanAmount 160 Loan_Amount_Term 360 Credit_History 1 Property_Area Urban Loan_Status Y </code></pre>
<h2>Outliers</h2> <p>Just like missing values, your data might also contain values that diverge heavily from the big majority of your other data. These data points are called “outliers”. To find them, you can check the distribution of your single variables by means of a box plot or you can make a scatter plot of your data to identify data points that don’t lie in the “expected” area of the plot.</p> <p>The causes for outliers in your data might vary, going from system errors to people interfering with the data through data entry or data processing, but it’s important to consider the effect that they can have on your analysis: they will change the result of statistical tests such as standard deviation, mean or median, they can potentially decrease the normality and impact the results of statistical models, such as regression or ANOVA.</p> <p>To deal with outliers, you can either delete, transform, or impute them: the decision will again depend on the data context. That’s why it’s again important to understand your data and identify the cause for the outliers:</p> <ul> <li>If the outlier value is due to data entry or data processing errors, you might consider deleting the value.</li> <li>You can transform the outliers by assigning weights to your observations or use the natural log to reduce the variation that the outlier values in your data set cause.</li> <li>Just like the missing values, you can also use imputation methods to replace the extreme values of your data with median, mean or mode values.</li> </ul> <p>You can use the functions that were described in the above section to deal with outliers in your data.</p> <p>Following links will be useful for you:</p> <p><a href="https://realpython.com/python-data-cleaning-numpy-pandas/" rel="nofollow noreferrer">Python data cleaning</a></p> <p><a href="https://towardsdatascience.com/ways-to-detect-and-remove-the-outliers-404d16608dba" rel="nofollow noreferrer">Ways to detect and remove the outliers</a></p>
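<p>A minimal IQR-based sketch of the delete-then-impute route, assuming <code>df</code> is the loan dataframe (repeat per numeric column):</p> <pre><code>import numpy as np

col = 'ApplicantIncome'
q1, q3 = df[col].quantile([0.25, 0.75])
iqr = q3 - q1

# flag values outside the 1.5 * IQR fences as outliers
outliers = (df[col] &lt; q1 - 1.5 * iqr) | (df[col] &gt; q3 + 1.5 * iqr)

# replace them with NaN, then impute with the median
df.loc[outliers, col] = np.nan
df[col] = df[col].fillna(df[col].median())
</code></pre>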
sklearn-pandas|data-science-experience
1
376,321
54,993,183
Does a random seed set via numpy.random.seed maintain across submodules?
<p>If I set a seed for my RNG, e.g. <code>numpy.random.seed(0)</code>, and I call a submodule, will the RNG's state be maintained?</p> <p>e.g.</p> <pre><code># some_lib.py import numpy def do_thing(): return numpy.random.rand() </code></pre> <pre><code># parent module import some_lib numpy.random.seed(0) ... some_lib.do_thing() </code></pre> <p>Will the numpy state set by the parent be used by the child?</p>
<p>The seed is a global value for all uses of <code>numpy</code>. So as long as the child module doesn't reseed it, or pull values from it non-deterministically (effectively adjusting it to a new seed based on advancing the old), then the seed will be preserved.</p> <p>Most PRNG libraries behave this way, because the alternative is pretty useless; for reproducible tests, you <em>want</em> to be able to set the seed once, and have everything rely on that stable seed. If there was a per-module seed, the testing module couldn't seed the PRNG used by the module being tested.</p>
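<p>A minimal sketch demonstrating this, with a stand-in for the submodule function:</p> <pre><code>import numpy as np

def do_thing():            # stands in for some_lib.do_thing
    return np.random.rand()

np.random.seed(0)
a = np.random.rand()
np.random.seed(0)
b = do_thing()
assert a == b              # same draw: the global RNG state is shared
</code></pre>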
python|numpy|random
5
376,322
55,134,142
How to Fix a Column Using a For Loop and Placing into Another Column Using python pandas?
<p>I pasted the code below. I need to fix the accountant names, making each variation the same (it doesn't matter which variation, as long as they all match). I figured there were 2 options: 1) using a dictionary, or 2) trying to fix the name by matching the first 3 letters of the Accountant Name.</p> <pre><code>import pandas as pd import numpy as np data = {'Accountant Name': ['Sindman Traub LLP', 'Sindman Traub LLC', 'Sindman Traub PLLC', 'McCrumb &amp; Assoc.', 'McCrumb &amp; Associates LLC', 'Lee &amp; Mike', 'Lee &amp; Mike LLC', 'Lee &amp; Mike Inc','Sindman Traub Corp'], 'Cost':[10, 9, 15, 4, 13, 25, 2, 89, 44]} df = pd.DataFrame(data) df['AverageCost'] =np.nan df['Fixed Accountant Name'] =np.nan df = df.sort_values(by=['Accountant Name'], ascending = True) </code></pre> <p>Output =</p> <pre><code>outputdata = {'Accountant Name':['Sindman Traub LLP', 'Sindman Traub LLC', 'Sindman Traub PLLC', 'McCrumb &amp; Assoc.', 'McCrumb &amp; Associates LLC', 'Lee &amp; Mike', 'Lee &amp; Mike LLC', 'Lee &amp; Mike Inc','Sindman Traub Corp'], 'Cost':[10, 9, 15, 4, 13, 25, 2, 89, 44], 'Fixed Accountant Name':['Sindman Traub', 'Sindman Traub','Sindman Traub', 'McCrumb and Associates', 'McCrumb and Associates', 'Lee and Mike','Lee and Mike', 'Lee and Mike', 'Sindman Traub'], 'AverageCost':[19.500000, 19.500000,19.500000,8.500000,8.500000, 38.666667,38.666667,38.666667,19.500000]} outputdf = pd.DataFrame(outputdata) </code></pre> <p><a href="https://i.stack.imgur.com/KQ1xZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KQ1xZ.png" alt="enter image description here"></a></p>
<p>Not sure what you're asking, so please post expected output.</p> <p>Maybe this?:</p> <pre><code>df['Fixed Accountant Name'] = [x[:3] for x in df['Accountant Name']] df.groupby('Fixed Accountant Name')['Cost'].mean() Fixed Accountant Name Lee 38.666667 McC 8.500000 Sin 19.500000 </code></pre>
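<p>If the goal is the expected output above, where every row also keeps its group's average cost, <code>transform</code> broadcasts the mean back onto the original rows (a sketch reusing the same 3-letter key):</p> <pre><code># broadcast each group's mean cost back onto its member rows
df['Fixed Accountant Name'] = [x[:3] for x in df['Accountant Name']]
df['AverageCost'] = df.groupby('Fixed Accountant Name')['Cost'].transform('mean')
</code></pre>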
python|pandas|dataframe
0
376,323
54,950,980
Problem with removing redundancy from a file
<p>I've got a dataset with two columns: one with categorical values (<code>State2</code>), and another (<code>State</code>) that contains the same values, only in binary.<br> I used <code>OneHotEncoding</code>.</p> <pre><code>import pandas as pd mydataset = pd.read_csv('fieldprotobackup.binetflow') mydataset.drop_duplicates(['Proto2','Proto'], keep='first') mydataset.to_csv('fieldprotobackup.binetflow', columns=['Proto2','Proto'], index=False) </code></pre> <p><a href="https://i.stack.imgur.com/1huWt.png" rel="nofollow noreferrer">Dataset</a></p> <p>I'd like to remove all redundancies from the file. While researching, I found the method <code>df.drop_duplicates</code>, but it's not working for me.</p>
<p>You either need to add the <code>inplace=True</code> parameter, or you need to capture the returned dataframe:</p> <pre><code>mydataset.drop_duplicates(['Proto2','Proto'], keep='first', inplace=True) </code></pre> <p>or</p> <pre><code>no_duplicates = mydataset.drop_duplicates(['Proto2','Proto'], keep='first') </code></pre> <p>Always a good idea to check the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer">documentation</a> when something isn't working as expected.</p>
python|pandas|file|duplicates
2
376,324
55,010,146
Dealing with large sum of distinct values in dataframe columns
<p>Newbie to Python and to the world of data analytics with Python. I am working on practice data where one of the columns has 87 distinct values and another column has 888 distinct values; I am thinking of deleting the latter column. I just don't understand how to deal with these columns. Do I group these columns or delete them? If I group, then how do I go about it? Really appreciate your ideas. @Toby Petty @Vaishali</p> <p>Ex:</p> <p><code>import pandas as pd import numpy as np</code></p> <p><code>print("Count of distinct entries for car:", len(set(car_sales['car'])))<br> print("Distinct entries for car:", set(car_sales['car']))</code></p> <p><code>Count of distinct entries for car: 87 Distinct entries for car: {'Lamborghini', 'ËUAZ', 'Daewoo', 'Jeep', 'Ferrari', 'Bentley', 'Mercury', 'MINI', 'Acura', 'Land Rover', 'Aston Martin', 'Fisker', 'Dodge', 'Fiat', 'MG', 'Samsung', 'Rolls-Royce', 'SsangYong', 'Hyundai', 'Lincoln', 'Ford', 'Moskvich-Izh', 'Samand', 'Audi', 'Dadi', 'Geely', 'Dacia', 'Daihatsu', 'Maserati', 'Volkswagen', 'Peugeot', 'Volvo', 'Nissan', 'SMA', 'Hummer', 'Porsche', 'Subaru', 'Alfa Romeo', 'Saab', 'Buick', 'Mazda', 'Mercedes-Benz', 'Lexus', 'Hafei', 'Renault', 'Suzuki', 'Chrysler', 'BYD', 'Moskvich-AZLK', 'Jaguar', 'Smart', 'ZAZ', 'Groz', 'Infiniti', 'TATA', 'Lifan', 'ZX', 'Isuzu', 'Rover', 'Honda', 'Mitsubishi', 'Cadillac', 'FAW', 'Aro', 'Wartburg', 'GMC', 'Great Wall', 'Lancia', 'Bogdan', 'Kia', 'BMW', 'JAC', 'Tesla', 'Seat', 'Barkas', 'VAZ', 'Huanghai', 'Toyota', 'Citroen', 'Other-Retro', 'Chery', 'Opel', 'Chevrolet', 'Skoda', 'UAZ', 'Changan', 'GAZ'} </code></p>
<p>What exactly is your question?</p> <p><strong>Update</strong>: After some clarification/guessing, I am going to assume that the question is about two issues:</p> <ol> <li>How to limit a <code>groupby</code> to only the top <code>k</code> groups (by some aggregate of choice).</li> <li>How to summarize columns, including some non-numeric ones.</li> </ol> <p>For starters, <code>sns</code> contains some beautiful datasets that are very handy for such questions, for example, below we will use 'mpg', which contains some car and mileage information.</p> <pre><code>import pandas as pd import numpy as np import seaborn as sns df = sns.load_dataset('mpg') </code></pre> <p>We are going to split the supplied <code>name</code> into a <code>brand</code> and <code>model</code>:</p> <pre><code>df[['brand', 'model']] = pd.DataFrame(df.name.str.split(' ', n=1).values.tolist()) df.head(3) Out[]: mpg cylinders displacement horsepower weight acceleration \ 0 18.0 8 307.0 130.0 3504 12.0 1 15.0 8 350.0 165.0 3693 11.5 2 18.0 8 318.0 150.0 3436 11.0 model_year origin name brand model 0 70 usa chevrolet chevelle malibu chevrolet chevelle malibu 1 70 usa buick skylark 320 buick skylark 320 2 70 usa plymouth satellite plymouth satellite </code></pre> <p>For later, we'll add a column <code>n</code> which we'll use to count how many entries we have for our stats:</p> <pre><code>df['n'] = 1 </code></pre> <p>Look for top 5 groups, according to maximum <code>acceleration</code> (the OP wants to use total sales, so in his case we would use <code>sales.sum()</code> instead of <code>acceleration.max()</code>, but here we don't have sales figures). The main point is to build an index of the groups we want to report on (and rename the others as 'Others'). We turn that index, that we call <code>idx</code>, into a list of tuples for easier subsetting.</p> <pre><code>idx = df.groupby(['brand', 'model']).acceleration.max().sort_values(ascending=False).head(5).index.to_list() idx Out[]: [('peugeot', '504'), ('vw', 'pickup'), ('vw', 'dasher (diesel)'), ('volkswagen', 'type 3'), ('chevrolet', 'chevette')] </code></pre> <p>Now build a boolean selector <code>top10</code>, which is <code>True</code> for the selected groups.</p> <pre><code>top10 = df.set_index(['brand', 'model']).index.isin(idx) </code></pre> <p>Rename the others:</p> <pre><code>df.loc[~top10, 'brand'] = 'Other' df.loc[~top10, 'model'] = '' </code></pre> <p>Now, for columns that are not numeric, we choose to report the majority value (the most frequent within the group).</p> <pre><code>from collections import Counter def majority(*args): return Counter(*args).most_common(1)[0][0] # example majority('z a b a a c d'.split()) Out[]: 'a' </code></pre> <p>Finally, we define a dict of aggregators to be used for the various columns:</p> <pre><code># numeric: use mean desired = {k:'mean' for k in df.columns if np.issubdtype(df[k], np.number)} # simplified: desired = {k:'mean' for k in ['mpg', 'horsepower', 'weight']} # non-numeric: use majority desired.update({'origin': majority}) # also report the size of each group desired.update({'n': 'sum'}) </code></pre> <p>Now, do the groupby and aggregate:</p> <pre><code>df.groupby(['brand', 'model']).agg(desired) Out[]: mpg horsepower weight origin n brand model Other 23.340052 105.540682 2984.651163 usa 387 chevrolet chevette 30.400000 63.250000 2090.250000 usa 4 peugeot 504 23.550000 83.500000 3022.250000 europe 4 volkswagen type 3 23.000000 54.000000 2254.000000 europe 1 vw dasher (diesel) 43.400000 48.000000 2335.000000 europe 1 
pickup 44.000000 52.000000 2130.000000 europe 1 </code></pre>
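<p>If, as for the OP's data, the ranking should come from total sales instead, only the <code>idx</code> line changes. A sketch, with <code>sales</code> as a hypothetical column name (the mpg data has none):</p> <pre><code>idx = df.groupby(['brand', 'model'])['sales'].sum().nlargest(5).index.to_list()
</code></pre>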
python|pandas
1
376,325
54,748,330
Changing variable in Lambda layer in pretrained model?
<p>I have imported a pytorch model to keras using pytorch2keras and have made the input flexible, from [None,3,224,224] to [None,3,None,None]. Unfortunately, in the original model there is a Lambda layer reducing the output of a convolutional layer by 1, e.g. [None,3,111,111] -> [None,3,110,110].</p> <p>How can I specify in my config that I would like to do [None,3,None,None] -> [None,3,None-1,None-1]?</p> <p>The shape of the Lambda layer is hardcoded here (see below line: (3,0,110)):</p> <pre><code>[..., {'name': 'lambda_2', 'class_name': 'Lambda', 'config': {'name': 'lambda_2', 'trainable': True, 'function': ('4wQAAAAAAAAABAAAAAYAAABTAAAAc34AAAB8AWQBawJyFHwAfAJ8A4UCGQBTAHwBZAJrAnIwfABk\nAGQAhQJ8AnwDhQJmAhkAUwB8AWQDawJyUnwAZABkAIUCZABkAIUCfAJ8A4UCZgMZAFMAfAFkBGsC\ncnp8AGQAZACFAmQAZACFAmQAZACFAnwCfAOFAmYEGQBTAGQAUwApBU7pAAAAAOkBAAAA6QIAAADp\nAwAAAKkAKQTaAXjaBGF4aXPaBXN0YXJ02gNlbmRyBQAAAHIFAAAA+j4vdXNyL2xvY2FsL2xpYi9w\neXRob24zLjYvZGlzdC1wYWNrYWdlcy9weXRvcmNoMmtlcmFzL2xheWVycy5wedoMdGFyZ2V0X2xh\neWVypgQAAHMQAAAAAAEIAQwBCAEUAQgBGgEIAQ==\n', (3,0,110), None), 'function_type': 'lambda', 'output_shape': None, 'output_shape_type': 'raw', 'arguments': {}}, 'inbound_nodes': [[['lambda_1', 0, 0, {}]]]}, ..] </code></pre>
<p>You could try to replace the function with <code>lambda x: x[:,:,:-1,:-1]</code>. (If you decide later to use channels_last, then <code>lambda x: x[:,:-1,:-1]</code>.)</p> <p>Not sure what to do with the argument <code>(3,0,110)</code>, but it seems unnecessary.</p>
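<p>A minimal sketch of that replacement when rebuilding the layer (assuming channels_first data and a Keras <code>Lambda</code> layer; the layer name is taken from the config above):</p> <pre><code>from keras.layers import Lambda

# trims the last row and column regardless of the (unknown) spatial size,
# so [None, 3, H, W] becomes [None, 3, H-1, W-1]
trim = Lambda(lambda x: x[:, :, :-1, :-1], name='lambda_2')
</code></pre>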
python|tensorflow|keras
0
376,326
55,110,049
Creating a '0-1' column based on a list
<p>Let's say that this is the head of my df:</p> <pre><code> Team Win_pct_1 Win_pct_2 0 Memphis 0.6 0.5 1 Miami 0.4 0.6 2 Phoenix 0.7 0.4 3 Dallas 0.6 0.3 4 Boston 0.4 0.1 </code></pre> <p>I have created a list of teams, for example: </p> <pre><code>list = ['Miami','Dallas'] </code></pre> <p>1) Then I want to add a column to my df based on that list. If the <code>df['Team']</code> is in the list, the new column will show 1, else 0. So in the end I will get something like:</p> <pre><code> Team Win_pct_1 Win_pct_2 New_column 0 Memphis 0.6 0.5 0 1 Miami 0.4 0.6 1 2 Phoenix 0.7 0.4 0 3 Dallas 0.6 0.3 1 4 Boston 0.4 0.1 0 </code></pre> <p>I was considering using <code>for index, row in df.iterrows():</code> or <code>if df.Team.isin(list)</code> but I don't know how to make it work. </p> <p>2) Once I add the new column, I want to create a relplot:</p> <pre><code>sns.relplot(data=df, x='Win_pct_1', y='Win_pct_2', hue='New_column') </code></pre> <p>And I would like to know whether there is a fast way to add annotations to such a plot based on my list (they can be simple annotations just above the right dot, no arrows), or whether it is impossible in Python (in R that is pretty easy) and I have to create as many <code>plt.annotate</code> calls as necessary. </p>
<p>For your first question, you can use <code>np.where</code> (a vectorized if/else) together with <code>isin</code> (this needs <code>import numpy as np</code>):</p> <pre><code>df['New_column'] = np.where(df['Team'].isin(my_list), 1, 0) </code></pre> <p>Another alternative:</p> <pre><code>df['New_column'] = df['Team'].isin(my_list).astype(int) </code></pre>
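<p>For the annotation part of the question, a minimal sketch (assuming the column names from the question and the <code>New_column</code> created above):</p> <pre><code>import matplotlib.pyplot as plt
import seaborn as sns

sns.relplot(data=df, x='Win_pct_1', y='Win_pct_2', hue='New_column')
# annotate only the teams from the list, slightly above each point
for _, row in df[df['Team'].isin(my_list)].iterrows():
    plt.annotate(row['Team'], (row['Win_pct_1'], row['Win_pct_2']),
                 textcoords='offset points', xytext=(0, 5))
plt.show()
</code></pre>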
python|pandas|matplotlib|seaborn
0
376,327
54,987,518
Trying to join two pandas dataframes but get "ValueError: You are trying to merge on object and int64 columns."?
<p>I have two pandas dataframes: <code>seren1</code> and <code>bbox</code>. And I want to perform an inner join of them on column named <code>filepath</code>.</p> <pre><code>seren1[["filepath", "label"]].join(bbox[["filepath", "label"]], on="filepath", how="inner", lsuffix='_caller', rsuffix='_other') </code></pre> <p>gives the error:</p> <pre><code>ValueError Traceback (most recent call last) &lt;ipython-input-74-c001a7adc7cd&gt; in &lt;module&gt; ----&gt; 1 seren1[["filepath", "label"]].join(bbox[["filepath", "label"]], on="filepath", how="inner", lsuffix='_caller', rsuffix='_other') /projects/community/py-data-science-stack/5.1.0/kp807/envs/fastai/lib/python3.7/site-packages/pandas/core/frame.py in join(self, other, on, how, lsuffix, rsuffix, sort) 6822 # For SparseDataFrame's benefit 6823 return self._join_compat(other, on=on, how=how, lsuffix=lsuffix, -&gt; 6824 rsuffix=rsuffix, sort=sort) 6825 6826 def _join_compat(self, other, on=None, how='left', lsuffix='', rsuffix='', /projects/community/py-data-science-stack/5.1.0/kp807/envs/fastai/lib/python3.7/site-packages/pandas/core/frame.py in _join_compat(self, other, on, how, lsuffix, rsuffix, sort) 6837 return merge(self, other, left_on=on, how=how, 6838 left_index=on is None, right_index=True, -&gt; 6839 suffixes=(lsuffix, rsuffix), sort=sort) 6840 else: 6841 if on is not None: /projects/community/py-data-science-stack/5.1.0/kp807/envs/fastai/lib/python3.7/site-packages/pandas/core/reshape/merge.py in merge(left, right, how, on, left_on, right_on, left_index, right_index, sort, suffixes, copy, indicator, validate) 45 right_index=right_index, sort=sort, suffixes=suffixes, 46 copy=copy, indicator=indicator, ---&gt; 47 validate=validate) 48 return op.get_result() 49 /projects/community/py-data-science-stack/5.1.0/kp807/envs/fastai/lib/python3.7/site-packages/pandas/core/reshape/merge.py in __init__(self, left, right, how, on, left_on, right_on, axis, left_index, right_index, sort, suffixes, copy, indicator, validate) 531 # validate the merge keys dtypes. We may need to coerce 532 # to avoid incompat dtypes --&gt; 533 self._maybe_coerce_merge_keys() 534 535 # If argument passed to validate, /projects/community/py-data-science-stack/5.1.0/kp807/envs/fastai/lib/python3.7/site-packages/pandas/core/reshape/merge.py in _maybe_coerce_merge_keys(self) 978 (inferred_right in string_types and 979 inferred_left not in string_types)): --&gt; 980 raise ValueError(msg) 981 982 # datetimelikes must match exactly ValueError: You are trying to merge on object and int64 columns. If you wish to proceed you should use pd.concat </code></pre> <p>But if I convert them to series join:</p> <pre><code>import numpy as np pd.Series(np.intersect1d(seren1["filepath"].values,bbox["filepath"].values)) </code></pre> <p>It works fine:</p> <pre><code>0 S1/B04/B04_R1/S1_B04_R1_PICT0006 1 S1/B04/B04_R1/S1_B04_R1_PICT0007 2 S1/B04/B04_R1/S1_B04_R1_PICT0008 3 S1/B04/B04_R1/S1_B04_R1_PICT0013 4 S1/B04/B04_R1/S1_B04_R1_PICT0039 5 S1/B04/B04_R1/S1_B04_R1_PICT0040 6 S1/B04/B04_R1/S1_B04_R1_PICT0041 7 S1/B05/B05_R1/S1_B05_R1_PICT0056 ...... </code></pre> <p>Type Checking:</p> <pre><code>seren1.dtypes filepath object timestamp object label object dtype: object bbox.dtypes filepath object label object X int64 Y int64 W int64 H int64 dtype: object all (seren1.filepath.apply(lambda x: isinstance(x, str)) ) True all (bbox.filepath.apply(lambda x: isinstance(x, str)) ) True </code></pre> <p>What goes wrong?</p>
<p>I was able to get around this error as follows:</p> <p>Suppose you are trying to join <code>df2</code> to <code>df1</code> on a column <code>'Column'</code> that exists in both data frames. For <code>join</code> to work correctly, you have to <code>set_index</code> on that column in the data frame being joined: <code>df1.join(df2.set_index('Column'), on='Column')</code>.</p> <p>The error appears because <code>join</code> matches the <code>on</code> column of the left frame against the <em>index</em> of the right frame; without <code>set_index</code>, the right frame's default integer index (int64) gets compared to the object-dtype <code>filepath</code> column, which triggers the dtype-mismatch message.</p>
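<p>Applied to the frames in the question, a sketch could look like this (note that <code>label</code> exists on both sides, so the suffixes are still needed):</p> <pre><code>result = seren1[['filepath', 'label']].join(
    bbox[['filepath', 'label']].set_index('filepath'),
    on='filepath', how='inner', lsuffix='_caller', rsuffix='_other')
</code></pre>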
python|pandas
0
376,328
55,064,890
How to print a plain text constant in tensorflow eager execution mode
<p>I have initialized a constant in tensorflow:</p> <pre><code>hello = tf.constant('hello') </code></pre> <p>In normal mode, <code>print(sess.run(hello).decode())</code> outputs the plain text in a constant Tensor.</p> <p>In eager execution mode however, the code above does not work.</p> <p>How can I print the plain text in constant Tensor hello?</p> <p>This question is NOT about <a href="https://stackoverflow.com/q/6269765/11074017">python b'str'</a> or <a href="https://stackoverflow.com/q/40904979/11074017">tf sess.run</a></p>
<p>Convert the tensor to a numpy array and then decode it:</p> <pre><code>print(hello.numpy().decode()) </code></pre>
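<p>As a complete sketch (TF 1.x syntax; in TF 2.x eager execution is on by default and the <code>enable_eager_execution</code> call goes away):</p> <pre><code>import tensorflow as tf
tf.enable_eager_execution()

hello = tf.constant('hello')
print(hello.numpy().decode())  # prints: hello
</code></pre>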
python|tensorflow
1
376,329
54,965,049
Create array of index values from list with another list python
<p>I have an array of values as well as another array of keys, and I would like to build an index from the first into the second. For example:</p> <pre><code>value_list = np.array([[2,2,3],[255,243,198],[2,2,3],[50,35,3]]) key_list = np.array([[2,2,3],[255,243,198],[50,35,3]]) MagicFunction(value_list,key_list) #result = [[0,1,0,2]] which has the same length as value_list </code></pre> <p>The solutions I have found online are not quite what I am asking for, I believe; any help would be appreciated! I have this brute-force code which produces the result, but I don't even want to test it on my actual data size:</p> <pre><code>T = np.zeros((len(value_list)), dtype = np.uint32) for i in range(len(value_list)): for j in range(len(key_list)): if sum(value_list[i] == key_list[j]) == 3: T[i] = j </code></pre>
<p>The issue is how to do this without it being terribly inefficient. I see two approaches:</p> <ol> <li><p>Use a dictionary so that the lookups will be fast. <code>numpy</code> arrays are mutable, and thus not hashable, so you'll have to convert them into, e.g., tuples to use with the dictionary.</p></li> <li><p>Use broadcasting to check <code>value_list</code> against every "key" in <code>key_list</code> in a vectorized fashion. This will at least bring the for loops out of Python, but you will still have to compare every value to every key.</p></li> </ol> <p>I'm going to assume here too that <code>key_list</code> only has unique "keys".</p> <p>Here's how you could do the first approach:</p> <pre class="lang-py prettyprint-override"><code>value_list = np.array([[2,2,3],[255,243,198],[2,2,3],[50,35,3]]) key_list = np.array([[2,2,3],[255,243,198],[50,35,3]]) key_map = {tuple(key): i for i, key in enumerate(key_list)} result = np.array([key_map[tuple(value)] for value in value_list]) result # array([0, 1, 0, 2]) </code></pre> <p>And here's the second:</p> <pre class="lang-py prettyprint-override"><code>result = np.where((key_list[None] == value_list[:, None]).all(axis=-1))[1] result # array([0, 1, 0, 2]) </code></pre> <p>Which way is faster might depend on the size of <code>key_list</code> and <code>value_list</code>. I would time both for arrays of the sizes that are typical for you.</p> <p>EDIT - as noted in the comments, the second solution doesn't appear to be entirely correct, but I'm not sure what makes it fail. Consider using the first solution instead.</p>
python|image|numpy|indexing
3
376,330
55,138,797
Expand pandas column based on the cell type
<p>I have the following dataframe:</p> <pre><code> field value 0 longitude 100 1 altitude 200 2 location China 3 date 20180303 ...... </code></pre> <p>I want to convert this dataframe into the following format:</p> <pre><code> field string_value int_value datetime_value boolean_value float_value field_type 0 longitude NA NA NA NA 100 float 1 altitude NA NA NA NA 200 float 2 location China NA NA NA NA str 3 date NA NA 20180303 NA NA datetime ...... </code></pre> <p>How could I efficiently do this? I think I can do this with <code>apply</code> but that seems slow because it's going through the dataframe row by row. Is there a faster way to do this?</p>
<p>The idea is to get the <code>type</code> of each value, convert it to a string and <code>map</code> it to a more readable form; then for the new columns use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>Series.unstack</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a> to the original:</p> <pre><code>d = {'field': ['longitude', 'altitude', 'location', 'date','check'], 'value': [100, 200.5, 'China', pd.Timestamp('20180303'), True]} df = pd.DataFrame(d) #print (df) d = {"&lt;class 'bool'&gt;":"bool", "&lt;class 'float'&gt;":"float", "&lt;class 'int'&gt;":"int", "&lt;class 'str'&gt;":"string", "&lt;class 'pandas._libs.tslibs.timestamps.Timestamp'&gt;":"datetime"} s = df['value'].apply(type).astype(str).map(d).fillna('not defined type') df = df.join(df.set_index(s, append=True)['value'].unstack()) df['field_type'] = s print (df) field value bool datetime float int \ 0 longitude 100 NaN NaN NaN 100 1 altitude 200.5 NaN NaN 200.5 NaN 2 location China NaN NaN NaN NaN 3 date 2018-03-03 00:00:00 NaN 2018-03-03 00:00:00 NaN NaN 4 check True True NaN NaN NaN string field_type 0 NaN int 1 NaN float 2 China string 3 NaN datetime 4 NaN bool </code></pre>
python|pandas|dataframe
3
376,331
54,924,265
Pandas: trying to append a row to a dataframe but it keeps overwriting the existing row
<p>My dataframe (df_UP) looks like this:</p> <pre><code> Total_trends #up #down #flat 0.05 811 326 310 175 </code></pre> <p>I am using a jupyter notebook; when I try to append a row with the following code: </p> <pre><code>d = {'Total_trends': [total.time], '#up': [good_trigger.time], '#down': [bad_trigger.time], '#flat': [flat.time]} df_UP.append(d, ignore_index=True) </code></pre> <p>It works the first time: </p> <pre><code>Total_trends #up #down #flat 0 811 326 310 175 1 [811] [326] [310] [175] </code></pre> <p>But when I run the cell again with other values in <code>d</code>, it just overwrites the existing data at row 1 with the new one. Any idea why? Thanks!</p>
<p><code>append</code> does not modify the frame in place; it returns a new <code>DataFrame</code>, so the result has to be assigned back. You can convert the dictionary to a <code>DataFrame</code> and assign back:</p> <pre><code>d = {'Total_trends': [10], '#up': [20], '#down': [3], '#flat': [0]} df_UP = df_UP.append(pd.DataFrame(d), ignore_index=True) print (df_UP) Total_trends #up #down #flat 0 811 326 310 175 1 10 20 3 0 d = {'Total_trends': [160], '#up': [270], '#down': [63], '#flat': [30]} df_UP = df_UP.append(pd.DataFrame(d), ignore_index=True) print (df_UP) Total_trends #up #down #flat 0 811 326 310 175 1 10 20 3 0 2 160 270 63 30 </code></pre>
python|pandas
1
376,332
55,095,316
How can I make a generator which iterates over 2D numpy array?
<p>I have a huge 2D numpy array which I want to retrieve in batches. The array shape is <code>(60000, 3072)</code>. I want to make a generator that gives me chunks out of this array like <code>(1000, 3072)</code>, then the next <code>(1000, 3072)</code> and so on. How can I make a generator to iterate over this array and pass me a batch of a given size?</p>
<p>Consider array <code>a</code>:</p> <pre><code>a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]) </code></pre> <p><strong><em>Option 1</em></strong><br> Use a generator</p> <pre><code>def get_every_n(a, n=2): # note: drops a trailing partial chunk if n does not divide the row count for i in range(a.shape[0] // n): yield a[n*i:n*(i+1)] for sa in get_every_n(a): print(sa) [[1 2 3] [4 5 6]] [[ 7 8 9] [10 11 12]] </code></pre> <p><strong><em>Option 2</em></strong><br> use <code>reshape</code> and <code>//</code></p> <pre><code>a.reshape(a.shape[0] // 2, -1, a.shape[1]) array([[[ 1, 2, 3], [ 4, 5, 6]], [[ 7, 8, 9], [10, 11, 12]]]) </code></pre> <p><strong><em>Option 3</em></strong><br> if you wanted groups of two rather than two groups</p> <pre><code>a.reshape(-1, 2, a.shape[1]) array([[[ 1, 2, 3], [ 4, 5, 6]], [[ 7, 8, 9], [10, 11, 12]]]) </code></pre> <p>Since you explicitly stated that you need a generator you can use option 1 as the appropriate reference.</p>
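<p>For the sizes in the question (chunks of 1000 rows out of 60000), a simple stride-based generator is another option; this minimal sketch also handles a trailing partial batch:</p> <pre><code>import numpy as np

def batches(a, n=1000):
    # yields chunks of up to n rows; the last chunk may be smaller
    for i in range(0, a.shape[0], n):
        yield a[i:i + n]

a = np.random.rand(60000, 3072)
for batch in batches(a):
    ...  # each batch has shape (1000, 3072)
</code></pre>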
python|numpy
3
376,333
55,002,918
Why does the size of tensorflow model files depend on the size of the dataset?
<p>The sizes of .index, .meta, and .data files of my saved model after training on a dataset of 10K sentences are 3KB, 58MB and 375MB respectively</p> <p>Keeping the architecture of the network same and training it on a dataset of 100K sentences, the sizes of the files are 3KB, 139MB and 860MB</p> <p>I think it suggests that the size depends on the size of the dataset. <a href="https://stackoverflow.com/a/45033500/9349697">According to this answer</a>, the size of the files should be independent of the size of the dataset as the architecture of the neural network is same. </p> <p>Why is there such a huge difference in the sizes?</p> <p>I would also like to know what more do these files contain apart from that mentioned in linked answer.</p> <p>Do these files contain information related to training history like loss values at each step, etc?</p>
<pre><code>import tensorflow as tf from tensorflow.python.training import checkpoint_utils as cp cp.list_variables('./model.ckpt-12520') </code></pre> <p>Running the above snippet gives the following output</p> <pre><code>[('Variable', []), ('decoder/attention_wrapper/attention_layer/kernel', [600, 300]), ('decoder/attention_wrapper/attention_layer/kernel/Adam', [600, 300]), ('decoder/attention_wrapper/attention_layer/kernel/Adam_1', [600, 300]), ('decoder/attention_wrapper/bahdanau_attention/attention_b', [300]), ('decoder/attention_wrapper/bahdanau_attention/attention_b/Adam', [300]), ('decoder/attention_wrapper/bahdanau_attention/attention_b/Adam_1', [300]), ('decoder/attention_wrapper/bahdanau_attention/attention_g', []), ('decoder/attention_wrapper/bahdanau_attention/attention_g/Adam', []), ('decoder/attention_wrapper/bahdanau_attention/attention_g/Adam_1', []), ('decoder/attention_wrapper/bahdanau_attention/attention_v', [300]), ('decoder/attention_wrapper/bahdanau_attention/attention_v/Adam', [300]), ('decoder/attention_wrapper/bahdanau_attention/attention_v/Adam_1', [300]), ('decoder/attention_wrapper/bahdanau_attention/query_layer/kernel', [300, 300]), ('decoder/attention_wrapper/bahdanau_attention/query_layer/kernel/Adam', [300, 300]), ('decoder/attention_wrapper/bahdanau_attention/query_layer/kernel/Adam_1', [300, 300]), ('decoder/attention_wrapper/basic_lstm_cell/bias', [1200]), ('decoder/attention_wrapper/basic_lstm_cell/bias/Adam', [1200]), ('decoder/attention_wrapper/basic_lstm_cell/bias/Adam_1', [1200]), ('decoder/attention_wrapper/basic_lstm_cell/kernel', [900, 1200]), ('decoder/attention_wrapper/basic_lstm_cell/kernel/Adam', [900, 1200]), ('decoder/attention_wrapper/basic_lstm_cell/kernel/Adam_1', [900, 1200]), ('decoder/dense/kernel', [300, 49018]), ('decoder/dense/kernel/Adam', [300, 49018]), ('decoder/dense/kernel/Adam_1', [300, 49018]), ('decoder/memory_layer/kernel', [300, 300]), ('decoder/memory_layer/kernel/Adam', [300, 300]), ('decoder/memory_layer/kernel/Adam_1', [300, 300]), ('embeddings', [49018, 300]), ('embeddings/Adam', [49018, 300]), ('embeddings/Adam_1', [49018, 300]), ('loss/beta1_power', []), ('loss/beta2_power', []), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/basic_lstm_cell/bias', [600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/basic_lstm_cell/bias/Adam', [600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/basic_lstm_cell/bias/Adam_1', [600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/basic_lstm_cell/kernel', [450, 600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/basic_lstm_cell/kernel/Adam', [450, 600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/basic_lstm_cell/kernel/Adam_1', [450, 600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/basic_lstm_cell/bias', [600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/basic_lstm_cell/bias/Adam', [600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/basic_lstm_cell/bias/Adam_1', [600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/basic_lstm_cell/kernel', [450, 600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/basic_lstm_cell/kernel/Adam', [450, 600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/basic_lstm_cell/kernel/Adam_1', [450, 600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/bw/basic_lstm_cell/bias', [600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/bw/basic_lstm_cell/bias/Adam', [600]), 
('stack_bidirectional_rnn/cell_1/bidirectional_rnn/bw/basic_lstm_cell/bias/Adam_1', [600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/bw/basic_lstm_cell/kernel', [450, 600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/bw/basic_lstm_cell/kernel/Adam', [450, 600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/bw/basic_lstm_cell/kernel/Adam_1', [450, 600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/fw/basic_lstm_cell/bias', [600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/fw/basic_lstm_cell/bias/Adam', [600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/fw/basic_lstm_cell/bias/Adam_1', [600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/fw/basic_lstm_cell/kernel', [450, 600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/fw/basic_lstm_cell/kernel/Adam', [450, 600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/fw/basic_lstm_cell/kernel/Adam_1', [450, 600])] </code></pre> <p>I realised that the embeddings variable is storing the word embeddings which is accounting for the increase in size of those files</p> <pre><code>cp.load_variable('./model.ckpt-12520', 'embeddings') </code></pre>
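<p>As a rough sketch of how much each variable contributes to the checkpoint size, one can total up the element counts from <code>cp.list_variables</code> (assuming float32 storage, i.e. 4 bytes per parameter):</p> <pre><code>import numpy as np

total = sum(int(np.prod(shape)) for _, shape in cp.list_variables('./model.ckpt-12520'))
print('{} parameters, ~{:.0f} MB as float32'.format(total, total * 4 / 1e6))
</code></pre> <p>The vocabulary-sized variables (here 49018 x 300, stored three times over because of the two Adam slots) dominate that total, which is why a bigger dataset with a bigger vocabulary produces bigger files.</p>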
tensorflow
1
376,334
54,938,429
Reading multiple excel files and writting it to multiple excel files in python
<p>I have written code that reads an Excel file and, after processing it with the required function, writes the result to another Excel file. I have done this for one Excel file. My question is: when I want to do this for multiple Excel files, i.e. reading multiple files and producing a separate output file for each input file, how do I apply a for loop here?</p> <p>Following is my code </p> <pre><code>from ParallelP import * import time,json import pandas as pd if __name__ == '__main__': __ip__ = "ip/" __op__ = "op/" __excel_file_name__ = __ip__ + '80chars.xlsx' __prediction_op__ = __op__ + basename(__excel_file_name__) + "_processed.xlsx" df = pd.read_excel(__excel_file_name__) start_time = time.time() df_preprocessed = run(df) print("Time Needed to execute all data is {0} seconds".format((time.time() - start_time))) print("Done...") df_preprocessed.to_excel(__prediction_op__) </code></pre>
<p>I tried to stick to your example and just expand it as I would do it. The below example is untested and does not mean that it is the best way to do it!</p> <pre><code>from ParallelP import * import time,json import pandas as pd import os from pathlib import Path # Handles directory paths -&gt; less error prone than manually sticking together paths if __name__ == '__main__': __ip__ = "ip/" __op__ = "op/" # Get a list of all excel files in a given directory excel_file_list = [f for f in os.listdir(__ip__) if f.endswith('.xlsx')] # Loop over the list and process each excel file separately for excel_file in excel_file_list: excel_file_path = Path(__ip__, excel_file) # Create the file path df = pd.read_excel(str(excel_file_path)) # Read the excel file to data frame (use the full path, not just the file name) start_time = time.time() df_preprocessed = run(df) # Run your routine print("Time Needed to execute all data is {0} seconds".format((time.time() - start_time))) print("Done...") # Create the output file name prediction_output_file_name = '{}__processed.xlsx'.format(str(excel_file_path.resolve().stem)) # Create the output file path prediction_output_file_path = str(Path(__op__, prediction_output_file_name)) # Write the output to the excel file df_preprocessed.to_excel(prediction_output_file_path) </code></pre> <p>Sidenote: I have to mention that your variable names feel like a misuse of the __ . These 'dunder' names are special and indicate that they have a meaning for python (<a href="https://dbader.org/blog/python-dunder-methods" rel="nofollow noreferrer">see for example here</a>). Please, just name your variables <code>input_dir</code> and <code>output_dir</code> instead of <code>__ip__</code> and <code>__op__</code>, respectively.</p>
python|excel|pandas
0
376,335
55,096,595
How to plot normal distribution curve along with Central Limit theorem
<p>I am trying to get a normal distribution curve over my central-limit data distribution.</p> <p>Below is the implementation I have tried.</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats import math # 10000 simulations of die roll n = 10000 avg = [] for i in range(1,n):#roll dice 10 times for n times a = np.random.randint(1,7,10)#roll dice 10 times from 1 to 6 &amp; capturing each event avg.append(np.average(a))#find average of those 10 times each time plt.hist(avg[0:]) zscore = stats.zscore(avg[0:]) mu, sigma = np.mean(avg), np.std(avg) s = np.random.normal(mu, sigma, 10000) # Create the bins and histogram count, bins, ignored = plt.hist(s, 20, normed=True) # Plot the distribution curve plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *np.exp( - (bins - mu)**2 / (2 * sigma**2))) </code></pre> <p>I get the below graph,</p> <p><a href="https://i.stack.imgur.com/cRYSS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cRYSS.png" alt="enter image description here"></a></p> <p>You can see the normal curve in red at the bottom.</p> <p>Can anyone tell me why the curve is not fitting?</p>
<p>You almost had it! First, see that you're plotting two histograms on the same axes:</p> <pre><code>plt.hist(avg[0:]) </code></pre> <p>and</p> <pre><code>plt.hist(s, 20, normed=True) </code></pre> <p>So that you can plot the normal density over the histogram you rightly normalised the second plot with the <code>normed=True</code> argument. However, you forgot to normalise the first histogram too (<code>plt.hist(avg[0:], normed=True)</code>). (In current matplotlib releases the <code>normed</code> argument has been replaced by <code>density=True</code>.)</p> <p>I'd also recommend that since you've already imported <code>scipy.stats</code>, you may as well use the normal distribution that comes in that module, rather than coding the pdf yourself.</p> <p>Putting this all together we have:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats # 10000 simulations of die roll n = 10000 avg = [] for i in range(1,n): a = np.random.randint(1,7,10) avg.append(np.average(a)) # CHANGED: normalise this histogram too plt.hist(avg[0:], 20, normed=True) zscore = stats.zscore(avg[0:]) mu, sigma = np.mean(avg), np.std(avg) s = np.random.normal(mu, sigma, 10000) # Create the bins and histogram count, bins, ignored = plt.hist(s, 20, normed=True) # Use scipy.stats implementation of the normal pdf # Plot the distribution curve x = np.linspace(1.5, 5.5, num=100) plt.plot(x, stats.norm.pdf(x, mu, sigma)) </code></pre> <p>Which gave me the following plot:</p> <p><a href="https://i.stack.imgur.com/xOtJ3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xOtJ3.png" alt="enter image description here"></a></p> <h2>Edit</h2> <p>In the comments you asked:</p> <ol> <li>How did I choose 1.5 and 5.5 in <code>np.linspace</code></li> <li>Is it possible to plot the normal kernel over the non-normalised histogram?</li> </ol> <p>To address q1 first: I chose 1.5 and 5.5 by eye. After plotting the histogram I saw that the histogram bins looked to range between 1.5 and 5.5, so that is the range over which we'd like to plot the normal distribution.</p> <p>A more programmatic way of choosing this range would have been:</p> <pre><code>x = np.linspace(bins.min(), bins.max(), num=100) </code></pre> <p>As for question 2., yes, we can achieve what you want. However, you should know that <em>we'd no longer be plotting a probability density function</em> at all.</p> <p>After removing the <code>normed=True</code> argument when plotting the histograms:</p> <pre><code>x = np.linspace(bins.min(), bins.max(), num=100) # Find pdf of normal kernel at mu max_density = stats.norm.pdf(mu, mu, sigma) # Calculate how to scale pdf scale = count.max() / max_density plt.plot(x, scale * stats.norm.pdf(x, mu, sigma)) </code></pre> <p>This gave me the following plot: <a href="https://i.stack.imgur.com/9RWql.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9RWql.png" alt="enter image description here"></a></p>
python|numpy|matplotlib|statistics
2
376,336
54,797,538
How to write values of dictionary in numpy array to csv file instead of the full dictionary?
<p>I'm trying to write a multidimensional <code>numpy</code> array to a <code>.csv</code> file. This array includes words, numbers, and a dictionary. When writing the file, my code writes the dictionary's keys and values to the file. However, I need only the values to be written.</p> <p>My code can be seen as follows:</p> <pre><code>import numpy as np lowercase_letters = 'abcdefghijklmnopqrstuvwxyz' words_array = np.genfromtxt('words_file.txt', dtype=str).reshape(300, 1) lowercase_words = np.array([word[0].lower() for word in words_array]) words_array = np.append(words_array, lowercase_words.reshape(300, 1), axis=1) def letter_frequency(word): letter_frequency = {} for letter in lowercase_letters: letter_frequency[letter] = 0 for letter in word: letter_frequency[letter] += 1 return letter_frequency vectorized_frequencies = np.vectorize(letter_frequency) frequencies = vectorized_frequencies(lowercase_words) words_array = np.append(words_array, frequencies.reshape(300, 1), axis=1) word_length = np.vectorize(len) word_lengths = word_length(words_array[:, :1]) words_array = np.append(words_array, word_lengths.reshape(300, 1), axis=1) np.savetxt('rocko_bishop_project2.csv', words_array, delimiter=',', fmt='%s', header="original, lower, a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z,length") </code></pre> <p>After writing the finalized array to the csv file (also required, I can't export as a different file type), it prints the full dictionary, with keys and values:</p> <pre><code>Abject,abject,{'a': 1, 'b': 1, 'c': 1, 'd': 0, 'e': 1, 'f': 0, 'g': 0, 'h': 0, 'i': 0, 'j': 1, 'k': 0, 'l': 0, 'm': 0, 'n': 0, 'o': 0, 'p': 0, 'q': 0, 'r': 0, 's': 0, 't': 1, 'u': 0, 'v': 0, 'w': 0, 'x': 0, 'y': 0, 'z': 0},6 </code></pre> <p>However, I would like to just have the values printed instead.</p> <pre><code>Abject,abject,1,1,1,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0 </code></pre> <p>The parameters of the project prevent me from importing <code>pandas</code>, or any other library other than <code>numpy</code>. </p> <p>What is the best course of action?</p>
<p>Assume, for lack of further information, that your <code>array</code> is really a <code>list</code> like:</p> <pre><code>In [338]: alist = ['Abjure','abjure',{'a': 1, 'b': 1, 'c': 0, 'd': 0, 'e': 1, 'f ...: ': 0, 'g': 0, 'h': 0, 'i': 0, 'j': 1, 'k': 0, 'l': 0, 'm': 0, 'n': 0, ...: 'o': 0, 'p': 0, 'q': 0, 'r': 1, 's': 0, 't': 0, 'u': 1, 'v': 0, 'w': 0 ...: , 'x': 0, 'y': 0, 'z': 0}] </code></pre> <p>If it isn't a list, it must be a object dtype array, in which case <code>arr.tolist()</code> should produce the same list.</p> <p>As first try, just print (with possible redirect to file), the elements:</p> <pre><code>In [341]: print(alist[:2], list(alist[2].values())) ['Abjure', 'abjure'] [1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0] </code></pre> <p>Here I use <code>values()</code> to get the dictionary values.</p> <p>Now we can clean up the print (or file write). This is plain Python. No need to invoke <code>numpy</code> or any other package.</p> <p>Expanding the sub lists:</p> <pre><code>In [343]: print(*alist[:2], *list(alist[2].values())) Abjure abjure 1 1 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 </code></pre>
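<p>To get from there to the requested csv lines, a minimal sketch (assuming each row of <code>words_array</code> is <code>[original, lower, freq_dict, length]</code> as built in the question):</p> <pre><code>with open('rocko_bishop_project2.csv', 'w') as f:
    f.write('original,lower,' + ','.join(lowercase_letters) + ',length\n')
    for original, lower, freq, length in words_array:
        counts = [str(freq[letter]) for letter in lowercase_letters]
        f.write(','.join([original, lower] + counts + [str(length)]) + '\n')
</code></pre>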
python|file|csv|numpy|dictionary
0
376,337
54,765,953
How to replace Swedish characters ä, å, ö in column names in python?
<p>I have a dataframe with some column names containing Swedish characters (ö,ä,å). I would like to replace these characters with plain o,a,a instead. </p> <p>I tried converting the column names to str and replacing the characters; it works, but then it gets complicated if I want to assign the str back as column names, i.e., multiple operations are needed, which makes it complicated.</p> <p>I tried the following code which replaces the Swedish characters in column names with English letters and returns the result as str. </p> <pre><code>from unidecode import unidecode unicodedata.normalize('NFKD',str(df.columns).decode('utf-8')).encode('ascii', 'ignore') </code></pre> <p>Is there a way to use the returned str as column names? If not, is there a better way to replace the Swedish characters in column names?</p>
<p>For me, what works is to first normalize, then encode to ascii and last decode to <code>utf-8</code>:</p> <pre><code>df = pd.DataFrame(columns=['aä','åa','oö']) df.columns = (df.columns.str.normalize('NFKD') .str.encode('ascii', errors='ignore') .to_series() .str.decode('utf-8')) print (df) Empty DataFrame Columns: [aa, aa, oo] Index: [] </code></pre> <p>Other solutions with <code>map</code> or a list comprehension:</p> <pre><code>import unicodedata f = lambda x: unicodedata.normalize('NFKD', x).encode('ascii', 'ignore').decode('utf-8') df.columns = df.columns.map(f) print (df) Empty DataFrame Columns: [aa, aa, oo] Index: [] </code></pre> <hr> <pre><code>import unicodedata df.columns = [unicodedata.normalize('NFKD', x).encode('ascii', 'ignore').decode('utf-8') for x in df.columns] print (df) Empty DataFrame Columns: [aa, aa, oo] Index: [] </code></pre>
pandas|python-2.7|python-unicode
3
376,338
55,121,828
Negative examples for image classification?
<p>I have 1000 images of dogs and 1000 images of cats. </p> <p>I've trained a small CNN to do classification on this dataset, and the accuracy on both the validation and test sets is 99%+. </p> <p>But, I've noticed that when I give an input that isn't a cat or a dog, for example a car, the classifier (sometimes) gives a high confidence of cat or dog. </p> <p>Why is this? I understand that the CNN has only been trained on two classes, but if it sees something completely random shouldn't it output a low confidence for both classes?</p> <p>I assume that this problem is solved by negative examples (of other random objects and animals), but then the question becomes: how many negative examples are needed to truly cover the distribution of all the possible random images (that aren't of cats or dogs)?</p>
<p>If the network is trained only on dog/cat images, it makes sense that it gets confused by an image that belongs to neither of the two categories. You should add negative examples to the training set (as you mentioned) and convert your final classification layer to predict confidence over 3 categories (dog, cat, none). This should work better.</p>
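<p>As a sketch of what that change amounts to (the question does not show the architecture, so this assumes a Keras model; only the final layer and the labels change):</p> <pre><code>from tensorflow.keras import layers, models

model = models.Sequential([
    # ... the same convolutional feature extractor as before ...
    layers.Dense(3, activation='softmax'),  # dog, cat, none-of-the-above
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
</code></pre>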
tensorflow|machine-learning|deep-learning|classification
1
376,339
54,799,199
I cannot fully uninstall numpy
<p>I wrote "pip uninstall numpy", and it appears uninstalled. For example when I put import pandas, it returns </p> <pre><code>module 'numpy' has no attribute '__version__' </code></pre> <p>But when I run "import numpy" it lets me import it. However, it does when i ask it </p> <pre><code>print numpy.__file__ </code></pre> <p>it returns</p> <pre><code>module 'numpy' has no attribute '__file__' </code></pre> <p>I'm using Google Cloud Console, and can't seem to install packages because of this numpy issue.</p> <p>One of the errors I receive is</p> <p>"Something is wrong with the numpy installation. While importing we detected an older version of numpy in ['/home/davidxmkong/anaconda3/lib/python3.5/site-packages/numpy']. One method of fixing this is to repeatedly uninstall numpy until none is found, then reinstall this version."</p>
<p>I installed the 2018-12 version of Anaconda, and it appears like the problem has been solved. </p>
python|numpy
1
376,340
55,044,538
How to search by using different columns
<p>Gender and city are two different columns. I want to count how many males and females (from the gender column) there are in a particular city, in pandas.</p>
<pre><code>df = pd.DataFrame({ 'city': ['NY', 'NY', 'NY', 'LA'], 'gender': ['m', 'f', 'f', 'm']}) z = df.groupby(['city', 'gender']).size() z </code></pre> <p>Output:</p> <pre><code>city gender LA m 1 NY f 2 m 1 </code></pre> <p>To check distribution in one city, e.g. NY:</p> <pre><code>z = df.groupby(['city', 'gender']).size() z['NY'] </code></pre> <p>Output:</p> <pre><code>gender f 2 m 1 </code></pre>
pandas
1
376,341
54,951,448
How to subtract neighbouring vectors in a numpy array from each other
<p>My expected results are as follows:</p> <pre><code>array = [[2,3,4], [1,2,4]] </code></pre> <p>Output:</p> <pre><code>[1, 1, 0] # [2-1, 3-2, 4-4] </code></pre> <p>I tried doing this by enumerating and getting the indexes to subtract with no luck as:</p> <pre><code>for i, k in enumerate(array): for j in k: return(j[i+1] - j[i]) </code></pre> <p>Which gives me:</p> <blockquote> <p>IndexError: invalid index to scalar variable.</p> </blockquote>
<p>This works:</p> <pre><code>result = [(i-j) for (i,j) in zip(*array)] </code></pre> <p><strong>Output:</strong></p> <pre><code>print (result) [1, 1, 0] </code></pre> <p><strong>Explanation:</strong></p> <p><code>zip(*array)</code> is equivalent to the list of tuples <code>[(2,1), (3,2), (4,4)]</code></p>
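<p>Since the tags mention numpy, a vectorized equivalent is worth noting; a sketch, assuming the two-row layout from the example:</p> <pre><code>import numpy as np

arr = np.array([[2, 3, 4], [1, 2, 4]])
result = arr[0] - arr[1]        # array([1, 1, 0])

# for more rows, each row minus the next one:
diffs = -np.diff(arr, axis=0)   # array([[1, 1, 0]])
</code></pre>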
python|numpy
0
376,342
54,808,927
How to get two columns' values based on a Where function in python
<p>The Question is: Based on the <code>user_id</code> column, I want to get the values of the <code>rating</code> and <code>product_id</code> columns. There can be multiple entries with the same user_id. I want to get all of a user's records with the <code>rating</code> and <code>product_id</code> column values, but for the movies the user didn't rate, the rating should be displayed as NaN while the <code>product_id</code> is still retrieved. Following is the table with some sample data. </p> <pre><code>| product_id | user_id | user_name | rating | |-------------|-----------------|----------------------------------------------|--------| | B0009XRZ92 | A2JFZLAUG3YFQ7 | Entropy Babe "EB" | 5 | | B0009XRZ92 | A22HGAAO8KZ2N3 | R. Metzelar | 5 | | B000067A8B | A2NJO6YE954DBH | Lawrance M. Bernabo | 4 | | B0009XRZ92 | A3HE4MYMWK4AER | Rebecca M. Eddy "Foster Mom and Untbunny" | 5 | | B003A3R3ZY | A9A2PR663ED1V | Roger D. Goff | 5 | | B0009XRZ92 | A2MRZDJF90JC1U | Suzanne K. Armstrong "Suzy Q" | 5 | | B0009XRZ92 | A2YNBDT3170PCR | C. O'Hern | 5 | | B0009XRZ92 | A10VJ7BDVCPKEZ | Carol S. Bottom | 5 | | B0009XRZ92 | AAAQO894MG80B | Paul J. Michko | 5 | | B00067BBQE | A9A2PR663ED1V | Roger D. Goff | 5 | | B0009XRZ92 | A31S5QUMFR8NH2 | Dana L. Jordan "Mom of Twins" | 5 | | B0009XRZ92 | A2DS24DHXUH0GM | Gaz Rev(iewer) | 4 | | B00006AUMZ | A2NJO6YE954DBH | Lawrance M. Bernabo | 4 | | B0009XRZ92 | A16FRHL2ZC7EUR | M. Claytor | 5 | | B0009XRZ92 | A3AV8R0A62PP1N | MARCUSHELBLINZ "mmmacman" | 5 | | B0009XRZ92 | A3QN84C38DE9FU | Gillian M. Kratzer | 5 | | B0009XRZ92 | A36MLTLVQFEQYL | Yossarian "alienated socialist" | 5 | | B00006AUMD | A2NJO6YE954DBH | Lawrance M. Bernabo | 4 | </code></pre> <blockquote> <p>What I want to do is:</p> <p>To take one <code>user_id</code> at a time and display the <code>rating</code> and <code>product_id</code> column values for that user for all the movies in the table; if the user didn't rate some movies, those records should still be displayed with the <code>product_id</code> value and the <code>rating</code> as NaN, and the whole process should be repeated for all the users.</p> </blockquote> <p>For Example, the record for <code>user_id: A2NJO6YE954DBH</code> will look like this:</p> <pre><code>| product_id | rating | |------------|--------| | B000067A8B | 4 | | B00006AUMZ | 4 | | B00006AUMD | 4 | | B0009XRZ92 | Nan | | B003A3R3ZY | Nan | | B00067BBQE | Nan | | . | . | | . | . | | . | . | </code></pre> <p>I've tried to write the code for this using the pandas library but couldn't get it to work. This is all I did, but it's not outputting what I want.</p> <pre><code>import pandas as pd df =pd.read_csv('out.csv') unique_users=df.user_id.unique() for x, y in enumerate(unique_users): print(df[['rating','product_id']].where(df.user_id==y)) </code></pre> <p>Please help me out.. Thanks</p>
<p>Try:</p> <pre><code> print(df[df.user_id==y][['rating','product_id']]) </code></pre>
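<p>To also get NaN rows for the products a user did not rate (as the question asks), one option is to reindex each user's ratings on the full set of product ids; a sketch, assuming each user rates a product at most once:</p> <pre><code>all_products = df['product_id'].unique()
for y in df['user_id'].unique():
    user_ratings = (df[df.user_id == y]
                    .set_index('product_id')['rating']
                    .reindex(all_products))
    print(user_ratings.reset_index())
</code></pre>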
python|python-3.x|pandas|csv
1
376,343
55,128,156
Importing CSV file into Google Colab using numpy loadtxt
<p>I'm trying to migrate a JupyterLab notebook to Google Colab. In JupyterLab, when I have the notebook file and the associated csv files in the same directory, it is easy to import the data using numpy's loadtxt function as follows:</p> <pre><code>import numpy as np filein = "testfile.csv" data = np.loadtxt(open(filein, "rb"), delimiter=",", skiprows=1) </code></pre> <p>For various reasons I'd like to continue to use np.loadtxt in Colab. However when I try the same code there, it can't find the csv file despite it residing in the same Google Drive location as the notebook file. I get this error: <code>"FileNotFoundError: [Errno 2] No such file or directory: 'testfile.csv'"</code>. </p> <p>I gather I somehow need to provide a path to the file, but haven't been able to figure out how to do this. Is there any straightforward way to use np.loadtxt?</p>
<p>Colab doesn't automatically mount Google Drive. By default, the working directory is <code>/content</code> on an ephemeral backend virtual machine.</p> <p>To access your file in Drive, you'll need to mount it first using the following snippet:</p> <pre><code>from google.colab import drive drive.mount('/content/gdrive') </code></pre> <p>Then, <code>%cd /content/gdrive/My\ Drive</code> to change the working directory to your Drive root. (Or, customize the path as needed to wherever <code>testfile.csv</code> is located.)</p>
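<p>After mounting, <code>np.loadtxt</code> works as before once the file is referenced through its Drive path; a sketch, assuming <code>testfile.csv</code> sits in the Drive root:</p> <pre><code>import numpy as np

data = np.loadtxt('/content/gdrive/My Drive/testfile.csv', delimiter=',', skiprows=1)
</code></pre>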
python|numpy|import|google-colaboratory
11
376,344
54,821,206
I want all numpy arrays to be forced to be 2 dimensional
<p>I am coming over from Matlab, and while everything was mostly ported over really well (the community has to be thanked for this; a Matlab license costs way over $1000), there is one thing that I cannot for the life of me figure out.</p> <p>In Matlab, all arrays are 2D (until recently, when they gave you other options), such that when I define a scalar, array or matrix they are all considered 2D. This is pretty useful when doing matrix multiplication!</p> <p>In Python, when using numpy, I unfortunately find myself having to use the reshape command quite frequently.</p> <p><strong>Is there any way that I can globally set that all arrays have 2 dimensions unless stated otherwise?</strong></p> <p>Edit: According to the numpy documentation, numpy.matrix may be removed in the coming future. What I want to do in essence is to have all output of any numpy operation have the function <strong>np.atleast_2d</strong> applied to it automatically.</p>
<p>As noted above, the np.matrix class has semantics quite similar to a matlab array.</p> <p>However, if your goal is to learn numpy as a marketable skill, I would strongly recommend you fully embrace the concept of an ndarray; while there is some historical truth to calling numpy a port of matlab, it is a bit of an insult, as the ndarray is one of the most compelling objective conceptual improvements of numpy over matlab, other than its price.</p> <p>TLDR; you will have a hard time not getting your application tossed by me if you claim to know numpy but your code samples smell like ported matlab in any way.</p>
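<p>If you do want to coerce 2-d shapes explicitly at specific points, <code>np.atleast_2d</code> (which the question itself mentions) is the idiomatic tool; a small illustration:</p> <pre><code>import numpy as np

np.atleast_2d(5).shape          # (1, 1)
np.atleast_2d([1, 2, 3]).shape  # (1, 3)
np.atleast_2d(np.eye(2)).shape  # (2, 2) - already 2-d, unchanged
</code></pre>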
python|arrays|numpy
2
376,345
55,088,656
Most efficient way to calculate frequency of pairs of numbers in a 2D Numpy array
<p>Let's say I have the following 2D array:</p> <pre><code>import numpy as np np.random.seed(123) a = np.random.randint(1, 6, size=(5, 3)) </code></pre> <p>which produces:</p> <pre><code>In [371]: a Out[371]: array([[3, 5, 3], [2, 4, 3], [4, 2, 2], [1, 2, 2], [1, 1, 2]]) </code></pre> <p>Is there a more efficient (Numpy, Pandas, etc.) way to calculate the frequency of all pairs of numbers than <a href="https://stackoverflow.com/a/32214841/5741205">the following solution</a>?</p> <pre><code>from collections import Counter from itertools import combinations def pair_freq(a, sort=False, sort_axis=-1): a = np.asarray(a) if sort: a = np.sort(a, axis=sort_axis) res = Counter() for row in a: res.update(combinations(row, 2)) return res res = pair_freq(a) </code></pre> <p>to produce something like this:</p> <pre><code>In [38]: res Out[38]: Counter({(3, 5): 1, (3, 3): 1, (5, 3): 1, (2, 4): 1, (2, 3): 1, (4, 3): 1, (4, 2): 2, (2, 2): 2, (1, 2): 4, (1, 1): 1}) </code></pre> <p>or:</p> <pre><code>In [39]: res.most_common() Out[39]: [((1, 2), 4), ((4, 2), 2), ((2, 2), 2), ((3, 5), 1), ((3, 3), 1), ((5, 3), 1), ((2, 4), 1), ((2, 3), 1), ((4, 3), 1), ((1, 1), 1)] </code></pre> <p>PS the resulting dataset might look different, for example like a multi-index Pandas DataFrame or something else.</p> <p>I was trying to increase the dimensionality of the <code>a</code> array and to use <code>np.isin()</code> together with a list of the combinations of all pairs, but I still couldn't get rid of a loop.</p> <p><strong>UPDATE:</strong></p> <blockquote> <p><strong>(a)</strong> Are you interested only in the frequency of combinations of 2 numbers (and not interested in frequency of combinations of 3 numbers)?</p> </blockquote> <p>yes, I'm interested in combinations of pairs (2 numbers) only</p> <blockquote> <p><strong>(b)</strong> Do you want to consider (3,5) as distinct from (5,3) or do you want to consider them as two occurrences of the same thing?</p> </blockquote> <p>actually both approaches are fine; I can always sort my array beforehand if I need to:</p> <pre><code>a = np.sort(a, axis=1) </code></pre> <p><strong>UPDATE2:</strong></p> <blockquote> <p>Do you want the distinction between (a,b) and (b,a) to happen only due to the source column of a and b, or even otherwise? To understand this question, please consider three rows <code>[[1,2,1], [3,1,2], [1,2,5]]</code>. What do you think should be the output here? What should be the distinct 2-tuples and what should be their frequencies?</p> </blockquote> <pre><code>In [40]: a = np.array([[1,2,1],[3,1,2],[1,2,5]]) In [41]: a Out[41]: array([[1, 2, 1], [3, 1, 2], [1, 2, 5]]) </code></pre> <p>I would expect the following result:</p> <pre><code>In [42]: pair_freq(a).most_common() Out[42]: [((1, 2), 3), ((1, 1), 1), ((2, 1), 1), ((3, 1), 1), ((3, 2), 1), ((1, 5), 1), ((2, 5), 1)] </code></pre> <p>because it's more flexible; if I want to count (a, b) and (b, a) as the same pair of elements I can do this:</p> <pre><code>In [43]: pair_freq(a, sort=True).most_common() Out[43]: [((1, 2), 4), ((1, 1), 1), ((1, 3), 1), ((2, 3), 1), ((1, 5), 1), ((2, 5), 1)] </code></pre>
<p>If your elements are not too large nonnegative integers <code>bincount</code> is fast:</p> <pre><code>from collections import Counter from itertools import combinations import numpy as np def pairs(a): M = a.max() + 1 a = a.T return sum(np.bincount((M * a[j] + a[j+1:]).ravel(), None, M*M) for j in range(len(a) - 1)).reshape(M, M) def pairs_F_3(a): M = a.max() + 1 return (np.bincount(a[1:].ravel() + M*a[:2].ravel(), None, M*M) + np.bincount(a[2].ravel() + M*a[0].ravel(), None, M*M)) def pairs_F(a): M = a.max() + 1 a = np.ascontiguousarray(a.T) # contiguous columns (rows after .T) # appear to be typically perform better # thanks @ning chen return sum(np.bincount((M * a[j] + a[j+1:]).ravel(), None, M*M) for j in range(len(a) - 1)).reshape(M, M) def pairs_dict(a): p = pairs_F(a) # p is a 2D table with the frequency of (y, x) at position y, x y, x = np.where(p) c = p[y, x] return {(yi, xi): ci for yi, xi, ci in zip(y, x, c)} def pair_freq(a, sort=False, sort_axis=-1): a = np.asarray(a) if sort: a = np.sort(a, axis=sort_axis) res = Counter() for row in a: res.update(combinations(row, 2)) return res from timeit import timeit A = [np.random.randint(0, 1000, (1000, 120)), np.random.randint(0, 100, (100000, 12))] for a in A: print('shape:', a.shape, 'range:', a.max() + 1) res2 = pairs_dict(a) res = pair_freq(a) print(f'results equal: {res==res2}') print('bincount', timeit(lambda:pairs(a), number=10)*100, 'ms') print('bc(F) ', timeit(lambda:pairs_F(a), number=10)*100, 'ms') print('bc-&gt;dict', timeit(lambda:pairs_dict(a), number=10)*100, 'ms') print('Counter ', timeit(lambda:pair_freq(a), number=4)*250,'ms') </code></pre> <p>Sample run:</p> <pre><code>shape: (1000, 120) range: 1000 results equal: True bincount 461.14772390574217 ms bc(F) 435.3669326752424 ms bc-&gt;dict 932.1215840056539 ms Counter 3473.3258984051645 ms shape: (100000, 12) range: 100 results equal: True bincount 89.80463854968548 ms bc(F) 43.449611216783524 ms bc-&gt;dict 46.470773220062256 ms Counter 1987.6734036952257 ms </code></pre>
python|performance|numpy|processing-efficiency
2
376,346
49,483,025
Adding Tensorboard summaries from graph ops generated inside Dataset map() function calls
<p>I've found the Dataset.map() functionality pretty nice for setting up pipelines to preprocess image/audio data before feeding into the network for training, but one issue I have is accessing the raw data before the preprocessing to send to tensorboard as a summary. </p> <p>For example, say I have a function that loads audio data, does some framing, makes a spectrogram, and returns this. </p> <pre><code>import tensorflow as tf def load_audio_examples(label, path): # loads audio, converts to spectorgram pcm = ... # this is what I'd like to put into tf.summmary.audio() ! # creates one-hot encoded labels, etc return labels, examples # create dataset training = tf.data.Dataset.from_tensor_slices(( tf.constant(labels), tf.constant(paths) )) training = training.map(load_audio_examples, num_parallel_calls=4) # create ops for training train_step = # ... accuracy = # ... # create iterator iterator = training.repeat().make_one_shot_iterator() next_element = iterator.get_next() # ready session sess = tf.InteractiveSession() tf.global_variables_initializer().run() train_writer = # ... # iterator test_iterator = testing.make_one_shot_iterator() test_next_element = iterator.get_next() # train loop for i in range(100): batch_ys, batch_xs, path = sess.run(next_element) summary, train_acc, _ = sess.run([summaries, accuracy, train_step], feed_dict={x: batch_xs, y: batch_ys}) train_writer.add_summary(summary, i) </code></pre> <p>It appears as though this does not become part of the graph that is plotted in the "Graph" tab of tensorboard (see screenshot below).</p> <p><a href="https://i.stack.imgur.com/I5BFx.png" rel="noreferrer"><img src="https://i.stack.imgur.com/I5BFx.png" alt="tesnrofboard"></a></p> <p>As you can see, it's just X (the output of the preprocessing map() function). </p> <ol> <li>How would I better structure this to get the raw audio into a <code>tf.summary.audio()</code>? Right now the things inside map() aren't accessible as Tensors inside my training loop. </li> <li>Also, why isn't my graph showing up on Tensorboard? Worries me that I won't be able to export my model or use Tensorflow Serving to put my model into production because I'm using the new Dataset API - maybe I should go back to doing things manually? (with queues, etc). </li> </ol>
<p>I think your use of Dataset API doesn't make much sense. In fact you have 2 disconnected subgraphs. One for reading data and the other for running your training step.</p> <pre class="lang-py prettyprint-override"><code>batch_ys, batch_xs, path = sess.run(next_element) summary, train_acc, _ = sess.run([summaries, accuracy, train_step], feed_dict={x: batch_xs, y: batch_ys}) </code></pre> <p>The first line in the code above runs session and fetches data items from it. It transfers data from Tensorflow backend into Python.</p> <p>The next line feeds data using <code>feed_dict</code> and that is <a href="https://www.tensorflow.org/performance/performance_guide#using_the_tfdata_api" rel="noreferrer">said to be inefficient</a>. This time TensorFlow transfers data from Python to runtime.</p> <p>This has the following consequences:</p> <ol> <li>Your graph looks disconnected</li> <li>TensorFlow wastes time doing unnecessary data transfer to and from Python.</li> </ol> <p>To have a single graph (without disconnected subgraphs) you need to build your model on top of tensors returned by Dataset API. Please note that it is possible to switch between training and testing datasets without manual fetching of batches (see <a href="https://www.tensorflow.org/programmers_guide/datasets" rel="noreferrer">Dataset guide</a>)</p> <p>If to speak about summary defined in <code>map_fn</code> I believe you can retrieve summary from <code>SUMMARIES</code> collection (default collection for summaries). You can also pass your own collection name when adding summary operation.</p>
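<p>A rough sketch of the single-graph pattern, reusing the names from the question (<code>build_model</code> is a hypothetical function standing in for the network, and <code>summaries</code>/<code>train_writer</code> are as in the question):</p> <pre><code>iterator = training.repeat().make_one_shot_iterator()
labels, examples = iterator.get_next()   # tensors, no feed_dict needed

logits = build_model(examples)           # build the model on the dataset tensors
loss = tf.losses.softmax_cross_entropy(labels, logits)
train_step = tf.train.AdamOptimizer().minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100):
        summary, _ = sess.run([summaries, train_step])  # one connected graph
        train_writer.add_summary(summary, i)
</code></pre>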
python|tensorflow|tensorboard|tensorflow-serving|tensorflow-datasets
5
376,347
49,466,894
How to correctly give inputs to Embedding, LSTM and Linear layers in PyTorch?
<p>I need some clarity on how to correctly prepare inputs for batch-training using different components of the <code>torch.nn</code> module. Specifically, I'm looking to create an encoder-decoder network for a seq2seq model.</p> <p>Suppose I have a module with these three layers, in order:</p> <ol> <li><code>nn.Embedding</code></li> <li><code>nn.LSTM</code></li> <li><code>nn.Linear</code></li> </ol> <h1><code>nn.Embedding</code></h1> <p><strong>Input:</strong> <code>batch_size * seq_length</code><br> <strong>Output:</strong> <code>batch_size * seq_length * embedding_dimension</code></p> <p>I don't have any problems here, I just want to be explicit about the expected shape of the input and output.</p> <h1><code>nn.LSTM</code></h1> <p><strong>Input:</strong> <code>seq_length * batch_size * input_size</code> (<code>embedding_dimension</code> in this case)<br> <strong>Output:</strong> <code>seq_length * batch_size * hidden_size</code><br> <strong>last_hidden_state:</strong> <code>batch_size * hidden_size</code><br> <strong>last_cell_state:</strong> <code>batch_size * hidden_size</code></p> <p>To use the output of the <code>Embedding</code> layer as input for the <code>LSTM</code> layer, I need to transpose axes 1 and 2.</p> <p>Many examples I've found online do something like <code>x = embeds.view(len(sentence), self.batch_size , -1)</code>, but that confuses me. How does this view ensure that elements of the same batch remain in the same batch? What happens when <code>len(sentence)</code> and <code>self.batch_size</code> are of the same size?</p> <h1><code>nn.Linear</code></h1> <p><strong>Input:</strong> <code>batch_size</code> x <code>input_size</code> (hidden_size of LSTM in this case or ??)<br> <strong>Output:</strong> <code>batch_size</code> x <code>output_size</code></p> <p>If I only need the <code>last_hidden_state</code> of <code>LSTM</code>, then I can give it as input to <code>nn.Linear</code>.</p> <p>But if I want to make use of Output (which contains all intermediate hidden states as well) then I need to change <code>nn.Linear</code>'s input size to <code>seq_length * hidden_size</code>, and to use Output as input to the <code>Linear</code> module I need to transpose axes 1 and 2 of the output; then I can view with <code>Output_transposed(batch_size, -1)</code>.</p> <p>Is my understanding here correct? How do I carry out these transpose operations in tensors <code>(tensor.transpose(0, 1))</code>?</p>
<p>Your understanding of most of the concepts is accurate, but there are some missing points here and there.</p> <h3>Interfacing embedding to LSTM (Or any other recurrent unit)</h3> <p>You have embedding output in the shape of <code>(batch_size, seq_len, embedding_size)</code>. Now, there are various ways through which you can pass this to the LSTM.<br> * You can pass this directly to the <code>LSTM</code>, if <code>LSTM</code> accepts input as <code>batch_first</code>. So, while creating your <code>LSTM</code> pass argument <code>batch_first=True</code>.<br> * Or, you can pass input in the shape of <code>(seq_len, batch_size, embedding_size)</code>. So, to convert your embedding output to this shape, you’ll need to transpose the first and second dimensions using <code>torch.transpose(tensor_name, 0, 1)</code>, like you mentioned. </p> <blockquote> <p>Q. I see many examples online which do something like x = embeds.view(len(sentence), self.batch_size , -1) which confuses me.<br> A. This is wrong. It will mix up batches and you will be trying to learn a hopeless learning task. Wherever you see this, you can tell the author to change this statement and use transpose instead.</p> </blockquote> <p>There is an argument in favor of not using <code>batch_first</code>, which states that the underlying API provided by Nvidia CUDA runs considerably faster using batch as secondary.</p> <h3>Using context size</h3> <p>You are directly feeding the embedding output to LSTM; this will fix the input size of LSTM to a context size of 1. This means that if your input is words to LSTM, you will be giving it one word at a time always. But, this is not what we want all the time. So, you need to expand the context size. This can be done as follows -</p> <pre><code># Assuming that embeds is the embedding output and context_size is a defined variable embeds = embeds.unfold(1, context_size, 1) # Keeping the step size to be 1 embeds = embeds.view(embeds.size(0), embeds.size(1), -1) </code></pre> <p><a href="http://pytorch.org/docs/0.3.1/tensors.html?#torch.Tensor.unfold" rel="noreferrer">Unfold documentation</a><br> Now, you can proceed as mentioned above to feed this to the <code>LSTM</code>; just remember that <code>seq_len</code> is now changed to <code>seq_len - context_size + 1</code> and <code>embedding_size</code> (which is the input size of the LSTM) is now changed to <code>context_size * embedding_size</code></p> <h3>Using variable sequence lengths</h3> <p>Input size of different instances in a batch will not always be the same. For example, some of your sentences might be 10 words long, some might be 15 and some might be 1000. So, you definitely want variable length sequence input to your recurrent unit. To do this, there are some additional steps that need to be performed before you can feed your input to the network. You can follow these steps -<br> 1. Sort your batch from largest sequence to the smallest.<br> 2. Create a <code>seq_lengths</code> array that defines the length of each sequence in the batch. (This can be a simple python list)<br> 3. Pad all the sequences to be of equal length to the largest sequence.<br> 4. Create LongTensor Variable of this batch.<br> 5. 
Now, after passing the above variable through the embedding and creating the proper context size input, you’ll need to pack your sequence as follows - </p> <pre><code># Assuming embeds to be the proper input to the LSTM
lstm_input = nn.utils.rnn.pack_padded_sequence(embeds, [x - context_size + 1 for x in seq_lengths], batch_first=False)
</code></pre> <h3>Understanding the output of LSTM</h3> <p>Now, once you have prepared your <code>lstm_input</code> according to your needs, you can call the lstm as </p> <pre><code>lstm_outs, (h_t, h_c) = lstm(lstm_input, (h_t, h_c))
</code></pre> <p>Here, <code>(h_t, h_c)</code> needs to be provided as the initial hidden state, and the final hidden and cell states are output. You can see why packing variable length sequences is required; otherwise the LSTM will run over the unnecessary padded words as well.<br> Now, <code>lstm_outs</code> will be a packed sequence which is the output of the lstm at every step, and <code>(h_t, h_c)</code> are the final hidden state and the final cell state, respectively. <code>h_t</code> and <code>h_c</code> will be of shape <code>(batch_size, lstm_size)</code>. You can use these directly for further input, but if you want to use the intermediate outputs as well, you’ll need to unpack the <code>lstm_outs</code> first as below </p> <pre><code>lstm_outs, _ = nn.utils.rnn.pad_packed_sequence(lstm_outs)
</code></pre> <p>Now, your <code>lstm_outs</code> will be of shape <code>(max_seq_len - context_size + 1, batch_size, lstm_size)</code>. Now, you can extract the intermediate outputs of the lstm according to your need.</p> <blockquote> <p>Remember that the unpacked output will have 0s after the actual length of each sequence, which is just padding to match the length of the largest sequence (which is always the first one, as we sorted the input from largest to smallest).</p> <p>Also note that h_t will always be equal to the last non-padded element of each sequence's output.</p> </blockquote> <h3>Interfacing lstm to linear</h3> <p>Now, if you want to use just the final output of the lstm, you can directly feed <code>h_t</code> to your linear layer and it will work. But, if you want to use the intermediate outputs as well, then you’ll need to figure out how you are going to input these to the linear layer (through some attention network or some pooling). You do not want to input the complete sequence to the linear layer, as different sequences will be of different lengths and you can’t fix the input size of the linear layer. And yes, you’ll need to transpose the output of the lstm to be further used (again, you cannot use view here).</p> <blockquote> <p>Ending Note: I have purposefully left some points, such as using bidirectional recurrent cells, using step size in unfold, and interfacing attention, as they can get quite cumbersome and will be out of scope for this answer.</p> </blockquote>
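<p>For completeness, here is a minimal sketch of steps 1–4 from the variable-length section above. The toy batch is hypothetical, and <code>pad_sequence</code> is a convenience helper available in newer PyTorch versions; with older versions you would pad manually:</p> <pre><code>import torch
import torch.nn as nn

# Hypothetical batch of variable-length index sequences
batch = [torch.tensor([4, 7, 2, 9]), torch.tensor([3, 1]), torch.tensor([5, 8, 6])]

# 1. Sort from the largest sequence to the smallest
batch.sort(key=len, reverse=True)
# 2. Record the length of each sequence
seq_lengths = [len(s) for s in batch]
# 3. and 4. Pad to the largest sequence and stack into a single LongTensor
padded = nn.utils.rnn.pad_sequence(batch, batch_first=True)  # shape: (batch, max_seq_len)
</code></pre>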
lstm|pytorch
58
376,348
49,682,724
Tensorflow Index File Utility
<p>I've been looking for a clear answer, but couldn't find one until now.</p> <p>In Tensorflow, after the training executes, 4 files are generated:</p> <p>.meta, .data, .index and checkpoint</p> <p>What is the utility of the .index file?</p> <p>Thanks! </p>
<p>The .index file holds an immutable key-value table linking each serialized tensor name to where its data can be found in the .data files.</p>
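<p>As an illustration (a sketch, assuming a checkpoint saved under the hypothetical prefix <code>model.ckpt</code>), you can list the tensors that the index makes addressable using the TF1 checkpoint reader:</p> <pre><code>import tensorflow as tf

# Pass the checkpoint prefix, not an individual .index/.data file
reader = tf.train.NewCheckpointReader("model.ckpt")

# The variable-to-shape map is reconstructed from the .index file's key-value table
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)
</code></pre>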
tensorflow|machine-learning|deep-learning
1
376,349
49,729,404
Produce equal length of rows for each value in another column (using Python or SQL)
<p>My original data in the table is organised in the format below:</p> <pre><code> routes         demand days
 Paris-New York  1
 Paris-New York  3
 Paris-New York  5
 London-Berlin   2
 London-Berlin   3
 London-Berlin   4
 London-Berlin   5
 Tokyo-Shanghai  2
 Tokyo-Shanghai  4
</code></pre> <p>The desired format I want in the new table:</p> <pre><code> routes         calendar days  demand-days
 Paris-New York  1              1
 Paris-New York  2
 Paris-New York  3              3
 Paris-New York  4
 Paris-New York  5              5
 London-Berlin   1
 London-Berlin   2              2
 London-Berlin   3              3
 London-Berlin   4              4
 London-Berlin   5              5
 Tokyo-Shanghai  1
 Tokyo-Shanghai  2              2
 Tokyo-Shanghai  3
 Tokyo-Shanghai  4              4
 Tokyo-Shanghai  5
</code></pre> <p>I just want to generate a new column (e.g. "calendar days") with an equal number of rows for every unique route in the "routes" column. Is there a simple way to do it with either Python or SQL?</p>
<p><strong>pandas</strong> solution, working if the <code>demand days</code> are unique for each <code>routes</code> value:</p> <pre><code>df = df.set_index(['routes']).set_index('demand days', drop=False, append=True)
df = (df.reindex(pd.MultiIndex.from_product(df.index.levels,names=('routes','calendar days')))
        .reset_index())
print (df)
            routes  calendar days  demand days
0    London-Berlin              1          NaN
1    London-Berlin              2          2.0
2    London-Berlin              3          3.0
3    London-Berlin              4          4.0
4    London-Berlin              5          5.0
5   Paris-New York              1          1.0
6   Paris-New York              2          NaN
7   Paris-New York              3          3.0
8   Paris-New York              4          NaN
9   Paris-New York              5          5.0
10  Tokyo-Shanghai              1          NaN
11  Tokyo-Shanghai              2          2.0
12  Tokyo-Shanghai              3          NaN
13  Tokyo-Shanghai              4          4.0
14  Tokyo-Shanghai              5          NaN
</code></pre> <p>EDIT:</p> <p>Dynamic solution for <code>reindex</code> by <code>range</code>:</p> <pre><code>df = df.set_index(['routes']).set_index('demand days', drop=False, append=True)
#get values of routes
a = df.index.levels[0]
#get minimal and maximal days
b = range(min(df.index.levels[1]), max(df.index.levels[1]) + 1)
#create MultiIndex
mux = pd.MultiIndex.from_product([a, b],names=('routes','calendar days'))
#reindex
df = df.reindex(mux).reset_index()
</code></pre>
python|sql|pandas
2
376,350
49,789,700
Extracting just Month and Year from Pandas Datetime column in a .csv file(Python)
<pre><code>OrderDate
2/1/2018
3/1/2018
3/1/2018
3/1/2018
2/1/2018
3/1/2018
3/1/2018
3/1/2018
3/1/2018
3/1/2018
3/1/2018
3/1/2018
3/1/2018
3/1/2018
2/1/2018
</code></pre> <p>The format of the date is <code>%d/%m/%Y</code> (day first). When I changed the string <code>OrderDate</code> column to <code>Datetime</code>, the day became the month and the month became the day. How do I fix this error?</p> <p>This is the code I use to change the format -> <code>df['OrderDate'] = pd.to_datetime(df['OrderDate'])</code></p> <p>Thanks in advance.</p>
<pre><code>import pandas as pd

df = pd.read_csv("sample.csv")
# the data is day-first, so parse with an explicit day-first format
df['OrderDate'] = pd.to_datetime(df['OrderDate'], format='%d/%m/%Y')
print(df)
</code></pre> <h1>output:</h1> <pre><code>   OrderDate
0 2018-01-02
1 2018-01-03
2 2018-01-03
3 2018-01-03
4 2018-01-02
5 2018-01-03
6 2018-01-03
7 2018-01-03
8 2018-01-03
9 2018-01-03
10 2018-01-03
11 2018-01-03
12 2018-01-03
13 2018-01-03
14 2018-01-02
</code></pre> <p>Equivalently, <code>pd.to_datetime(df['OrderDate'], dayfirst=True)</code> does the same without spelling out the full format. For more info refer to this link: <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html#pandas.to_datetime" rel="nofollow noreferrer">Pandas to DateTime</a></p>
python|pandas|datetime
0
376,351
49,632,200
Replace numpy cell with values from second array based on a condition
<p>I have two images, <code>img1</code> and <code>img2</code>. I'm trying to find everywhere in <code>img1</code> that the color <code>[0,204,204]</code> occurs and replace it with whatever is in <code>img2</code> in the same place. I can use <code>np.where()</code> to find the places where that color occurs and replace it with a different color directly:</p> <pre><code>img1[np.where((img1==[0,204,204]).all(axis=2))] = [255,255,0]
</code></pre> <p>I'm unsure how to grab the indices of these cells, as the images have 5,070,000 values across 3 dimensions, so I can't display the arrays effectively. Looking over the numpy documentation, I think I can do something like:</p> <pre><code>img2[img1[img1==[0,204,204]]]
</code></pre> <p>to get the indices of <code>img1</code> where that color occurs and then call the same array position from <code>img2</code>, but I can't seem to get this syntax correct. Help?</p>
<p>If I understand correctly, you can use the following:</p> <pre><code>img1[np.where((img1==[0,204,204]).all(axis=2))] = img2[np.where((img1==[0,204,204]).all(axis=2))]
</code></pre> <p>This works because the syntax you had originally (<code>np.where((img1==[0,204,204]).all(axis=2))</code>) already returns the indices you are looking for.</p> <p>Example (on a small array):</p> <pre><code>img1 = np.array([[[0,204,204],[0,0,0],[1,2,3]]])
array([[[  0, 204, 204],
        [  0,   0,   0],
        [  1,   2,   3]]])

img2 = np.array([[[0,1,2],[1,1,1],[2,7,3]]])
array([[[0, 1, 2],
        [1, 1, 1],
        [2, 7, 3]]])

img1[np.where((img1==[0,204,204]).all(axis=2))] = img2[np.where((img1==[0,204,204]).all(axis=2))]

&gt;&gt;&gt; img1
array([[[0, 1, 2],
        [0, 0, 0],
        [1, 2, 3]]])
</code></pre>
python|arrays|image|numpy
1
376,352
49,480,591
Group and rename pandas dataframe
<p>In Python's Pandas, I have a dataframe where one column holds a group called "code" and another column holds notes for that group. Each occurrence of those groups may have different notes.<br> How can I rename the groups by selecting the first occurrence of the note in that group? <br><br> Example: <br> IN:</p> <pre><code>CODE    NOTE
A       Banana
B       Cola
A       Apple
B       Fanta
C       Toy
</code></pre> <p>Out:</p> <pre><code>CODE    NOTE
Banana  Banana
Cola    Cola
Banana  Apple
Cola    Fanta
Toy     Toy
</code></pre> <p>So far, I have this code to group and display code, count, and note:</p> <pre><code>df.groupby('code').note.agg(['count', 'first']).sort_values('count', ascending=False)
</code></pre>
<p>Call <code>drop_duplicates</code> and then <code>map</code> <code>NOTE</code> to <code>CODE</code>:</p> <pre><code>df['CODE'] = df.CODE.map(df.drop_duplicates('CODE').set_index('CODE').NOTE)
</code></pre> <p>Or,</p> <pre><code>df['CODE'] = df.CODE.replace(df.drop_duplicates('CODE').set_index('CODE').NOTE)
</code></pre> <p>Alternatively,</p> <pre><code>mapper = df.drop_duplicates('CODE').set_index('CODE').NOTE.to_dict()
df['CODE'] = df['CODE'].map(mapper)
</code></pre> <p></p> <pre><code>df

     CODE    NOTE
0  Banana  Banana
1    Cola    Cola
2  Banana   Apple
3    Cola   Fanta
4     Toy     Toy
</code></pre> <p>Note: <code>map</code> is orders of magnitude faster than <code>replace</code>, but both of them work the same.</p>
python|pandas|dataframe
2
376,353
49,645,155
Filtering out Non English sentences in a list in Python Pandas
<p>So there is an excel file which I have read through pandas and stored in a dataframe 'df'. Now that excel file contains 24 columns as 'questions' and 631 rows as 'responses/answers'.</p> <p>So I converted one such question into a list so that I can tokenize it and apply further nlp related tasks on it.</p> <pre><code>df_lst = df['Q8 Why do you say so ?'].values.tolist()
</code></pre> <p>Now, this gives me a list that contains 631 sentences, of which some are non-English. So I want to filter out the non-English sentences so that in the end I am left with a list that contains only English sentences.</p> <p>What I have: </p> <pre><code>df_lst = ["The executive should be able to understand the customer's problem", "Customers should get correct responses to their queries", "This text is in a random non english language", ...]
</code></pre> <p>Output (What I want): </p> <pre><code>english_words = ["The executive should be able to understand the customer's problem", "Customers should get correct responses to their queries", ...]
</code></pre> <p>Also, I read about a python library named pyenchant which should be able to do this, but it's not compatible with Windows 64-bit and Python 3. Is there any other way this can be done? </p> <p>Thanks!</p>
<p>There is another library (closely related to nltk), TextBlob. It was initially built for sentiment analysis, but you can still use it for translation and language detection; see the docs here: <a href="https://textblob.readthedocs.io/en/dev/quickstart.html" rel="nofollow noreferrer">https://textblob.readthedocs.io/en/dev/quickstart.html</a></p> <p>See the section "Translation and Language Detection".</p> <p>Good luck!</p>
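<p>A minimal sketch of the filtering (note that <code>detect_language()</code> calls the Google Translate API under the hood, so it needs an internet connection, and it has been deprecated in recent TextBlob releases):</p> <pre><code>from textblob import TextBlob

df_lst = ["The executive should be able to understand the customer's problem",
          "Ceci n'est pas une phrase anglaise"]

# Keep only the sentences detected as English
english_sentences = [s for s in df_lst if TextBlob(s).detect_language() == 'en']
print(english_sentences)
</code></pre>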
python|pandas|filtering|non-english|pyenchant
1
376,354
49,604,224
Pulling stock information using pandas datareader
<p>I am using pandas datareader to pull stock information for a given range of dates. For example:</p> <pre><code>import pandas_datareader.data as web import datetime as dt start = dt.datetime(2018,3,26) end = dt.datetime(2018,3,29) web.DataReader('IBM','yahoo', start, end).reset_index() </code></pre> <p>This returns the following dataframe for IBM:</p> <p><a href="https://i.stack.imgur.com/nOwi7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nOwi7.png" alt="enter image description here"></a></p> <p>This contains the information I am looking for, but I would like to automatically iterate through multiple stock tickers (instead of manually changing the stock ticker). Ideally I could loop this code through a list of desired stock tickers.</p>
<p>Here is another way, creating your dataframe directly:</p> <pre><code>tickers = ['IBM','AAPL'] df = pd.concat([web.DataReader(ticker,'morningstar', start, end) for ticker in tickers]).reset_index() </code></pre> <p>Which returns:</p> <pre><code> Symbol Date Close High Low Open Volume 0 IBM 2018-03-26 153.37 153.6570 150.28 151.210 4103904 1 IBM 2018-03-27 151.91 154.8697 151.16 153.950 3883745 2 IBM 2018-03-28 152.52 153.8600 151.89 152.070 3664826 3 IBM 2018-03-29 153.43 153.8900 151.08 153.070 3419959 4 AAPL 2018-03-26 172.77 173.1000 166.44 168.070 37541236 5 AAPL 2018-03-27 168.34 175.1500 166.92 173.680 40922579 6 AAPL 2018-03-28 166.48 170.0200 165.19 167.250 41668545 7 AAPL 2018-03-29 167.78 171.7500 166.90 167.805 38398505 </code></pre>
python|pandas|stock|datareader
6
376,355
49,399,198
Sort a tensor based on two columns in tensorflow
<p>Is it possible to sort a tensor based on values in two columns in Tensorflow?</p> <p>For example, let's say I have the following tensor.</p> <pre><code>[[1,2,3]
 [2,3,5]
 [1,4,6]
 [2,2,1]
 [0,4,2]]
</code></pre> <p>I would like it to be sorted first based on the first column and then the second column. After sorting it would look like the following.</p> <pre><code>[[0,4,2]
 [1,2,3]
 [1,4,6]
 [2,2,1]
 [2,3,5]]
</code></pre> <p>Is there any way to achieve this using tensorflow? I am able to sort according to a single column, but sorting based on two columns is a problem for me. Can anyone please help?</p>
<p>A very naive approach: </p> <pre><code>import tensorflow as tf

a = tf.constant([[1, 2, 3], [2, 3, 5], [1, 4, 6], [2, 2, 1], [0, 4, 2]])

# b = a[:, 0]*10 + a[:, 1]*1 --&gt; (e.g. 1*10 + 2*1 = 12)
b = tf.add(tf.slice(a, [0, 0], [-1, 1]) * 10, tf.slice(a, [0, 1], [-1, 1]))
reordered = tf.gather(a, tf.nn.top_k(b[:, 0], k=5, sorted=True).indices)
reordered = tf.reverse(reordered, axis=[0])

with tf.Session() as sess:
    result = sess.run(reordered)
    print(result)
</code></pre> <p>Note that the weighting trick assumes the values in the second column are below 10; use a larger multiplier if they can be larger.</p> <p>Hope this helps.</p>
python|tensorflow
1
376,356
49,424,084
Loop on pandas dataframe over unique values only
<p>I have the following pandas dataframe:</p> <pre><code>DB      Table   Column  Format
Retail  Orders  ID      INTEGER
Retail  Orders  Place   STRING
Dept    Sales   ID      INTEGER
Dept    Sales   Name    STRING
</code></pre> <p>I want to loop over the tables while generating a SQL statement for creating each one, e.g. </p> <pre><code>create table Retail.Orders (
ID INTEGER,
Place STRING)

create table Dept.Sales (
ID INTEGER,
Name STRING)
</code></pre> <p>What I've already done is get the distinct db &amp; tables using <code>drop_duplicates</code> and then, for each table, apply a filter and concatenate the strings to create a SQL statement.</p> <pre><code>def generate_tables(df_cols):
    tables = df_cols.drop_duplicates(subset=[KEY_DB, KEY_TABLE])[[KEY_DB, KEY_TABLE]]
    for index, row in tables.iterrows():
        db = row[KEY_DB]
        table = row[KEY_TABLE]
        print("DB: " + db)
        print("Table: " + table)
        sql = "CREATE TABLE " + db + "." + table + " ("
        cols = df_cols.loc[(df_cols[KEY_DB] == db) &amp; (df_cols[KEY_TABLE] == table)]
        for index, col in cols.iterrows():
            sql += col[KEY_COLUMN] + " " + col[KEY_FORMAT] + ", "
        sql += ")"
        print(sql)
</code></pre> <p>Is there a better approach to iterate over the dataframe?</p>
<p>This is the way I would do it. First create a dictionary via <code>df.itertuples</code> [more efficient than <code>df.iterrows</code>], then use <code>str.format</code> to include the values seamlessly.</p> <p>Uniqueness is guaranteed in dictionary construction by using <code>set</code>.</p> <p>I also convert to a generator so you can iterate it efficiently if you wish; it's always possible to exhaust the generator via <code>list</code> as below.</p> <pre><code>from collections import defaultdict d = defaultdict(set) for row in df.itertuples(): d[(row[1], row[2])].add((row[3], row[4])) def generate_tables_jp(d): for k, v in d.items(): yield 'CREATE TABLE {0}.{1} ({2})'\ .format(k[0], k[1], ', '.join([' '.join(i) for i in v])) list(generate_tables_jp(d)) </code></pre> <p>Result:</p> <pre><code>['CREATE TABLE Retail.Orders (ID INTEGER, Place STRING)', 'CREATE TABLE Dept.Sales (ID INTEGER, Name STRING)'] </code></pre>
python|pandas
2
376,357
49,543,158
How to use weights in tf.metrics.auc?
<p>The docs for the <a href="https://www.tensorflow.org/api_docs/python/tf/metrics/auc" rel="nofollow noreferrer">tf.metrics.auc function in tensorflow</a> say</p> <blockquote> <p>weights: Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension).</p> </blockquote> <p>and</p> <blockquote> <p>If weights is None, weights default to 1. Use weights of 0 to mask values.</p> </blockquote> <p>Suppose I want to use the weights to measure two AUCs: one for men, one for women.</p> <p>Can you give an example of how to do that?</p> <p><strong>EDIT</strong>: And suppose I have enough classes that I don't want to divide the data into all the different classes, and enough data that I don't want to read it all into memory. That is, I want to do it in a streaming fashion.</p>
<p>Assuming your labels form a vector, setting the weights to a vector with 1 for rows where the data point belongs to a class and 0 for rows where it does not will let you compute the AUC for members of that class.</p>
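<p>A minimal sketch of the two-AUC setup (the tensor names are hypothetical; <code>is_male</code> is assumed to be a boolean tensor aligned with <code>labels</code>):</p> <pre><code>import tensorflow as tf

# labels, predictions: 1-D float tensors; is_male: 1-D bool tensor
male_weights = tf.cast(is_male, tf.float32)    # 1.0 for men, 0.0 for women
female_weights = 1.0 - male_weights

# Each call streams its own AUC, masking out the other group
auc_male, auc_male_op = tf.metrics.auc(labels, predictions, weights=male_weights)
auc_female, auc_female_op = tf.metrics.auc(labels, predictions, weights=female_weights)
</code></pre> <p>Because <code>tf.metrics.auc</code> is a streaming metric, running the two update ops over successive batches accumulates the per-group AUCs without holding all the data in memory, which also addresses the edit in the question.</p>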
tensorflow|auc
0
376,358
49,514,655
r - Dplyr 'ungroup' function in pandas
<p>Imagine you have in R this 'dplyr' code:</p> <pre><code>test &lt;- data %&gt;%
  group_by(PrimaryAccountReference) %&gt;%
  mutate(Counter_PrimaryAccountReference = n()) %&gt;%
  ungroup()
</code></pre> <p>How can I convert this exactly to the equivalent pandas code? In short, I need to group by, add another column, and then ungroup the initial query. My concern is how to do the 'ungroup' step using the pandas package.</p>
<p>Now you are able to do it with <a href="https://github.com/pwwang/datar" rel="nofollow noreferrer"><code>datar</code></a>:</p> <pre class="lang-py prettyprint-override"><code>from datar import f from datar.dplyr import group_by, mutate, ungroup, n test = data &gt;&gt; \ group_by(f.PrimaryAccountReference) &gt;&gt; \ mutate(Counter_PrimaryAccountReference = n()) &gt;&gt; \ ungroup() </code></pre> <p>I am the author of the package. Feel free to submit issues if you have any questions.</p>
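<p>For reference, a plain-pandas sketch of the same operation (<code>groupby</code> + <code>transform</code> plays the role of <code>mutate(n())</code>; no explicit ungroup is needed, since pandas does not carry grouping state on the result):</p> <pre><code>import pandas as pd

# 'data' is assumed to be a DataFrame with a PrimaryAccountReference column
test = data.assign(
    Counter_PrimaryAccountReference=data.groupby('PrimaryAccountReference')['PrimaryAccountReference']
                                        .transform('size')
)
</code></pre>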
python|r|pandas|group-by|dplyr
0
376,359
49,755,610
in keras, how can apply filter ( where ) funtion?
<p>Like the built-in <code>filter</code> function, I want to find the elements over 0.5 in a tensor. </p> <p>This is my code for that, but it does not work: </p> <pre><code>def pred_overhalf(y_true, y_pred):
    return K.count_params( filter( lambda x : x &gt; 0.5 , y_pred ) )

model.compile(optimizer = "adam" , loss = "mse", metrics = [ pred_overhalf])
</code></pre> <p>Is there any way to solve this problem? I searched the Keras backend documentation, but I can't find any solution. </p>
<pre><code>def pred_overhalf(y_true, y_pred):
    out = K.greater(y_pred, 0.5)
    out = K.cast(out, K.floatx())

    # option 1: fraction of items greater than 0.5
    return K.mean(out)

    # option 2: total count (beware: this will consider all samples)
    # return K.sum(out)
</code></pre>
python|tensorflow|deep-learning|keras
0
376,360
49,371,943
Error in data type in python?
<p>I have a text file which has 4 attributes like this:</p> <pre><code>  taxi id      date time     longitude    latitude
0       1  2008-02-02 15:36:08  116.51172  39.92123
1       1  2008-02-02 15:46:08  116.51135  39.93883
2       1  2008-02-02 15:46:08  116.51135  39.93883
3       1  2008-02-02 15:56:08  116.51627  39.91034
4       1  2008-02-02 16:06:08  116.47186  39.91248
</code></pre> <p>I have read this file in jupyter using this command:</p> <pre><code>res=pd.read_csv("C:/Users/malik/Desktop/result.txt",low_memory=False)
res.head()
</code></pre> <p>but when I want to fetch the datatype of the attributes using this code:</p> <pre><code>type(res)
res['longitude'].dtype
</code></pre> <p>It gives me an error like:</p> <blockquote> <p>KeyError: 'longitude'</p> </blockquote>
<p>Your data ingestion is incorrect. You have a table with <em>one</em> column named <code>taxi id date time longitude latitude</code>. You need to specify the proper data separator when you read the file.</p>
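<p>A sketch of what that could look like, assuming the columns in <code>result.txt</code> are separated by runs of whitespace (the exact column names will depend on how the header row splits, and "date" and "time" will then land in separate columns that may need to be combined afterwards):</p> <pre><code>import pandas as pd

res = pd.read_csv("C:/Users/malik/Desktop/result.txt", sep=r"\s+", low_memory=False)
print(res.dtypes)  # 'longitude' should now exist as its own float column
</code></pre>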
python|pandas
0
376,361
49,775,476
Python Pandas: reduce dataframe to contain with duplicate states
<p>This is the first question I'm asking here; I couldn't find an easy solution to my problem.</p> <p>I want to reduce a dataframe which contains state changes. Similar to <code>.drop_duplicates()</code>, I want to drop rows with duplicate states, but a row should only be dropped when the state didn't change from the previous row.</p> <p>Here is my example dataframe:</p> <pre><code>df = pd.DataFrame(data={'Date': ('Day1', 'Day2', 'Day3', 'Day4', 'Day5'),
                        'State': (1, 0, 0, 2, 0)}).set_index('Date')
df_reduced = df.drop_duplicates()
df_reduced
</code></pre> <p>The result is unfortunately not the desired one:</p> <pre><code>Out[]:  State
Date
Day1    1
Day2    0
Day4    2
</code></pre> <p>The desired output would also contain Day5 with state 0.</p> <p>I tried this with a "for and iterrows()" construct, but it is very slow on longer time series data.</p> <p>I hope you can find a more elegant way that works fast on longer time series data.</p> <p>Thank you for your help in advance!</p>
<p>One way is to compare your series to a series shifted by one value:</p> <pre><code>df = pd.DataFrame(data={'Date':('Day1', 'Day2', 'Day3', 'Day4', 'Day5'), 'State':(1,0,0,2,0)}) df = df.set_index('Date') res = df.loc[df['State'] != df['State'].shift()] print(res) # State # Date # Day1 1 # Day2 0 # Day4 2 # Day5 0 </code></pre>
python|pandas|dataframe
1
376,362
49,734,582
Summing in a list of Counters
<p>I have the following list of counters: </p> <pre><code>[Counter({'A': 2, 'B': 2, 'C': 1}), Counter({'A': 3, 'B': 3, 'C': 2}),
 Counter({'A': 4, 'B': 4, 'C': 4}), Counter({'A': 5, 'B': 4, 'C': 5}),
 Counter({'A': 6, 'B': 6, 'C': 6}), Counter({'A': 7, 'B': 8, 'C': 8}),
 Counter({'A': 8, 'B': 9, 'C': 9}), Counter({'A': 9, 'B': 12, 'C': 10}),
 Counter({'A': 11, 'B': 14, 'C': 13}), Counter({'A': 13, 'B': 17, 'C': 17}),
 Counter({'A': 15, 'B': 19, 'C': 20}), Counter({'A': 17, 'B': 22, 'C': 22}),
 Counter({'A': 19, 'B': 24, 'C': 24}), Counter({'A': 22, 'B': 26, 'C': 27}),
 Counter({'A': 24, 'B': 29, 'C': 29}), Counter({'A': 26, 'B': 30, 'C': 30}),
 Counter({'A': 30, 'B': 33, 'C': 35}), Counter({'A': 34, 'B': 35, 'C': 38}),
 Counter({'A': 37, 'B': 40, 'C': 42}), Counter({'A': 40, 'B': 42, 'C': 46})]
</code></pre> <p>For each Counter I want to calculate the probabilities. I did this as follows: </p> <pre><code>counts = ({'A': 2, 'B': 2, 'C': 1})
counts
total = sum(counts.values())
print(total)
probability_mass = {k:v/total for k,v in counts.items()}
print(probability_mass)
</code></pre> <p>I had to extract the Counter by hand, which is not pythonic at all. How can I do such an operation for each counter, i.e. finding the probability first for </p> <pre><code>Counter({'A': 2, 'B': 2, 'C': 1})
</code></pre> <p>then for </p> <pre><code>Counter({'A': 3, 'B': 3, 'C': 2})
</code></pre> <p>and so on and so forth, and make a DataFrame from them? </p>
<p>Since <code>pd.DataFrame()</code> knows how to handle a list of dictionaries, this can be done fairly easily:</p> <pre><code>counter_list = [Counter({'A': 2, 'B': 2, 'C': 1}), Counter({'A': 3, 'B': 3, 'C': 2}), Counter({'A': 4, 'B': 4, 'C': 4}), Counter({'A': 5, 'B': 4, 'C': 5}), Counter({'A': 6, 'B': 6, 'C': 6}), Counter({'A': 7, 'B': 8, 'C': 8}), Counter({'A': 8, 'B': 9, 'C': 9}), Counter({'A': 9, 'B': 12, 'C': 10}), Counter({'A': 11, 'B': 14, 'C': 13}), Counter({'A': 13, 'B': 17, 'C': 17}), Counter({'A': 15, 'B': 19, 'C': 20}), Counter({'A': 17, 'B': 22, 'C': 22}), Counter({'A': 19, 'B': 24, 'C': 24}), Counter({'A': 22, 'B': 26, 'C': 27}), Counter({'A': 24, 'B': 29, 'C': 29}), Counter({'A': 26, 'B': 30, 'C': 30}), Counter({'A': 30, 'B': 33, 'C': 35}), Counter({'A': 34, 'B': 35, 'C': 38}), Counter({'A': 37, 'B': 40, 'C': 42}), Counter({'A': 40, 'B': 42, 'C': 46})] df = pd.DataFrame([{k: v / sum(counter.values()) for k, v in counter.items()} for counter in counter_list]) print(df) # A B C # 0 0.400000 0.400000 0.200000 # 1 0.375000 0.375000 0.250000 # 2 0.333333 0.333333 0.333333 # 3 0.357143 0.285714 0.357143 # 4 0.333333 0.333333 0.333333 # 5 0.304348 0.347826 0.347826 # 6 0.307692 0.346154 0.346154 # 7 0.290323 0.387097 0.322581 # 8 0.289474 0.368421 0.342105 # 9 0.276596 0.361702 0.361702 # 10 0.277778 0.351852 0.370370 # 11 0.278689 0.360656 0.360656 # 12 0.283582 0.358209 0.358209 # 13 0.293333 0.346667 0.360000 # 14 0.292683 0.353659 0.353659 # 15 0.302326 0.348837 0.348837 # 16 0.306122 0.336735 0.357143 # 17 0.317757 0.327103 0.355140 # 18 0.310924 0.336134 0.352941 # 19 0.312500 0.328125 0.359375 </code></pre>
python|python-3.x|list|pandas|counter
1
376,363
49,673,876
Assign new values in pandas
<p>I am trying to change the values in one row of a pandas dataframe for certain columns with other values:</p> <pre><code>sub_data.loc[[0],20:71] = sub_data.loc[1,20:71]
or
sub_data.loc[0,20:71] = sub_data.loc[1,20:71]
</code></pre> <p>Neither worked. Any suggestion? </p> <p>Update</p> <p>It was solved when I used a series:</p> <p><code>sub_data.iloc[0,20:71].update(sub_data.iloc[1,20:71])</code></p>
<p>The most probable reason is that you are using <code>20:71</code> with <code>loc</code>. It looks like you need <code>iloc</code>:</p> <pre><code>sub_data.iloc[0, 20:71] = sub_data.iloc[1, 20:71]
</code></pre>
python|pandas|row
1
376,364
49,432,508
Multiple input/output arguments for function in python
<p>I wrote the following function to convert a value (col) with unit (ufrom) into another unit (uto):</p> <pre><code>def convert(row, col, ufrom, uto):
    convRow = convDF[(convDF['from'] == row[ufrom]) &amp; (convDF.to == uto)]
    val = row[col] / convRow.factor
    return (val, uto)
</code></pre> <p>convDF is a dataframe containing several units and their conversion factors. I call the function like this:</p> <pre><code>for idx, row in df.iterrows():
    if row.Unit != 'MM':
        df.at[idx, ['Width', 'Unit']] = convert(row,'Width', 'Unit', 'MM')
        df.at[idx, ['Length', 'Unit']] = convert(row,'Length', 'Unit', 'MM')
        df.at[idx, ['Hight', 'Unit']] = convert(row,'Hight', 'Unit', 'MM')
</code></pre> <p>The convert function gets the current row, the column containing the value that needs to be converted, the source unit column, as well as the destination unit. So far it works perfectly.</p> <p>As you can see, I call the function three times, but I was wondering if I could call it once and pass all three arguments (Width, Length, Hight) and convert them, because they have the same column referring to their unit (Unit) and the same destination unit (uto). So I would like the function to handle single as well as multiple values. In the end, this</p> <pre><code>df.at[idx, ['Width', 'Unit']] = convert(row,'Width', 'Unit', 'MM')
</code></pre> <p>should work as well as this</p> <pre><code>df.at[idx, ['Width','Length','Hight', 'Unit']] = convert(row,['Width','Length','Hight'],'Unit', 'MM')
</code></pre> <p>I tried working with the <code>*</code>-syntax for passing 1:n arguments, but how do I change the convert function to give multiple or single results?</p> <p>Thank you!</p>
<p>You can have your function take a list of columns as an argument and then return a list based on what is in the column list. For example: </p> <pre><code>def convert(row, cols, ufrom, uto):
    # the lookup does not depend on the column, so do it once
    convRow = convDF[(convDF['from'] == row[ufrom]) &amp; (convDF.to == uto)]
    values = []
    for col in cols:
        values.append(row[col] / convRow.factor)
    values.append(uto)  # the last slot is the new (destination) unit
    return values
</code></pre> <p>and then you could call it like</p> <pre><code>df.at[idx, ['Width','Length','Hight', 'Unit']] = convert(row,['Width','Length','Hight'],'Unit', 'MM')
</code></pre> <p>or in the case of one column, just pass a list with one element: </p> <pre><code>df.at[idx, ['Width', 'Unit']] = convert(row,['Width'],'Unit', 'MM')
</code></pre>
python|pandas|function|multiple-columns
1
376,365
49,576,858
Adding LSTM to conv2D layers in keras
<p>I have an input shape of 64x60x4 for a reinforcement-learning agent that plays Mario. The problem is, the network seems very "if the screen looks like this then do that", which isn't very good for this problem.</p> <p>I want to add an LSTM layer after 3 Conv2D layers in Keras (TensorFlow), but it complains that it expected 5 dimensions but received 4. When I play with the layers, those numbers become 6 and 5.</p> <p>So how do I get an LSTM layer into the following model with input_shape 64x60x4 (the 4 being the last 4 frames, to help learn acceleration and direction of objects):</p> <pre><code>image_input = Input(shape=input_shape)
out = Conv2D(filters=32, kernel_size=8, strides=(4, 4), padding=padding, activation='relu')(image_input)
out = Conv2D(filters=64, kernel_size=4, strides=(2, 2), padding=padding, activation='relu')(out)
out = Conv2D(filters=64, kernel_size=4, strides=(1, 1), padding=padding, activation='relu')(out)
out = MaxPooling2D(pool_size=(2, 2))(out)
out = Flatten()(out)
out = Dense(256, activation='relu')(out)
### LSTM should go here ###
q_value = Dense(num_actions, activation='linear')(out)
</code></pre> <p>Any other suggestions/pointers for this would be welcome.</p>
<p>I would suggest something like this, after your MaxPooling layer:</p> <pre><code>out = Reshape((64, -1))(out)
out = LSTM(...)(out)
out = Flatten...
</code></pre> <p>Also, I don't recommend starting with 32 filters then going up; I suggest starting with 64 then going down, but hey, you do you. I would also suggest separate CNN layers for different aspects, like score, time, etc. Other than that, all is set.</p>
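<p>For concreteness, a runnable sketch of the whole idea (the strides/filters, <code>num_actions</code>, and the choice of 4 timesteps after pooling are assumptions, not tuned values):</p> <pre><code>from keras.layers import Input, Conv2D, MaxPooling2D, Reshape, LSTM, Dense
from keras.models import Model

num_actions = 6  # hypothetical

image_input = Input(shape=(64, 60, 4))
out = Conv2D(64, 8, strides=(4, 4), padding='same', activation='relu')(image_input)
out = Conv2D(64, 4, strides=(2, 2), padding='same', activation='relu')(out)
out = MaxPooling2D(pool_size=(2, 2))(out)          # -&gt; (4, 4, 64)
# Treat the remaining rows as timesteps and flatten the rest into features
out = Reshape((4, -1))(out)                        # -&gt; (4, 256)
out = LSTM(256)(out)
q_value = Dense(num_actions, activation='linear')(out)
model = Model(image_input, q_value)
model.summary()
</code></pre>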
python|tensorflow|keras
1
376,366
49,533,818
Not able to import numpy in JyNi alpha 4
<p>I am new to python and jython. I want to import numpy in my jython program, but whenever I import it, it shows the following error:</p> <pre><code>Traceback (most recent call last):
  File "/home/phpdev/workspace/FirstProgram/testone.py", line 16, in &lt;module&gt;
    import numpy
  File "/usr/lib/python2.7/dist-packages/numpy/__init__.py", line 153, in &lt;module&gt;
    from . import add_newdocs
  File "/usr/lib/python2.7/dist-packages/numpy/add_newdocs.py", line 13, in &lt;module&gt;
    from numpy.lib import add_newdoc
  File "/usr/lib/python2.7/dist-packages/numpy/lib/__init__.py", line 8, in &lt;module&gt;
    from .type_check import *
  File "/usr/lib/python2.7/dist-packages/numpy/lib/type_check.py", line 11, in &lt;module&gt;
    import numpy.core.numeric as _nx
  File "/usr/lib/python2.7/dist-packages/numpy/core/__init__.py", line 15, in &lt;module&gt;
    from . import defchararray as char
  File "/usr/lib/python2.7/dist-packages/numpy/core/defchararray.py", line 1668, in &lt;module&gt;
    class chararray(ndarray):
TypeError: Error when calling the metaclass bases
    'getset_descriptor' object is not callable
</code></pre> <p>and my code is:</p> <pre><code>import os
import sys
print "hi"
print sys.path
print "hello "
import numpy
print "last"
</code></pre> <p>I am using jython 2.7.1:</p> <pre><code>JyNI : alpha 5
numpy : 1.13.0
</code></pre>
<p>What you are trying to do should be workable as NumPy 12 and 13 are supported in JyNI alpha 4, 5 and newer.</p> <p>Most likely, Jython/JyNI locates the wrong NumPy installation. I suspect that you have multiple numpy installations in parallel and JyNI takes the wrong one.</p> <p>Further information on your platform, classpath and pythonpath (w.r.t. Jython) would be required to tell the actual cause. Some scenarios similar to this issue are discussed at</p> <ul> <li><a href="https://github.com/Stewori/JyNI/issues/18" rel="nofollow noreferrer">https://github.com/Stewori/JyNI/issues/18</a></li> <li><a href="https://github.com/Stewori/JyNI/issues/21" rel="nofollow noreferrer">https://github.com/Stewori/JyNI/issues/21</a></li> </ul> <p>There may be helpful hints for you. Otherwise, this would be discussed best on the <a href="https://github.com/Stewori/JyNI" rel="nofollow noreferrer">issue tracker</a> or with <a href="https://www.jyni.org/#contact" rel="nofollow noreferrer">JyNI's support</a>.</p> <p>Notes:</p> <ul> <li>NumPy from anaconda or canopy is not tested and might yield ABI issues with prebuilt JyNI.</li> <li>NumPy 14 and 15 are not supported by current JyNI (i.e. JyNI alpha 5). See <a href="https://github.com/Stewori/JyNI/issues/22" rel="nofollow noreferrer">https://github.com/Stewori/JyNI/issues/22</a>.</li> <li>NumPy 13.2 is broken (also for some CPython versions) and was officially withdrawn by the NumPy developers. NumPy 13.3 works fine again with JyNI alpha 4 and 5.</li> </ul>
python|numpy|jyni
0
376,367
49,534,162
File with sequence of numbers to two column array/list and then plot
<p>I have a text file (test.txt) which just has a sequence of numbers, e.g. 2, 5, 6, 9, 3, 1, 3, 5, 5, 6, 7, 8, etc. My main goal is to plot the odd-placed numbers on the X-axis and the even-placed numbers on the Y-axis. To do that, I thought perhaps I can first store them in a list/array with two columns and then just plot the first column vs the second. How can I do this in Python?</p>
<p>I am assuming your <code>data</code> to be saved in <code>myFile.csv</code> like this:</p> <pre><code>2, 5, 6, 9, 3, 1, 3, 5, 5, 6, 7, 8 5, 6, 9, 3, 1, 3, 5, 5, 6, 7, 8, 8 </code></pre> <p>you can load it into a numpy array with <code>np.loadtxt</code>. If you don't want your dataset to be divided into multiple lines, you can <code>flatten</code> it.</p> <pre><code>import numpy as np from matplotlib import pyplot as plt # load data data = np.loadtxt('myFile.csv', dtype=int, delimiter=', ') data = data.flatten() # if data was saved in multiple lines </code></pre> <p>You can split your data using list comprehensions.</p> <pre><code># process data x = [data[i] for i in range(len(data)) if i%2 == 0] y = [data[i] for i in range(len(data)) if i%2 == 1] </code></pre> <p>And then plot it.</p> <pre><code># plot data plt.plot(x, y, '.') # '.' only shows dots, no connected lines plt.show() </code></pre>
python|numpy|matplotlib|number-sequence
0
376,368
49,464,047
pandas groupby events across different days
<pre><code>import pandas as pd df = pd.DataFrame(data=[[1,1,10],[1,2,50],[1,3,20],[1,4,24], [2,1,20],[2,2,10],[2,3,20],[2,4,34],[3,1,10],[3,2,50], [3,3,20],[3,4,24],[3,5,24],[4,1,24]],columns=['day','hour','event']) df Out[4]: day hour event 0 1 1 10 1 1 2 50 2 1 3 20 &lt;- yes 3 1 4 24 &lt;- yes 4 2 1 20 &lt;- yes 5 2 2 10 6 2 3 20 &lt;- yes 7 2 4 34 &lt;- yes 8 3 1 10 &lt;- yes 9 3 2 50 10 3 3 20 &lt;- yes 11 3 4 24 &lt;- yes 11 3 5 24 &lt;- yes (here we have also an hour more) 12 4 1 24 &lt;- yes </code></pre> <p>now I would like to sum the number of events from hour=3 to hour=1 of the following day..</p> <p>The expected result should be</p> <pre><code>0 64 1 64 2 92 </code></pre>
<pre><code>#convert columns to datetimes, for same day of next day subtract 2 hours: a = pd.to_datetime(df['day'].astype(str) + ':' + df['hour'].astype(str), format='%d:%H')- pd.Timedelta(2, unit='h') #get hours between 1 and 23 only -&gt;in real 3,4...23,1 hours = a.dt.hour.between(1,23) #create consecutives groups by filtering df['a'] = hours.ne(hours.shift()).cumsum() #filter only expected hours df = df[hours] #aggregate df = df.groupby('a')['event'].sum().reset_index(drop=True) print (df) 0 10 1 64 2 64 3 92 Name: event, dtype: int64 </code></pre> <hr> <p>Another similar solution:</p> <pre><code>#create datetimeindex df.index = pd.to_datetime(df['day'].astype(str)+':'+df['hour'].astype(str), format='%d:%H') #shift by 2 hours df = df.shift(-2, freq='h') #filter hours and first unnecessary event df = df[(df.index.hour != 0) &amp; (df.index.year != 1899)] #aggregate df = df.groupby(df.index.day)['event'].sum().reset_index(drop=True) print (df) 0 64 1 64 2 92 Name: event, dtype: int64 </code></pre> <p>Another solution:</p> <pre><code>#filter out first values less as 3 and hours == 2 df = df[(df['hour'].eq(3).cumsum() &gt; 0) &amp; (df['hour'] != 2)] #subtract 1 day by condition and aggregate df = df['event'].groupby(np.where(df['hour'] &lt; 3, df['day'] - 1, df['day'])).sum() print (df) 1 64 2 64 3 92 Name: event, dtype: int64 </code></pre>
python|pandas|dataframe
1
376,369
49,455,620
Find rows in pandas dataframe, where diffrent rows have common values in lists in columns storing lists
<p>I can solve my task by writing a for loop, but I wonder how to do this in a more pandorable way.</p> <p>So I have this dataframe storing some lists, and I want to find all the rows that have any common values in these lists.</p> <p>(This code is just to obtain a df with lists:</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame( {'a':['A','A','B','B','B','C'], 'b':[1,2,5,1,4,6]})
&gt;&gt;&gt; df
   a  b
0  A  1
1  A  2
2  B  5
3  B  1
4  B  4
5  C  6
&gt;&gt;&gt; d = df.groupby('a')['b'].apply(list)
</code></pre> <p>)</p> <p>Here we start:</p> <pre><code>&gt;&gt;&gt; d
A       [1, 2]
B    [5, 1, 4]
C          [6]
Name: b, dtype: object
</code></pre> <p>I want to select rows with index 'A' and 'B', because their lists overlap by the value 1.</p> <p>I could now write a for loop, or expand the dataframe at these lists (reversing the way I got it above) and have multiple rows copying the other values. What would you do here? Or is there some way to use df.groupby(by=lambda x, y: not set(x).isdisjoint(y)), which compares two rows? But groupby and also boolean masking just look at one element at a time...</p> <hr> <p>I tried to overload the equality operator for lists, and because lists are not hashable, then for tuples and sets (I set hash to 1 to avoid identity comparison). I then used groupby and merge on the frame with itself, but it seems that it ticks off the indexes it has already matched.</p> <pre><code>import pandas as pd
import numpy as np
from operator import itemgetter

class IndexTuple(set):
    def __hash__(self):
        #print(hash(str(self)))
        return hash(1)
    def __eq__(self, other):
        #print("eq ")
        is_equal = not set(self).isdisjoint(other)
        return is_equal

l = IndexTuple((1,7))
l1 = IndexTuple((4, 7))
print (l == l1)

df = pd.DataFrame(np.random.randint(low=0, high=4, size=(10, 2)), columns=['a','b']).reset_index()
d = df.groupby('a')['b'].apply(IndexTuple).to_frame().reset_index()
print (d)
print (d.groupby('b').b.apply(list))
print (d.merge (d, on = 'b', how = 'outer'))
</code></pre> <p>outputs (it works fine for the first element, but at <code>[{3}]</code> there should be <code>[{3},{0,3}]</code> instead): </p> <pre><code>True
   a       b
0  0     {1}
1  1  {0, 2}
2  2     {3}
3  3  {0, 3}
b
{1}            [{1}]
{0, 2}    [{0, 2}, {0, 3}]
{3}            [{3}]
Name: b, dtype: object
   a_x       b  a_y
0    0     {1}    0
1    1  {0, 2}    1
2    1  {0, 2}    3
3    3  {0, 3}    1
4    3  {0, 3}    3
5    2     {3}    2
</code></pre>
<p>Using a <code>merge</code> on <code>df</code>:</p> <pre><code>v = df.merge(df, on='b') common_cols = set( np.sort(v.iloc[:, [0, -1]].query('a_x != a_y'), axis=1).ravel() ) common_cols {'A', 'B'} </code></pre> <p>Now, pre-filter and call <code>groupby</code>:</p> <pre><code>df[df.a.isin(common_cols)].groupby('a').b.apply(list) a A [1, 2] B [5, 1, 4] Name: b, dtype: object </code></pre>
python|pandas|dataframe|pandas-groupby
2
376,370
49,677,060
Pandas: count empty strings in a column
<p>I tried to find the number of cells in a column that contain only the empty string <code>''</code>. The <code>df</code> looks like:</p> <pre><code>currency
USD
EUR

ILS
HKD
</code></pre> <p>The code is:</p> <pre><code>df['currency'].str.contains(r'\s*')
</code></pre> <p>but the code also recognizes cells with actual string values as containing empty strings, because <code>r'\s*'</code> can match zero characters inside any value.</p> <p>I am wondering how to fix this so that it only detects cells that contain nothing but an empty string.</p>
<p>Several ways. Using <code>numpy</code> is usually more efficient.</p> <pre><code>import pandas as pd, numpy as np df = pd.DataFrame({'currency':['USD','','EUR','']}) (df['currency'].values == '').sum() # 2 len(df[df['currency'] == '']) # 2 df.loc[df['currency'] == ''].count().iloc[0] # 2 </code></pre>
python|string|pandas|dataframe|series
25
376,371
49,539,203
Pandas - Combine Excel Rows on ID
<p>I have a dataframe that currently looks like this. I need to combine the two rows on the id.</p> <pre><code> id post date 0 10-1 Lorem ipsum dolor sit amet, consectetur adipiscing... 2012-01-28 1 10-1 Ut enim ad minim veniam, quis nostrud exercitation... 2012-01-28 </code></pre> <p>Expected result is this:</p> <pre><code> id post date 0 10-1 Lorem ipsum dolor sit amet, consectetur adipiscing... 2012-01-28 </code></pre> <p>What I have tried:</p> <pre><code>1) df = df.groupby(['id', 'post']) 2) df = df.groupby(['id', 'post']).first().reset_index(); 3) df = df.groupby('id', 'post').agg({'post: sum'}) 4) df = df.groupby('id') df['id'].nunique() 5) df = df.groupby('id').agg(lambda x: x.tolist()) </code></pre> <p>5 has gotten me the closest. When I run that, it removes duplicates, but doesn't aggregate the post column. I am having trouble understanding how to solve this problem. I don't understand how to groupby two things after reading the documentation.</p>
<p>You can pass a dict to <code>agg</code>; each <code>key</code> of the dict is a <code>column</code> and the <code>value</code> is the function you want applied to that column. </p> <pre><code>df.groupby('id').agg({'post':'sum','date':'first'})
</code></pre>
python|excel|pandas
2
376,372
49,502,617
How can I multiply unaligned numpy matrices in python?
<p>I've got two numpy matrices: the first, <code>indata</code> has a shape of <code>(2, 0)</code>. The second (<code>self.Ws[0]</code> in my code) has a shape of <code>(100, 0)</code>.</p> <p>Is it possible to multiply these matrices by each other? </p> <pre><code>def Evaluate(self, indata): sum = np.dot(self.Ws[0], indata) + self.bs[0] self.As[0] = self.sigmoid(sum) for i in range(1, len(self.numLayers)): sum = np.dot(self.Ws[i], self.As[i-1] + self.bs[i]) self.As[i] = self.softmax(sum) return self.As[len(self.numLayers)-1] </code></pre> <p>The error I'm getting when running this code is the following:</p> <pre><code>File "C:/Users/1/PycharmProjects/Assignment4PartC/Program.py", line 28, in main NN.Train(10000, 0.1) File "C:\Users\1\PycharmProjects\Assignment4PartC\Network.py", line 53, in Train self.Evaluate(self.X[i]) File "C:\Users\1\PycharmProjects\Assignment4PartC\Network.py", line 38, in Evaluate sum = np.dot(self.Ws[0], indata) + self.bs[0] ValueError: shapes (100,) and (2,) not aligned: 100 (dim 0) != 2 (dim 0) </code></pre> <p>Hopefully somebody can help me out with this -- any help is appreciated! If anyone needs more granular information about what I'm running, just let me know and I'll update my post.</p>
<p>There is no such thing as shape <code>(N, 0)</code> for an array unless the array is empty. What you have is probably of shape <code>(2,)</code> and <code>(100,)</code>. One way of multiplying these objects is:</p> <pre><code>np.dot(self.Ws[0].reshape((-1, 1)), indata.reshape((1, -1)))
</code></pre> <p>This is going to give you a <code>(100, 2)</code> array. Whether this is what you want to get from a mathematical perspective is really hard to say.</p>
python|numpy|sum
1
376,373
28,140,771
Select only one index of multiindex DataFrame
<p>I am trying to create a new DataFrame using only one index from a multi-indexed DataFrame. </p> <pre><code> A B C first second bar one 0.895717 0.410835 -1.413681 two 0.805244 0.813850 1.607920 baz one -1.206412 0.132003 1.024180 two 2.565646 -0.827317 0.569605 foo one 1.431256 -0.076467 0.875906 two 1.340309 -1.187678 -2.211372 qux one -1.170299 1.130127 0.974466 two -0.226169 -1.436737 -2.006747 </code></pre> <p>Ideally, I would like something like this:</p> <pre><code>In: df.ix[level="first"] </code></pre> <p>and:</p> <pre><code>Out: A B C first bar 0.895717 0.410835 -1.413681 0.805244 0.813850 1.607920 baz -1.206412 0.132003 1.024180 2.565646 -0.827317 0.569605 foo 1.431256 -0.076467 0.875906 1.340309 -1.187678 -2.211372 qux -1.170299 1.130127 0.974466 -0.226169 -1.436737 -2.006747 ` </code></pre> <p>Essentially I want to drop all the other indexes of the multi-index other than level <code>first</code>. Is there an easy way to do this?</p>
<p>One way could be to simply rebind <code>df.index</code> to the desired level of the MultiIndex. You can do this by specifying the label name you want to keep:</p> <pre><code>df.index = df.index.get_level_values('first') </code></pre> <p>or use the level's integer value:</p> <pre><code>df.index = df.index.get_level_values(0) </code></pre> <p>All other levels of the MultiIndex would disappear here.</p>
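<p>As a side note, a sketch of an equivalent one-liner is to drop the unwanted level directly (here dropping the <code>second</code> level to keep <code>first</code>):</p> <pre><code>df.index = df.index.droplevel('second')   # or df.index.droplevel(1)
</code></pre>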
python|pandas|select|dataframe|indexing
106
376,374
28,331,948
Numpy getting in the way of int -> float type casting
<p>Apologies in advance - I seem to be having a very fundamental misunderstanding that I can't clear up. I have a fourvector class with variables for ct and the position vector. I'm writing code to perform an x-direction lorentz boost. The problem I'm running into is that, as it's written below, ct returns with a proper float value, but x does not. Messing around, I find that tempx is a float, but assigning tempx to r[0] does not turn it into a float; instead it rounds down to an int. I have previously posted a question on mutability vs immutability, and I suspect this is the issue. If so, I clearly have a deeper misunderstanding than expected. Regardless, there are a couple of questions I have:</p> <p>1a) If I instantiate a with a = FourVector(ct=5,r=[55,2.,3]), then type(a._r[0]) returns numpy.float64 as opposed to numpy.int32. What is going on here? I expected just a._r[1] to be a float; instead it changes the type of the whole array?</p> <p>1b) How do I get the above behaviour (the whole array being floats), without having to instantiate the variables as floats? I read up on the documentation and have tried various methods, like using astype(float), but everything I do seems to keep it as an int. Again, I think this is the mutable/immutable problem I'm having.</p> <p>2) I had thought, in the tempx=... line, multiplying by 1.0 would convert it to a float, as it appears this is the reason ct converts to a float, but for some reason it doesn't. Perhaps the same reason as the others?</p> <pre><code>import numpy as np

class FourVector():
    def __init__(self, ct=0, x=0, y=0, z=0, r=[]):
        self._ct = ct
        self._r = np.array(r)
        if r == []:
            self._r = np.array([x,y,z])

    def boost(self, beta):
        gamma=1/np.sqrt(1-(beta ** 2))
        tempct=(self._ct*gamma-beta*gamma*self._r[0])
        tempx=(-1.0*self._ct*beta*gamma+self._r[0]*gamma)
        self._ct=tempct
        print(type(self._r[0]))
        self._r[0]=tempx.astype(float)
        print(type(self._r[0]))

a = FourVector(ct=5,r=[55,2,3])
b = FourVector(ct=1,r=[4,5,6])
print(a._r)
a.boost(.5)
print(a._r)
</code></pre>
<p>All your problems are indeed related.</p> <p>A numpy array is an array that holds objects efficiently. It does this by having these objects be of the same <em>type</em>, like strings (of equal length) or integers or floats. It can then easily calculate just how much space each element needs and how many bytes it must "jump" to access the next element (we call these the "strides").</p> <p>When you create an array from a list, numpy will try to determine a suitable data type ("dtype") from that list, to ensure all elements can be represented well. Only when you specify the dtype explicitly, will it not make an educated guess.</p> <p>Consider the following example:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; integer_array = np.array([1,2,3]) # pass in a list of integers &gt;&gt;&gt; integer_array array([1, 2, 3]) &gt;&gt;&gt; integer_array.dtype dtype('int64') </code></pre> <p>As you can see, on my system it returns a data type of <code>int64</code>, which is a representation of integers using 8 bytes. It chooses this, because:</p> <ol> <li>numpy recognizes all elements of the list are integers</li> <li>my system is a 64-bit system</li> </ol> <p>Now consider an attempt at changing that array:</p> <pre><code>&gt;&gt;&gt; integer_array[0] = 2.4 # attempt to put a float in an array with dtype int &gt;&gt;&gt; integer_array # it is automatically converted to an int! array([2, 2, 3]) </code></pre> <p>As you can see, once a datatype for an array was set, automatic casting to that datatype is done. Let's now consider what happens when you pass in a list that has at least one float:</p> <pre><code>&gt;&gt;&gt; float_array = np.array([1., 2,3]) &gt;&gt;&gt; float_array array([ 1., 2., 3.]) &gt;&gt;&gt; float_array.dtype dtype('float64') </code></pre> <p>Once again, numpy determines a suitable datatype for this array.</p> <p>Blindly attempting to change the datatype of an array is not wise:</p> <pre><code>&gt;&gt;&gt; integer_array.dtype = np.float32 &gt;&gt;&gt; integer_array array([ 2.80259693e-45, 0.00000000e+00, 2.80259693e-45, 0.00000000e+00, 4.20389539e-45, 0.00000000e+00], dtype=float32) </code></pre> <p>Those numbers are gibberish you might say. That's because numpy tries to reinterpret the memory locations of that array as 4-byte floats (the skilled people will be able to convert the numbers to binary representation and from there reinterpret the original integer values). </p> <p>If you want to cast, you'll have to do it explicitly and numpy will return a <em>new</em> array:</p> <pre><code>&gt;&gt;&gt; integer_array.dtype = np.int64 # go back to the previous interpretation &gt;&gt;&gt; integer_array array([2, 2, 3]) &gt;&gt;&gt; integer_array.astype(np.float32) array([ 2., 2., 3.], dtype=float32) </code></pre> <p>Now, to address your specific questions:</p> <blockquote> <p>1a) If instantiate a with a = FourVector(ct=5,r=[55,2.,3]), then type(a._r[0]) returns numpy.float64 as opposed to numpy.int32. What is going on here? I expected just a._r[1] to be a float, and instead it changes the type of the whole list?</p> </blockquote> <p>That's because numpy has to determine a datatype for the entire array (unless you use a <a href="http://docs.scipy.org/doc/numpy/user/basics.rec.html" rel="noreferrer">structured array</a>), ensuring all elements fit in that datatype. Only then can numpy iterate over the elements of that array efficiently.</p> <blockquote> <p>1b) How do I get the above behaviour (The whole list being floats), without having to instantiate the variables as floats? 
I read up on the documentation and have tried various methods, like using astype(float), but everything I do seems to keep it as an int. Again, thinking this is the mutable/immutable problem I'm having.</p> </blockquote> <p>Specify the <code>dtype</code> when you are creating the array. In your code, that would be:</p> <pre><code>self._r = np.array(r, dtype=np.float) </code></pre> <blockquote> <p>2) I had thought, in the tempx=... line, multiplying by 1.0 would convert it to a float, as it appears this is the reason ct converts to a float, but for some reason it doesn't. Perhaps the same reason as the others?</p> </blockquote> <p>That is true. Try printing the datatype of <code>tempx</code>, it should be a float. However, later on, you are reinserting that value into the array <code>self._r</code>, which has the dtype of int. And as you saw previously, that will cast the float back to an integer type.</p>
python|numpy
12
376,375
28,314,337
TypeError: sparse matrix length is ambiguous; use getnnz() or shape[0] while using RF classifier?
<p>I am learning about random forests in scikit learn, and as an example I would like to use a Random forest classifier for text classification with my own dataset. So first I vectorized the text with tfidf, and for classification:</p> <pre><code>from sklearn.ensemble import RandomForestClassifier
classifier=RandomForestClassifier(n_estimators=10)
classifier.fit(X_train, y_train)
prediction = classifier.predict(X_test)
</code></pre> <p>When I run the classification I got this:</p> <pre><code>TypeError: A sparse matrix was passed, but dense data is required. Use X.toarray() to convert to a dense numpy array.
</code></pre> <p>Then I used <code>.toarray()</code> on <code>X_train</code> and I got the following:</p> <pre><code>TypeError: sparse matrix length is ambiguous; use getnnz() or shape[0]
</code></pre> <p>From a previous <a href="https://stackoverflow.com/questions/21689141/classifying-text-documents-with-random-forests">question</a>, as I understood it, I need to reduce the dimensionality of the numpy array, so I did the same:</p> <pre><code>from sklearn.decomposition.truncated_svd import TruncatedSVD
pca = TruncatedSVD(n_components=300)
X_reduced_train = pca.fit_transform(X_train)

from sklearn.ensemble import RandomForestClassifier
classifier=RandomForestClassifier(n_estimators=10)
classifier.fit(X_reduced_train, y_train)
prediction = classifier.predict(X_testing)
</code></pre> <p>Then I got this exception:</p> <pre><code>  File "/usr/local/lib/python2.7/site-packages/sklearn/ensemble/forest.py", line 419, in predict
    n_samples = len(X)
  File "/usr/local/lib/python2.7/site-packages/scipy/sparse/base.py", line 192, in __len__
    raise TypeError("sparse matrix length is ambiguous; use getnnz()"
TypeError: sparse matrix length is ambiguous; use getnnz() or shape[0]
</code></pre> <p>Then I tried the following:</p> <pre><code>prediction = classifier.predict(X_train.getnnz())
</code></pre> <p>And got this:</p> <pre><code>  File "/usr/local/lib/python2.7/site-packages/sklearn/ensemble/forest.py", line 419, in predict
    n_samples = len(X)
TypeError: object of type 'int' has no len()
</code></pre> <p>Two questions were raised from this: how can I use random forests to classify correctly, and what is happening with <code>X_train</code>?</p> <p>Then I tried the following:</p> <pre><code>df = pd.read_csv('/path/file.csv', header=0, sep=',', names=['id', 'text', 'label'])

X = tfidf_vect.fit_transform(df['text'].values)
y = df['label'].values

from sklearn.decomposition.truncated_svd import TruncatedSVD
pca = TruncatedSVD(n_components=2)
X = pca.fit_transform(X)

a_train, a_test, b_train, b_test = train_test_split(X, y, test_size=0.33, random_state=42)

from sklearn.ensemble import RandomForestClassifier
classifier=RandomForestClassifier(n_estimators=10)
classifier.fit(a_train, b_train)
prediction = classifier.predict(a_test)

from sklearn.metrics.metrics import precision_score, recall_score, confusion_matrix, classification_report
print '\nscore:', classifier.score(a_train, b_test)
print '\nprecision:', precision_score(b_test, prediction)
print '\nrecall:', recall_score(b_test, prediction)
print '\n confussion matrix:\n',confusion_matrix(b_test, prediction)
print '\n clasification report:\n', classification_report(b_test, prediction)
</code></pre>
<p>I don't know much about <code>sklearn</code>, though I vaguely recall some earlier issue triggered by a switch to using sparse matrices. Internally some of the matrices had to be replaced by <code>m.toarray()</code> or <code>m.todense()</code>.</p> <p>But to give you an idea of what the error message was about, consider</p> <pre><code>In [907]: A=np.array([[0,1],[3,4]])
In [908]: M=sparse.coo_matrix(A)
In [909]: len(A)
Out[909]: 2
In [910]: len(M)
...
TypeError: sparse matrix length is ambiguous; use getnnz() or shape[0]
In [911]: A.shape[0]
Out[911]: 2
In [912]: M.shape[0]
Out[912]: 2
</code></pre> <p><code>len()</code> usually is used in Python to count the number of 1st level terms of a list. When applied to a 2d array, it is the number of rows. But <code>A.shape[0]</code> is a better way of counting the rows. And <code>M.shape[0]</code> is the same. In this case you aren't interested in <code>.getnnz</code>, which is the number of nonzero terms of a sparse matrix. <code>A</code> doesn't have this method, though it can be derived from <code>A.nonzero()</code>.</p>
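<p>For the concrete error in the question, a minimal workaround (a sketch; only feasible when the densified matrix fits in memory) is to densify both the training and the test matrices before fitting and predicting:</p> <pre><code># densify the tf-idf matrices so RandomForestClassifier sees plain numpy arrays
classifier.fit(X_train.toarray(), y_train)
prediction = classifier.predict(X_test.toarray())
</code></pre>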
python|numpy|machine-learning|nlp|scikit-learn
13
376,376
28,238,275
python pandas yahoo data ETF
<p>I would like to fetch some ETF data from yahoo finance using pandas. </p> <p>If I go onto the yahoo finance website, I can find the single ETFs (e.g. C001).</p> <p>However, if I try to pull the data using python pandas, I get nothing.</p> <pre><code>df = pd.io.data.DataReader('C001','yahoo',start=datetime(2010,1,1), end=date.today()) </code></pre> <p>The code works fine if I use 'AAP' instead of 'C001'. </p> <p>Is there something obvious I am doing wrong? Is there a reason why 'yahoo' works but the ETF ticker symbols don't?</p> <p>Thanks a lot in advance. </p>
<p>I noticed that on yahoo finance there are several tickers for C001 (C001.F, C001.DE and so on).</p> <p>I used some of my own code (which includes the ticker symbol too), and with C001.F (or anything else) it worked fine.</p> <pre><code>import datetime
import pandas as pd
from pandas import DataFrame
from pandas.io.data import DataReader

symbols_list = ['C001.F']
symbols = []
for ticker in symbols_list:
    r = DataReader(ticker, "yahoo", start=datetime.datetime(2010, 1, 1))
    # add a symbol column
    r['Symbol'] = ticker
    symbols.append(r)
# concatenate all the dfs
df = pd.concat(symbols)
print (df)
</code></pre> <p>The result is this:</p> <pre><code>             Open   High    Low  Close  Volume  Adj Close  Symbol
Date
2010-01-12  60.40  60.40  59.10  59.13    5100      59.13  C001.F
2010-01-13  59.30  59.81  59.30  59.81    3300      59.81  C001.F
2010-01-14  59.93  59.93  59.58  59.90     400      59.90  C001.F
2010-01-15  59.81  60.04  58.46  58.54    3400      58.54  C001.F
2010-01-18  58.93  59.09  58.91  59.09    4100      59.09  C001.F
2010-01-19  58.70  59.52  58.48  59.52   16700      59.52  C001.F
2010-01-20  59.39  59.52  58.42  58.46   89300      58.46  C001.F
2010-01-21  58.71  58.83  56.94  57.08   11800      57.08  C001.F
2010-01-22  57.19  57.19  56.17  56.17   14200      56.17  C001.F
2010-01-25  56.32  56.83  56.16  56.21   45700      56.21  C001.F
2010-01-26  55.72  56.60  55.71  56.60    4200      56.60  C001.F
2010-01-27  56.06  56.53  55.92  56.22     300      56.22  C001.F
</code></pre> <p>Is that what you wanted to accomplish? (If you don't want the symbol column, drop the lines after the "# add a symbol column" comment and print <code>r</code> instead of <code>df</code>.) I use a symbol column because I need to see the symbols when I retrieve multiple tickers.</p>
python|pandas|yahoo-finance
1
376,377
28,207,077
Why does inserting a dimension of size 1 into a numpy array invalidate its 'contiguous' flag?
<p>Consider this array:</p> <pre><code>In [1]: a = numpy.array([[1,2],[3,4]], dtype=numpy.uint8) In [2]: a.strides Out[2]: (2, 1) In [3]: a.flat[:] Out[3]: array([1, 2, 3, 4], dtype=uint8) In [4]: a.flags['C_CONTIGUOUS'] Out[4]: True In [5]: numpy.getbuffer(a)[:] Out[5]: '\x01\x02\x03\x04' </code></pre> <p>So far, so good. But watch what happens when I create a view of that array, in which I insert a dimension of size 1:</p> <pre><code>In [6]: b = a[:, numpy.newaxis, :] # Insert dimension In [7]: b.strides Out[7]: (2, 0, 1) In [8]: b.flat[:] Out[8]: array([1, 2, 3, 4], dtype=uint8) In [9]: b.flags['C_CONTIGUOUS'] Out[9]: False In [10]: numpy.getbuffer(b)[:] --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /.../&lt;ipython-input-28-0127b71fae43&gt; in &lt;module&gt;() ----&gt; 1 numpy.getbuffer(b)[:] TypeError: single-segment buffer object expected </code></pre> <p>What gives? Why does <code>numpy</code> think that <code>b</code> isn't <code>C_CONTIGUOUS</code>? It definitely is, right? Or am I missing something?</p> <p><strong>Update:</strong> @senderle points out that <code>numpy.reshape()</code> works as expected:</p> <pre><code>In [11]: b = numpy.reshape(a, (2,1,2)) In [12]: b.flags['C_CONTIGUOUS'] Out[12]: True </code></pre> <p>That's strange, I would have expected the view to be the same in both cases.</p>
<p>This is a bit mysterious, and I even dug into the source code a bit before I saw <a href="https://stackoverflow.com/questions/28207077/why-does-inserting-a-dimension-of-size-1-into-a-numpy-array-invalidate-its-cont#comment44782376_28207077">hpaulj</a>'s comment. His observation that <code>reshape</code> and slicing with <code>newaxis</code> produce different strides shows what <em>causes</em> this behavior -- answering the why question. </p> <p>But there still remains the question of what <em>motivates</em> this behavior -- the Why question. I don't have any solid evidence for this, but my intuition is that <code>numpy</code> does everything it can to quickly ensure that slices are views, not copies, and in this case, that demands breaking c contiguity. The issue is that it may be difficult to determine how to add a <em>correctly strided</em> dimension to an array with strange strides; it might even be impossible in some cases. But you can always add a dimension with stride <code>0</code> to <em>any</em> array. So rather than special casing, checking for c contiguity and other possible stride arrangements, <code>numpy</code> simply adds a stride <code>0</code> dimension.</p> <p>This laziness makes sense because it's simpler, and (possibly) because it's faster (though a single check for c contiguity wouldn't cost a lot of time). I think the simplicity explanation takes precedence here -- things would get very complicated very quickly in these kinds of situations. </p> <p><code>reshape</code>, on the other hand, needs to be able to produce arrays of arbitrary shape, so it respects contiguity requirements, and makes copies when it must to do so. </p>
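<p>If you need the single-segment buffer and don't mind a possible copy, <code>numpy.ascontiguousarray</code> normalizes the strides. Continuing the session above, with the question's <code>b</code>:</p> <pre><code>In [913]: c = np.ascontiguousarray(b)   # may copy; strides become standard
In [914]: c.flags['C_CONTIGUOUS']
Out[914]: True
In [915]: c.strides
Out[915]: (2, 2, 1)
</code></pre>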
python|numpy
0
376,378
28,196,476
Pandas Boolean indexing with two dataframes
<p>I have two pandas dataframes:</p> <pre><code>df1
   'A'  'B'
0   0    0
1   0    2
2   1    1
3   1    1
4   1    3

df2
   'ID' 'value'
0   0     62
1   1     70
2   2     76
3   3     4674
4   4     3746
</code></pre> <p>I want to assign <code>df2.value</code> as a new column <code>D</code> to df1, but just when <code>df.A == 0</code>. <code>df1.B</code> and <code>df2.ID</code> are supposed to be the identifiers.</p> <p>Example output:</p> <pre><code>df1
   'A'  'B'  'D'
0   0    0    62
1   0    2    76
2   1    1    NaN
3   1    1    NaN
4   1    3    NaN
</code></pre> <p>I tried the following:</p> <pre><code>df1['D'][ df1.A == 0 ] = df2['value'][df2.ID == df1.B] 
</code></pre> <p>However, since df2 and df1 don't have the same length, I get a ValueError:</p> <pre><code>ValueError: Series lengths must match to compare
</code></pre> <p>This is quite certainly due to the boolean indexing in the last part: <code>[df2.ID == df1.B]</code></p> <p>Does anyone know how to solve the problem without needing to iterate over the dataframe(s)?</p> <p>Thanks a bunch!</p> <p>==============</p> <p>Edit in reply to @EdChum: It worked perfectly with the example data, but I have issues with my real data. df1 is a huge dataset. df2 looks like this:</p> <pre><code>df2
    ID       value
0    1     1.00000
1    2     1.00000
2    3     1.00000
3    4     1.00000
4    5     1.00000
5    6     1.00000
6    7     1.00000
7    8     1.00000
8    9     0.98148
9   10     0.23330
10  11     0.56918
11  12     0.53251
12  13     0.58107
13  14     0.92405
14  15     0.00025
15  16     0.14863
16  17     0.53629
17  18     0.67130
18  19     0.53249
19  20     0.75853
20  21     0.58647
21  22     0.00156
22  23     0.00000
23  24     0.00152
24  25     1.00000
</code></pre> <p>After doing the merging, the output is the following: first 133 times 0.98148, then 47 times 0.00025 and then it continues with more sequences of values from df2 until finally a sequence of NaN entries appears...</p> <pre><code>Out[91]: 
df1
      A     B        D
0     1     3  0.98148
1     0     9  0.98148
2     0     9  0.98148
3     0     7  0.98148
5     1    21  0.98148
7     1    12  0.98148
...  ..   ...      ...
2592  0     2      NaN
2593  1    17      NaN
2594  1    16      NaN
2596  0    17      NaN
2597  0     6      NaN
</code></pre> <p>Any idea what might have happened here? They are all int64.</p> <p>==============</p> <p>Here are two CSVs with data that reproduce the problem.</p> <p>df1: <a href="https://owncloud.tu-berlin.de/public.php?service=files&amp;t=2a7d244f55a5772f16aab364e78d3546" rel="nofollow">https://owncloud.tu-berlin.de/public.php?service=files&amp;t=2a7d244f55a5772f16aab364e78d3546</a></p> <p>df2: <a href="https://owncloud.tu-berlin.de/public.php?service=files&amp;t=6fa8e0c2de465cb4f8a3f8890c325eac" rel="nofollow">https://owncloud.tu-berlin.de/public.php?service=files&amp;t=6fa8e0c2de465cb4f8a3f8890c325eac</a></p> <p>To reproduce:</p> <pre><code>import pandas as pd
df1 = pd.read_csv("../../df1.csv")
df2 = pd.read_csv("../../df2.csv")
df1['D'] = df1[df1.A == 0].merge(df2,left_on='B', right_on='ID', how='left')['value']
</code></pre>
<p>Slightly tricky, this one; there are 2 steps here: first select only the rows in df where 'A' is 0, then merge the other df to this where 'B' and 'ID' match, performing a 'left' merge; then select the 'value' column from the result and assign it to the df:</p> <pre><code>In [142]:
# merge using 'B' and 'ID' columns
df['D'] = df[df.A == 0].merge(df1, left_on='B',right_on='ID', how='left')['value']
df
Out[142]:
   A  B   D
0  0  0  62
1  0  2  76
2  1  1 NaN
3  1  1 NaN
4  1  3 NaN
</code></pre> <p>Breaking this down will show what is happening:</p> <pre><code>In [143]:
# boolean mask on condition
df[df.A == 0]
Out[143]:
   A  B   D
0  0  0  62
1  0  2  76

In [144]:
# merge using 'B' and 'ID' columns
df[df.A == 0].merge(df1, left_on='B',right_on='ID', how='left')
Out[144]:
   A  B   D  ID  value
0  0  0  62   0     62
1  0  2  76   2     76
</code></pre> <p>After all the above you can then assign directly: </p> <pre><code>df['D'] = df[df.A == 0].merge(df1, left_on='B',right_on='ID', how='left')['value']
</code></pre> <p>This works as it will align with the left hand side index, so any missing values will automatically be assigned <code>NaN</code>.</p> <p><strong>EDIT</strong></p> <p>Another method, and one that seems to work for your real data, is to use <code>map</code> to perform the lookup for you. <code>map</code> accepts a dict or Series as a param and will look up the corresponding value; in this case you need to set the index to the 'ID' column, which reduces the df to a Series of just the 'value' column:</p> <pre><code>df['D'] = df[df.A==0]['B'].map(df1.set_index('ID')['value'])
</code></pre> <p>So the above performs boolean indexing as before and then calls <code>map</code> on the 'B' column, which looks up the corresponding 'value' in the other df after we set the index on 'ID'.</p> <p><strong>Update</strong></p> <p>I looked at your data and my first method, and I can see why this fails: the alignment to the left hand side df fails, so you get 1192 values in a continuous run and then the rest of the rows are <code>NaN</code> up to row 2500.</p> <p>What does work is if you apply the same mask to the left hand side like so:</p> <pre><code>df1.loc[df1.A==0, 'D'] = df1[df1.A == 0].merge(df2,left_on='B', right_on='ID', how='left')['value']
</code></pre> <p>So this masks the rows on the left hand side correctly and assigns the result of the merge.</p>
python|python-3.x|pandas
3
376,379
73,468,198
Shuffling of time series data in pytorch-forecasting
<p>I am using pytorch-forecasting for count time series. I have some date information such as hour of day, day of week, day of month, etc.</p> <p>When I assign these as categorical variables in <strong>TimeSeriesDataSet</strong> using <strong>time_varying_known_categoricals</strong>, the training.data['categoricals'] values seem shuffled and not in the same order as the target. Why is that?</p> <p>The pandas dataframe looks like this before going through <strong>TimeSeriesDataSet</strong>:</p> <p><a href="https://i.stack.imgur.com/u61V7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u61V7.png" alt="enter image description here" /></a></p> <p>After the following code:</p> <p><a href="https://i.stack.imgur.com/llhDG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/llhDG.png" alt="enter image description here" /></a></p> <p>Why has the hour_of_day column changed to 0, 1, 12, 17?</p>
<p>Actually, the <strong>time_varying_known_categoricals</strong> are NOT shuffled. The categories assigned to them are just not in order (e.g. 1 for the 1st hour, 2 for the 2nd hour, etc.); that's why it feels like it has shuffled the time series. I tried to align the &quot;hour_of_day&quot; categorical variable for 3 days. I noticed that the encoding for each hour matches correctly for each day, so there is no shuffling. This information should be mentioned in the docstring at least. It would save a lot of time and confusion.</p> <p><a href="https://i.stack.imgur.com/jBcvK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jBcvK.png" alt="enter image description here" /></a></p>
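<p>If you want to convince yourself on your own data, here is a generic sanity check. The variable names are hypothetical; extract the raw and encoded columns however your version of the library exposes them:</p> <pre><code>import pandas as pd

# raw_hours: the original hour_of_day column; encoded_hours: the codes that
# end up in training.data['categoricals'] for that variable (hypothetical names)
aligned = pd.DataFrame({"raw": raw_hours, "encoded": encoded_hours})
print(aligned.groupby("raw")["encoded"].nunique())  # every value should be 1
</code></pre>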
time-series|pytorch-forecasting
0
376,380
73,461,557
How to reward for two parameters in reinforcement learning?
<p>I have two boxes that should touch each other in a straight line, so I have tried two approaches to the reward:</p> <p>Approach 1: reward when the distance is decreasing. This approach works well in 50% of events after 100 million steps of training. The problem is that the two boxes do not touch each other completely straight, and it fails in the other 50%.</p> <p>Approach 2: reward when the distance is decreasing and the radius difference between the two boxes is decreasing. Here is the method:</p> <pre><code>if(distance &lt; lastDistance)
  AddReward(...)
lastDistance = distance;

if(radius &lt; lastRadius )
  AddReward(...)
lastRadius = radius;
</code></pre> <p>The problem with approach 2 is that box 1, which is moving, is only rotating after 10 million steps and is not even decreasing the distance.</p> <p>So how can I reward for multi-parameter (distance, radius) problems?</p>
<p>Maybe you can try to use only the first approach and improve it.</p> <p>You can add more conditions to help your agent reach the goal.</p> <p>For example:</p> <pre><code>reward = 0
if(distance &lt; lastDistance)
    reward += 1
lastDistance = distance

if(distance &lt; 5)
    reward += 5

if(distance &lt; 3)
    reward += 10
</code></pre> <p>PS: the values are chosen arbitrarily. Change them to reach the goal you want.</p>
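<p>If you do want to combine the two objectives, one option is a single shaped reward with weights, sketched here in Python-style pseudocode (the weights are hypothetical and need tuning):</p> <pre><code># hypothetical weights to tune; a single shaped reward keeps the agent
# from optimizing one objective while ignoring the other
w_dist, w_align = 1.0, 0.5

def shaped_reward(distance, last_distance, radius, last_radius):
    r = w_dist * (last_distance - distance)   # &gt; 0 when the boxes get closer
    r += w_align * (last_radius - radius)     # &gt; 0 when alignment improves
    return r
</code></pre>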
tensorflow|machine-learning|pytorch
0
376,381
73,258,315
deleting pandas dataframe rows not working
<pre><code>import numpy as np
import pandas as pd
randArr = np.random.randint(0,100,20).reshape(5,4)
df = pd.DataFrame(randArr,np.arange(101,106,1),['PDS', 'Algo','SE','INS'])
df.drop('103',inplace=True)
</code></pre> <p>This code is not working:</p> <pre><code>Traceback (most recent call last):
  File &quot;D:\Education\4th year\1st sem\Machine Learning Lab\1st Lab\python\pandas\pdDataFrame.py&quot;, line 25, in &lt;module&gt;
    df.drop('103',inplace=True)
</code></pre>
<p>The string '103' isn't in the index, but the integer 103 is:</p> <p>Replace <code>df.drop('103',inplace=True)</code> with <code>df.drop(103,inplace=True)</code></p>
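<p>A quick way to see this for yourself:</p> <pre><code>print(df.index)          # integer labels built from np.arange(101, 106)
print(103 in df.index)   # True
print('103' in df.index) # False
</code></pre>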
python-3.x|pandas|dataframe|numpy
0
376,382
73,332,481
How to use grouped rows in pandas
<p>Hello, I have a table with a MultiIndex:</p> <pre><code>Lang                 C++  java  python  All
Corp     Name                              
ASW      ASW         0.0   0.0     5.0    5
Facebook Facebook    8.0   1.0     5.0   14
Google   Google      2.0  24.0     1.0   27
ASW      Cristiano   NaN   NaN     5.0    5
Facebook Cristiano   NaN   NaN     3.0    3
         Michael     NaN   1.0     2.0    3
         Piter       8.0   NaN     NaN    8
Google   Cristiano   NaN   NaN     1.0    1
         Michael     NaN  24.0     NaN   24
         Piter       2.0   NaN     NaN    2
</code></pre> <p>I am trying to use this code:</p> <pre><code>out = df.groupby(level=0).apply(lambda g: g.sort_values('All', ascending=False))
</code></pre> <p>But it adds one more index level. How can I do this without adding an index? I don't want to add and then delete indexes.</p> <p>Thank you in advance!</p>
<p>Add the <code>group_keys=False</code> parameter in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>DataFrame.groupby</code></a>:</p> <pre><code>out = (df.groupby(level=0, group_keys=False)
         .apply(lambda g: g.sort_values('All', ascending=False)))
print (out)
                     C++  java  python  All
Corp     Name                              
ASW      ASW         0.0   0.0     5.0    5
         Cristiano   NaN   NaN     5.0    5
Facebook Facebook    8.0   1.0     5.0   14
         Piter       8.0   NaN     NaN    8
         Cristiano   NaN   NaN     3.0    3
         Michael     NaN   1.0     2.0    3
Google   Google      2.0  24.0     1.0   27
         Michael     NaN  24.0     NaN   24
         Piter       2.0   NaN     NaN    2
         Cristiano   NaN   NaN     1.0    1
</code></pre> <p>A better/faster/simpler solution is sorting by the MultiIndex level and the column:</p> <pre><code>out = df.sort_values(['Corp','All'], ascending=[True, False])
print (out)
                     C++  java  python  All
Corp     Name                              
ASW      ASW         0.0   0.0     5.0    5
         Cristiano   NaN   NaN     5.0    5
Facebook Facebook    8.0   1.0     5.0   14
         Piter       8.0   NaN     NaN    8
         Cristiano   NaN   NaN     3.0    3
         Michael     NaN   1.0     2.0    3
Google   Google      2.0  24.0     1.0   27
         Michael     NaN  24.0     NaN   24
         Piter       2.0   NaN     NaN    2
         Cristiano   NaN   NaN     1.0    1
</code></pre>
python|pandas|group
0
376,383
73,364,110
unable to change date format while loading excel file with pandas
<p>I'm loading an Excel file with pandas using the parse_dates=True parameter. However, I cannot get the date format to change. When I open the file in Excel on my local computer, the date format is accurate, but when the file is loaded in Python, the format is incorrect. The issue is that &quot;dmY acts like mdY for half of the data.&quot; I'm expecting the output as YYYY-mm-dd HH:MM:SS. Below is the code for the data:</p> <pre><code>df = pd.DataFrame({'ID':[2914240.0,2914137.0,2929456.0,2920801.0],
                   'RT_Date':['2021-01-02 06:38:00','2021-01-02 02:58:00','02-19-2021 17:59:00','2021-09-02 12:49:00'],
                   'DateCreated':['01-31-2021 14:38:12','01-31-2021 10:57:50','02-19-2021 01:58:38','2021-09-02 11:35:51']})
</code></pre> <p><a href="https://i.stack.imgur.com/neaS8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/neaS8.png" alt="enter image description here" /></a></p> <p>The code I tried is below:</p> <pre><code>df = pd.read_excel('demo.xlsx', parse_dates=True)
</code></pre> <p>But the above code is not working and returns the same output as shown in the image. Is there an issue with the data/pandas? Can anyone please help me with this?</p>
<p>Using pd.to_datetime() allows for the use of the <code>infer_datetime_format</code> argument, which eases working with datetime values. In this particular case, if you'd like to parse all columns except for ID you can try:</p> <pre><code>df['RT_Date'],df['DateCreated'] = [pd.to_datetime(df[x],infer_datetime_format=True) for x in df if x != 'ID']
</code></pre> <p>Based on your example, it outputs:</p> <pre><code>          ID             RT_Date         DateCreated
0  2914240.0 2021-01-02 06:38:00 2021-01-31 14:38:12
1  2914137.0 2021-01-02 02:58:00 2021-01-31 10:57:50
2  2929456.0 2021-02-19 17:59:00 2021-02-19 01:58:38
3  2920801.0 2021-09-02 12:49:00 2021-09-02 11:35:51
</code></pre> <p>Should you want only the date:</p> <pre><code>df['RT_Date'],df['DateCreated'] = [pd.to_datetime(df[x],infer_datetime_format=True).dt.date for x in df if x != 'ID']
</code></pre> <p>Returning:</p> <pre><code>          ID     RT_Date DateCreated
0  2914240.0  2021-01-02  2021-01-31
1  2914137.0  2021-01-02  2021-01-31
2  2929456.0  2021-02-19  2021-02-19
3  2920801.0  2021-09-02  2021-09-02
</code></pre> <p>EDIT: If there are multiple columns you wish to convert and no simple rule to select them, you can create a list of candidate columns:</p> <pre><code>to_date_cols = ['RT_Date','DateCreated','Additional_Col1']
</code></pre> <p>And use:</p> <pre><code>for x in to_date_cols:
    df[x] = pd.to_datetime(df[x],infer_datetime_format=True)
</code></pre>
python|python-3.x|pandas|dataframe|datetime
2
376,384
73,418,894
How to obtain counts and sums for pairs of values in each row of Pandas DataFrame
<p><strong>Problem:</strong></p> <p>I have a <code>DataFrame</code> like so:</p> <pre><code>import pandas as pd df = pd.DataFrame({ &quot;name&quot;:[&quot;john&quot;,&quot;jim&quot;,&quot;eric&quot;,&quot;jim&quot;,&quot;john&quot;,&quot;jim&quot;,&quot;jim&quot;,&quot;eric&quot;,&quot;eric&quot;,&quot;john&quot;], &quot;category&quot;:[&quot;a&quot;,&quot;b&quot;,&quot;c&quot;,&quot;b&quot;,&quot;a&quot;,&quot;b&quot;,&quot;c&quot;,&quot;c&quot;,&quot;a&quot;,&quot;c&quot;], &quot;amount&quot;:[100,200,13,23,40,2,43,92,83,1] }) </code></pre> <pre><code> name | category | amount ---------------------------- 0 john | a | 100 1 jim | b | 200 2 eric | c | 13 3 jim | b | 23 4 john | a | 40 5 jim | b | 2 6 jim | c | 43 7 eric | c | 92 8 eric | a | 83 9 john | c | 1 </code></pre> <p>I would like to add two new columns: first; the total <code>amount</code> for the relevant <code>category</code> for the <code>name</code> of the row (eg: the value in row <code>0</code> would be <code>140</code>, because <code>john</code> has a total of <code>100 + 40</code> of the <code>a</code> <code>category</code>). Second; the counts of those <code>name</code> and <code>category</code> combinations which are being summed in the first new column (eg: the row <code>0</code> value would be <code>2</code>).</p> <p><strong>Desired output:</strong></p> <p>The output I'm looking for here looks like this:</p> <pre><code> name | category | amount | sum_for_category | count_for_category ------------------------------------------------------------------------ 0 john | a | 100 | 140 | 2 1 jim | b | 200 | 225 | 3 2 eric | c | 13 | 105 | 2 3 jim | b | 23 | 225 | 3 4 john | a | 40 | 140 | 2 5 jim | b | 2 | 225 | 3 6 jim | c | 43 | 43 | 1 7 eric | c | 92 | 105 | 2 8 eric | a | 83 | 83 | 1 9 john | c | 1 | 1 | 1 </code></pre> <p>I don't want to group the data by the features because I want to keep the same number of rows. I just want to tag on the desired value for each row.</p> <p><strong>Best I could do:</strong></p> <p>I can't find a good way to do this. The best I've been able to come up with is the following:</p> <pre><code>names = df[&quot;name&quot;].unique() categories = df[&quot;category&quot;].unique() sum_for_category = {i:{ j:df.loc[(df[&quot;name&quot;]==i)&amp;(df[&quot;category&quot;]==j)][&quot;amount&quot;].sum() for j in categories } for i in names} df[&quot;sum_for_category&quot;] = df.apply(lambda x: sum_for_category[x[&quot;name&quot;]][x[&quot;category&quot;]],axis=1) count_for_category = {i:{ j:df.loc[(df[&quot;name&quot;]==i)&amp;(df[&quot;category&quot;]==j)][&quot;amount&quot;].count() for j in categories } for i in names} df[&quot;count_for_category&quot;] = df.apply(lambda x: count_for_category[x[&quot;name&quot;]][x[&quot;category&quot;]],axis=1) </code></pre> <p>But this is extremely clunky and slow; far too slow to be viable on my actual dataset (roughly 700,000 rows x 10 columns). I'm sure there's a better and faster way to do this... Many thanks in advance.</p>
<p>You need two <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html" rel="nofollow noreferrer"><code>groupby.transform</code></a>:</p> <pre><code>g = df.groupby(['name', 'category'])['amount']

df['sum_for_category'] = g.transform('sum')
df['count_for_category'] = g.transform('size')
</code></pre> <p>output:</p> <pre><code>   name category  amount  sum_for_category  count_for_category
0  john        a     100               140                   2
1   jim        b     200               225                   3
2  eric        c      13               105                   2
3   jim        b      23               225                   3
4  john        a      40               140                   2
5   jim        b       2               225                   3
6   jim        c      43                43                   1
7  eric        c      92               105                   2
8  eric        a      83                83                   1
9  john        c       1                 1                   1
</code></pre>
python|pandas|dataframe
2
376,385
73,455,571
two intervals color df panda red <-3 and green >3
<p>I would like to know how to generate two intervals for my highlighting. I want all the values of my df &lt; -3 in red and all the values &gt; 3 in green; otherwise leave them unchanged, in black.</p> <p>I tried highlight_between and other methods, but they never work:</p> <pre><code>cm = sns.light_palette(&quot;blue&quot;, as_cmap = True)

df = df.style.background_gradient(cmap=cm, low = -0.2, high = 0.5, text_color_threshold = 0.6 ).set_precision(2)

df.style.highlight_between(left = 3, right = 10, color=&quot;green&quot;)\
        .style.highlight_between(left = -10, right = -3, color=&quot;red&quot;)

df = df.style.highlight_min(color = 'red', axis=None)

df = df.style.apply(lambda x: [&quot;background-color:lime&quot; if i&gt;3 \
                    elif &quot;background-color:tomato&quot; if i&lt;-3 else &quot;background-color:black&quot; for i in x], ) ## Axis=1 : Row and Axis=0: Column
</code></pre> <p>I think my problem is adding multiple styles to one df.</p> <p>An example of what I tried:</p> <pre class="lang-py prettyprint-override"><code>def highlight(val):
    green = 'background-color: green' if val &gt; 3 else ''
    red = 'background-color: red' if val &lt; -3 else ''
    return green, red

df.style.applymap(highlight)
</code></pre> <p>The red part doesn't work, but just the green gives me this:</p> <p><a href="https://i.stack.imgur.com/LZpyy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LZpyy.png" alt="screenshot_result" /></a></p>
<pre><code>import pandas as pd
import numpy as np

np.random.seed(0)
df = pd.DataFrame((np.random.rand(5,10) - 0.5) * 20)

df.style.applymap(lambda x: f&quot;background-color: {'lime' if x &gt; 3 else 'tomato' if x &lt; -3 else 'white'}&quot;)
</code></pre> <p><a href="https://i.stack.imgur.com/winXU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/winXU.png" alt="enter image description here" /></a></p> <p>You can also chain styles like this to produce the same result:</p> <pre><code>df.style.applymap(lambda x: f&quot;background-color: {'lime' if x &gt; 3 else ''}&quot;)\
        .applymap(lambda x: f&quot;background-color: {'tomato' if x &lt; -3 else ''}&quot;)
</code></pre>
python|pandas|styles|highlight|intervals
1
376,386
73,211,773
How to fill nans with multiple if-else conditions?
<p>I have a dataset:</p> <pre><code> value score 0 0.0 8 1 0.0 7 2 NaN 4 3 1.0 11 4 2.0 22 5 NaN 12 6 0.0 4 7 NaN 15 8 0.0 5 9 2.0 24 10 1.0 12 11 1.0 15 12 0.0 5 13 2.0 26 14 NaN 28 </code></pre> <p>There are some NaNs in it. I want to fill those NaNs with these conditions:</p> <ul> <li>If 'score' is less than 10, then fill nan with 0.0</li> <li>If 'score' is between 10 and 20, then fill nan with 1.0</li> <li>If 'score' is greater than 20, then fill nan with 2.0</li> </ul> <p>How do I do this in pandas?</p> <p>Here is an example dataframe:</p> <pre><code>value = [0,0,np.nan,1,2,np.nan,0,np.nan,0,2,1,1,0,2,np.nan] score = [8,7,4,11,22,12,4,15,5,24,12,15,5,26,28] pd.DataFrame({'value': value, 'score':score}) </code></pre>
<p>You could use <code>numpy.select</code> with conditions on <code>&lt;10</code>, <code>10≤score&lt;20</code>, etc., but a more efficient version could be to use a floor division so that values below 10 become 0, values below 20 become 1, etc.:</p> <pre><code>df['value'] = df['value'].fillna(df['score'].floordiv(10))
</code></pre> <p>(If scores of 30 or more are possible, add <code>.clip(upper=2)</code> after the <code>floordiv</code> to keep the fill values in range.)</p> <p>with <code>numpy.select</code>:</p> <pre><code>df['value'] = df['value'].fillna(np.select([df['score'].lt(10),
                                            df['score'].between(10, 20),
                                            df['score'].ge(20)],
                                           [0, 1, 2])
                                 )
</code></pre> <p>output:</p> <pre><code>    value  score
0     0.0      8
1     0.0      7
2     0.0      4
3     1.0     11
4     2.0     22
5     1.0     12
6     0.0      4
7     1.0     15
8     0.0      5
9     2.0     24
10    1.0     12
11    1.0     15
12    0.0      5
13    2.0     26
14    2.0     28
</code></pre>
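<p>Yet another option, if you prefer explicit bins, is <code>pd.cut</code>. A sketch; adjust <code>right=</code> depending on which bin the boundary values 10 and 20 should fall into:</p> <pre><code>import numpy as np
import pandas as pd

# bins: [-inf, 10) -&gt; 0, [10, 20) -&gt; 1, [20, inf) -&gt; 2
binned = pd.cut(df['score'], bins=[-np.inf, 10, 20, np.inf],
                labels=[0, 1, 2], right=False).astype(float)
df['value'] = df['value'].fillna(binned)
</code></pre>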
pandas
3
376,387
73,264,482
what is the difference between Sequential and Model([input],[output]) in TensorFlow?
<p>It seems <code>Sequential</code> and <code>Model([input],[output])</code> have the same results when I just build a model layer by layer. However, when I use the following two models with the same input, they give me different results. By the way, the input shape is <code>(None, 15, 2)</code> and the output shape is <code>(None, 1, 2)</code>.<br /> <code>Sequential</code> model:</p> <pre class="lang-py prettyprint-override"><code>model = tf.keras.Sequential(
    [
        tf.keras.layers.Conv1D(filters = 4, kernel_size =7, activation = &quot;relu&quot;),
        tf.keras.layers.Conv1D(filters = 6, kernel_size = 11, activation = &quot;relu&quot;),
        tf.keras.layers.LSTM(100, return_sequences=True,activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.LSTM(100,activation='relu'),
        tf.keras.layers.Dense(2,activation='relu'),
        tf.keras.layers.Reshape((1,2))
    ]
)
</code></pre> <p><code>Model([input],[output])</code> model:</p> <pre class="lang-py prettyprint-override"><code>input_layer = tf.keras.layers.Input(shape=(LOOK_BACK, 2))
conv = tf.keras.layers.Conv1D(filters=4, kernel_size=7, activation='relu')(input_layer)
conv = tf.keras.layers.Conv1D(filters=6, kernel_size=11, activation='relu')(conv)
lstm = tf.keras.layers.LSTM(100, return_sequences=True, activation='relu')(conv)
dropout = tf.keras.layers.Dropout(0.2)(lstm)
lstm = tf.keras.layers.LSTM(100, activation='relu')(dropout)
dense = tf.keras.layers.Dense(2, activation='relu')(lstm)
output_layer = tf.keras.layers.Reshape((1,2))(dense)
model = tf.keras.models.Model([input_layer], [output_layer])
</code></pre> <p>The result of the <code>Sequential</code> model:</p> <p><img src="https://i.stack.imgur.com/X2Igi.png" alt="enter image description here" /></p> <p>mse: 21.679258038588586<br /> rmse: 4.65609901511862<br /> mae: 3.963341420395535</p> <p>And the result of the <code>Model([input],[output])</code> model:<br /> <img src="https://i.stack.imgur.com/4TH7z.png" alt="enter image description here" /></p> <p>mse: 36.85855652774293<br /> rmse: 6.071124815694612<br /> mae: 4.4878270279889065</p>
<p>The <code>Sequential</code> version uses the <a href="https://keras.io/guides/sequential_model/" rel="nofollow noreferrer">Sequential model</a> while the <code>Model([inputs], [outputs])</code> version uses the <a href="https://keras.io/guides/functional_api/" rel="nofollow noreferrer">Functional API</a>.</p> <p>The first is easier to use, but only works for single-input single-output feed-forward models (in the sense of Keras layers).</p> <p>The second is more complex but gets rid of those constraints, allowing you to create many more models.</p> <p>So, your main point is right: any sequential model can be re-written as a functional model. You can double-check this by comparing the architectures with the <a href="https://keras.io/api/models/model/" rel="nofollow noreferrer"><code>summary</code> function</a> and <a href="https://keras.io/api/utils/model_plotting_utils/" rel="nofollow noreferrer">plotting the models</a>.</p> <p>However, this only shows that the architectures are the same, not the weights!</p> <p>Assuming you are fitting both models with the same data and the same <code>compile</code> and <code>fit</code> params (by the way, include those in your question), there is a lot of randomness in the training process which may lead to different results. So, try the following <strong>to compare them</strong> better:</p> <ul> <li>remove as much randomness as possible by setting seeds, in your code and for each layer instantiation (see the sketch after this list).</li> <li>avoid using data augmentation if you are using it.</li> <li>use the same validation/train split for both models: to be sure, you can split the dataset yourself.</li> <li>do not use shuffling in data generators nor during the training.</li> </ul> <p><a href="https://keras.io/getting_started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development" rel="nofollow noreferrer">Here</a> you can read more about producing reproducible results in keras.</p> <p>Even after following those tips, your results may not be deterministic, and hence not the same, so finally, and maybe most importantly: do not compare a single run; train and eval each model several times (for instance, 20) and then compare the average MAE with its standard deviation.</p> <p>If after all this your results are still so different, please update your question with them.</p>
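<p>For the first tip, a minimal seed-setting sketch (assuming TF &gt;= 2.7 for <code>set_random_seed</code> and &gt;= 2.8 for the determinism switch):</p> <pre><code>import tensorflow as tf

tf.keras.utils.set_random_seed(42)              # Python, NumPy and TF seeds at once
tf.config.experimental.enable_op_determinism()  # optional; makes GPU ops deterministic, slower
</code></pre>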
python|tensorflow|keras|deep-learning|lstm
1
376,388
73,328,284
filtering "events" in awkward-array
<p>I am reading data from a file of &quot;events&quot;. For each event, there is some number of &quot;tracks&quot;. For each track there are a series of &quot;variables&quot;. A stripped down version of the code (using awkward0 as awkward) looks like</p> <pre><code>f = h5py.File('dataAA/pv_HLT1CPU_MinBiasMagDown_14Nov.h5',mode=&quot;r&quot;) afile = awkward.hdf5(f) pocaz = np.asarray(afile[&quot;poca_z&quot;].astype(dtype_X)) pocaMx = np.asarray(afile[&quot;major_axis_x&quot;].astype(dtype_X)) pocaMy = np.asarray(afile[&quot;major_axis_y&quot;].astype(dtype_X)) pocaMz = np.asarray(afile[&quot;major_axis_z&quot;].astype(dtype_X)) </code></pre> <p>In this snippet of code, &quot;pocaz&quot;, &quot;pocaMx&quot;, etc. are what I have called variables (a physics label, not a Python data type). On rare occasions, pocaz takes on an extreme value, pocaMx and/or pocaMy take on nan values, and/or pocaMz takes on the value inf. I would like to remove these tracks from the events using some syntactically simple method. I am guessing this functionality exists (perhaps in the current version of awkward but not awkward0), but cannot find it described in a transparent way. Is there a simple example anywhere?</p> <p>Thanks, Mike</p>
<p>It looks to me, from the fact that you're able to call <code>np.asarray</code> on these arrays without error, that they are one-dimensional arrays of numbers. If so, then Awkward Array isn't doing anything for you here; you should be able to find the one-dimensional NumPy arrays inside</p> <pre class="lang-py prettyprint-override"><code>f[&quot;poca_z&quot;], f[&quot;major_axis_x&quot;], f[&quot;major_axis_y&quot;], f[&quot;major_axis_z&quot;]
</code></pre> <p>as groups (note that this is <code>f</code>, not <code>afile</code>) and leave Awkward Array entirely out of it.</p> <p>The reason I say that is because you can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.isfinite.html" rel="nofollow noreferrer">np.isfinite</a> on these NumPy arrays. (There's an equivalent in Awkward v1, v2, but you're talking about Awkward v0 and I don't remember.) That will give you an array of booleans for you to slice these arrays.</p> <p>I don't have the HDF5 file for testing, but I think it would go like this:</p> <pre class="lang-py prettyprint-override"><code>f = h5py.File('dataAA/pv_HLT1CPU_MinBiasMagDown_14Nov.h5',mode=&quot;r&quot;)

pocaz = np.asarray(f[&quot;poca_z&quot;][&quot;0&quot;], dtype=dtype_X)
pocaMx = np.asarray(f[&quot;major_axis_x&quot;][&quot;0&quot;], dtype=dtype_X)   # the only array
pocaMy = np.asarray(f[&quot;major_axis_y&quot;][&quot;0&quot;], dtype=dtype_X)   # in each group
pocaMz = np.asarray(f[&quot;major_axis_z&quot;][&quot;0&quot;], dtype=dtype_X)   # is named &quot;0&quot;

good = np.ones(len(pocaz), dtype=bool)
good &amp;= np.isfinite(pocaz)
good &amp;= np.isfinite(pocaMx)
good &amp;= np.isfinite(pocaMy)
good &amp;= np.isfinite(pocaMz)

pocaz[good], pocaMx[good], pocaMy[good], pocaMz[good]
</code></pre> <p>If you also need to cut extreme finite values, you can include</p> <pre class="lang-py prettyprint-override"><code>good &amp;= (-1000 &lt; pocaz) &amp; (pocaz &lt; 1000)
</code></pre> <p>etc. in the <code>good</code> selection criteria.</p> <p>(The way you'd do this in Awkward Array is not any different, since Awkward is just generalizing what NumPy does here, but if you don't need it, you might as well leave it out.)</p>
numpy|hdf5|awkward-array
0
376,389
73,308,149
Meaning of output shapes of ResNet9 model layers
<p>I have a ResNet9 model, implemented in Pytorch which I am using for multi-class image classification. My total number of classes is 6. Using the following code, from torchsummary library, I am able to show the summary of the model, seen in the attached image:</p> <p><code>INPUT_SHAPE = (3, 256, 256) #input shape of my image</code></p> <p><code>print(summary(model.cuda(), (INPUT_SHAPE)))</code></p> <p><a href="https://i.stack.imgur.com/j4XGT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j4XGT.png" alt="enter image description here" /></a></p> <p>However, I am quite confused about the -1 values in all layers of the ResNet9 model. Also, for Conv2d-1 layer, I am confused about the 64 value in the output shape [-1, 64, 256, 256] as I believe the <code>n_channels</code> value of the input image is 3. Can anyone please help me with the explanation of the output shape values? Thanks!</p>
<p>Yes.<br /> Your <code>INPUT_SHAPE</code> is <code>torch.Size([3, 256, 256])</code> in channels-first format AND <code>(256, 256, 3)</code> in channels-last format.<br /> Since PyTorch models accept input in channels-first format, it shows <code>torch.Size([3, 256, 256])</code> for you.</p> <p>As for the <code>output shape [-1, 64, 256, 256]</code>: this is the output shape of your first <code>conv</code> layer, which has 64 filters, each producing a 256x256 feature map; it is not your <code>input_shape</code>. <code>-1</code> represents the variable <code>batch_size</code>, which can be fixed in the <code>dataloader</code>.</p>
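<p>As a sanity check, the usual conv output-size arithmetic reproduces the summary's numbers. A sketch; it assumes kernel_size=3, stride=1, padding=1, which is what ResNet9 implementations typically use for the stem:</p> <pre><code># assuming kernel_size=3, stride=1, padding=1 for the first conv layer
h_out = (256 + 2 * 1 - 3) // 1 + 1   # = 256, so the spatial size is preserved
# output shape is then [batch, out_channels, h_out, w_out] = [-1, 64, 256, 256]
</code></pre>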
python|pytorch|resnet|modelsummary
1
376,390
73,366,608
How can I import resnet_rs module from keras?
<p>I was trying to import the resnet_rs module from keras, but couldn't find a way to do it via keras.applications. I am using TensorFlow 2.8.</p> <p>I tried the following:</p> <pre><code>from tensorflow.keras.applications.resnet_rs import ResNetRS50
</code></pre> <p>and got a &quot;no module&quot; error.</p> <p>Then I tried to use it via the API and again got an error:</p> <pre><code>tf.keras.applications.resnet_rs.ResNetRS50(
    include_top=True,
    weights='imagenet',
    classes=1000,
    input_shape=None,
    input_tensor=None,
    pooling=None,
    classifier_activation='softmax',
    include_preprocessing=True
)
</code></pre> <p>So, how can I import resnet_rs (as we do for other resnet imports)? Any suggestions or pointers will be appreciated.</p>
<p>What version of Tensorflow are you on? I was on Tensorflow 2.8.2 and it wasn't working for me either. I switched to 2.9.1, and it fixed the import. I think those modules were added only recently. That's why previous versions won't work.</p> <p>To upgrade to the latest version:</p> <pre><code>pip install tensorflow --upgrade </code></pre>
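<p>After upgrading, the import from the question should work. A sketch, assuming TF/Keras &gt;= 2.9 where the module was added:</p> <pre><code>from tensorflow.keras.applications.resnet_rs import ResNetRS50

model = ResNetRS50(weights=None)   # weights=None skips the imagenet download
model.summary()
</code></pre>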
python|tensorflow|keras
1
376,391
73,209,775
Time Series data to fit for ConvLSTM
<p>I used stock data with 4057 samples, made it into 28 time steps, with 25 features.</p> <pre><code>TrainX shape: (4057, 28, 25)
</code></pre> <p>The target consists of 5 integer categories:</p> <pre><code>[0,1,2,3,4]
</code></pre> <p>and I reshape the inputs into:</p> <pre><code>trainX_reshape= trainX.reshape(4057,1, 28,25,1)
testX_reshape= testX.reshape(1334,1, 28,25,1)
</code></pre> <p>Trying to fit the model:</p> <pre><code>seq =Sequential([
    ConvLSTM2D(filters=40, kernel_size=(3, 3),input_shape=(1, 28, 25, 1),padding='same', return_sequences=True),
    BatchNormalization(),
    ConvLSTM2D(filters=40, kernel_size=(3, 3),padding='same', return_sequences=True),
    BatchNormalization(),
    ConvLSTM2D(filters=40, kernel_size=(3, 3),padding='same', return_sequences=True),
    BatchNormalization(),
    ConvLSTM2D(filters=40, kernel_size=(3, 3),padding='same', return_sequences=True),
    BatchNormalization(),
    Conv3D(filters=5, kernel_size=(3, 3, 3),activation='sigmoid',padding='same', data_format='channels_last')
])
</code></pre> <p>I compile with:</p> <pre><code>seq.compile(loss='sparse_categorical_crossentropy', optimizer='rmsprop')
history = seq.fit(trainX_reshape, trainY,
                epochs=10, batch_size= 128,
                shuffle=False, verbose = 1,
                validation_data=(testX_reshape, testY),
                # validation_split=0.2)
</code></pre> <p>and it gives an error:</p> <pre><code>InvalidArgumentError: Graph execution error:
</code></pre> <p>How can I fix this? I've tried many methods but have no clue.</p> <p>The code and data are at: <a href="https://drive.google.com/drive/folders/1WDa_CUO1Mr7wZTqE3wHsR0Tp_3NRMcZ6?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/drive/folders/1WDa_CUO1Mr7wZTqE3wHsR0Tp_3NRMcZ6?usp=sharing</a></p> <p>It runs on Colab.</p>
<p>Your model's output does not make sense if you are working with sparse integer labels. It is 5D while your labels are 2D (including the batch size). Try:</p> <pre><code>seq =Sequential([
    ConvLSTM2D(filters=40, kernel_size=(3, 3),input_shape=(1, 28, 25, 1),padding='same', return_sequences=True),
    BatchNormalization(),
    ConvLSTM2D(filters=40, kernel_size=(3, 3),padding='same', return_sequences=True),
    BatchNormalization(),
    ConvLSTM2D(filters=40, kernel_size=(3, 3),padding='same', return_sequences=True),
    BatchNormalization(),
    ConvLSTM2D(filters=40, kernel_size=(3, 3),padding='same', return_sequences=True),
    BatchNormalization(),
    Conv3D(filters=5, kernel_size=(3, 3, 3),padding='same', data_format='channels_last'),
    GlobalMaxPooling3D(),
    Dense(5, activation='softmax')
])
</code></pre> <p>Note that in your solution the output of your model is reshaped to a 2D tensor to be compatible with <code>sparse_categorical_crossentropy</code>, but this reshaping destroys the batch size. So you get the logits shape <code>[89600,5]</code> and labels shape <code>[128]</code>, although it should be <code>[128, 5]</code> and <code>[128]</code>. If your model's output were, for example, <code>[128, 1, 1, 1, 5]</code>, it would probably be possible.</p>
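<p>A quick shape sanity check for the fixed model (assuming the <code>seq</code> defined above):</p> <pre><code>import tensorflow as tf

dummy = tf.zeros((128, 1, 28, 25, 1))   # a fake batch with the question's input shape
print(seq(dummy).shape)                 # expect (128, 5): one class vector per sample
</code></pre>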
python|tensorflow|keras
2
376,392
73,494,634
Converting dataframe column from Pandas Timestamp to datetime (or datetime.date)
<p><em>The bjillion python time formats cause more lost time than anything I do.</em></p> <p>Reading a file or a sql query into a dataframe gives me a column of Pandas Timestamp (i.e. type = pandas._libs.tslibs.timestamps.Timestamp), not the 'normal' timestamps. Answers I find do not address the combination of this particular &quot;timestamp&quot; type with columns in dataframes.</p> <p>The below solution works, but does a more compact or 'pythonic' conversion exist?</p> <pre><code>df['date'] = [pd.Timestamp.to_pydatetime(x).date() for x in df['pdTimeStamp']]
df['datetime'] = [pd.Timestamp.to_pydatetime(x) for x in df['pdTimeStamp']]
</code></pre> <p>And the below doesn't work because of the wrong 'timestamp' type (typical of the online answers, which almost uniformly address the usual 'timestamp' type):</p> <pre><code>df['date'] = df['pdTimeStamp'].apply(lambda d: datetime.date.fromtimestamp(d))

TypeError: an integer is required (got type Timestamp)
</code></pre>
<p>My assumption without a sample df is you can use:</p> <pre><code>df['date'] = df['pdTimeStamp'].apply(lambda x: pd.Timestamp(x).strftime('%Y-%m-%d')) </code></pre>
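<p>If the column is already (or can be made) datetime64, the <code>.dt</code> accessor may be a more compact route to actual datetime/date objects, which is what the question asked for:</p> <pre><code>df['datetime'] = pd.to_datetime(df['pdTimeStamp'])  # no-op if already datetime64
df['date'] = df['datetime'].dt.date                 # python datetime.date objects
</code></pre>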
python|pandas|dataframe|datetime|timestamp
0
376,393
73,328,310
Concatenate values in a dataframe with value in preceding column on same row - Python
<p>I am trying to concatenate the values in a cell with the values in its preceding cell on the same row, i.e. one column before it, throughout my dataframe. Of course, the first column's values won't have anything to concatenate with. Also, my df has NaN values, which I have changed to None.</p> <p><a href="https://i.stack.imgur.com/nj2zI.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nj2zI.jpg" alt="enter image description here" /></a></p> <p>Any help would be appreciated.</p> <p>Thanks in advance.</p>
<p>Try with <code>add</code> then <code>cumsum</code></p> <pre><code>out = df.add('_').apply(lambda x : x[x.notna()].cumsum().str[:-1],axis=1) Out[871]: 1 2 3 4 5 0 a a_b a_b_c a_b_c_d a_b_c_d_e 1 a a_e a_e_f NaN NaN </code></pre>
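<p>For readers puzzled by the one-liner, here is a commented restatement of what each piece does:</p> <pre><code># what the one-liner does, row by row:
# df.add('_')     -&gt; appends '_' to every cell: 'a_', 'b_', ...
# x[x.notna()]    -&gt; keeps only the non-NaN cells of that row
# .cumsum()       -&gt; cumulative string sum: 'a_', 'a_b_', 'a_b_c_', ...
# .str[:-1]       -&gt; strips the trailing '_'
</code></pre>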
python|pandas|dataframe|concatenation
2
376,394
73,444,928
Combining CSVs with Python issue
<p>I'm trying to combine a bunch of CSVs in a folder into one using Python. Each CSV has 9 columns but no headers. When they combine, some 'sheets' are spread far to the right in the sheet. So it seems they are not combining properly.</p> <p>Please see code below</p> <pre><code>## Merge Multiple 1M Rows CSV files import os import pandas as pd # 1. defines path to csv files path = &quot;C://halfordsCSV//new//Archive1/&quot; # 2. creates list with files to merge based on name convention file_list = [path + f for f in os.listdir(path) if f.startswith('greyville_po-')] # 3. creates empty list to include the content of each file converted to pandas DF csv_list = [] # 4. reads each (sorted) file in file_list, converts it to pandas DF and appends it to the csv_list for file in sorted(file_list): csv_list.append(pd.read_csv(file).assign(File_Name = os.path.basename(file))) # 5. merges single pandas DFs into a single DF, index is refreshed csv_merged = pd.concat(csv_list, ignore_index=True) # 6. Single DF is saved to the path in CSV format, without index column csv_merged.to_csv(path + 'halfordsOrders.csv', index=False) </code></pre> <p>It should be sticking to the same number of columns. Any idea what might be going wrong?</p>
<p>First, please check that the separator and delimiter are fine in pandas.read_csv; the defaults are ',' and None. You can pass them like this, for example:</p> <pre><code>pandas.read_csv(&quot;my_file_path&quot;, sep=';', delimiter=',')
</code></pre> <p>If they are already OK for your csv files, try cleaning the dataframes before concatenating them.</p> <p>Replace:</p> <pre><code>for file in sorted(file_list):
    csv_list.append(pd.read_csv(file).assign(File_Name = os.path.basename(file)))
</code></pre> <p>with:</p> <pre><code> nan_value = float(&quot;NaN&quot;)
 for file in sorted(file_list):
    my_df = pd.read_csv(file)
    my_df = my_df.assign(File_Name = os.path.basename(file))  # assign returns a new frame
    my_df.replace(&quot;&quot;, nan_value, inplace=True)
    my_df.dropna(how='all', axis=1, inplace=True)
    csv_list.append(my_df)
</code></pre>
python|pandas|csv
0
376,395
73,505,129
Concatenate strings in dataframe rows (Python - pandas)
<p>Let's say I have the following dataframe d1:</p> <pre><code>d1 = pd.DataFrame(data = {'col1': [&quot;A&quot;, &quot;C&quot;], 'col2': [&quot;B&quot;, &quot;D&quot;]}) </code></pre> <p><a href="https://i.stack.imgur.com/wKYbQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wKYbQ.png" alt="enter image description here" /></a></p> <p>I want to built a dataframe d2 with a single row. That row would concatenate the values of d1's rows, separated by a space. That is:</p> <pre><code>d2 = pd.DataFrame(data = {'col1': [&quot;A B&quot;], 'col2': [&quot;C D&quot;]}) </code></pre> <p><a href="https://i.stack.imgur.com/eecxr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eecxr.png" alt="enter image description here" /></a></p> <p>What should I do?</p>
<p>You can also use <code>agg</code> with <code>' '.join</code>. Note that by default <code>agg</code> works column-wise; to join each <em>row</em> (as in your desired output) pass <code>axis=1</code>. It returns a <code>pd.Series</code>, so convert it to a dataframe with <code>pd.Series.to_frame</code> and restore the column names:</p> <pre><code>d2 = d1.agg(' '.join, axis=1).to_frame().T
d2.columns = d1.columns
</code></pre>
python|pandas
3
376,396
73,246,897
IndexError: single positional indexer is out-of-bounds (iloc[1: , :])
<p>This will be tricky, but I need your help. To sum up, my coworker in charge of data left, and there is an unsolved bug in his Python requests. I have close to 0 knowledge of this language, and I didn't write these Python requests, so I can't figure out the issue. Here is the code:</p> <pre class="lang-py prettyprint-override"><code>churn_list = []

for i in Version:
  churn_temp = churn_data[churn_data['Version']==i].sort_values('Level').reset_index()
  churn_temp = churn_temp.iloc[1: , :]
  **churn_temp = churn_calculation(churn_temp)**

  # Plotting Churn ## 
  import plotly.graph_objects as go
  fig = go.Figure()
  fig.add_trace(go.Bar(x=churn_temp[churn_temp['Level']&lt;15]['Level'],
                  y=churn_temp[churn_temp['Level']&lt;15]['Churn_perc'],
                  name=&quot;drop off in %&quot;))
  fig.add_trace(go.Scatter(x=churn_temp[churn_temp['Level']&lt;15]['Level'],
                      y=churn_temp[churn_temp['Level']&lt;15]['Cumulative_Churn_perc'],
                      name=&quot;Cumulative drop off in %&quot;))
  fig.update_layout(
      autosize=False,
      width=1000,
      height=500,
      template='simple_white',
      title_text='Churn Visualisation for version '+i,
      yaxis_title=&quot;drop off in %&quot;,
      xaxis_title=&quot;Level&quot;,
      title_x=0.5)
  fig.show()
  churn_list.append(churn_temp)
</code></pre> <p>It's code to calculate churn, based on a BigQuery request. It worked fine in the first days, and then this happened out of the blue. The error is at <code>churn_temp = churn_calculation(churn_temp)</code>. Here's the error:</p> <pre><code>---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
&lt;ipython-input-73-ddc2b4335820&gt; in &lt;module&gt;()
      3   churn_temp = churn_data[churn_data['Version']==i].sort_values('Level').reset_index()
      4   churn_temp = churn_temp.iloc[1: , :]
----&gt; 5   churn_temp = churn_calculation(churn_temp)
      6 
      7   # Plotting Churn

5 frames
&lt;ipython-input-56-4b23d0a07bff&gt; in churn_calculation(df)
      4   df.loc[:,'Cumulative Churn']=df.loc[:,'Chrun'].expanding(1).sum()
      5   df.loc[:,'Churn_perc']=round((df.loc[:,'Chrun']/df.loc[:,'Players'])*100,2)
----&gt; 6   df.loc[:,'Cumulative_Churn_perc']=round((df.loc[:,'Cumulative Churn']/df.iloc[0,3])*100,2)
      7   return df

/usr/local/lib/python3.7/dist-packages/pandas/core/indexing.py in __getitem__(self, key)
    923                 with suppress(KeyError, IndexError):
    924                     return self.obj._get_value(*key, takeable=self._takeable)
--&gt; 925             return self._getitem_tuple(key)
    926         else:
    927             # we by definition only have the 0th axis

/usr/local/lib/python3.7/dist-packages/pandas/core/indexing.py in _getitem_tuple(self, tup)
   1504     def _getitem_tuple(self, tup: tuple):
   1505 
-&gt; 1506         self._has_valid_tuple(tup)
   1507         with suppress(IndexingError):
   1508             return self._getitem_lowerdim(tup)

/usr/local/lib/python3.7/dist-packages/pandas/core/indexing.py in _has_valid_tuple(self, key)
    752         for i, k in enumerate(key):
    753             try:
--&gt; 754                 self._validate_key(k, i)
    755             except ValueError as err:
    756                 raise ValueError(

/usr/local/lib/python3.7/dist-packages/pandas/core/indexing.py in _validate_key(self, key, axis)
   1407             return
   1408         elif is_integer(key):
-&gt; 1409             self._validate_integer(key, axis)
   1410         elif isinstance(key, tuple):
   1411             # a tuple should already have been caught by this point

/usr/local/lib/python3.7/dist-packages/pandas/core/indexing.py in _validate_integer(self, key, axis)
   1498         len_axis = len(self.obj._get_axis(axis))
   1499         if key &gt;= len_axis or key &lt; -len_axis:
-&gt; 1500             raise IndexError(&quot;single positional indexer is out-of-bounds&quot;)
   1501 
   1502 # -------------------------------------------------------------------

IndexError: single positional indexer is out-of-bounds
</code></pre> <p>Can someone help me on that matter? I searched for similar issues, but they're not quite the same, so I didn't find a solution. I tried to replace the current &quot;churn_temp.iloc[1: , :]&quot; with the exact number of columns and rows of the database, but it didn't fix the issue.</p> <p>Thanks a ton in advance!</p>
<p>The problem is in your <code>churn_calculation</code> method:</p> <pre class="lang-py prettyprint-override"><code>----&gt; 6   df.loc[:,'Cumulative_Churn_perc']=round((df.loc[:,'Cumulative Churn']/df.iloc[0,3])*100,2)
</code></pre> <p>Here you use <code>df.iloc[0,3]</code>. <code>iloc</code> indexing starts from 0, so this call needs at least one row and four columns; the error means that for some version the filtered <code>df</code> is empty (or has fewer than 4 columns).</p>
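<p>A defensive sketch of the calling loop, so versions that end up with no usable rows are skipped instead of crashing:</p> <pre><code>for i in Version:
    churn_temp = churn_data[churn_data['Version']==i].sort_values('Level').reset_index()
    churn_temp = churn_temp.iloc[1: , :]
    if churn_temp.empty:   # guard: nothing left after dropping the first row
        print('skipping version', i, '- no data to calculate churn on')
        continue
    churn_temp = churn_calculation(churn_temp)
</code></pre>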
python|pandas|indexoutofboundsexception
0
376,397
73,337,400
Why does the error "unrecognized arguments" appear?
<p>When I tried to use the ArgumentParser() class to define the argument &quot;epochs&quot; as the training epochs of my CNN model with PyTorch, the system reported this error. This is my code block:</p> <pre><code># 2.1 define super arguments (training epochs for example)
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-e', '--epochs', default=10, type=int, 
                    help='number of epochs to train the VAE for')
args = vars(parser.parse_args())
epochs = args['epochs']
</code></pre> <p>When I executed it, the error was thrown as:</p> <pre><code>usage: ipykernel_launcher.py [-h] [-e EPOCHS]
ipykernel_launcher.py: error: unrecognized arguments: -f /root/.local/share/jupyter/runtime/kernel-ae55ea85-75ab-4308-8b6a-3319e5b09a40.json
An exception has occurred, use %tb to see the full traceback.

SystemExit: 2
</code></pre> <p>I do not know what this error message means or how to fix it. I never defined any extra argument with the flag &quot;-f&quot;. How can I deal with this error? Many thanks!!</p>
<p>From your error message, I guess you are running the code in the Jupyter notebook environment (the kernel passes its own <code>-f ...json</code> argument, which your parser does not recognize). In a Jupyter notebook, if you want to use argparse, modify the code to the following form:</p> <pre><code>args = vars(parser.parse_args(args=[]))
</code></pre>
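<p>An alternative that still lets you pass real arguments while tolerating Jupyter's extra argv entries is <code>parse_known_args</code>:</p> <pre><code>args, _ = parser.parse_known_args()   # ignores unrecognized entries like -f ...
args = vars(args)
epochs = args['epochs']
</code></pre>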
python|pytorch|arguments
0
376,398
73,425,535
I'm using this Spleeter library for vocal separation but it is not working
<p>I'm using this Spleeter library for vocal separation: <a href="https://github.com/FaceOnLive/Spleeter-Android-iOS" rel="nofollow noreferrer">Spleeter-Android-iOS</a></p> <p>But it gives me 1 instead of 0 when I call the function spleeterSDK.process(wavPath!, outPath: path). I don't know what the problem is.</p> <p>Any help will be appreciated.</p> <pre><code>let ret = spleeterSDK.process(wavPath!, outPath: path) // here the ret should be zero
        
if(ret == 0) {
    let queue = DispatchQueue(label: &quot;process-queue&quot;)
    queue.async {
        while(true) {
            let progress = self.spleeterSDK.progress()
            DispatchQueue.main.async {
                self.progress.text = String(progress) + &quot;%&quot;
            }
            
            usleep(1000 * 1000);
            if(progress == 100) {
                break
            }
        }
        
        self.spleeterSDK.saveOne(url.path + &quot;/record.wav&quot;, stemRatio: UnsafeMutablePointer&lt;Float32&gt;(mutating: self.stemRatio))
        
        DispatchQueue.main.async {
            self.btnProcess.isEnabled = true
            
            do {
                try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
                try AVAudioSession.sharedInstance().setActive(true)
                
                self.player = try AVAudioPlayer(contentsOf: URL(fileURLWithPath: path + &quot;/record.wav&quot;))
                guard let player = self.player else { return }

                player.play()
            } catch let error {
                print(error.localizedDescription)
            }
        }
    }
}
</code></pre> <p><strong>Update:</strong></p> <p>How I'm creating the SDK:</p> <pre><code>spleeterSDK = SpleeterSDK();
let ret = spleeterSDK.createSDK()
spleeterSDK.release()
print(&quot;create SDK: &quot;, ret) // Here it prints 2
</code></pre> <p>This is how I'm getting the wav wavPath:</p> <pre><code>let wavPath = Bundle.main.path(forResource: &quot;_input.wav&quot;, ofType: nil)
</code></pre> <p>This is how I'm getting the wav outPath:</p> <pre><code>let url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
print(url.path)
let path = url.path
</code></pre>
<p>Generally, if a function works, it has requirements on its inputs. We would need to see your inputs to have any chance to know why it doesn't work</p> <pre><code>spleeterSDK.process(wavPath!, outPath: path) </code></pre> <ol> <li>What is wavPath?</li> <li>Is there actually a file there on your device in a place your app can see it?</li> <li>Is it a wav formatted file?</li> <li>What does it have in it?</li> <li>what is path?</li> <li>is it a valid path?</li> <li>Does the folder it reference exist? Do you have permission to write to it?</li> <li>Does spleeterSDK require that you call any other functions before process would work (an initialization?)</li> <li><a href="https://github.com/FaceOnLive/Spleeter-Android-iOS/blob/main/iOS/SpleeterDemo/ViewController.swift" rel="nofollow noreferrer">The demo code</a> says you need to call <code>ret = spleeterSDK.createSDK()</code> -- did you do that? If so, what did it return?</li> <li>Did you run their demo app? Did it work?</li> </ol> <p>You should verify that you checked all of those things.</p>
python|ios|swift|tensorflow|spleeter
1
376,399
73,280,226
pandas remove equal rows by comparing columns in two dataframes
<pre><code>df1 = [['tom', 10, 1.2], ['nick', 15, 1.3], ['juli', 14, 1.4]]
</code></pre> <p><a href="https://i.stack.imgur.com/8SGSR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8SGSR.png" alt="enter image description here" /></a></p> <pre><code>df2 = [['tom', 10, 1.2], ['nick', 15, 1.3], ['juli', 100, 1.4]]
</code></pre> <p><a href="https://i.stack.imgur.com/hbLte.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hbLte.png" alt="enter image description here" /></a></p> <p>When I try to compare and remove equal rows using the code below,</p> <pre class="lang-py prettyprint-override"><code>diff = df1.compare(df2, align_axis=1, keep_equal=True, keep_shape=True).drop_duplicates(
    keep=False).rename(index={'self': 'df1', 'other': 'df2'}, level=-1)
</code></pre> <p>I am getting</p> <p><a href="https://i.stack.imgur.com/64QGc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/64QGc.png" alt="enter image description here" /></a></p> <p>I want to keep only the rows that have any unequal records and remove the remaining ones. That means only the last row should be present in the output, not all rows like below. Please suggest changes.</p> <p><a href="https://i.stack.imgur.com/POiNe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/POiNe.png" alt="enter image description here" /></a></p>
<p>Assuming you want everything from df1 that does not match df2:</p> <pre><code>n_columns = len(df1.columns)
df1[(df1 == df2).apply(sum, axis=1).apply(lambda x: x != n_columns)]
</code></pre>
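<p>An equivalent and arguably more idiomatic form of the same filter:</p> <pre><code>df1[~(df1 == df2).all(axis=1)]   # keep rows where at least one column differs
</code></pre>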
python|pandas|dataframe
0