Dataset schema: Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k)
5,500
68,726,586
Pandas fillna doesn't fill all the missing values in the column
<p>I have a column Sulphate with missing values. I'm trying to fill in the missing values with predictions from a model I made, but it does not fill in all of the missing values, only some of them.</p> <p><a href="https://i.stack.imgur.com/LT8Ze.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LT8Ze.png" alt="enter image description here" /></a></p> <p>I have tried the following, but it only fills in 196 of the missing values, not all of them.</p> <p><a href="https://i.stack.imgur.com/PqOau.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PqOau.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/VmQFf.png" rel="nofollow noreferrer">enter image description here</a></p> <p><a href="https://i.stack.imgur.com/sSkyl.png" rel="nofollow noreferrer">enter image description here</a></p>
<p><code>fillna</code> aligns its input by index, so if the first value in the dataframe is not empty, the first value of <code>sulfate_predictions</code> is ignored, and so on. You need to pass an array indexed the same way as your dataframe, or assign through a mask instead:</p> <pre><code>df.loc[df['Sulfate'].isnull(),'Sulfate'] = sulfate_predictions </code></pre>
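<p>A minimal sketch of the alignment behaviour, with made-up values:</p> <pre><code>import pandas as pd
import numpy as np

df = pd.DataFrame({'Sulfate': [7.0, np.nan, 9.0, np.nan]})
preds = pd.Series([1.0, 2.0])                  # one prediction per missing value

# fillna aligns on index: preds has indexes 0 and 1, so only the NaN at
# row 1 is filled; the NaN at row 3 has no matching index and stays NaN
print(df['Sulfate'].fillna(preds).tolist())    # [7.0, 2.0, 9.0, nan]

# assigning through a boolean mask fills every missing row in order;
# .to_numpy() avoids index alignment on the right-hand side
df.loc[df['Sulfate'].isnull(), 'Sulfate'] = preds.to_numpy()
print(df['Sulfate'].tolist())                  # [7.0, 1.0, 9.0, 2.0]
</code></pre>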
python|pandas|dataframe
1
5,501
68,746,619
Parse nested JSON and iterate into Pandas Dataframe
<p>I'm using a Foursquare API call to find venues associated with particular ZIP codes in the US.</p> <p>I am able to generate the JSON with information, but am having trouble looping and parsing to construct a pandas dataframe.</p> <p>So far:</p> <pre><code># scraping the foursquare website for the information we want and obtaining the json file as results for i, series in df_income_zip_good.iterrows(): lat = series ['lat'] lng = series ['lng'] town = series ['place'] LIMIT = 100 radius = 1000 url4Sqr = 'https://api.foursquare.com/v2/venues/explore?&amp;client_id={}&amp;client_secret={}&amp;v={}&amp;ll={},{}&amp;radius={}&amp;limit={}'.format( CLIENT_ID, CLIENT_SECRET, VERSION, lat, lng, radius, LIMIT) venues = requests.get(url4Sqr).json() #print results from call print (venues) #https://stackoverflow.com/questions/6386308/http-requests-and-json-parsing-in-python </code></pre> <p>This works fine and produces the JSON. I've linked the output to a JSON file on GitHub: <a href="https://github.com/adhorvitz/coursera_ibm_capstone/blob/524c6609ea8872e0c188cd373a4778caaadb1cf6/venuedatasample.json" rel="nofollow noreferrer">(https://github.com/adhorvitz/coursera_ibm_capstone/blob/524c6609ea8872e0c188cd373a4778caaadb1cf6/venuedatasample.json</a>)</p> <p>I am not sure how to best flatten the JSON, then loop to extract the pieces of information I want to load into a dataframe. I've tried to mess around with the following with no success.</p> <pre><code>def flatten_json(nested_json, exclude=['']): &quot;&quot;&quot;Flatten json object with nested keys into a single level. Args: nested_json: A nested json object. exclude: Keys to exclude from output. Returns: The flattened json object if successful, None otherwise. The code recursively extracts values out of the object into a flattened dictionary. json_normalize can be applied to the output of flatten_object to produce a python dataframe: &quot;&quot;&quot; out = {} def flatten(x, name='venues', exclude=exclude): if type(x) is dict: for a in x: if a not in exclude: flatten(x[a], name + a + '_') elif type(x) is list: i = 0 for a in x: flatten(a, name + str(i) + '_') i += 1 else: out[name[:-1]] = x flatten(nested_json) return out #https://towardsdatascience.com/flattening-json-objects-in-python-f5343c794b10 </code></pre> <p>I then run:</p> <pre><code>for i in venues(): json_flat_venues = flatten_json(venues) json_flat_venues </code></pre> <p>An error is produced stating that the 'dict' object is not callable.</p> <p>I've also tried:</p> <pre><code>for i in venues(): df_venues_good = pd.json_normalize(venues) df_venues_good </code></pre> <p>The same error is produced.</p> <p>I'm a bit lost on where to go, and how to best convert the JSON into a workable DF.</p> <p>Thanks in advance.</p> <p>-------update-----------</p> <p>So I’ve tried a few things.</p> <ol> <li><p>After I referenced the page left in the comments: <a href="https://www.geeksforgeeks.org/flattening-json-objects-in-python/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/flattening-json-objects-in-python/</a>, I installed json_flatten (using pop), but had issues importing flatten.</p> </li> <li><p>As an attempt at a work around I tried to re-create the code from the website, adapted to my project. I think I made more of a mess than I cleared up.</p> </li> <li><p>I re-ran the original &quot;flatten_json&quot; def (see above). 
I then assigned df_venues_good without the for loop statement (also above).</p> </li> <li><p>With the for loop removed it looks like it starts to pull the first record from the JSON. However, it looks like metadata (or at least data that I'm not trying to extract).</p> </li> <li><p>I also noticed an issue when reviewing the JSON. In my output cell (I'm using a Jupyter notebook) it looks like all of the records are retrieved (there are about 95 in all).</p> </li> </ol> <p>I then ran this to dump the file and inspect it:</p> <pre><code>JsonString = json.dumps(venues) JsonFile = open(&quot;venuedata.json&quot;, &quot;w&quot;) JsonFile.write(JsonString) JsonFile.close() </code></pre> <p>When I open the dump file (which I linked above) it doesn't look complete.</p> <p>Any direction would be appreciated.</p>
<p>After four days of discussion, I think I can now see your real question, and answering it should get you moving forward: you need to look up and troubleshoot the two errors below. With some more troubleshooting on your own, consider creating a new question around your insights, errors and questions concerning the following images.</p> <p><a href="https://i.stack.imgur.com/O9lK0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O9lK0.png" alt="valueerror: if using all scaler values, you must an index" /></a></p> <p><a href="https://i.stack.imgur.com/NXv9P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NXv9P.png" alt="enter image description here" /></a></p> <p>Libraries that can help you flatten JSON data include <code>pandas</code>, <code>requests</code>, the built-in <code>json</code> module, and even the <code>csv</code> module.</p> <p>Because you are still learning Python, data analysis and how to work with APIs, you will find little further help on Stack Overflow without a clearer description and examples of your technical issue.</p> <p>Please continue your self-study and keep trying! And let us know how the community can help with individual issues and questions as you grow.</p>
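<p>Regarding the immediate error: <code>venues</code> is a dict, so <code>for i in venues():</code> calls it and raises &quot;'dict' object is not callable&quot;. A minimal sketch of one way to flatten the explore response, assuming the usual <code>response / groups / items</code> layout of the Foursquare explore endpoint (adjust it to your actual JSON):</p> <pre><code>import pandas as pd

# venues is the dict returned by requests.get(url4Sqr).json(); iterate over
# it directly rather than calling it with venues()
groups = venues.get('response', {}).get('groups', [])
items = [item for g in groups for item in g.get('items', [])]

# json_normalize flattens nested dicts into dotted column names
df_venues = pd.json_normalize(items)
print(df_venues.columns.tolist())
</code></pre>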
python|json|pandas|nested|flatten
1
5,502
5,446,522
data type not understood
<p>I'm trying to use a matrix to compute stuff. The code is this:</p> <pre><code>import numpy as np # some code mmatrix = np.zeros(nrows, ncols) print mmatrix[0, 0] </code></pre> <p>but I get 'data type not understood', although it works if I do it from the terminal.</p>
<p>Try:</p> <pre><code>mmatrix = np.zeros((nrows, ncols)) </code></pre> <p>The shape parameter has to be an int or a sequence of ints: <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html" rel="noreferrer">http://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html</a></p> <p>Otherwise you are passing <code>ncols</code> to <code>np.zeros</code> as the dtype.</p>
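<p>A quick sketch of the difference:</p> <pre><code>import numpy as np

nrows, ncols = 2, 3
mmatrix = np.zeros((nrows, ncols))   # shape passed as a single tuple
print(mmatrix[0, 0])                 # 0.0

# np.zeros(nrows, ncols) raises the error instead, because the second
# positional argument of np.zeros is dtype, and 3 is not a data type
</code></pre>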
python|matrix|numpy
162
5,503
5,134,932
numpy - pretty printing
<p>I have a numpy array of strings. When a value in the array is undefined, None is printed as you would expect. Is it possible to provide a default value for None values?</p> <p>e.g. in the following I want "_" instead of None</p> <pre><code>[[None B C] [M None O] [X Y None]] </code></pre> <p>would become</p> <pre><code>[[_ B C] [M _ O] [X Y _]] </code></pre>
<p>You might also consider using a masked array:</p> <pre><code>import numpy as np x=np.array([[None, 'B', 'C'], ['M', None, 'O'], ['X', 'Y', None]]) print(x) # [[None B C] # [M None O] # [X Y None]] x=np.ma.masked_equal(x,None) print(x) # [[-- B C] # [M -- O] # [X Y --]] </code></pre>
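<p>If you need a literal underscore rather than the masked marker <code>--</code>, a small follow-up sketch using <code>filled</code> (which substitutes a fill value for the masked entries):</p> <pre><code>print(x.filled('_'))
# [[_ B C]
#  [M _ O]
#  [X Y _]]
</code></pre>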
python|numpy
7
5,504
53,116,468
name 'pd' is not defined
<pre><code># Importing the libraries import numpy as np import matplotlib.pyplot as plt import pandas as pd # Importing the dataset dataset = pd.read_csv('Data.csv') print(dataset) </code></pre> <p>Error:</p> <pre><code>dataset = pd.read_csv('Data.csv') Traceback (most recent call last): File "&lt;ipython-input-6-bd7168d85704&gt;", line 1, in &lt;module&gt; dataset = pd.read_csv('Data.csv') NameError: name 'pd' is not defined </code></pre>
<p>From your comments, you're using Spyder. The traceback confirms to me that you're running <code>dataset = pd.read_csv('Data.csv')</code> in the IPython interactive console.</p> <p>Spyder has configurable namespace sharing between scripts and the console. Running:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import pandas as pd </code></pre> <p>in a script makes all 3 modules accessible both to code within the script and within the interactive console. You could also run <code>import pandas as pd</code> in the console and then use <code>pd</code> there indefinitely. This import <em>may</em> or <em>may not</em> then be available in your scripts depending on what settings you have used.</p> <p>Your issue is that you either:</p> <ol> <li>haven't run <code>import pandas as pd</code> <em>anywhere</em>,</li> <li>have restarted the kernel at some point and lost your imports, or</li> <li>have configured Spyder to wipe the namespace of your scripts each time they run, and haven't accounted for this.</li> </ol> <p>Regardless of what settings you have for namespace sharing, <em>always</em> import modules in your scripts; don't rely on unusual quirks of Spyder, because your code won't work elsewhere. What you do in the console is inconsequential.</p>
python|pandas
2
5,505
65,538,068
Find string in data frame and store new values in a new column
<p>I am creating a script that takes a CSV file whose column organisation and column names are unknown. However, I know that only one of the columns contains values in which the strings 'rs' and 'del' appear.</p> <p>I need to create an extra column (called 'Type') and store 'dbsnp' in the rows where 'rs' was found and 'deletion' in the rows where 'del' was found. If neither string is found, leave that row of the type column empty.</p> <p>As an example I provide this df:</p> <pre><code>Data = {'Number': ['Mukul', 'Rohan', 'Mayank', 'Shubham', 'Aakash'], 'Location': ['Saharsanpur', 'MERrs', 'rsAdela', 'aaaadelaa', 'aaa'], 'Pay': [25000, 30000, 35000, 40000, 45000]} df = pd.DataFrame(Data) print(df) Name Location Pay 0 Mukul Saharsanpur 25000 1 Rohan MERrs 30000 2 Mayank rsAdela 35000 3 Shubham aaaadelaa 40000 4 Aakash aaa 45000 </code></pre> <p>I have been trying things like this:</p> <pre><code>df[&quot;type&quot;] = df[&quot;Name&quot;].str.extract(&quot;rs&quot;)[0] # and then do some replace </code></pre> <p>But one of my problems is that I know neither the name of the column nor its position.</p> <p>Desired output:</p> <pre><code> Name Location Pay type 0 Mukul Saharsanpur 25000 dbsnp 1 Rohan MERrs 30000 dbsnp 2 Mayank rsAdela 35000 dbsnp 3 Shubham aaaadelaa 40000 deletion 4 Aakash aaa 450 </code></pre> <p>The next for loop solves the problem of the unknown column, but now I need to solve the issue of identifying my string in the value.</p> <p>How can I use str.contains(&quot;rs&quot;) in the if condition?</p> <pre><code>for index, row in df[:3].iterrows(): for i in range(len(df.columns)): if row[i] == 5: print(row.index[i]) </code></pre>
<p>You can do it without the loop. Here's an approach. You can use applymap and search all the columns.</p> <pre><code>import pandas as pd data = {'Number': ['Mukul', 'Rohan', 'Mayank', 'Shubham', 'Aakash'], 'Location': ['Saharsanpur', 'MERrs', 'rsAdela', 'aaaadelaa', 'aaa'], 'Pay': [25000, 30000, 35000, 40000, 45000]} df = pd.DataFrame(data) df['rs'] = df.astype(str).applymap(lambda x: 'rs' in x).any(1) df['del'] = df.astype(str).applymap(lambda x: 'del' in x).any(1) df['type']='' df.loc[df['rs'] == True, 'type'] = 'dbsnp' df.loc[df['del'] == True, 'type'] = 'deletion' df = df.drop(columns=['rs','del']) print (df) </code></pre> <p>Based on the data in the table, <code>rsAdela</code> has both <code>rs</code> and <code>del</code>. Since I am applying <code>rs</code> first and <code>del</code> second, the row is flagged for <code>deletion</code>. You can choose to swap the order to decide if you want to retain value as <code>dbsnp</code> or <code>deletion</code>.</p> <p>The code processes all the columns irrespective of dtype.</p> <p>The output of the above data is:</p> <pre><code> Number Location Pay type 0 Mukul Saharsanpur 25000 dbsnp 1 Rohan MERrs 30000 dbsnp 2 Mayank rsAdela 35000 deletion 3 Shubham aaaadelaa 40000 deletion 4 Aakash aaa 45000 </code></pre>
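<p>A shorter variant of the same idea, as a hedged sketch starting from the original frame (before any type column is added): rows are joined into single strings, and <code>np.select</code> applies the same precedence (<code>del</code> over <code>rs</code>):</p> <pre><code>import numpy as np

joined = df.astype(str).apply(' '.join, axis=1)   # one string per row
df['type'] = np.select(
    [joined.str.contains('del'), joined.str.contains('rs')],
    ['deletion', 'dbsnp'],
    default='')                                   # empty when neither is found
</code></pre>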
python|pandas
1
5,506
65,623,939
Faster way to filter pandas dataframe and create new columns
<p>Given df</p> <pre><code> ticker close open 0 AAPL 1.2 1.1 1 TSLA 25.0 27.0 2 TSLA 83.0 80.0 3 TSLA 95.0 93.0 4 CCL 234.0 234.2 5 AAPL 512.0 520.0 </code></pre> <p>My purpose:</p> <p>(1) Apply functions to each ticker dataframe (subset)</p> <p>(2) Create new columns with string values (like 'bullish') for each ticker dataframe</p> <p>My expected output:</p> <pre><code> ticker close open candlestick SMA_20 SMA_50 0 AAPL 1.2 1.1 bullish (number) (number) 1 TSLA 25.0 27.0 bearish (number) (number) 2 TSLA 83.0 80.0 bullish (number) (number) 3 TSLA 95.0 93.0 bullish (number) (number) 4 CCL 234.0 234.2 bearish (number) (number) 5 AAPL 512.0 520.0 bearish (number) (number) </code></pre> <p>I've tried this code, which is extremely slow:</p> <pre><code>for x in df.ticker: df_ticker = df[df.ticker == x] df_close_price = pd.DataFrame(df_ticker.close) for days in [20,50]: df_ticker[f'SMA_{days}'] = df_close_price.apply(lambda c: abstract.SMA(c, days)) ...... df_result = df_result.append(df_ticker) </code></pre> <p>I was wondering how to filter the dataframe by ticker in a faster way when dealing with millions of rows. Many have suggested using <code>.loc</code> or <code>numpy</code>, but I could not find a way to make it work.</p> <p>Thanks!</p>
<p>I think you need <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a>:</p> <pre><code>df['candlestick'] = np.where(df['close'] &gt; df['open'], 'bullish', 'bearish') print (df) ticker close open candlestick 0 AAPL 1.2 1.1 bullish 1 TSLA 25.0 27.0 bearish 2 TSLA 83.0 80.0 bullish 3 TSLA 95.0 93.0 bullish 4 CCL 234.0 234.2 bearish 5 AAPL 512.0 520.0 bearish </code></pre> <p>EDIT: Here it is possible to use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.apply.html" rel="nofollow noreferrer"><code>GroupBy.apply</code></a> with a custom function and, mainly, to pass a Series to <code>abstract.SMA</code> instead of <code>.apply(lambda c: abstract.SMA(c, days))</code>:</p> <pre><code>def f(x): for days in [20,50]: x[f'SMA_{days}'] = abstract.SMA(x.close, days) return x df = df.groupby('ticker').apply(f) print (df) </code></pre>
python|pandas
1
5,507
65,795,980
ValueError: Shape must be rank 2 but is rank 4 for 'in_top_k/InTopKV2' (op: 'InTopKV2') with input shapes: [?,28,28,10], [?], []
<p>I'm new to Tensorflow and I am trying to train on MNIST. However, the code fails on</p> <pre><code>correct = tf.nn.in_top_k(logits, tf.argmax(y, axis=1), 1) </code></pre> <p>with the error &quot;ValueError: Shape must be rank 2 but is rank 4 for 'in_top_k/InTopKV2' (op: 'InTopKV2') with input shapes: [?,28,28,10], [?], []&quot;</p> <p>What is going on here, and what do I need to know to make this compatible with different architectures in the future? I've included the entire file below.</p> <pre><code>import tensorflow as tf import numpy as np from tensorflow.python.framework import graph_util from tensorflow.python.framework import graph_io tf.reset_default_graph() x = tf.placeholder(tf.float32, shape=(None, 28, 28), name='x_input') y = tf.placeholder(tf.float32, shape=(None, 10), name='y_label') y = tf.stop_gradient(y, name=&quot;stop_gradient_y&quot;) input_layer = tf.reshape(x, [-1, 28, 28, 1], name='x_reshaped') fc_layer1 = tf.layers.dense( inputs=input_layer, units=1024, activation=tf.nn.relu, name='fc_layer_1') fc_layer2 = tf.layers.dense( inputs=fc_layer1, units=512, activation=tf.nn.relu, name='fc_layer_2') fc_layer3 = tf.layers.dense( inputs=fc_layer2, units=512, activation=tf.nn.relu, name='fc_layer_3') fc_layer4 = tf.layers.dense( inputs=fc_layer3, units=512, activation=tf.nn.relu, name='fc_layer_4') fc_layer5 = tf.layers.dense( inputs=fc_layer4, units=512, activation=tf.nn.relu, name='fc_layer_5') logits = tf.layers.dense(inputs=fc_layer5, units=10, name='logits') classes = tf.argmax(input=logits, axis=1, name='classes') probabilities = tf.nn.softmax(logits, name=&quot;probabilities_out&quot;) loss = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits, name='loss_func') grad = tf.gradients(loss, x) grad_out = tf.identity(grad, name='gradient_out') optimizer = tf.train.AdamOptimizer() train_op = optimizer.minimize(loss) correct = tf.nn.in_top_k(logits, tf.argmax(y, axis=1), 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32)) (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() x_train = x_train/np.float32(255) y_train = y_train.astype(np.int32) x_test = x_test/np.float32(255) y_test = y_test.astype(np.int32) y_train = tf.keras.utils.to_categorical(y_train, 10) y_test = tf.keras.utils.to_categorical(y_test, 10) num_epochs = 100 batch_size = 100 init = tf.global_variables_initializer() with tf.Session() as sess: init.run() for epoch in range(num_epochs): print('Epoch: {}'.format(epoch)) for i in range(x_train.shape[0] // batch_size): batch_indices = np.random.randint(x_train.shape[0], size=batch_size) x_batch = x_train[batch_indices] y_batch = y_train[batch_indices] sess.run(train_op, feed_dict={x: x_batch, y: y_batch}) acc_test = accuracy.eval(feed_dict={x: x_test, y: y_test}) print(epoch, &quot;Test accuracy:&quot;, acc_test) constant_graph = graph_util.convert_variables_to_constants( sess, sess.graph.as_graph_def(), ['probabilities_out', 'gradient_out']) graph_io.write_graph(constant_graph, '.', 'mnist_gradient_fc_without.pb', as_text=False) </code></pre>
<p>Thanks JacoSolari and Jarom Allen. For the benefit of the community, here is the complete working code:</p> <pre><code>import tensorflow as tf import numpy as np from tensorflow.python.framework import graph_util from tensorflow.python.framework import graph_io tf.reset_default_graph() x = tf.placeholder(tf.float32, shape=(None, 28, 28), name='x_input') y = tf.placeholder(tf.float32, shape=(None, 10), name='y_label') y = tf.stop_gradient(y, name=&quot;stop_gradient_y&quot;) input_layer = tf.reshape(x, [-1, 784 ], name='x_reshaped') fc_layer1 = tf.layers.dense( inputs=input_layer, units=1024, activation=tf.nn.relu, name='fc_layer_1') fc_layer2 = tf.layers.dense( inputs=fc_layer1, units=512, activation=tf.nn.relu, name='fc_layer_2') fc_layer3 = tf.layers.dense( inputs=fc_layer2, units=512, activation=tf.nn.relu, name='fc_layer_3') fc_layer4 = tf.layers.dense( inputs=fc_layer3, units=512, activation=tf.nn.relu, name='fc_layer_4') fc_layer5 = tf.layers.dense( inputs=fc_layer4, units=512, activation=tf.nn.relu, name='fc_layer_5') logits = tf.layers.dense(inputs=fc_layer5, units=10, name='logits') classes = tf.argmax(input=logits, axis=1, name='classes') probabilities = tf.nn.softmax(logits, name=&quot;probabilities_out&quot;) loss = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits, name='loss_func') grad = tf.gradients(loss, x) grad_out = tf.identity(grad, name='gradient_out') optimizer = tf.train.AdamOptimizer() train_op = optimizer.minimize(loss) correct = tf.nn.in_top_k(logits, tf.argmax(y, axis=1), 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32)) (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() x_train = x_train/np.float32(255) y_train = y_train.astype(np.int32) x_test = x_test/np.float32(255) y_test = y_test.astype(np.int32) y_train = tf.keras.utils.to_categorical(y_train, 10) y_test = tf.keras.utils.to_categorical(y_test, 10) num_epochs = 5 batch_size = 100 init = tf.global_variables_initializer() with tf.Session() as sess: init.run() for epoch in range(num_epochs): print('Epoch: {}'.format(epoch)) for i in range(x_train.shape[0] // batch_size): batch_indices = np.random.randint(x_train.shape[0], size=batch_size) x_batch = x_train[batch_indices] y_batch = y_train[batch_indices] sess.run(train_op, feed_dict={x: x_batch, y: y_batch}) acc_test = accuracy.eval(feed_dict={x: x_test, y: y_test}) print(epoch, &quot;Test accuracy:&quot;, acc_test) constant_graph = graph_util.convert_variables_to_constants( sess, sess.graph.as_graph_def(), ['probabilities_out', 'gradient_out']) graph_io.write_graph(constant_graph, '.', 'mnist_gradient_fc_without.pb', as_text=False) </code></pre> <p>Output:</p> <pre><code>Epoch: 0 0 Test accuracy: 0.9527 Epoch: 1 1 Test accuracy: 0.9683 Epoch: 2 2 Test accuracy: 0.9731 Epoch: 3 3 Test accuracy: 0.9776 Epoch: 4 4 Test accuracy: 0.9821 INFO:tensorflow:Froze 12 variables. INFO:tensorflow:Converted 12 variables to const ops. </code></pre>
python|tensorflow
0
5,508
21,362,843
Interpret numpy.fft.fft2 output
<p>My goal is to obtain a plot with the spatial frequencies of an image - kind of like doing a fourier transformation on it. I don't care about the position on the image of features with the frequency f (for instance); I'd just like to have a graphic which tells me how much of every frequency I have (the amplitude for a frequency band could be represented by the sum of contrasts with that frequency).</p> <p>I am trying to do this via the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fft2.html" rel="noreferrer"><code>numpy.fft.fft2</code></a> function.</p> <p>Here is a link to a <a href="http://nbviewer.ipython.org/urls/gist.github.com/TheChymera/8606917/raw/2a93b8b1c3b15e8467cca21b5924c20533787fc5/imFreq?create=1" rel="noreferrer">minimal example</a> portraying my use case.</p> <p>As it turns out I only get distinctly larger values for <code>frequencies[:30,:30]</code>, and of these the absolute highest value is <code>frequencies[0,0]</code>. How can I interpret this? </p> <ul> <li>What exactly does the amplitude of each value stand for?</li> <li>What does it mean that my highest value is in <code>frequency[0,0]</code> What is a <code>0 Hz</code> frequency?</li> <li>Can I bin the values somehow so that my frequency spectrum is orientation agnostic?</li> </ul>
<p><code>freq</code> has a few very large values, and lots of small values. You can see that by plotting </p> <pre><code>plt.hist(freq.ravel(), bins=100) </code></pre> <p>(See below.) So, when you use</p> <pre><code>ax1.imshow(freq, interpolation="none") </code></pre> <p>Matplotlib uses <code>freq.min()</code> as the lowest value in the color range (which is by default colored blue), and <code>freq.max()</code> as the highest value in the color range (which is by default colored red). Since almost all the values in <code>freq</code> are near the blue end, the plot as a whole looks blue.</p> <p>You can get a more informative plot by rescaling the values in <code>freq</code> so that the low values are more widely distributed on the color range. </p> <p>For example, you can get a better distribution of values by taking the <code>log</code> of <code>freq</code>. (You probably don't want to throw away the highest values, since they correspond to frequencies with the highest power.)</p> <pre><code>import matplotlib as ml import matplotlib.pyplot as plt import numpy as np import Image file_path = "data" image = np.asarray(Image.open(file_path).convert('L')) freq = np.fft.fft2(image) freq = np.abs(freq) fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(14, 6)) ax[0,0].hist(freq.ravel(), bins=100) ax[0,0].set_title('hist(freq)') ax[0,1].hist(np.log(freq).ravel(), bins=100) ax[0,1].set_title('hist(log(freq))') ax[1,0].imshow(np.log(freq), interpolation="none") ax[1,0].set_title('log(freq)') ax[1,1].imshow(image, interpolation="none") plt.show() </code></pre> <p><img src="https://i.stack.imgur.com/g7pwM.png" alt="enter image description here"></p> <hr> <p>From <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fft2.html" rel="noreferrer">the docs</a>:</p> <blockquote> <p>The output, analogously to fft, contains the term for zero frequency in the low-order corner of the transformed axes,</p> </blockquote> <p>Thus, <code>freq[0,0]</code> is the "zero frequency" term. In other words, it is the constant term in the <a href="http://en.wikipedia.org/wiki/Discrete_Fourier_transform#Definition" rel="noreferrer">discrete Fourier Transform</a>.</p>
python|numpy|fft|frequency-distribution
13
5,509
63,701,600
How to choose my primary key column in saving pandas dataframe to sql
<p>I am trying to save a dataframe to mysql with the following:</p> <pre><code>df.to_sql('dedupe__df', con=to_conn, if_exists='replace') </code></pre> <p>This adds the <code>index</code> as the primary key. However, if I add the actual primary key, which is called <code>path</code>, I get the following error:</p> <pre><code>&gt;&gt;&gt; df.to_sql('dedupe__df', con=to_conn, if_exists='replace', index_label='path') </code></pre> <blockquote> <p>ValueError: duplicate name in index/columns: cannot insert path, already exists</p> </blockquote> <p>What does this mean exactly? It may be related to pandas trying to save all string fields as <code>TEXT</code> instead of varchar, since when I try to index on that field (as pandas has saved it), I get:</p> <pre><code>&gt; [MySQL] BLOB/TEXT column 'path' used in key specification without a key length </code></pre> <p>It seems like pandas doesn't do too well with string fields by default... What would be the suggested way to fix this, outside of supplying a full schema (which would be overkill)?</p>
<p>Unfortunately, it is not possible with df.to_sql() alone. But you can do it in the following way:</p> <pre><code>df.to_sql('dedupe__df', engine, if_exists=&quot;replace&quot;, index=False) with engine.connect() as con: con.execute('ALTER TABLE dedupe__df ADD PRIMARY KEY (path);') </code></pre> <p>The table name and key column are taken from your code.</p>
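<p>One caveat: if pandas created <code>path</code> as <code>TEXT</code>, MySQL will refuse it as a key without a length. A sketch of pinning the column type at write time (the length 255 is an arbitrary assumption):</p> <pre><code>from sqlalchemy.types import VARCHAR

df.to_sql('dedupe__df', engine, if_exists='replace', index=False,
          dtype={'path': VARCHAR(255)})   # indexable, unlike TEXT

with engine.connect() as con:
    con.execute('ALTER TABLE dedupe__df ADD PRIMARY KEY (path);')
</code></pre>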
python|sql|pandas
2
5,510
63,323,336
expected Inputs to have shape (150,) but got array with shape (1,)
<p>I'm running a Keras model, but it shows a shape error, although I did check the shape and it looks correct.</p> <pre><code>predict = model.predict(sequences_matrix) </code></pre> <p>It shows the error:</p> <pre><code>ValueError: Error when checking input: expected Inputs to have shape (150,) but got array with shape (1,) </code></pre> <p>When I check the shape, it shows the expected one:</p> <pre><code>sequences_matrix.shape </code></pre> <p>The output is:</p> <pre><code>(150,) </code></pre>
<p>That is because the first dimension is reserved for the batch. The call should work if you reshape your <code>sequences_matrix</code> to (1, 150):</p> <pre><code>sequences_matrix = sequences_matrix.reshape(1, -1) </code></pre> <p>The model then separates out the batch of 1 and receives a (150,) input. Right now it assumes you are passing it 150 samples, each with shape (1,).</p>
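<p>A quick sketch of the shapes before and after, using a zeros stand-in for the real matrix:</p> <pre><code>import numpy as np

sequences_matrix = np.zeros(150)                   # stand-in, shape (150,)
sequences_matrix = sequences_matrix.reshape(1, -1)
print(sequences_matrix.shape)                      # (1, 150): 1 sample, 150 features
</code></pre>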
python-3.x|numpy|machine-learning|keras|vectorization
1
5,511
63,710,974
How to create a bar graph using pandas Dataframe in Python
<p>So I have created a dataframe that holds the length of tweets (in characters) for a specific user. I am having trouble creating the bar graph. Below is what I have:</p> <pre><code>data ={ 'Length of Tweets' : '0-32 ','33-64','65-96 ','97-128 ','129-160+ ' 'Length': [len(Tweeted_word032),len(Tweeted_word3264),len(Tweeted_word6496),len(Tweeted_word96128),len(Tweeted_word128160)] &quot;Length of Tweets&quot; : &quot;0-32 &quot;,&quot;33-64 &quot;,&quot;65-96 &quot;,&quot;97-128 &quot;,&quot;129-160+ &quot; &quot;Length&quot;: [len(Tweeted_word032),len(Tweeted_word3264),len(Tweeted_word6496),len(Tweeted_word96128),len(Tweeted_word128160)] } dataframe = pd.DataFrame(data=data) df.plot.bar(x=&quot;Length of Tweets&quot;, y=&quot;Length&quot;, rot=70, title=&quot;Length of Tweets&quot;) plt.show(block=True) </code></pre>
<p>Try this instead:</p> <pre><code>import pandas as pd from matplotlib import pyplot as plt # Defining the Tweeted_word variables... data = { &quot;Length of Tweets&quot; : (&quot;0-32 &quot;,&quot;33-64 &quot;,&quot;65-96 &quot;,&quot;97-128 &quot;,&quot;129-160+ &quot;), &quot;Length&quot;: [len(Tweeted_word032),len(Tweeted_word3264),len(Tweeted_word6496),len(Tweeted_word96128),len(Tweeted_word128160)] } dataframe = pd.DataFrame(data=data) dataframe.plot.bar(x=&quot;Length of Tweets&quot;, y=&quot;Length&quot;, rot=70, title=&quot;Length of Tweets&quot;) plt.show(block=True) </code></pre> <p>I think the main problem is that you called <code>plot</code> on <code>df</code> instead of <code>dataframe</code>. There were also some syntax problems with your data dictionary, such as missing commas between the key-value pairs. I also had to add parentheses around the tuple of labels.</p>
python|pandas|matplotlib
0
5,512
63,425,356
Function to get top left and bottom right indexes of nonzero elements in an array
<p>So the goal is to write a function that gives the top-left and bottom-right indexes of a group of nonzero elements of an array. Two groups can't be next to each other in the array, and if a group consists of just one element, the top-left and bottom-right indexes are the same. It was easy to write a function that gets all the indexes of nonzero elements, but I can't seem to filter it so that just the top-left and bottom-right indexes remain. Here is the function I already have:</p> <pre><code>import numpy as np def get_indexes(A): M,N = A.shape pos = [] for i in range(M): for j in range(N): if A[i,j]!=0: pos.append((i,j,A[i,j])) return pos </code></pre> <p>And here is an example of a given array:</p> <pre><code>A= np.array([[0, 7, 7, 0, 0, 0, 0, 0, 0], [0, 7, 7, 0, 0, 9, 0, 0, 0], [0, 7, 7, 0, 0, 9, 0, 0, 0], [0, 7, 7, 0, 0, 9, 0, 0, 0], [0, 7, 7, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 0, 0], [6, 6, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 5, 5, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]]) </code></pre> <p>I appreciate the help!</p>
<p>I traversed the matrix with the loops running in increasing index order to find the top-left index of a non-zero element, and in decreasing order to find the bottom-right index. Here is the code:</p> <pre><code>import numpy as np A= np.array([ [0, 7, 7, 0, 0, 0, 0, 0, 0], [0, 7, 7, 0, 0, 9, 0, 0, 0], [0, 7, 7, 0, 0, 9, 0, 0, 0], [0, 7, 7, 0, 0, 9, 0, 0, 0], [0, 7, 7, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 0, 0], [6, 6, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 5, 5, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]]) def getting_top_left_indexes(A): break_condition = False M,N = A.shape pos = [] for i in range(M): if break_condition == True: break for j in range(N): if A[i,j]!=0: pos.append((i,j,A[i,j])) break_condition = True break return pos def getting_bottom_right_indexes(A): break_condition = False M,N = A.shape pos = [] for i in range(M-1, 0, -1): if break_condition == True: break for j in range(N-1, 0, -1): if A[i,j]!=0: pos.append((i,j,A[i,j])) break_condition = True break return pos top_left_indexes = getting_top_left_indexes(A) bottom_right_indexes = getting_bottom_right_indexes(A) </code></pre> <p><strong>Edit:</strong> I updated my answer following your explanation. Here is the updated code:</p> <pre><code>import numpy as np A= np.array([ [0, 7, 7, 0, 0, 0, 0, 0, 0], [0, 7, 7, 0, 0, 9, 0, 0, 0], [0, 7, 7, 0, 0, 9, 0, 0, 0], [0, 7, 7, 0, 0, 9, 0, 0, 0], [0, 7, 7, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 0, 0], [6, 6, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 5, 5, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]]) def get_indexes(A): M,N = A.shape pos = [] for i in range(M): for j in range(N): if A[i, j] != 0: pos.append((i,j,A[i,j])) return pos def filtering_indexes(indexes): filtered_list = [] sorted_indexes = sorted(indexes, key=lambda x: x[-1]) top_left_index = sorted_indexes[0] for i in range(0, len(sorted_indexes)): if sorted_indexes[i][2] == top_left_index[2]: pass else: filtered_list.append(sorted_indexes[i-1]) top_left_index = sorted_indexes[i] filtered_list.append(top_left_index) if sorted_indexes[i][2] == top_left_index[2]: filtered_list.append(sorted_indexes[i]) return filtered_list indexes = get_indexes(A) filtered_indexes = filtering_indexes(indexes) </code></pre>
python|arrays|numpy
2
5,513
63,721,340
How to split the values of a column into different columns in a dataframe
<p>I have a dataframe in the below format. I want to split the values of the <strong>points</strong> column into different columns like A, B, C and so on, based on the number of items in the list, and delete the original column.</p> <pre><code>df: x y points 0 82.123610 16.724781 [1075038212.0, -18.099967840282456, -18.158378... 1 82.126540 16.490998 [1071765909.0, -20.406018294234215, -15.850444... 2 82.369578 17.402203 [1072646747.0, -16.839004016179505, -18.334996... 3 81.612240 17.464167 [1096294130.0, -15.335239025421126, -15.303402... </code></pre>
<p>I think it is best here to create numeric column names:</p> <pre><code>df = df.join(pd.DataFrame(df.pop('points').tolist(), index=df.index)) </code></pre> <p>If the length of the list is less than 27, letter names are possible:</p> <pre><code>import string d = dict(enumerate(string.ascii_lowercase)) df = df.join(pd.DataFrame(df.pop('points').tolist(), index=df.index).rename(columns=d)) </code></pre>
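<p>Putting it together on a tiny made-up frame shaped like the question's (here with uppercase letters, since the question mentions A, B, C):</p> <pre><code>import pandas as pd
import string

df = pd.DataFrame({'x': [82.12, 82.13],
                   'y': [16.72, 16.49],
                   'points': [[1.0, -18.1, -18.2], [2.0, -20.4, -15.9]]})

d = dict(enumerate(string.ascii_uppercase))   # 0 becomes 'A', 1 becomes 'B', ...
df = df.join(pd.DataFrame(df.pop('points').tolist(), index=df.index)
               .rename(columns=d))
print(df)
#        x      y    A     B     C
# 0  82.12  16.72  1.0 -18.1 -18.2
# 1  82.13  16.49  2.0 -20.4 -15.9
</code></pre>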
python|pandas|list|dataframe
1
5,514
21,797,889
Divide ndarray by scalar - Numpy / Python
<p>I'm just wondering how I could do such a thing without using loops.</p> <p>I made a simple test calling a division as we do with a numpy.array, but I got back the same, unchanged ndarray.</p> <pre><code>N = 2 M = 3 matrix_a = np.array([[15., 27., 360.], [180., 265., 79.]]) matrix_b = np.array([[.5, 1., .3], [.25, .7, .4]]) matrix_c = np.zeros((N, M), float) n_size = 360./N m_size = 1./M for i in range(N): for j in range(M): n = int(matrix_a[i][j] / n_size) % N m = int(matrix_b[i][j] / m_size) % M matrix_c[n][m] += 1 matrix_c / (N * M) print matrix_c </code></pre> <p>I guess this should be pretty simple. Any help would be appreciated.</p>
<p>I think that you want to modify <code>matrix_c</code> in-place:</p> <pre><code>matrix_c /= (N * M) </code></pre> <p>Or, probably less efficient:</p> <pre><code>matrix_c = matrix_c / (N * M) </code></pre> <p>The expression <code>matrix_c / (N * M)</code> doesn't change <code>matrix_c</code> - it creates a new matrix.</p>
python|numpy|matrix|division
16
5,515
24,761,662
In Python/Pandas how do I convert century-months to DateTimeIndex?
<p>I am working with a dataset that encodes dates as the integer number of months since December 1899, so month 1 is January 1900 and month 1165 is January 1997. I would like to convert to a pandas DateTimeIndex. So far the best I've come up with is:</p> <pre><code>month0 = np.datetime64('1899-12-15') one_month = np.timedelta64(30, 'D') + np.timedelta64(10.5, 'h') birthdates = pandas.DatetimeIndex(month0 + one_month * resp.cmbirth) </code></pre> <p>The start date is the 15th of the month, and the timedelta is 30 days 10.5 hours, the average length of a calendar month. So the date within the month drifts by a day or two.</p> <p>So this seems a little hacky and I wondered if there's a better way.</p>
<p>You can use built-in <code>pandas</code> date-time functionality.</p> <pre><code>import pandas as pd import numpy as np indexed_months = np.random.random_integers(0, high=1165, size=100) month0 = pd.to_datetime('1899-12-01') date_list = [month0 + pd.DateOffset(months=mnt) for mnt in indexed_months] birthdates = pd.DatetimeIndex(date_list) </code></pre> <p>I've made an assumption that your <code>resp.cmbirth</code> object looks like an array of integers between 0 and 1165.</p> <p>I'm not quite clear on why you want the bin edges of the indices to be offset from the start or end of the month. This can be done: </p> <pre><code>shifted_birthdates = birthdates.shift(15, freq=pd.datetools.day) </code></pre> <p>and similarly for hours if you want. There is also useful info in the answers to this <a href="https://stackoverflow.com/questions/13445174/date-ranges-in-pandas">SO question</a> and the related <a href="https://github.com/pydata/pandas/issues/2289" rel="nofollow noreferrer">pandas github issue</a>.</p>
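<p>A fully vectorized variant, as a sketch: NumPy's month-resolution datetimes make the offset exact, with no drift from the 30-day approximation:</p> <pre><code>import numpy as np
import pandas as pd

cmbirth = np.array([1, 13, 1165])   # hypothetical month codes
months = np.datetime64('1899-12') + cmbirth.astype('timedelta64[M]')
birthdates = pd.DatetimeIndex(months)
print(birthdates)
# DatetimeIndex(['1900-01-01', '1901-01-01', '1997-01-01'], dtype='datetime64[ns]', freq=None)
</code></pre>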
python|pandas
3
5,516
24,918,287
Total size of array must be unchanged
<p>I am using a Python module called emcee to sample a distribution. I need to pass a (37,100) (which I have named Ntrig and Nsamp, respectively) array called <code>events</code> to the below function. </p> <pre><code>def mp(SNR2, *events): events = np.asarray(events).reshape((Ntrig,Nsamp)) bessel = special.iv(0,np.sqrt(x*SNR2(event))) exp = np.exp(-0.5*(x+SNR2(event))) I = integrate.quad(lambda x: exp*bessel,0,SNRth**2)[0] return np.asarray([np.array[I for event in events[i]] for i in range(len(events))]).reshape(events.shape) </code></pre> <p>I keep getting the error:</p> <pre><code>ValueError: total size of new array must be unchanged </code></pre> <p>As I understand, <code>*events</code> will break up the <code>events</code> array into 37*100 separate arguments. Shouldn't the next line where I reshape the array just put it back into a 37 by 100 array?</p> <p>P.S. before you ask why I even bother breaking up <code>events</code> into separate arguments--the module needs this to work, it can't take an array.</p> <p>Full Traceback error:</p> <pre><code>ValueError Traceback (most recent call last) &lt;ipython-input-17-c8e815326a69&gt; in &lt;module&gt;() ----&gt; 1 mp(SNR2,events) &lt;ipython-input-16-9f73f234c628&gt; in mp(SNR2, *events) 5 def mp(SNR2, *events): 6 events = np.asarray(events).reshape((Ntrig,Nsamp)) ----&gt; 7 return np.asarray([np.array([integrate.quad(lambda x: np.exp(-0.5*(x+SNR2(event)))*special.iv(0,np.sqrt(x*SNR2(event))),0,SNRth**2)[0] for event in events[i]]) for i in range(len(events))]).reshape(events.shape) 8 # return integrate.quad(lambda x: 0.5*np.exp(-0.5*(x+SNR2(event)))*special.iv(0,np.sqrt(x*SNR2(event))),0,SNRth**2)[0] 9 def pp(SNR2, *events): /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/integrate/quadpack.pyc in quad(func, a, b, args, full_output, epsabs, epsrel, limit, points, weight, wvar, wopts, maxp1, limlst) 279 args = (args,) 280 if (weight is None): --&gt; 281 retval = _quad(func,a,b,args,full_output,epsabs,epsrel,limit,points) 282 else: 283 retval = _quad_weight(func,a,b,args,full_output,epsabs,epsrel,limlst,limit,maxp1,weight,wvar,wopts) /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/integrate/quadpack.pyc in _quad(func, a, b, args, full_output, epsabs, epsrel, limit, points) 343 if points is None: 344 if infbounds == 0: --&gt; 345 return _quadpack._qagse(func,a,b,args,full_output,epsabs,epsrel,limit) 346 else: 347 return _quadpack._qagie(func,bound,infbounds,args,full_output,epsabs,epsrel,limit) &lt;ipython-input-16-9f73f234c628&gt; in &lt;lambda&gt;(x) 5 def mp(SNR2, *events): 6 events = np.asarray(events).reshape((Ntrig,Nsamp)) ----&gt; 7 return np.asarray([np.array([integrate.quad(lambda x: np.exp(-0.5*(x+SNR2(event)))*special.iv(0,np.sqrt(x*SNR2(event))),0,SNRth**2)[0] for event in events[i]]) for i in range(len(events))]).reshape(events.shape) 8 # return integrate.quad(lambda x: 0.5*np.exp(-0.5*(x+SNR2(event)))*special.iv(0,np.sqrt(x*SNR2(event))),0,SNRth**2)[0] 9 def pp(SNR2, *events): &lt;ipython-input-16-9f73f234c628&gt; in SNR2(*events) 1 def SNR2(*events): ----&gt; 2 events = np.asarray(events).reshape((Ntrig,Nsamp)) 3 C = 5*np.pi**(-1.33333)*events**(1.66667)/(96*d**2) 4 return C*integrate.quad(lambda f: f**(-2.3333)/S(f), 20, 1500, limit=1000)[0] 5 def mp(SNR2, *events): ValueError: total size of new array must be unchanged </code></pre>
<blockquote> <p>As I understand, <code>events</code> will break up the events array into 37*100 separate arguments.</p> </blockquote> <p>This is not true. If you call <code>mp</code> using</p> <pre><code>mp(SNR2, events) </code></pre> <p>then inside <code>mp</code>, <code>events</code> will be a 1-element tuple, <code>(arr,)</code>, where <code>arr</code> is the (37, 100)-shaped array.</p> <p>If you call <code>mp</code> using </p> <pre><code>mp(SNR2, *events) </code></pre> <p>then inside <code>mp</code>, <code>events</code> will be a 37-element tuple, where the 37 elements are the 37 rows of the (37, 100)-shaped array.</p> <p>If you call <code>mp</code> using</p> <pre><code>mp(SNR2, *events.flat) </code></pre> <p>then inside <code>mp</code>, <code>events</code> will be a tuple of 37*100 elements.</p> <hr> <p>Notice that the last stanza of the traceback says:</p> <pre><code>&lt;ipython-input-16-9f73f234c628&gt; in SNR2(*events) 1 def SNR2(*events): ----&gt; 2 events = np.asarray(events).reshape((Ntrig,Nsamp)) 3 C = 5*np.pi**(-1.33333)*events**(1.66667)/(96*d**2) 4 return C*integrate.quad(lambda f: f**(-2.3333)/S(f), 20, 1500, limit=1000)[0] 5 def mp(SNR2, *events): ValueError: total size of new array must be unchanged </code></pre> <p>So the error is raised while Python is in the <code>SNR2</code> function.</p> <p>Since <code>SNR2</code> was called in <code>mp</code> using <code>SNR2(event)</code>, and <code>event</code> is a (37,100)-shaped array, the <code>event</code> variable in <code>SNR2</code> is a 1-element tuple containing the original array. That's not what you want. </p> <p>The easiest way to fix the code is to define</p> <pre><code>def SNR2(events): # no longer needed # events = np.asarray(events).reshape((Ntrig,Nsamp)) </code></pre> <p>and just pass events around as one would expect. </p> <p>However, if you can not change the signature of <code>SNR2</code>, then you must call it with</p> <pre><code>SNR2(*event.flat) </code></pre> <p>inside the <code>mp</code> function.</p> <hr> <p>Reference: Here is <a href="http://www.saltycrane.com/blog/2008/01/how-to-use-args-and-kwargs-in-python/" rel="nofollow">an excellent explanation</a> of the <code>*</code> unpacking operator, and how the syntax is used when defining functions, and calling functions.</p>
python|arrays|numpy|distribution|emcee
3
5,517
17,644,259
How do I provide slider control to a matplotlib plot of a pandas timeseries
<p>I am able to plot a couple of pandas time series as two subplots:</p> <p>Stock = a pandas timeseries of stock data</p> <p>Signal = a pandas timeseries of signals derived from Stock above; it has the same range</p> <p>The following displays them the way I want, with the dates between the 2 plots in sync:</p> <pre><code>fig=figure(num=None, figsize=(14, 8), dpi=80, facecolor='w', edgecolor='k') ax1=fig.add_subplot(211) ax1.autoscale(True) Stock.plot(ax=ax1) ax2=fig.add_subplot(212, sharex=ax1) Signal.plot(ax=ax2) fig.set_tight_layout(True) </code></pre> <p>What I want is for the plot to display only 1000 bars of data from the whole Stock time series, and then implement a Slider below that will let the plot slide across the whole range of Stock, displaying a specific range of 1000 bars.</p> <p>Can someone show me how to do this? I have seen some of the slider examples, but I am not able to translate them to pandas plots, or work out what axes or arguments to pass to the Slider object.</p> <p>Thx, Sarvi</p>
<p>I would use JavaScript for this. Look at <a href="http://www.highcharts.com/demo/line-time-series" rel="nofollow">http://www.highcharts.com/demo/line-time-series</a></p>
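<p>For a pure-matplotlib route, a minimal sketch of the Slider approach (a 1000-bar window; <code>Stock</code> and <code>Signal</code> are assumed to be Series sharing one index):</p> <pre><code>import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

window = 1000   # bars shown at once

fig, (ax1, ax2) = plt.subplots(2, sharex=True)
fig.subplots_adjust(bottom=0.15)   # leave room for the slider

def draw(start):
    start = int(start)
    ax1.clear(); ax2.clear()
    Stock.iloc[start:start + window].plot(ax=ax1)
    Signal.iloc[start:start + window].plot(ax=ax2)
    fig.canvas.draw_idle()

sax = fig.add_axes([0.15, 0.02, 0.7, 0.03])   # x, y, width, height
slider = Slider(sax, 'start', 0, max(len(Stock) - window, 1), valinit=0)
slider.on_changed(draw)

draw(0)
plt.show()
</code></pre>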
matplotlib|pandas
-1
5,518
17,471,176
Equality not working as expected with ndarray sub-class
<p>The following example:</p> <pre><code>import numpy as np class SimpleArray(np.ndarray): __array_priority__ = 10000 def __new__(cls, input_array, info=None): return np.asarray(input_array).view(cls) def __eq__(self, other): return False a = SimpleArray(10) print (np.int64(10) == a) print (a == np.int64(10)) </code></pre> <p>gives the following output</p> <pre><code>$ python2.7 eq.py True False </code></pre> <p>so that in the first case, <code>SimpleArray.__eq__</code> is not called (since it should always return <code>False</code>). Is this a bug, and if so, can anyone think of a workaround? If this is expected behavior, how do I ensure <code>SimpleArray.__eq__</code> gets called in both cases?</p> <p>EDIT: just to clarify, this <em>only</em> happens with Numpy scalar arrays - with normal arrays, <code>__eq__</code> always get called because the <code>__array_priority__</code> tells Numpy that it should always execute this <code>__eq__</code> even if the object is on the RHS of an equality operation:</p> <pre><code>b = SimpleArray([1,2,3]) print(np.array([1,2,3]) == b) print(b == np.array([1,2,3])) </code></pre> <p>gives:</p> <pre><code>False False </code></pre> <p>So it seems that with scalar Numpy 'arrays', <code>__array_priority__</code> does not get respected.</p>
<p>This is somewhere between a bug and a wart. When you call <code>a op b</code> and <code>b</code> is a subclass of <code>a</code>'s type, Python checks to see if <code>b</code> has a reflected version of <code>op</code> and calls that (<code>__eq__</code> is the reflected version of itself). So, for example, <code>np.array(10) == a</code> gives the expected result because SimpleArray is a subclass of ndarray. However, because SimpleArray is not a subclass of np.int64, it doesn't work in the example you've provided. This might actually be fairly easy to fix on the numpy end of things, so you might consider bringing it up on the mailing list.</p>
python|numpy
1
5,519
17,375,383
multidimensional numpy array -- reverse along a given axis
<p>Let's say I have a multidimensional array with a shape that I don't know until runtime.</p> <p>How can I reverse it along a given axis k, also not known until runtime?</p> <p>The notation <code>somearray[:,:,::-1,:,:]</code> relies on static dimension references, <a href="https://stackoverflow.com/questions/7416170/numpy-reverse-multidimensional-array">as in this other SO question</a>, so I can't use it here.</p>
<p>You can either construct a tuple of <code>slice</code> objects such as @ali_m suggests, or do something like this:</p> <pre><code>reversed_arr = np.swapaxes(np.swapaxes(arr, 0, k)[::-1], 0, k) </code></pre> <p>This places the desired axis at the front of the shape tuple, then reverses that first axis, and then returns it to its original position.</p> <p>Some people think this approach lacks readability, but I disagree.</p>
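<p>On NumPy 1.12 or newer there is also a built-in for this; a one-line sketch:</p> <pre><code>reversed_arr = np.flip(arr, axis=k)
</code></pre>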
python|arrays|numpy
9
5,520
12,203,901
pandas crashes on repeated DataFrame.reset_index()
<p>Very weird bug here: I'm using pandas to merge several dataframes. As part of the merge, I have to call reset_index several times. But when I do, it crashes unexpectedly on the second or third use of reset_index.</p> <p>Here's minimal code to reproduce the error:</p> <pre><code>import pandas A = pandas.DataFrame({ 'val' : ['aaaaa', 'acaca', 'ddddd', 'zzzzz'], 'extra' : range(10,14), }) A = A.reset_index() A = A.reset_index() A = A.reset_index() </code></pre> <p>Here's the relevant part of the traceback:</p> <pre><code>.... A = A.reset_index() File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 2393, in reset_index new_obj.insert(0, name, _maybe_cast(self.index.values)) File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 1787, in insert self._data.insert(loc, column, value) File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 893, in insert raise Exception('cannot insert %s, already exists' % item) Exception: cannot insert level_0, already exists </code></pre> <p>Any idea what's going wrong here? How do I work around it?</p>
<p>Inspecting frame.py, it looks like pandas tries to insert a column 'index' or 'level_0'. If the name it picks is already taken, it throws the error.</p> <p>Fortunately, there's a &quot;drop&quot; option. This discards the old index instead of inserting it as a column, and replaces it with the new, reset index. This might get you in trouble if you have a column named &quot;index,&quot; but I think otherwise you're okay.</p> <p>&quot;Fixed&quot; code:</p> <pre><code>import pandas A = pandas.DataFrame({ 'val' : ['aaaaa', 'acaca', 'ddddd', 'zzzzz'], 'extra' : range(10,14), }) A = A.reset_index(drop=True) A = A.reset_index(drop=True) A = A.reset_index(drop=True) </code></pre>
python|pandas
92
5,521
72,137,980
Can I make a dataframe "active" in pandas
<p>I don't know if I'm asking this question right, but feel free to ask for more info if needed.</p> <p>So I create this dataframe where I read a CSV file. Then I want to use the file to do other tasks. I want that df to be &quot;active&quot;, but it seems like it doesn't recognise that dataframe outside of the button callback.</p> <pre><code>def on_button_clicked(b): df = pd.read_csv(F&quot;./siivous/cleanedfiles/node_{karry.value}.csv&quot;) with output: display (df) display(img) clear_output(wait=True) </code></pre> <p>So how can I make that dataframe active with just a click of the button, so that if I, for example, wrote print(df), it would print that df?</p>
<p>Your dataframe named <code>df</code> is declared inside a function. If you do this, you cannot access it outside of that function.</p> <p>I suggest you check out <a href="https://stackoverflow.com/questions/14051916/how-to-make-a-local-variable-inside-a-function-global">this thread</a>.</p> <p>I hope it helps!</p>
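<p>A minimal sketch of the <code>global</code> route, reusing the names from the question (the widget objects <code>karry</code>, <code>output</code> and <code>img</code> are assumed to exist in the notebook):</p> <pre><code>df = None   # notebook-level placeholder

def on_button_clicked(b):
    global df   # rebind the notebook-level name, not a local one
    df = pd.read_csv(f'./siivous/cleanedfiles/node_{karry.value}.csv')
    with output:
        clear_output(wait=True)
        display(df)
        display(img)

# after a click, the dataframe is usable outside the callback:
# print(df)
</code></pre>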
python|pandas
0
5,522
71,909,501
how to count the duplicates in pandas?
<p>Say I want to check for the duplicates in this df column:</p> <pre><code>df = pd.DataFrame( {&quot;column_with_some_duplicates&quot; : ['a', 'b', 'b', 'c', 'c']}, index = [1, 2, 3, 4, 5]) </code></pre> <p>In <code>r</code> I would check for duplicates like this:</p> <pre><code>table(duplicated(df$column_with_some_duplicates)) </code></pre> <p>which gives me a table of <code>TRUE</code> and <code>FALSE</code> counts for the boolean result of <code>duplicated</code>. How can I view the same thing in pandas? Thanks.</p>
<p>To check whether the column contains duplicate values, I suggest writing a small function.</p> <p>You could use the built-in <code>set</code> class, which eliminates duplicates, and compare its size with the column's length:</p> <pre><code>def isduplicate(df, col): return len(set(df[col])) &lt; len(df[col]) </code></pre> <p>or you could just use the <code>.duplicated()</code> method, which returns a boolean Series:</p> <pre><code>def isduplicate(df, col): return df[col].duplicated() </code></pre> <p>A different approach is to compare the number of unique elements with the length:</p> <pre><code>def is_duplicate(df, col): return len(df[col].unique()) &lt; len(df[col]) </code></pre>
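<p>The closest one-liner to R's <code>table(duplicated(...))</code> is probably <code>value_counts</code> on the boolean Series:</p> <pre><code>df['column_with_some_duplicates'].duplicated().value_counts()
# False    3
# True     2
# dtype: int64
</code></pre>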
python|r|pandas
0
5,523
71,887,751
Pandas zip multiple internal corresponding lists to long records
<p>I don't think this is a typical wide to long question because the items I'm looking to turn to long are actually nested in list fields.</p> <p>I have a uid field which is a list of integers, and another array which is a list of booleans that corresponds to the uid fields. I'd like to turn this array records into long records instead.</p> <p>My data frame that looks like this:</p> <pre><code> name uid is_left colC colD colE ... record01 [885] [True] .. .. .. ... record02 [981] [False] .. .. .. ... record03 [713, 981] [False, True] .. .. .. ... record04 [713] [True] .. .. .. ... record05 [126] [True] .. .. .. ... </code></pre> <p>I'd like to unwind it to:</p> <pre><code> name uid is_left colC colD colE ... record01 885 True .. .. .. ... record02 981 False .. .. .. ... record03 713 False .. .. .. ... record03 981 True .. .. .. ... record04 713 True .. .. .. ... record05 126 True .. .. .. ... </code></pre> <p>You can see <code>record03</code> now has 2 entries, one for <code>(713, False)</code> and one for <code>(981, True)</code>.</p>
<p><strong>Better Answer:</strong></p> <p>In pandas 1.3 you can use multi-column explode:</p> <pre><code>df.explode(['uid','is_left']) </code></pre> <p>For older versions, <code>explode</code> each column individually:</p> <pre><code>df.apply(pd.Series.explode) </code></pre> <p><strong>Old Answer:</strong></p> <p>You can use the <code>explode</code> method:</p> <pre class="lang-py prettyprint-override"><code>df.explode(&quot;uid&quot;).explode(&quot;is_left&quot;) </code></pre> <p><code>explode</code> takes the name of the column whose list elements should be converted to new rows. Note that chaining two <code>explode</code> calls produces every uid/is_left pairing per row, so the paired variants above are preferred.</p>
pandas|dataframe
1
5,524
16,613,546
Using arctan / arctan2 to plot a from 0 to 2π
<p>I am trying to replicate a plot in <em>Orbital Mechanics</em> by Curtis, but I just can't quite get it. However, I have made head way by switching to <code>np.arctan2</code> from <code>np.arctan</code>.</p> <p>Maybe I am implementing <code>arctan2</code> incorrectly?</p> <pre><code>import pylab import numpy as np e = np.arange(0.0, 1.0, 0.15).reshape(-1, 1) nu = np.linspace(0.001, 2 * np.pi - 0.001, 50000) M2evals = (2 * np.arctan2(1, 1 / (((1 - e) / (1 + e)) ** 0.5 * np.tan(nu / 2) - e * (1 - e ** 2) ** 0.5 * np.sin(nu) / (1 + e * np.cos(nu))))) fig2 = pylab.figure() ax2 = fig2.add_subplot(111) for Me2, _e in zip(M2evals, e.ravel()): ax2.plot(nu.ravel(), Me2, label = str(_e)) pylab.legend() pylab.xlim((0, 7.75)) pylab.ylim((0, 2 * np.pi)) pylab.show() </code></pre> <p>In the image below, there are discontinuities popping up. The function is supposed to be smooth and connect at 0 and 2 pi in the y range of (0, 2pi) not touching 0 and 2pi.</p> <p><img src="https://i.stack.imgur.com/JY9BX.png" alt="Enter image description here" /></p> <p>Textbook plot and equation:</p> <p><img src="https://i.stack.imgur.com/2jfj5.jpg" alt="Enter image description here" /></p> <p><img src="https://i.stack.imgur.com/coYmE.jpg" alt="Enter image description here" /></p> <p>At the request of Saullo Castro, I was told that:</p> <blockquote> <p>The problem may lie in the arctan function which gives &quot;principle values&quot; as output.</p> </blockquote> <p>Thus, arctan(tan(x)) does not yield x if x is an angle in the second or third quadrant. If you plot arctan(tan(x)) from x = 0 to x = Pi, you will find that it has a discontinuous jump at x = Pi/2.</p> <p>For your case, instead of writing arctan(arg), I believe you would write arctan2(1, 1/arg) where arg is the argument of your arctan function. That way, when arg becomes negative, arctan2 will yield an angle in the second quadrant rather than the fourth.&quot;</p>
<p>The common practice is to add 2*pi to the negative results of <code>arctan()</code>, <a href="https://stackoverflow.com/questions/10335090/numpy-replace-negative-values-in-array/10335159#10335159">which can be done efficiently</a>. The OP's suggestion to replace arctan(x) by arctan2(1, 1/x), also suggested by Maple 15's documentation as <a href="https://stackoverflow.com/questions/16613546/using-arctan-arctan2-to-plot-a-from-0-to-2%CF%80#comment79235056_16614914">pointed out by Yay295</a>, produces the same results without the need to add 2*pi. Both are shown below:</p> <pre><code>import pylab import numpy as np e = np.arange(0.0, 1.0, 0.15).reshape(-1, 1) nu = np.linspace(0, 2*np.pi, 50000) x = ((1-e)/(1+e))**0.5 * np.tan(nu/2.) x2 = e*(1-e**2)**0.5 * np.sin(nu)/(1 + e*np.cos(nu)) using_arctan = True using_OP_arctan2 = False if using_arctan: M2evals = 2*np.arctan(x) - x2 M2evals[M2evals&lt;0] += 2*np.pi elif using_OP_arctan2: M2evals = 2 * np.arctan2(1,1/x) - x2 fig2 = pylab.figure() ax2 = fig2.add_subplot(111) for M2e, _e in zip(M2evals, e.ravel()): ax2.plot(nu.ravel(), M2e, label = str(_e)) pylab.legend(loc='upper left') pylab.show() </code></pre> <p><img src="https://i.stack.imgur.com/54wbZ.png" alt="Enter image description here" /></p>
python|numpy|matplotlib
12
5,525
19,084,423
Simple classification in scikit-learn
<p>I am trying to develop a simple classification program using scikit-learn. I want to pull in my set of tsv values, save them in an array. Then, save a csv containing the first value of my tsv from above and simply a random 1 or 0. So it will be output to the csv as follows:</p> <pre><code>tsvValue1, random1or0 eg string123, 0 foo234, 1 </code></pre> <p>I have all the code (nearly) separately, my problem is fitting it all together. </p> <pre><code>import numpy as np from sklearn import metrics,preprocessing,cross_validation import pandas as p loadData = lambda f: np.genfromtxt(open(f,'r'), delimiter=' ') def main(): traindata = list(np.array(p.read_table('../data/train.tsv'))[:,2]) testdata = list(np.array(p.read_table('../data/test.tsv'))[:,2]) y = np.array(p.read_table('../data/train.tsv'))[:,-1] X_all = traindata + testdata # What can I do below? What can I use to export to csv # properly with an appended 1 or 0 value below ? from random import randint randomInt = randint(0,1) #Inclusive testfile = p.read_csv( '../data/test.tsv', sep="\t", na_values=['?'], index_col=1) pred_df = p.DataFrame(testdata, index=testfile.index, columns=['label']) pred_df.to_csv('test.csv') print ("your random file has been created..") if __name__=="__main__": main() </code></pre> <p>UPDATE : Standard format of input tsv file:</p> <pre><code>foo1 foo2 foo3 foo4 fooN RelevantString123123123 RelevantString456456456 RelevantString789789789 </code></pre> <p>Format of desired resulting csv:</p> <pre><code>RelevantString123123123,1 RelevantString456456456,0 RelevantString789789789,1 </code></pre> <p>The second 1 or 0 in the csv file being ranzomly generated.</p>
<p>Having the file <code>input.tsv</code> with the content (separated by tabs):</p> <pre><code>foo1 foo2 foo3 foo4 fooN RelevantString123123123 RelevantString456456456 RelevantString789789789 </code></pre> <p>This shows how to get the output you want (note: in current pandas the column-selection keyword of <code>to_csv</code> is <code>columns</code>; in older versions it was called <code>cols</code>):</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; import pandas &gt;&gt;&gt; df = pandas.read_csv('input.tsv', sep='\t') &gt;&gt;&gt; df['value'] = pandas.Series(np.random.randint(2, size=len(df)), index=df.index) &gt;&gt;&gt; df.to_csv('output.csv', columns=['foo1', 'value'], index=False) </code></pre> <p>The <code>output.csv</code> content is:</p> <pre><code>foo1,value RelevantString123123123,1 RelevantString456456456,0 RelevantString789789789,0 </code></pre>
python|numpy|pandas|scikit-learn|export-to-csv
1
5,526
22,086,619
how to apply a function to multiple columns in a pandas dataframe at one time
<p>I frequently deal with data which is poorly formatted (i.e. number fields are not consistent, etc.)</p> <p>There may be other ways which I am not aware of, but the way I format a single column in a dataframe is by using a function and mapping the column to that function.</p> <pre><code>format = df.column_name.map(format_number) </code></pre> <p>Question: what if I have a dataframe with 50 columns and want to apply that formatting to multiple columns, e.g. columns 1, 3, 5, 7, and 9?</p> <p>Can you go:</p> <pre><code>format = df.1,3,5,9.map(format_number) </code></pre> <p>This way I could format all my number columns in one line?</p>
<p>You can do <code>df[['Col1', 'Col2', 'Col3']].applymap(format_number)</code>. Note, though, that this will return new columns; it won't modify the existing DataFrame. If you want to put the values back into the original, you'll have to do <code>df[['Col1', 'Col2', 'Col3']] = df[['Col1', 'Col2', 'Col3']].applymap(format_number)</code>.</p>
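<p>For illustration, a minimal runnable sketch (the column names and the <code>format_number</code> function here are made up):</p>
<pre><code>import pandas as pd

def format_number(x):
    # hypothetical formatter: strip thousands separators, round to 2 decimals
    return round(float(str(x).replace(',', '')), 2)

df = pd.DataFrame({'Col1': ['1,234.5', '5.678'],
                   'Col2': ['a', 'b'],
                   'Col3': ['0.123', '9,000.1']})
df[['Col1', 'Col3']] = df[['Col1', 'Col3']].applymap(format_number)
print(df)
</code></pre>
<p>In recent pandas (2.1+), <code>applymap</code> has been renamed to <code>DataFrame.map</code>, so the same call would be <code>df[['Col1', 'Col3']].map(format_number)</code>.</p>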
python|pandas|filtering|slice
18
5,527
56,770,790
Splitting text of a single row into multiple rows of the same column in a CSV file using Python
<p>The dictionary has the following key-value pairs:</p> <pre><code> { 'Target_Tab': 'employees', ' Target_Col': 'empp_id last_name first_name', 'Source_Col': 'emp_id l_name f_name', 'Source_Tab': 'employee' } </code></pre> <p>I'm writing this dictionary into a CSV file and so far I've got this:</p> <pre><code>Source_Tab Source_Col Target_Tab Target_Col employee emp_id last_name first_name employees empp_id l_name f_name </code></pre> <p>I want to write the Source_Col and Target_Col values in different rows. See below for what I need:</p> <pre><code>Source_Tab Source_Col Target_Tab Target_Col employee emp_id employees empp_id last_name l_name first_name f_name </code></pre> <p>My code is as follows:</p> <pre><code>import pandas as pd d = [sdict] d2 = [] col = ["Source_Table","Source_Columns","Target_Table","Target_Columns"] for i in d: temp = {} for c in col: if c in i: temp[c] = i[c] else: temp[c] = '' d2.append(temp) df2 = pd.DataFrame(d2, columns=col) df2.to_csv('test21.csv', index=False) </code></pre>
<p>Use a list comprehension with <code>split</code> to build a list of <code>Series</code> and join them together with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a>; lastly, replace missing values with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>DataFrame.fillna</code></a> and set the order of columns with the list <code>col</code>:</p> <pre><code>d = { 'Target_Tab': 'employees', 'Target_Col': 'empp_id last_name first_name', 'Source_Col': 'emp_id l_name f_name', 'Source_Tab': 'employee' } col = ["Source_Tab","Source_Col","Target_Tab","Target_Col"] df = pd.concat([pd.Series(v.split(), name=k) for k, v in d.items()], axis=1).fillna('')[col] print (df) Source_Tab Source_Col Target_Tab Target_Col 0 employee emp_id employees empp_id 1 l_name last_name 2 f_name first_name </code></pre> <p>Another solution:</p> <pre><code>col = ["Source_Tab","Source_Col","Target_Tab","Target_Col"] df = pd.Series(d).str.split(expand=True).fillna('').reindex(col).T print (df) Source_Tab Source_Col Target_Tab Target_Col 0 employee emp_id employees empp_id 1 l_name last_name 2 f_name first_name </code></pre> <p>EDIT:</p> <p>If you need to filter keys in the source dictionary:</p> <pre><code>d = { 'Target_Tab': 'employees', 'Target_Col': 'empp_id last_name first_name', 'Source_Col': 'emp_id l_name f_name', 'Source_Tab': 'employee' } L = ['Source_Tab','Source_Col'] df = (pd.concat([pd.Series(v.split(), name=k) for k, v in d.items() if k in L], axis=1) .fillna('')) print (df) Source_Col Source_Tab 0 emp_id employee 1 l_name 2 f_name </code></pre>
python|pandas|csv|dictionary
2
5,528
56,853,418
How to convert np.int64 into python int64 for PandasSeries?
<p>I am trying to insert data from a pandas DataFrame into a PostgreSQL table.</p> <p>The table that I try to insert into looks like:</p> <pre><code>city_id date forecast 5 29.05.2019 0 1 29.05.2019 0 151 29.05.2019 0 55 29.05.2019 0 ... </code></pre> <p><strong>types:</strong></p> <ul> <li><code>city_id</code> - <code>numpy.int64</code></li> <li><code>date</code> - <code>datetime.date</code></li> <li><code>forecast</code> - <code>numpy.int64</code></li> </ul> <p>And the <strong>block of code</strong> that inserts the data into the db:</p> <pre><code> with psycopg2.connect(f"host='{hostname}' \ dbname='{database}' \ user='{username}' \ password='{password}'") as connection: with connection.cursor() as cursor: connection.set_client_encoding('UTF8') for i in df_with_new_one.index: date = df_with_new_one['date'][i] city_id = df_with_new_one['city_id'][i] value = df_with_new_one['forecast'][i] cursor.execute("INSERT INTO forecast \ (city_id, computed_time, date, value) \ VALUES (%s, %s, %s, %s)", (city_id, now, date, value)) </code></pre> <p>Where <code>now</code> is the time saved as <code>datetime.datetime.now()</code></p> <p><strong>And I get</strong> a <em>ProgrammingError</em>:</p> <pre><code> ProgrammingError: can't adapt type 'numpy.int64' </code></pre> <p>I checked the <strong>type</strong> with <code>type(df_with_new_one['forecast'][0])</code>; the type is <code>numpy.int64</code></p> <p>So I gather that PostgreSQL can read only the pythonic <code>int</code> and <code>float</code>, and the first thing I've tried was converting <code>np.int64</code> into a simple <code>int</code> with:</p> <ul> <li><code>tolist()</code> </li> <li><code>pd.to_numeric()</code></li> <li><code>int()</code> for <code>((int(city_id), now, date, int(value))</code></li> <li><code>.astype(int)</code></li> <li><code>.value.astype('int')</code></li> </ul> <p><strong>Upd.:</strong></p> <ul> <li><code>city_id = int(df_with_new_one['city_id'][i])</code> <code>value = int(df_with_new_one['forecast'][i])</code></li> </ul> <p>Unfortunately <strong>none of them works</strong> for me.</p> <p>When I tried <code>int()</code> I got another error: </p> <pre><code> TypeError: cannot convert the series to &lt;class 'int'&gt; </code></pre> <p>Answers that <strong>I found</strong>, but none of them helped me:</p> <ul> <li><a href="https://stackoverflow.com/questions/50626058/psycopg2-cant-adapt-type-numpy-int64">psycopg2: can&#39;t adapt type &#39;numpy.int64&#39;</a></li> <li><a href="https://stackoverflow.com/questions/39564755/programmingerror-psycopg2-programmingerror-cant-adapt-type-numpy-ndarray">ProgrammingError: (psycopg2.ProgrammingError) can&#39;t adapt type &#39;numpy.ndarray&#39;</a></li> <li><a href="https://stackoverflow.com/questions/41000428/python-typeerror-cannot-convert-the-series-to-class-int-when-trying-to-do-m">Python TypeError: cannot convert the series to &lt;class &#39;int&#39;&gt; when trying to do math on dataframe</a></li> <li><a href="https://stackoverflow.com/questions/48120611/python-pandas-filtering-typeerror-cannot-convert-the-series-to-class-int?rq=1">Python Pandas filtering; TypeError: cannot convert the series to &lt;class &#39;int&#39;&gt;</a></li> </ul> <p>Are there <strong>any other methods</strong> to change the type of the values?</p>
<p>The problem was wrong indexing: </p> <ul> <li>the index ran from 83 to 1161, and after 1161, where 1162 should have been, it was 83 again, with the next values 84, 85, etc.</li> </ul> <p>Thus, the problem was solved by <code>.reset_index()</code>:</p> <p><code>df_with_new_one.reset_index(drop = True, inplace = True)</code></p> <p>Thank you all for the answers!</p>
python|pandas|postgresql|numpy|types
2
5,529
56,835,971
split string for a range of columns Pandas
<p>How can I split the strings into lists for each column of the following Pandas dataframe with many columns?</p> <pre><code>col1 col2 0/1:9,12:21:99 0/1:9,12:22:99 0/1:9,12:23:99 0/1:9,15:24:99 </code></pre> <p>Desired output:</p> <pre><code>col1 col2 [0/1,[9,12],21,99] [0/1,[9,12],22,99] [0/1,[9,12],23,99] [0/1,[9,15],24,99] </code></pre> <p>I could do:</p> <pre><code>df['col1'].str.split(":", n = -1, expand = True) df['col2'].str.split(":", n = -1, expand = True) </code></pre> <p>but I have many columns, so I was wondering if I could do it in a more automated way.</p> <p>I would then like to calculate the mean of the 2nd element of each list for every row; that is, for the first row, get the mean of 21 and 22, and for the second row, the mean of 23 and 24.</p>
<p>If the data is like your sample, you can make use of <code>stack</code>:</p> <pre><code>new_df = (df.iloc[:,0:2] .stack() .str.split(':',expand=True) ) </code></pre> <p>Then <code>new_df</code> is doubly indexed:</p> <pre><code> 0 1 2 3 0 col1 0/1 9,12 21 99 col2 0/1 9,12 22 99 1 col1 0/1 9,12 23 99 col2 0/1 9,15 24 99 </code></pre> <p>And if you want the mean of the 2nd elements:</p> <pre><code>new_df[2].unstack(level=-1).astype(float).mean(axis=1) </code></pre> <p>gives:</p> <pre><code>0 21.5 1 23.5 dtype: float64 </code></pre>
pandas
1
5,530
25,701,513
conda update numpy to 1.8.x for 64-bit windows
<p>I'm using a 64-bit machine with Spyder by Anaconda and want to upgrade numpy from 1.7.1 to 1.8.x. But when I use this command: </p> <pre><code>conda update numpy </code></pre> <p>I get the following message:</p> <p><img src="https://i.stack.imgur.com/Q1LtD.png" alt="enter image description here"></p> <p>In other words, the same old version 1.7.1. Why is this? I want the latest numpy.</p>
<p>I found the solution in this thread: </p> <p><a href="https://stackoverflow.com/questions/11200137/installing-numpy-on-64bit-windows-7-with-python-2-7-3">Installing Numpy on 64bit Windows 7 with Python 2.7.3</a></p> <p>There was an answer saying:</p> <blockquote> <p>But you need to modify your environment variable PATH, so that the anaconda folder is before the original Python folder.</p> </blockquote> <p>I've actually got two anaconda installs on my machine: a 32-bit one and the 64-bit environment that I use. So I deleted the path to the 32-bit anaconda and put the Python27 path after the 64-bit anaconda path, and now when I try to update numpy, it works, up to 1.8.x!</p>
python|numpy|anaconda|spyder
1
5,531
66,767,350
Keeping only the first occurrences of a data in column on a given date without removing other occurrences in pandas
<p>I'm pretty new to pandas, so bear with me. I have a data frame of 1-minute interval data spanning a few years. Each row has a <code>Long Signal</code> column. The index of my data frame is the datetime column.</p> <p><a href="https://i.stack.imgur.com/u8apk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u8apk.png" alt="Data Frame" /></a></p> <p>For simplicity, let's take only two days of data.</p> <pre><code> Long Signal Date 2008-01-01 09:55 0 2008-01-01 09:56 1 2008-01-01 09:57 0 ............... 2008-01-01 03:28 0 2008-01-01 03:29 1 2008-01-01 03:30 1 2008-01-02 09:55 0 2008-01-02 09:56 0 2008-01-02 09:57 1 ............... 2008-01-02 03:28 0 2008-01-02 03:29 1 2008-01-02 03:30 1 </code></pre> <p>I'm trying to convert this into the data frame below:</p> <pre><code> Long Signal Date 2008-01-01 09:55 0 2008-01-01 09:56 1 2008-01-01 09:57 0 ............... 2008-01-01 03:28 0 2008-01-01 03:29 0 2008-01-01 03:30 0 2008-01-02 09:55 0 2008-01-02 09:56 0 2008-01-02 09:57 1 ............... 2008-01-02 03:28 0 2008-01-02 03:29 0 2008-01-02 03:30 0 </code></pre> <p>That is, I just want to keep the first occurrence of <code>Long Signal</code> and fill the remaining occurrences on the same day with 0, so that on any given day <code>Long Signal</code> will have at most one occurrence of the value 1. I tried <code>drop_duplicates</code> but had no luck. I would appreciate any help.</p> <p>Edit 1:</p> <p>I want to keep the time information at which the first <code>Long Signal</code> is 1. In my case, for the day <code>2008-01-01</code> it is <code>09:56</code> and for <code>2008-01-02</code> it is <code>09:57</code>. In other words, I want my data to stay at the 1-minute interval itself while keeping data only for the first occurrence of the value <code>1</code> in <code>Long Signal</code>.</p>
<p>To keep only the first occurrence of <code>Long Signal</code> and fill the remaining ones with 0, you can use a combination of <code>loc</code>, <code>groupby</code> with <code>dt.day</code>, and <code>idxmax()</code>.</p> <p>The <code>idxmax()</code> function is used to get the row label of the maximum value, and if multiple values equal the maximum (i.e. in your case 1), the first row label with that value is returned, so it is perfect for your needs.</p> <p>To illustrate:</p> <pre><code>df['Long Signal'] = df['Long Signal'].loc[df.groupby([df.Date.dt.day])['Long Signal'].idxmax()] df['Long Signal'].fillna(0,inplace=True) </code></pre> <p>This will give back:</p> <pre><code>Out[134]: Date Long Signal 0 2008-01-01 09:55:00 0.0 1 2008-01-01 09:56:00 1.0 2 2008-01-01 09:57:00 0.0 3 2008-01-01 03:28:00 0.0 4 2008-01-01 03:29:00 0.0 5 2008-01-01 03:30:00 0.0 6 2008-01-02 09:55:00 0.0 7 2008-01-02 09:56:00 0.0 8 2008-01-02 09:57:00 1.0 9 2008-01-02 03:28:00 0.0 10 2008-01-02 03:29:00 0.0 11 2008-01-02 03:30:00 0.0 </code></pre> <p>To keep the time information at which the first Long Signal is 1, you can simply use:</p> <pre><code>df.loc[df['Long Signal']==1]['Date'] 1 2008-01-01 09:56:00 8 2008-01-02 09:57:00 </code></pre> <p>But I can't be 100% sure this is what you need for the 2nd part, as it is not demonstrated in your desired output.</p>
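<p>One caveat worth noting: <code>dt.day</code> groups by day-of-month, so rows from different months would land in the same group. If the data spans several months, grouping by the full calendar date should be safer. A sketch under that assumption (with <code>Date</code> as a regular datetime column, as above); the extra <code>has_signal</code> mask guards against days that contain no 1 at all, where <code>idxmax</code> would otherwise pick the day's first row:</p>
<pre><code>by_day = df.groupby(df['Date'].dt.date)['Long Signal']
first_idx = by_day.idxmax()      # row label of each day's first maximum
has_signal = by_day.max() &gt; 0    # only keep days that actually contain a 1
df['Long Signal'] = 0
df.loc[first_idx[has_signal], 'Long Signal'] = 1
</code></pre>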
python|pandas
1
5,532
67,123,730
Saving dataframes with a key
<p>I'm trying to parse a csv file and print certain timeseries graphs.</p> <p><strong>About the csv file:</strong> The csv file contains a lot of data, from which I need to parse certain sections based on the ID inside a for loop. <em>The csv file looks like this:</em></p> <pre><code>ID,name,date,confirmedInfections DE2,BAYERN,2020-02-24,19 . DE2,BAYERN,2020-02-25,19 DE1,BADEN-WÜRTTEMBERG,2020-02-24,1 . DE1,BADEN-WÜRTTEMBERG,2020-02-26,7 . DE4,BRANDENBURG,2020-02-24,2 . DE4,BRANDENBURG,2020-07-27,45 </code></pre> <p><strong>About my code:</strong> So, after seeing an example of the csv file, let me tell you my goal. I need to save a dataframe for every different ID that appears in that csv file, with the remaining columns <code>name</code>, <code>date</code> and <code>confirmedInfections</code> for that particular ID. There are about 12 IDs in this file, so I'm trying to save different dataframes for each ID with a for loop and then do some actions for plotting timeseries graphs.</p> <p><em>My code:</em></p> <pre><code>import pandas as pd import matplotlib.pyplot as plt from statsmodels.compat import pandas def main(file): id_array = [&quot;DE2&quot;, &quot;DE1&quot;, &quot;DE4&quot;, &quot;DE6&quot;, &quot;DE5&quot;, &quot;DE8&quot;, &quot;DE7&quot;, &quot;DE9&quot;, &quot;DEB&quot;, &quot;DEA&quot;, &quot;DED&quot;, &quot;DEC&quot;, &quot;DEF&quot;, &quot;DEE&quot;, &quot;DEG&quot;] df = pd.read_csv(file, index_col='ID') print(df) for key in id_array: df=df.loc[key] print(key) df.plot() plt.show() main('data.txt') </code></pre> <p>After running this code I get the results of ID=DE2 and its graph. But after DE2 my code crashes and none of the following dataframes I want to see appear.</p> <p>Errors:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\Deray\PycharmProjects\covid-19\venv\lib\site-packages\pandas\core\indexes\base.py&quot;, line 3080, in get_loc return self._engine.get_loc(casted_key) File &quot;pandas\_libs\index.pyx&quot;, line 70, in pandas._libs.index.IndexEngine.get_loc File &quot;pandas\_libs\index.pyx&quot;, line 96, in pandas._libs.index.IndexEngine.get_loc File &quot;pandas\_libs\index.pyx&quot;, line 120, in pandas._libs.index.IndexEngine._get_loc_duplicates KeyError: 'DE1' </code></pre> <p>Thanks for your time, any thoughts?</p>
<p>The problem is that you're assigning the key-filtered dataframe to 'df' within your 'for' loop, thus overwriting the original dataframe. To fix, you need to assign the filtered dataframe to another variable. Try:</p> <pre><code>for key in id_array: df_temp = df.loc[key] print(key) df_temp.plot() plt.show() </code></pre>
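<p>As an aside, <code>groupby</code> on the index avoids hard-coding the ID list altogether. A minimal sketch, assuming the CSV was read with <code>index_col='ID'</code> as above:</p>
<pre><code># Each iteration yields one ID and the sub-frame of its rows
for key, group in df.groupby(level=0):
    print(key)
    group.plot()
    plt.show()
</code></pre>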
python|pandas|csv
2
5,533
66,908,476
xlwings removes ' from my string stored in a dataframe object when I try to paste it into excel
<p>I am trying to paste a dataframe into excel using xlwings.</p> <p>One of the columns holds the name of the item, and some of those names start with ', e.g. 'FIRST' itemname.</p> <p>When I use</p> <pre><code>xw.Range(startcell, index=False, header=False).value = df </code></pre> <p>xlwings removes the ' at the beginning, so 'FIRST' itemname becomes FIRST' itemname.</p> <p>I have printed out the dataframe before trying to paste it in with xlwings; there everything looks as it should, and it stores the name as an object type when I call df.dtypes.</p> <p>EDIT: On closer inspection it does carry over to Excel; however, even though it is written correctly in the formula bar, it isn't in the actual cell.</p> <p>How can I make the ' carry over and get pasted into excel along with everything else?</p>
<p>I'll answer my own question.</p> <p>Someone informed me that Excel will always treat a leading ' as a text marker, so if I actually want it to write a leading ' I have to double it, i.e. ''FIRST' itemname. I'll handle that in the dataframe before pasting it.</p>
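<p>A minimal sketch of that pre-processing step (the column name <code>name</code> is made up here):</p>
<pre><code># Double any leading apostrophe so Excel keeps one literal ' in the cell
df['name'] = df['name'].str.replace(r"^'", "''", regex=True)
</code></pre>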
python|excel|pandas|dataframe|xlwings
1
5,534
67,007,721
AssertionError with Keras 2.4
<p>The other answers already asked on SO did not answer my question.</p> <p>I have the following versions:</p> <pre><code>pip list | egrep -i '(keras|tensor)' Keras 2.4.3 Keras-Preprocessing 1.1.2 tensorboard 2.4.1 tensorboard-plugin-wit 1.8.0 tensorflow-estimator 2.4.0 tensorflow-gpu 2.4.1 tensorflow-serving-api 2.4.1 </code></pre> <p>Code:</p> <pre><code>def make_generator_model(): model = tf.keras.Sequential() model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,))) model.add(layers.BatchNormalization()) model.add(layers.LeakyReLU()) model.add(layers.Reshape((7, 7, 256))) logging.info(f&quot;model.output_shape: {model.output_shape}&quot;) assert model.output_shape == (None, 7, 7, 256) # Note: None is the batch size model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False)) logging.info(f&quot;model.output_shape: {model.output_shape}&quot;) assert model.output_shape == (None, 7, 7, 128) model.add(layers.BatchNormalization()) model.add(layers.LeakyReLU()) model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False)) logging.info(f&quot;model.output_shape: {model.output_shape}&quot;) assert model.output_shape == (None, 14, 14, 64) model.add(layers.BatchNormalization()) model.add(layers.LeakyReLU()) model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh')) logging.info(f&quot;model.output_shape: {model.output_shape}&quot;) assert model.output_shape == (None, 28, 28, 1) return model </code></pre> <p>Error:</p> <pre><code>2021-04-08 15:54:32 ERROR Exception caught: Traceback (most recent call last): File &quot;./bin/imgenk&quot;, line 348, in main args_switch(args, config) File &quot;./bin/imgenk&quot;, line 297, in args_switch return fn(args, config) File &quot;./bin/imgenk&quot;, line 287, in train_cli start_training(args.image_folder) File &quot;./bin/imgenk&quot;, line 241, in start_training generator = make_generator_model() File &quot;./bin/imgenk&quot;, line 94, in make_generator_model assert model.output_shape == (None, 7, 7, 128) AssertionError </code></pre> <p>Line 94 is this:</p> <pre><code> assert model.output_shape == (None, 7, 7, 128) </code></pre> <p>In fact the logging before the error:</p> <pre><code>2021-04-08 15:54:32 INFO &lt;BatchDataset shapes: (None, 28, 28, 1), types: tf.float32&gt; 2021-04-08 15:54:32 INFO model.output_shape: (None, 7, 7, 256) 2021-04-08 15:54:32 INFO model.output_shape: (None, 128, 7, 256) 2021-04-08 15:54:32 ERROR Exception caught in main </code></pre> <p>This is really weird. On Python 3.9 this code just works:</p> <pre><code>pip list | egrep -i '(keras|tensor)' keras-nightly 2.5.0.dev2021032900 Keras-Preprocessing 1.1.2 tensorboard 2.4.1 tensorboard-plugin-wit 1.8.0 tensorflow 2.5.0rc0 </code></pre> <pre><code>2021-04-08 17:59:45 INFO &lt;BatchDataset shapes: (None, 28, 28, 1), types: tf.float32&gt; 2021-04-08 17:59:45 INFO model.output_shape: (None, 7, 7, 256) 2021-04-08 17:59:45 INFO model.output_shape: (None, 7, 7, 128) 2021-04-08 17:59:45 INFO model.output_shape: (None, 14, 14, 64) 2021-04-08 17:59:45 INFO model.output_shape: (None, 28, 28, 1) </code></pre> <p>Does anybody know what could make Keras behave totally differently?</p>
<p>It seems that the particular TensorFlow + Python version combination was causing this. Since this was happening on an AWS instance with a Deep Learning AMI, I was able to switch to a different environment with different versions, where the code started to work.</p>
python|tensorflow|machine-learning|keras|deep-learning
0
5,535
47,343,838
How to change column names in pandas Dataframe using a list of names?
<p>I have been trying to change the column names of a pandas dataframe using a list of names. The following code is being used:</p> <pre><code>df.rename(columns = list_of_names, inplace=True) </code></pre> <p>However, I got a Type Error each time, with an error message that says "<strong>list object is not callable</strong>". I would like to know why this happens, and what I can do to solve this problem. Thank you for your help.</p>
<p>You could use</p> <pre><code>df.columns = ['Leader', 'Time', 'Score'] </code></pre>
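<p>The reason the original call fails is that <code>rename(columns=...)</code> expects a dict-like mapping or a function, not a plain list. If you want to keep using <code>rename</code>, a sketch that builds the mapping from your list:</p>
<pre><code># Map each existing column name to the corresponding new name
df.rename(columns=dict(zip(df.columns, list_of_names)), inplace=True)
</code></pre>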
python|pandas|numpy
51
5,536
11,071,490
numpy, mapping one array to parts of another
<p>Let's say I have one array</p> <pre><code> a = numpy.arange(8*6*3).reshape((8, 6, 3)) #and another: l = numpy.array([[0,0],[0,1],[1,1]]) #an array of indexes to array "a" #and yet another: b = numpy.array([[0,0,5],[0,1,0],[1,1,3]]) </code></pre> <p>where "l" and "b" are of equal length, and I want to say</p> <pre><code> a[l] = b </code></pre> <p>such that a[0][0] becomes [0,0,5], a[0][1] becomes [0,1,0], etc.</p> <p>It seems to work fine when I've got one-dimensional arrays, but it gives me the error</p> <pre><code> ValueError: array is not broadcastable to correct shape </code></pre> <p>when I try it with a 3-dimensional array.</p>
<pre><code>import numpy as np a = np.arange(8*6*3).reshape((8, 6, 3)) l = np.array([[0,0],[0,1],[1,1]]) #an array of indexes to array "a" b = np.array([[0,0,5],[0,1,0],[1,1,3]]) a[tuple(l.T)] = b print(a[0,0]) # [0 0 5] print(a[0,1]) # [0 1 0] print(a[1,1]) # [1 1 3] </code></pre> <p><a href="http://mail.scipy.org/pipermail/numpy-discussion/2010-September/052549.html" rel="nofollow">Anne Archibald says</a>,</p> <blockquote> <p>When you are supplying arrays in all index slots, what you get back has the same shape as the arrays you put in; so if you supply one-dimensional lists, like</p> <p>A[[1,2,3],[1,4,5],[7,6,2]]</p> <p>what you get is</p> <p>[A[1,1,7], A[2,4,6], A[3,5,2]]</p> </blockquote> <p>When you compare that with your example, you see that </p> <p><code>a[l] = b</code> tells NumPy to set</p> <pre><code>a[0,0,1] = [0,0,5] a[0,1,1] = [0,1,0] </code></pre> <p>and leaves the third element of <code>b</code> unassigned. This is why you get the error</p> <pre><code>ValueError: array is not broadcastable to correct shape </code></pre> <p>The solution is to transpose the array <code>l</code> into the correct shape:</p> <pre><code>In [50]: tuple(l.T) Out[50]: (array([0, 0, 1]), array([0, 1, 1])) </code></pre> <p>(You could also use <code>zip(*l)</code>, but <code>tuple(l.T)</code> is a bit quicker.)</p>
python|numpy
3
5,537
68,211,850
TypeError: __array__() takes 1 positional argument but 2 were given
<p>I've been doing the pytorch tutorial (<a href="https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html" rel="nofollow noreferrer">https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html</a>) and have been getting this error that I don't know how to fix. The full error is below:</p> <pre><code>Traceback (most recent call last): File &quot;main.py&quot;, line 146, in &lt;module&gt; main() File &quot;main.py&quot;, line 138, in main train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10) File &quot;/engine.py&quot;, line 26, in train_one_epoch for images, targets in metric_logger.log_every(data_loader, print_freq, header): File &quot;/utils.py&quot;, line 180, in log_every for obj in iterable: File &quot;/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py&quot;, line 521, in __next__ data = self._next_data() File &quot;/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py&quot;, line 1203, in _next_data return self._process_data(data) File &quot;/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py&quot;, line 1229, in _process_data data.reraise() File &quot;/usr/local/lib/python3.6/dist-packages/torch/_utils.py&quot;, line 425, in reraise raise self.exc_type(msg) TypeError: Caught TypeError in DataLoader worker process 0. Original Traceback (most recent call last): File &quot;/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py&quot;, line 287, in _worker_loop data = fetcher.fetch(index) File &quot;/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py&quot;, line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File &quot;/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py&quot;, line 44, in &lt;listcomp&gt; data = [self.dataset[idx] for idx in possibly_batched_index] File &quot;/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataset.py&quot;, line 311, in __getitem__ return self.dataset[self.indices[idx]] File &quot;main.py&quot;, line 64, in __getitem__ img, target = self.transforms(img, target) File &quot;/transforms.py&quot;, line 26, in __call__ image, target = t(image, target) File &quot;/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py&quot;, line 1051, in _call_impl return forward_call(*input, **kwargs) File &quot;/transforms.py&quot;, line 50, in forward image = F.to_tensor(image) File &quot;/usr/local/lib/python3.6/dist-packages/torchvision/transforms/functional.py&quot;, line 129, in to_tensor np.array(pic, mode_to_nptype.get(pic.mode, np.uint8), copy=True) TypeError: __array__() takes 1 positional argument but 2 were given </code></pre> <p>I believe it means somewhere I'm using an array with 2 arguments, which isn't allowed, but I don't really know whereabouts that is happening - perhaps in one of the pre-written libraries?</p> <p>I can share the code in full if desired, but thought it's a bit unwieldy. Does anyone know what might be causing this error?</p>
<p>PyTorch has already considered this <a href="https://github.com/pytorch/pytorch/issues/61125#issuecomment-872624273" rel="noreferrer">issue</a>. It does not seem to be a PyTorch problem.</p> <p>As <a href="https://github.com/xwang233" rel="noreferrer">xwang233</a> mentioned in the issue, we can fix it by downgrading pillow:</p> <pre><code>pip install pillow==8.2.0 </code></pre>
python|machine-learning|computer-vision|pytorch|torchvision
7
5,538
68,056,122
AttributeError: can't set attribute in splitting MNIST dataset
<p>I'm working with the pytorch <code>torchvision.datasets.MNIST</code> dataset.</p> <p>To load the dataset I use:</p> <pre><code>mnist_data = datasets.MNIST('../data', train=True, download=True, transform=transforms.Compose( [transforms.ToTensor(),transforms.Normalize((0.1307,), (0.3081,))])) </code></pre> <p>and to transform the training data I use:</p> <pre><code>mnist_data.train_data = (mnist_data.train_data.type(torch.FloatTensor)/255).bernoulli() </code></pre> <p>I got the error: <code>AttributeError: can't set attribute</code></p> <p>How can I solve this error?</p>
<p>If you use <code>torchvision.datasets.MNIST</code> you can change <code>train=True/False</code> for your train or test set.</p> <p>From the <a href="https://pytorch.org/vision/stable/_modules/torchvision/datasets/mnist.html#MNIST" rel="nofollow noreferrer">docs</a>, the MNIST class has a <code>@property</code> train_data, so you can't set train_data as an attribute. You can change it to <code>mnist_data.data_train = ...</code>.</p>
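<p>If the goal is to binarize the images in place, a hedged workaround (assuming a torchvision version where the images live in the writable <code>.data</code> attribute, of which <code>train_data</code> is only a read-only property alias):</p>
<pre><code># .data is a uint8 tensor of shape (60000, 28, 28); bernoulli() treats
# the scaled values as probabilities
mnist_data.data = (mnist_data.data.float() / 255).bernoulli()
</code></pre>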
python|pytorch|torchvision
0
5,539
59,450,901
pandas dataframe group by max on one column
<p>I'm trying to group by the 'keyword' column and get the characteristic with the largest number of records.</p> <p>Let's consider the pandas df:</p> <pre><code>pd.DataFrame([['a', 'A'], ['b', 'A'], ['a', 'B'], ['b', 'B'], ['a', 'A'], ['c', 'B']], columns=['Keywords', 'Char']) </code></pre> <p>For the keyword a the characteristic A is the most frequent, for the keyword b either A or B is OK, and for the keyword c, B is the most frequent.</p> <p>In my case I have 10000 keywords and 3 characteristics. I want the return to be a pd.Series with the keyword as index and the most frequent characteristic as value, or a dictionary with the keyword as key and the most frequent characteristic as value.</p> <p>I tried grouping my keywords and characteristics and counting the rows as follows:</p> <pre><code>res = frame.groupby(['Keywords', 'Char']).size().reset_index().rename(columns={0:'records'}) </code></pre> <p>But I don't know how to get the characteristic corresponding to the maximum.</p> <p>Expected output (any of these is OK):</p> <pre><code>pd.Series(data=['A', 'A', 'B'], index = ['a', 'b', 'c']) </code></pre> <p>or</p> <pre><code>pd.Series(data=['A', 'B', 'B'], index = ['a', 'b', 'c']) </code></pre> <p>or </p> <pre><code>{'a':'A', 'b':'A', 'c':'B'} </code></pre> <p>or</p> <pre><code>{'a':'A', 'b':'B', 'c':'B'} </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>Series.value_counts</code></a> in a lambda function per group by <code>Keywords</code> and return the first value of the index. The idea of the solution is to use <code>value_counts</code> because it sorts values by counts by default:</p> <pre><code>res = frame.groupby('Keywords')['Char'].apply(lambda x: x.value_counts().index[0]) print (res) Keywords a A b B c B Name: Char, dtype: object </code></pre>
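<p>An equivalent, arguably more direct spelling uses <code>idxmax</code> on the counts; ties break the same way, since <code>value_counts</code> sorts by count and <code>idxmax</code> returns the first maximum:</p>
<pre><code>res = frame.groupby('Keywords')['Char'].agg(lambda x: x.value_counts().idxmax())
</code></pre>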
python|pandas|dataframe
3
5,540
59,339,689
Compare 2 columns and add a value in a new column
<p>There are so many suggestions in here on how to solve this problem, but I can't find anything I can get to work.</p> <p>How do I run through a DataFrame to compare the values in 2 cells from 2 different columns but on the same row, and add a value in a new column? I know that the code I wrote cannot be used, since the for loop iterates over columns and not rows.</p> <pre><code>def open_ticker_index(): with open('pickle/' + tickerlist, "rb") as f: tickers = pickle.load(f) for ticker in tickers: df = pd.read_csv('calcuatet_daily_stock_dfs/' + ticker + '.csv') df = df.tail(250) for row in df: if df['Adj Close'] &gt; df['MA3']: df['Adj Close &gt; MA3'] = 1 else: df['Adj Close &gt; MA3'] = 0 </code></pre> <p>I have also tried this, but then I am not able to create a new column:</p> <pre><code>def open_ticker_index(): with open('pickle/' + tickerlist, "rb") as f: tickers = pickle.load(f) for ticker in tickers: df = pd.read_csv('calcuatet_daily_stock_dfs/' + ticker + '.csv') df = df.tail(250) df['Adj Close &gt; MA3'] for col, row in df.iterrows(): if (col, row["Adj Close"][1]) &gt; (col, row["MA3"][1]): df['Adj Close &gt; MA3'] = 1 else: df['Adj Close &gt; MA3'] = 0 </code></pre>
<p>You don't need to make a loop; using numpy, it is easier to do a comparison like this:</p> <pre><code>import numpy as np import pandas as pd from io import StringIO data = """ Col1,Adj,MA3 A,1,2 B,8,5 C,7,7 """ # Here just to create a csv df = pd.read_csv(StringIO(data),sep=',') # With 2 Logical operators &gt; &lt; df['Adj Close &gt; MA3'] =np.where(df['Adj']&gt;=df['MA3'],'1', '0') print(df) # With 3 Logical operators &gt; &lt; = df['Adj Close &gt; MA3'] =np.where(df['Adj']&gt;df['MA3'],'1', np.where(df['Adj']&lt;df['MA3'],'0', 'equals')) print(df) </code></pre>
python|pandas|dataframe
-1
5,541
57,146,153
Reduce multiclass to binary classification problem
<p>I'm doing an experiment with the well-known <a href="https://archive.ics.uci.edu/ml/datasets/Heart+Disease" rel="nofollow noreferrer">UCI heart disease dataset</a>, but it's not showing good results (~58% acc.). </p> <p>This dataset has 5 ordinal classes with "levels of heart disease presence" going from 0 to 4, where 0 means <em>no heart disease</em> and 4 indicates <em>high presence of heart issues</em>. The problem is that this dataset is very unbalanced: there are many more objects classified as 0 than as the others. Presenting this dataset to an MLP gave 58% accuracy, which is very low.</p> <p>So, I'd like to combine all objects classified from 1-4 and transform this into a binary classification problem (e.g. 0 = no disease / 1 = disease found). I've noticed that this is known as a <code>one-against-all</code> strategy. Since I'm very new to this world of ML, I'd like to know how this could be done with pandas, or whether there is a better tool for that.</p>
<p>It's simple. Currently your <code>y_train</code> data looks like <code>[0,2,0,1,3,2,4,0,4,0]</code>. What you do is create an empty array <code>binary_labels</code>, then iterate through each row in the DataFrame: if the label is 0 (no disease) you append 0 to binary_labels, else (labels 1-4) you append 1. Then you introduce a new column in the DataFrame and set binary_labels as its values, or you replace the y_train data with this array.</p> <p>Also, you would replace the loss function in the MLP, etc. But this is how you structure the data.</p>
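<p>In pandas this relabeling is a one-liner. A sketch, assuming the target column is called <code>num</code> (a common name for it in the UCI file; adjust to your column name):</p>
<pre><code># 0 (no disease) stays 0; levels 1-4 collapse into 1 (disease found)
df['binary_target'] = (df['num'] &gt; 0).astype(int)
</code></pre>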
python|pandas|machine-learning|scikit-learn|classification
0
5,542
57,187,839
Use numpy array as lambda argument?
<p>Is there a reasonable way to get the following done on one line? I'd really like to avoid creating a temporary variable or a separate function.</p> <pre><code>import numpy as np x = np.array([1,2,3,4,5]) x = np.ma.masked_where(x&gt;2, x) </code></pre> <p>I tried</p> <pre><code>x = map(lambda x: np.ma.masked_where(x&gt;2, x), np.array([1,2,3,4,5])) </code></pre> <p>but the map object is not what I want. I can of course define a separate function, which avoids assigning a variable:</p> <pre><code>masker = lambda x: np.ma.masked_where(x&gt;2, x) x = masker(np.array([1,2,3,4,5])) </code></pre>
<p>You don't need <code>map</code> at all, just an anonymous function. All you will do is replace the initial assignment to <code>x</code> with a parameter binding in a function call.</p> <pre><code>import numpy as np # x = np.array([1,2,3,4,5]) # x = np.ma.masked_where(x&gt;2, x) x = (lambda x: np.ma.masked_where(x&gt;2, x))(np.array([1,2,3,4,5])) </code></pre>
python|numpy|lambda|functional-programming
1
5,543
46,067,729
Pandas : series object of one column based on another column
<p>I have data like this:</p> <pre><code> end station name User Type 0 Carmine St &amp; 6 Ave Subscriber 1 South End Ave &amp; Liberty St Subscriber 2 Christopher St &amp; Greenwich St Subscriber 3 Lafayette St &amp; Jersey St Subscriber 4 W 52 St &amp; 11 Ave Subscriber 5 E 53 St &amp; Lexington Ave Subscriber 6 W 17 St &amp; 8 Ave Subscriber 7 St Marks Pl &amp; 2 Ave Subscriber 8 Washington St &amp; Gansevoort St Customer 9 Barclay St &amp; Church St Subscriber 10 Washington St &amp; Gansevoort St Customer 11 E 37 St &amp; Lexington Ave Subscriber 12 E 51 St &amp; 1 Ave Subscriber 13 W 33 St &amp; 7 Ave Subscriber 14 Pike St &amp; Monroe St Subscriber 15 E 24 St &amp; Park Ave S Subscriber 16 1 Ave &amp; E 15 St Subscriber 17 Broadway &amp; W 32 St Customer 18 E 39 St &amp; 3 Ave Customer 19 W 59 St &amp; 10 Ave Subscriber 20 Centre St &amp; Chambers St Subscriber 21 9 Ave &amp; W 45 St Customer 22 8 Ave &amp; W 33 St Subscriber 23 Suffolk St &amp; Stanton St Subscriber 24 W 47 St &amp; 10 Ave Subscriber 25 W 33 St &amp; 7 Ave Subscriber 26 8 Ave &amp; W 33 St Subscriber 27 1 Ave &amp; E 15 St Customer 28 8 Ave &amp; W 33 St Subscriber 29 W 33 St &amp; 7 Ave Subscriber ... ... ... </code></pre> <p>I want to find the five (5) most popular stations for <strong>Customers</strong> in descending order of popularity.</p> <p>Here is my code:</p> <pre><code>import pandas as pd rides = pd.read_csv(csv_file_path, low_memory=False, parse_dates=True) five_popular_station_end_trip = rides['end station name'].value_counts().head() </code></pre> <p>I can find the most popular stations from one column, but I have no idea how to find them based on another column.</p>
<p>I think you need to filter first by <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>:</p> <pre><code>df1 = rides[rides['User Type'] == 'Customer'] five_popular_station_end_trip = df1['end station name'].value_counts().head() print (five_popular_station_end_trip) Washington St &amp; Gansevoort St 2 Broadway &amp; W 32 St 1 1 Ave &amp; E 15 St 1 E 39 St &amp; 3 Ave 1 9 Ave &amp; W 45 St 1 Name: end station name, dtype: int64 </code></pre> <p>But if you need all categories:</p> <pre><code>df = rides.groupby('User Type')['end station name'] \ .apply(lambda x: x.value_counts().head()) \ .reset_index(name='count') \ .rename(columns={'level_1':'end station name'}) print (df) User Type end station name count 0 Customer Washington St &amp; Gansevoort St 2 1 Customer Broadway &amp; W 32 St 1 2 Customer 1 Ave &amp; E 15 St 1 3 Customer E 39 St &amp; 3 Ave 1 4 Customer 9 Ave &amp; W 45 St 1 5 Subscriber 8 Ave &amp; W 33 St 3 6 Subscriber W 33 St &amp; 7 Ave 3 7 Subscriber W 59 St &amp; 10 Ave 1 8 Subscriber E 24 St &amp; Park Ave S 1 9 Subscriber W 17 St &amp; 8 Ave 1 </code></pre>
python|python-2.7|pandas
0
5,544
35,607,662
Combine sparse and masked array in numpy
<p>Is there a convenient way to use masked arrays over sparse matrices?</p> <p>It seems that the mask does not work when creating a masked array from a scipy sparse matrix...</p> <p>A typical application would be an adjacency matrix where values could be {0,1,?}: {0,1} representing links in a network, and the unknown/unseen value {?} to predict.</p>
<p>I'm not surprised that trying to give a sparse matrix to masked does not work. The few <code>numpy</code> functions that work with sparse ones are ones that delegate the task to the sparse code.</p> <p>It might be possible to construct a <code>coo</code> format matrix with the <code>data</code> attribute being a masked array, but I doubt that carries far. Code that isn't <code>masked</code>-aware will generally ignore the mask.</p> <p>A masked array is an <code>ndarray</code> subclass that maintains two attributes, the data and mask, both of which are arrays. Many masked methods work by filling the masked values with a suitable value (0 for sums, 1 for products), and performing regular array calculations.</p> <p>A sparse matrix is not an <code>ndarray</code> subclass. One format is actually a dictionary subclass. Most store their data in 3 arrays, the 2 coordinates and the data. Interactions with non-sparse arrays often involve <code>todense()</code> to turn the action into a regular <code>numpy</code> one.</p> <p>There's no interoperability by design. If something does work, it's probably because of some coincidental delegation of a method.</p> <p>For example</p> <pre><code>In [85]: A=sparse.coo_matrix(np.eye(3)) In [86]: M=np.ma.masked_array(np.eye(3)) In [87]: A+M Out[87]: masked_array(data = [[ 2. 0. 0.] [ 0. 2. 0.] [ 0. 0. 2.]], mask = False, fill_value = 1e+20) In [88]: M+A NotImplementedError: adding a nonzero scalar to a sparse matrix is not supported </code></pre> <p>I would have expected <code>M+A</code> to work too, since I read it as adding a sparse matrix to a masked array; but sometimes <code>x+y</code> is actually implemented as <code>y.__add__(x)</code>. <code>A+np.eye(3)</code> works in both orders.</p>
numpy|sparse-matrix
2
5,545
35,339,139
What values are valid in Pandas 'Freq' tags?
<p>I am new to Pandas, and am trying to use <code>date_range</code>. I came across all kinds of good things for <code>freq</code>, like <code>BME</code> and <code>BMS</code> and I would like to be able to quickly look up the proper strings to get what I want. Yesterday I found a nicely formatted table somewhere in the documentation, but the title of the table was so obtuse that I can not use search to find it again today.</p> <p>What values are valid in Pandas 'Freq' tags?</p>
<p>You can find it called <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases" rel="noreferrer">Offset Aliases</a>:</p> <blockquote> <p>A number of string aliases are given to useful common time series frequencies. We will refer to these aliases as offset aliases.</p> </blockquote> <pre><code>Alias Description B business day frequency C custom business day frequency D calendar day frequency W weekly frequency M month end frequency SM semi-month end frequency (15th and end of month) BM business month end frequency CBM custom business month end frequency MS month start frequency SMS semi-month start frequency (1st and 15th) BMS business month start frequency CBMS custom business month start frequency Q quarter end frequency BQ business quarter end frequency QS quarter start frequency BQS business quarter start frequency A, Y year end frequency BA, BY business year end frequency AS, YS year start frequency BAS, BYS business year start frequency BH business hour frequency H hourly frequency T, min minutely frequency S secondly frequency L, ms milliseconds U, us microseconds N nanoseconds </code></pre>
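<p>To see an alias in action, for example with business month start:</p>
<pre><code>import pandas as pd

pd.date_range('2020-01-01', periods=4, freq='BMS')
# DatetimeIndex(['2020-01-01', '2020-02-03', '2020-03-02', '2020-04-01'],
#               dtype='datetime64[ns]', freq='BMS')
</code></pre>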
python|pandas|documentation|dataframe|frequency
228
5,546
50,773,556
How to list all the pairs of numbers which fall under a group of range?
<p>Suppose I have dataframe df1, which includes two columns - A &amp; B. The value of A represents the lower bound of a range and the value of B represents the upper bound.</p> <pre><code> A B 10.5 20.5 30.5 40.5 50.5 60.5 </code></pre> <p>I have another dataframe, df2, which includes two columns - C &amp; D - containing a different set of number pairs.</p> <pre><code> C D 12.34 15.90 13.68 19.13 33.5 35.60 35.12 38.76 50.6 59.1 </code></pre> <p>Now I want to list all the pairs from df2 that fall inside the ranges (between the lower and upper bound) in df1. </p> <p>The final output should be like this - </p> <pre><code> Key Values (10.5, 20.5) [(12.34, 15.90), (13.68, 19.13)] (30.5, 40.5) [(33.5, 35.60), (35.12, 38.76)] (50.5, 60.5) [(50.6, 59.1)] </code></pre> <p>The solution should be efficient, as I have 5000 range groups and 85000 pairs.</p>
<p>It is not blazing fast (~30 secs on my computer) but could easily be accelerated with the <code>multiprocessing</code> package if you have multiple cores.</p> <p>Generating data:</p> <pre><code>def get_fake(n): df = pd.DataFrame(np.random.rand(n * 2).reshape(-1, 2)) df.loc[:, 1] += 1 return df df1 = get_fake(200) df2 = get_fake(90000) </code></pre> <p>Then for the processing part:</p> <pre><code>from collections import defaultdict result = defaultdict(list) for index, start, stop in df1.itertuples(): subdf = df2[(start &lt; df2.iloc[:, 0]) &amp; (df2.iloc[:, 1] &lt; stop)] result[(start, stop)] += subdf.values.tolist() </code></pre> <p>The result is a dict but could easily be converted to a Series if necessary.</p>
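<p>If memory allows a <code>len(df1) x len(df2)</code> boolean matrix (roughly 5000 x 85000 here, around 400 MB; chunking <code>df1</code> keeps the footprint bounded), a vectorized sketch with NumPy broadcasting is another option, written here against the question's A/B/C/D column names:</p>
<pre><code>import numpy as np

starts = df1['A'].to_numpy()[:, None]   # shape (n_ranges, 1)
stops = df1['B'].to_numpy()[:, None]
c = df2['C'].to_numpy()[None, :]        # shape (1, n_pairs)
d = df2['D'].to_numpy()[None, :]

hit = (starts &lt; c) &amp; (d &lt; stops)        # (n_ranges, n_pairs) booleans
result = {(s, e): df2[row].values.tolist()
          for (s, e), row in zip(df1.itertuples(index=False), hit)}
</code></pre>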
python|performance|pandas|data-science
2
5,547
50,787,213
Look up values when the column names of two dataframes are a match
<p>I would like to write a function that updates the values of df1 when the column names of df1 and df2 match each other. </p> <p>For example: df1: </p> <pre><code> Name | Graduated | Employed | Married AAA 1 2 3 BBB 0 1 2 CCC 1 0 1 </code></pre> <p>df2: </p> <pre><code> Answer_Code | Graduated | Employed | Married 0 No No No 1 Yes Intern Engaged 2 N/A PT Yes 3 N/A FT Divorced </code></pre> <p>Final Result: df3:</p> <pre><code> Name | Graduated | Employed | Married AAA Yes PT Divorced BBB No Intern Yes CCC Yes No NO </code></pre> <p>I would like to code something like this:</p> <pre><code> IF d1.columns = d2.columns THEN df1.column.update(df1.column.map(df2.set_index('Answer_Code').column)) </code></pre>
<p>You can use <code>map</code>.</p> <p>Example:</p> <pre><code>df1.Graduated.map(df2.Graduated) </code></pre> <p>yields</p> <pre><code>0 Yes 1 No 2 Yes </code></pre> <p>Thus just do that for every column, as follows</p> <pre><code>for col in df1.columns: if col in df2.columns: df1[col] = df1[col].map(df2[col]) </code></pre> <p>Remember to set the index to the answer code first, i.e. <code>df2 = df2.set_index("Answer_Code")</code>, if necessary.</p>
python|pandas|dataframe|match|insert-update
1
5,548
51,059,001
Python - apply lambda with an if condition
<p>I want to transform a <code>pandas</code> column that contains <code>Nan</code> from string to float. This is the code I tried but it keeps returning me an invalid syntax error</p> <pre><code>data.VAL_DEAL=data.VAL_DEAL.apply(lambda x: float(x.replace(",","")) if math.isnan(x)!=True) </code></pre>
<p>The missing piece is the mandatory <code>else</code>-part of a conditional expression. Note also that <code>math.isnan</code> raises a <code>TypeError</code> when called on a string, so testing the type (or using <code>pd.isna</code>) is more robust here:</p> <pre><code>lambda x: float(x.replace(",", "")) if isinstance(x, str) else x </code></pre> <p>This leaves the NaN's unchanged. See the docs on <a href="https://docs.python.org/3/reference/expressions.html#conditional-expressions" rel="nofollow noreferrer">Conditional Expressions</a>. </p>
python|pandas
2
5,549
50,762,963
Getting all other columns based on a value Pandas Dataframe
<p>Let's say I have the following df:</p> <pre><code>&gt; Name A B C D John Nan 1 2 Nan Mike 2 Nan Nan Nan Fred Nan 5 6 7 Ana 3 Nan 3 2 Fran 2 Nan 1 1 </code></pre> <p>What I want to do is filter on some columns, so that I get everyone who has only column A filled (in this case, Mike):</p> <pre><code>&gt; df_1 = df[(df['A'] &gt; 0)&amp;(~(df['A'] == 0))] </code></pre> <p>or only two given columns filled (in this case, none):</p> <pre><code>df_1 = df[(df['A','B'] &gt; 0)&amp;(~(df['A','B'] == 0))] </code></pre> <p>I am really struggling with this.</p> <p>Thanks</p>
<h3>isnull + all</h3> <p>Your syntax is incorrect. You can use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isnull.html" rel="nofollow noreferrer"><code>pd.DataFrame.isnull</code></a>:</p> <pre><code>mask1 = df['A'] &gt; 0 mask2 = df[['B', 'C', 'D']].isnull().all(1) df_1 = df[mask1 &amp; mask2] </code></pre> <p>Similarly, for your second query:</p> <pre><code>mask1 = (df[['A', 'B']] &gt; 0).all(1) mask2 = df[['C', 'D']].isnull().all(1) df_1 = df[mask1 &amp; mask2] </code></pre> <p>This assumes you wish to filter explicitly for values greater than 0 in <code>mask1</code>. If any non-null number suffices, you can use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.notnull.html" rel="nofollow noreferrer"><code>pd.DataFrame.notnull</code></a>.</p> <p>Don't be afraid to split your masks across multiple lines in this way. It will make your code clearer and easier to manage.</p> <h3>pipe + isnull + all</h3> <p>More generically, you can write a function to calculate and apply your Boolean series mask:</p> <pre><code>def masker(df, cols_required): """ Supply list cols_required. These must be &gt; 0; others null. """ mask1 = (df[cols_required] &gt; 0).all(1) mask2 = df[df.columns.difference(cols_required)].isnull().all(1) return df[mask1 &amp; mask2] df = df.pipe(masker, cols_required=['A', 'B']) </code></pre>
python|pandas|dataframe
2
5,550
50,772,489
Different results when calculating quantile in pandas (Python) and R
<p>Could you please tell me why the results differ when quantiles are calculated in pandas (Python) and R?</p> <p>Pandas code:</p> <pre><code> print('p_new: {:&gt;5} {:&gt;5} {:&gt;5}'.format( round(self.pandas_data_frame['pending_new'].quantile(0.50), 2), round(self.pandas_data_frame['pending_new'].quantile(0.95), 2), round(self.pandas_data_frame['pending_new'].quantile(0.99), 2), )) print('new: {:&gt;5} {:&gt;5} {:&gt;5}'.format( round(self.pandas_data_frame['new'].quantile(0.50), 2), round(self.pandas_data_frame['new'].quantile(0.95), 2), round(self.pandas_data_frame['new'].quantile(0.99), 2), )) </code></pre> <p>results:</p> <pre><code>name | .50| .95| .99| p_new: 2.0 12.0 20.0 new: 52.0 78.0 106.06 </code></pre> <p>R code:</p> <pre><code>dd = read.csv("stats.csv") quantile(dd$pending_new, c(.50, .95, .99)) quantile(dd$new, c(.50, .95, .99)) </code></pre> <p>results:</p> <pre><code>&gt; quantile(dd$pending_new, c(.50, .95, .99)) 50% 95% 99% 2.0 13.1 34.0 &gt; quantile(dd$new, c(.50, .95, .99)) 50% 95% 99% 52.00 81.00 129.26 </code></pre>
<p>When computing quantiles in Python, all functions of the <code>np.percentile()</code> family have an optional <code>interpolation</code> argument. Set this argument to 'midpoint' and your results will match the result in R. You can also read more about the python function here: <a href="https://stackoverflow.com/questions/45926230/how-to-calculate-1st-and-3rd-quartiles/57784105#57784105">How to calculate 1st and 3rd quartiles?</a> </p>
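<p>Applied to the question's code, that is simply (note that <code>interpolation</code> is the pandas keyword; in recent NumPy the equivalent argument is called <code>method</code>):</p>
<pre><code>self.pandas_data_frame['pending_new'].quantile(0.95, interpolation='midpoint')
</code></pre>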
python|r|python-3.x|pandas
0
5,551
50,679,638
Python - data.to_csv output format
<p>From a csv file having the following format:</p> <pre><code>Date,Data 01-01-01,111 02-02-02,222 03-03-03,333 </code></pre> <p>I am calculating the monthly average of the values using the following code:</p> <pre><code>data = pd.read_csv("input.csv") data['Month'] = pd.DatetimeIndex(data.reset_index()['Date']).month mean_data = data.groupby('Month').mean() </code></pre> <p>Then I output a csv file using the following command:</p> <pre><code>mean_data.to_csv("test.csv") </code></pre> <p>It works fine and gives me the following output:</p> <pre><code>Month,Data 01,01 02,02 03,03 04,04 ... </code></pre> <p>But now I would like to know how many data points have been included in the monthly average calculation. For that I replaced:</p> <pre><code>mean_data = data.groupby('Month').mean() </code></pre> <p>with:</p> <pre><code>mean_data = data.groupby(['Month']).agg(['mean', 'count']) </code></pre> <p>But now the problem appears. When I output the csv, I get a weird format as follows:</p> <pre><code> Data,Data, mean,count, Month, 01, 01,8, 02, 02,9, 03, 03,7, 04, 04,5, </code></pre> <p>This is not really convenient. Instead I would like to have the following output:</p> <pre><code>Month,Mean,Count 01,01,8 02,02,9 03,03,7 04,04,5 </code></pre> <p>Does anyone know how to achieve that?</p>
<p>You need to specify the column after <code>groupby</code>:</p> <pre><code>#convert first column to datetime data = pd.read_csv("input.csv", parse_dates=[0]) </code></pre> <hr> <pre><code>data['Month'] = data['Date'].dt.month mean_data = data.groupby('Month')['Data'].agg(['mean', 'count']) </code></pre> <p>which can be simplified to:</p> <pre><code>mean_data = data.groupby(data['Date'].dt.month)['Data'].agg(['mean', 'count']) </code></pre>
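<p>To get exactly the <code>Month,Mean,Count</code> header asked for, one sketch on top of that:</p>
<pre><code>out = (data.groupby(data['Date'].dt.month)['Data']
           .agg(['mean', 'count'])
           .rename_axis('Month')                            # name the index column
           .rename(columns={'mean': 'Mean', 'count': 'Count'}))
out.to_csv('test.csv')
</code></pre>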
python|pandas|csv|aggregate|columnname
1
5,552
20,508,968
Series.fillna() in a MultiIndex DataFrame Does not Fill; Is This a Bug?
<p>For me, the following snippet leaves the NaN value as NaN:</p> <pre><code>import pandas a = [12, 23] b = [123, None] c = [1234, 2345] d = [12345, 23456] tuples = [('eyes', 'left'), ('eyes', 'right'), ('ears', 'left'), ('ears', 'right')] events = {('eyes', 'left'): a, ('eyes', 'right'): b, ('ears', 'left'): c, ('ears', 'right'): d} multiind = pandas.MultiIndex.from_tuples(tuples, names=['part', 'side']) zed = pandas.DataFrame(events, index=['a', 'b'], columns=multiind) zed['eyes']['right'].fillna(value=555, inplace=True) </code></pre> <p>I get:</p> <pre><code>part eyes ears side left right left right a 12 123 1234 12345 b 23 NaN 2345 23456 </code></pre> <p>If I run this with <code>inplace</code> set to False, the returned Series has replaced <code>NaN</code> with 555. I <em>could</em> use this work-around, but on the one hand, if it's a bug I want to report it, and on the other hand, even the work-around doesn't work for my actual application.</p> <p>So the question is whether I misunderstand <code>fillna()</code> or this is a bug. Thanks!</p> <p>Edit: I'm using pandas 0.12.0, numpy 1.8.0, and python 2.7.5 on openSUSE 13.1.</p>
<p>I would use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.update.html" rel="nofollow noreferrer"><code>update</code></a> here since it's more explicit... and avoids the whole <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#returning-a-view-versus-a-copy" rel="nofollow noreferrer">updating a copy thing</a>.</p> <p>First select the subframe where the column is (eyes, right):</p> <pre><code>In [11]: zed.loc[:, [('eyes', 'right')]] Out[11]: part eyes side right a 123 b NaN [2 rows x 1 columns] </code></pre> <p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html" rel="nofollow noreferrer">Fill in the NaN</a> with 555, and update:</p> <pre><code>In [12]: zed.loc[:, [('eyes', 'right')]].fillna(555) Out[12]: part eyes side right a 123 b 555 [2 rows x 1 columns] In [13]: zed.update(zed.loc[:, [('eyes', 'right')]].fillna(555)) In [14]: zed Out[14]: part eyes ears side left right left right a 12 123 1234 12345 b 23 555 2345 23456 [2 rows x 4 columns] </code></pre> <p>Similar to <a href="https://stackoverflow.com/questions/19867734/changing-certain-values-in-multiple-columns-of-a-pandas-dataframe-at-once/19867768#19867768">chaining in an assignment</a>:</p> <pre><code>zed['eyes']['right'].fillna(value=555, inplace=True) zed.loc[:,[('eyes', 'right')]].fillna(value=555, inplace=True) </code></pre> <p><em>may</em> sometimes work but don't count on it (<em>@Jeff suggests it may work if all columns are floats!</em>), it's likely you'll end up <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#returning-a-view-versus-a-copy" rel="nofollow noreferrer">modifying a copy</a> and not the original frame.</p>
python|pandas
2
5,553
33,467,477
How to find all variables with identical id?
<p>Let's say I have a <code>numpy</code> array <code>a</code> and create <code>b</code> like this:</p> <pre><code>a = np.arange(3) b = a </code></pre> <p>If I now change <code>b</code> e.g. like this</p> <pre><code>b[0] = 100 </code></pre> <p>and print <code>a</code>, <code>b</code>, their <code>id</code>s and <code>.flags</code></p> <pre><code>print a print a.flags print b print b.flags print id(a) print id(b) </code></pre> <p>I obtain</p> <pre><code>[100 1 2] C_CONTIGUOUS : True F_CONTIGUOUS : True OWNDATA : True WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False [100 1 2] C_CONTIGUOUS : True F_CONTIGUOUS : True OWNDATA : True WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False 139767698376944 139767698376944 </code></pre> <p>So, <code>a</code> and <code>b</code> look the same and their <code>id</code>s are identical as expected.</p> <p>When I now do the same using <code>copy()</code></p> <pre><code>c = np.arange(3) d = c.copy() d[0] = 20 print c print c.flags print id(c) print d print d.flags print id(d) </code></pre> <p>I get</p> <pre><code>[0 1 2] C_CONTIGUOUS : True F_CONTIGUOUS : True OWNDATA : True WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False 139767698377344 [20 1 2] C_CONTIGUOUS : True F_CONTIGUOUS : True OWNDATA : True WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False 139767698376864 </code></pre> <p>In this case <code>c</code> and <code>d</code> differ and so do their <code>id</code>s; also as expected.</p> <p>However, what confuses me is the output I obtain from <code>.flags</code>: In all cases, <code>OWNDATA</code> is set to <code>True</code>. When I read the <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.ndarray.flags.html" rel="nofollow">documentation</a>, I find:</p> <blockquote> <p>OWNDATA (O) The array owns the memory it uses or borrows it from another object.</p> </blockquote> <p>My main question is now:</p> <p>What would be the easiest way to find all variables that point to the same <code>id</code> (in the example above <code>a</code> and <code>b</code>) i.e. to check whether another variable with the same id exists? I thought <code>OWNDATA</code> would be of help for that but apparently it is not.</p> <p>Related question:</p> <p>What is <code>OWNDATA</code> actually used for, in which case is <code>OWNDATA</code> set to <code>False</code>?</p>
<p>There are 2 issues - how do you identify the variables that you want to compare, and how do you compare them.</p>

<p>Take the second first.</p>

<p>My version (1.8.2) does not have a <code>np.shares_memory</code> function. It does have a <code>np.may_share_memory</code>. </p>

<p><a href="https://github.com/numpy/numpy/pull/6166" rel="noreferrer">https://github.com/numpy/numpy/pull/6166</a> is the pull request that adds <code>shares_memory</code>; it's dated last August, so you'd have to have a brand new <code>numpy</code> to use it. Note that a definitive test is potentially hard, and it may issue a 'TOO HARD' error message. I imagine, for example, that there are some slices that share memory but are hard to identify by simply comparing buffer starting points.</p>

<p><a href="https://github.com/numpy/numpy/blob/97c35365beda55c6dead8c50df785eb857f843f0/numpy/core/tests/test_mem_overlap.py" rel="noreferrer">https://github.com/numpy/numpy/blob/97c35365beda55c6dead8c50df785eb857f843f0/numpy/core/tests/test_mem_overlap.py</a> is the unit test for these <code>memory_overlap</code> functions. Read it if you want to see what a daunting task it is to think of all the possible overlap conditions between 2 known arrays.</p>

<p>I like to look at the array's <code>.__array_interface__</code>. One item in that dictionary is 'data', which is a pointer to the data buffer. Identical pointer means the data is shared. But a view might start somewhere down the line. I wouldn't be surprised if <code>shares_memory</code> looks at this pointer.</p>

<p>Identical <code>id</code> means 2 variables reference the same object, but different array objects can share a data buffer.</p>

<p>All these tests require looking at specific references; so you still need to get some sort of list of references. Look at <code>locals()</code>? <code>globals()</code>? What about unnamed references, such as a list of arrays, or some user-defined dictionary?</p>

<p>An example IPython run:</p>

<p>Some variables and references:</p>

<pre><code>In [1]: a=np.arange(10)

In [2]: b=a          # reference

In [3]: c=a[:]       # view

In [4]: d=a.copy()   # copy

In [5]: e=a[2:]      # another view

In [6]: ll=[a, a[:], a[3:], a[[1,2,3]]]   # list
</code></pre>

<p>Compare <code>id</code>:</p>

<pre><code>In [7]: id(a)
Out[7]: 142453472

In [9]: id(b)
Out[9]: 142453472
</code></pre>

<p>None of the others share the <code>id</code>, except <code>ll[0]</code>.</p>

<pre><code>In [10]: np.may_share_memory(a,b)
Out[10]: True

In [11]: np.may_share_memory(a,c)
Out[11]: True

In [12]: np.may_share_memory(a,d)
Out[12]: False

In [13]: np.may_share_memory(a,e)
Out[13]: True

In [14]: np.may_share_memory(a,ll[3])
Out[14]: False
</code></pre>

<p>That's about what I'd expect; views share memory, copies do not.</p>

<pre><code>In [15]: a.__array_interface__
Out[15]:
{'version': 3,
 'data': (143173312, False),
 'typestr': '&lt;i4',
 'descr': [('', '&lt;i4')],
 'shape': (10,),
 'strides': None}

In [16]: a.__array_interface__['data']
Out[16]: (143173312, False)

In [17]: b.__array_interface__['data']
Out[17]: (143173312, False)

In [18]: c.__array_interface__['data']
Out[18]: (143173312, False)

In [19]: d.__array_interface__['data']
Out[19]: (151258096, False)   # copy - diff buffer

In [20]: e.__array_interface__['data']
Out[20]: (143173320, False)   # differs by 8 bytes

In [21]: ll[1].__array_interface__['data']
Out[21]: (143173312, False)   # same point
</code></pre>

<p>Just with this short session I have 76 items in <code>locals()</code>. But I can search it for matching <code>id</code> with:</p>

<pre><code>In [26]: [(k,v) for k,v in locals().items() if id(v)==id(a)]
Out[26]:
[('a', array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])),
 ('b', array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))]
</code></pre>

<p>Same for the other tests.</p>

<p>I can search <code>ll</code> in the same way:</p>

<pre><code>In [28]: [n for n,l in enumerate(ll) if id(l)==id(a)]
Out[28]: [0]
</code></pre>

<p>And I could add a layer to the <code>locals()</code> search by testing if an item is a list or dictionary, and doing a search within that.</p>

<p>So even if we settle on the testing method, it isn't trivial to search for all possible references. </p>

<p>I think the best approach is to just understand your own use of variables, so that you can clearly identify references, views and copies. In selected cases you can perform tests like <code>may_share_memory</code> or compare data buffers. But there isn't an inexpensive, definitive test. When in doubt it is cheaper to make a copy than to risk overwriting something. In my years of <code>numpy</code> use I've never felt the need for a definitive answer to this question.</p>

<hr>

<p>I don't find the <code>OWNDATA</code> flag very useful. Consider the above variables:</p>

<pre><code>In [35]: a.flags['OWNDATA']
Out[35]: True

In [36]: b.flags['OWNDATA']   # ref
Out[36]: True

In [37]: c.flags['OWNDATA']   # view
Out[37]: False

In [38]: d.flags['OWNDATA']   # copy
Out[38]: True

In [39]: e.flags['OWNDATA']   # view
Out[39]: False
</code></pre>

<p>While I can predict the <code>OWNDATA</code> value in these simple cases, its value doesn't say much about shared memory or a shared id. <code>False</code> suggests it was created from another array, and thus may share memory. But that's just a 'may'.</p>

<p>I often create a sample array by reshaping a range.</p>

<pre><code>In [40]: np.arange(3).flags['OWNDATA']
Out[40]: True

In [41]: np.arange(4).reshape(2,2).flags['OWNDATA']
Out[41]: False
</code></pre>

<p>There's clearly no other reference to the data, but the reshaped array does not 'own' its own data. Same would happen with</p>

<pre><code>temp = np.arange(4); temp = temp.reshape(2,2)
</code></pre>

<p>I'd have to do</p>

<pre><code>temp = np.arange(4); temp.shape = (2,2)
</code></pre>

<p>to keep <code>OWNDATA</code> true. A false <code>OWNDATA</code> means something right after creating the new array object, but it doesn't change if the original reference is redefined or deleted. It easily becomes out of date.</p>
python|arrays|numpy|copy
5
5,554
66,379,473
How to augment text datasets in Tensorflow?
<p>I'm trying to augment the imdb movie reviews dataset by adding a random swap of some words. Unlike with image data, I don't think this function is originally in tensorflow. For example with images, you could do something like</p> <pre><code>def transform(image, label): image = tf.image.flip_left_right(image) return image, label </code></pre> <p>Where you use tensorflow's native functions for flipping images. But for augmenting text, I don't see anything that can do that in tf.string. So I am using the Easy Data Augmentation implementation from textaugment. <a href="https://github.com/dsfsi/textaugment" rel="nofollow noreferrer">https://github.com/dsfsi/textaugment</a></p> <p>EG:</p> <pre><code>try: import textaugment except ModuleNotFoundError: !pip install textaugment import textaugment from textaugment import EDA import nltk nltk.download('stopwords') t = EDA() t.random_swap(&quot;John is going to town&quot;) </code></pre> <p>Returns &quot;John going to town is&quot;</p> <p>But now when I try to use this random_swap command to augment the entire imdb reviews dataset, it runs into an error because it's trying to act on tensors.</p> <p>Example:</p> <pre><code>try: import textaugment except ModuleNotFoundError: !pip install textaugment import textaugment import pandas as pd import tensorflow as tf from tensorflow.keras.preprocessing import sequence from tensorflow.keras.models import Sequential from tensorflow.keras.datasets import imdb # set parameters: max_features = 5000 maxlen = 400 batch_size = 32 embedding_dims = 50 filters = 250 kernel_size = 3 hidden_dims = 250 epochs = 1 runs = 1 (x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features) print(len(x_train), 'train sequences') print(len(x_test), 'test sequences') print('Pad sequences (samples x time)') x_train = sequence.pad_sequences(x_train, maxlen=maxlen) x_test = sequence.pad_sequences(x_test, maxlen=maxlen) print('x_train shape:', x_train.shape) print('x_test shape:', x_test.shape) from textaugment import EDA import nltk nltk.download('stopwords') t = EDA() for text in x_train: text = t.random_swap(text) --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) &lt;ipython-input-7-7fc9edb2f37b&gt; in &lt;module&gt;() 1 for text in x_train: ----&gt; 2 text = t.random_swap(text) 1 frames /usr/local/lib/python3.7/dist-packages/textaugment/eda.py in validate(**kwargs) 72 raise TypeError(&quot;p must be a fraction between 0 and 1&quot;) 73 if 'sentence' in kwargs: ---&gt; 74 if not isinstance(kwargs['sentence'].strip(), str) or len(kwargs['sentence'].strip()) == 0: 75 raise TypeError(&quot;sentence must be a valid sentence&quot;) 76 if 'n' in kwargs: AttributeError: 'numpy.ndarray' object has no attribute 'strip' </code></pre> <p>So how do you augment data in TensorFlow, when the native commands don't exist and you want to make a custom function to do the augmentation?</p>
<p>By loading the dataset with <code>imdb.load_data()</code> you don't get the film reviews as text. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.</p>

<p>For this reason, you cannot apply <code>t.random_swap(text)</code> to it. You have to decode these reviews back to English words first.</p>

<p>Therefore you'll need the corresponding <code>word_index</code>. It is a dictionary mapping words to an integer index.</p>

<p>In the next step you should reverse it to get a dictionary mapping integer indices to words. Note that the indices are offset by 3 because 0, 1, and 2 are reserved indices for padding, start of sequence, and unknown. You can <a href="https://stackoverflow.com/questions/42821330/restore-original-text-from-keras-s-imdb-dataset/44891281">find more details here</a>.</p>

<pre><code>word_index = imdb.get_word_index()
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
</code></pre>

<p>You should decode the reviews <strong>before</strong> applying <code>sequence.pad_sequences()</code> to them. Otherwise there will be a lot of unknown words represented by zeros in the reviews.</p>

<p>For <code>print(x_train[0])</code> you'll get:</p>

<pre><code>[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 2, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 2, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 2, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 2, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 2, 19, 178, 32]
</code></pre>

<p>Let's decode this review:</p>

<pre><code>decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in x_train[0]])
</code></pre>

<p>And you'll get:</p>

<pre><code>print(decoded_review)

&gt;&gt;&gt; &quot;? this film was just brilliant casting location scenery story direction everyone's really suited the part they played and you could just imagine being there robert ? is an amazing actor and now the same being director ? father came from the same scottish island as myself so i loved the fact there was a real connection with this film the witty remarks throughout the film were great it was just brilliant so much that i bought the film as soon as it was released for ? and would recommend it to everyone to watch and the fly ? was amazing really cried at the end it was so sad and you know what they say if you cry at a film it must have been good and this definitely was also ? to the two little ? that played the ? of norman and paul they were just brilliant children are often left out of the ? list i think because the stars that play them all grown up are such a big ? for the whole film but these children are amazing and should be ?
for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was ? with us all&quot; </code></pre> <p>After the reviews are decoded back to text you are able to augment them with <code>t.random_swap(decoded_review)</code>. The augmented data can be encoded back to a sequence of integers with the <code>word_index</code> dictionary.</p>
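<p>Putting the pieces together, a minimal sketch of the full round trip (decode, augment, re-encode); the <code>+ 3</code> / <code>- 3</code> shifts and the fallback index 2 for unknown words follow the reserved-index convention described above, and <code>t</code> is the EDA instance from the question:</p>

<pre><code>def augment_review(encoded_review):
    # decode the integers back to words (indices are offset by the 3 reserved slots)
    text = ' '.join(reverse_word_index.get(i - 3, '?') for i in encoded_review)
    # apply the textaugment random swap on the plain-text review
    augmented = t.random_swap(text)
    # re-encode; words missing from the vocabulary map to the 'unknown' index 2
    return [word_index[w] + 3 if w in word_index else 2 for w in augmented.split()]
</code></pre>

<p>As noted above, apply <code>sequence.pad_sequences()</code> only after this step.</p>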
python|tensorflow|text|nlp|data-augmentation
1
5,555
66,650,233
Pixel operations in batches
<p>I have a batch of depth images, shape -&gt; [B, 1, H, W]. For each pixel in each image of the batch I need to perform:</p> <p><code>X = d * Kinverse @ [u, v, 1] #therefore X is in R^3</code> where d is a float tensor in [0, 1] representing the depth at pixel (u, v); Kinverse is a constant 3x3 matrix and u, v refer to the pixel column and row respectively.</p> <p>Is there some way I can vectorize the operation to obtain X(u+1,v), X(u,v) and X(u,v+1) for all the images in the batch? I eventually need to take this cross product: {X(u+1,v) - X(u,v)} x {X(u, v+1) - X(u,v)}</p> <p>Thanks for the help!</p>
<p>You can use <a href="https://pytorch.org/docs/stable/generated/torch.meshgrid.html#torch.meshgrid" rel="nofollow noreferrer"><code>torch.meshgrid</code></a> to produce the <code>u</code> and <code>v</code> tensors. Once you have them, you can use <a href="https://stackoverflow.com/a/55894780/1714410"><code>torch.einsum</code></a> to do the batched matrix multiplication with <code>Kinverse</code>. Finally, you can use <a href="https://pytorch.org/docs/stable/generated/torch.cross.html#torch.cross" rel="nofollow noreferrer"><code>torch.cross</code></a> to compute the cross product:</p> <pre class="lang-py prettyprint-override"><code>u, v = torch.meshgrid(*[torch.arange(s_, dtype=d.dtype, device=d.device) for s_ in d.shape[2:]]) # make a single 1x1xHxW for [u v 1] per pixel: uv = torch.cat((u[None, None, ...], v[None, None, ...], torch.ones_like(u)[None, None, ...]), dim=1) # compute X X = d * torch.einsum('ij,bjhw-&gt;bihw',Kinverse,uv) # the cross product out = torch.cross(X[..., 1:, :-1] - X[..., :-1, :-1], X[..., :-1, 1:] - X[..., :-1, :-1], dim=1) </code></pre>
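<p>As a quick sanity check on the shapes (a hedged sketch with made-up sizes; it assumes the snippet above has been run): <code>X</code> keeps the input spatial shape, while the cross product of the forward differences loses one row and one column:</p>

<pre class="lang-py prettyprint-override"><code>import torch

B, H, W = 2, 5, 7
d = torch.rand(B, 1, H, W)   # batch of depth maps in [0, 1]
Kinverse = torch.eye(3)      # stand-in for the real inverse intrinsics

# ... run the snippet above ...

print(X.shape)    # torch.Size([2, 3, 5, 7])
print(out.shape)  # torch.Size([2, 3, 4, 6])
</code></pre>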
python|computer-vision|pytorch|vectorization
1
5,556
66,719,029
Why Keras Embedding layer's input_dim = vocab_size + 1
<p>In this code snippet from the TensorFlow tutorial <a href="https://www.tensorflow.org/tutorials/keras/text_classification#create_the_model" rel="nofollow noreferrer">Basic text classification</a>,</p>

<pre><code>model = tf.keras.Sequential([
  layers.Embedding(max_features + 1, embedding_dim),
  layers.Dropout(0.2),
  layers.GlobalAveragePooling1D(),
  layers.Dropout(0.2),
  layers.Dense(1)])
</code></pre>

<p>As far as I understood, <code>max_features</code> is the size of the vocabulary (with index 0 for padding and index 1 for OOV).</p>

<p>Also, I've done an experiment setting <code>layers.Embedding(max_features, embedding_dim)</code>, and the tutorial still runs through successfully (screenshots below).</p>

<p>So why do we need <code>input_dim=max_features + 1</code> here?</p>

<p><a href="https://i.stack.imgur.com/wiWOo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wiWOo.png" alt="model" /></a>
<a href="https://i.stack.imgur.com/2TM5l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2TM5l.png" alt="training" /></a></p>
<p>Vocabulary Size = Maximum Integer Index + 1</p>

<p>Example:</p>

<pre><code>a[0] = 'item 1'
a[1] = 'item 2'
a[2] = 'item 3'

Maximum Integer Index = 2
Vocabulary Size = 3
</code></pre>
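<p>A minimal sketch of why the extra row matters (hedged; the exact failure mode for an out-of-range index can differ between CPU and GPU): an <code>Embedding</code> with <code>input_dim=n</code> only has valid rows <code>0 .. n-1</code>, so if the data can contain the index <code>max_features</code> itself, the table needs <code>max_features + 1</code> rows:</p>

<pre><code>import tensorflow as tf

emb = tf.keras.layers.Embedding(input_dim=3, output_dim=2)  # rows 0, 1, 2
emb(tf.constant([[0, 1, 2]]))   # fine
# emb(tf.constant([[3]]))       # index 3 is out of range for a 3-row table
</code></pre>

<p>The experiment without the <code>+ 1</code> can still appear to work if the vectorized data happens never to contain the largest index.</p>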
tensorflow|keras|embedding
1
5,557
66,498,659
Function to generate incremental weights based on np.select conditions
<p>Objective: Define function to use flags (1,2,3) as conditions that trigger different weights (.2,.4,0). Output is a new df with the weights only.</p> <p>The np.select is generating this error:</p> <p>TypeError: invalid entry 0 in condlist: should be boolean ndarray</p> <p>Image shows desired output as &quot;incremental weight output&quot;</p> <pre><code>import pandas as pd import numpy as np flags = pd.DataFrame({'Date': ['2020-01-01','2020-02-01','2020-03-01'], 'flag_1': [1, 2, 3], 'flag_2': [1, 1, 1], 'flag_3': [2, 1, 2], 'flag_4': [3, 1, 3], 'flag_5' : [1, 2, 2], 'flag_6': [2, 1, 2], 'flag_7': [1, 1, 1], 'flag_8': [1, 1, 1], 'flag_9': [3, 3, 2]}) flags = flags.set_index('Date') def inc_weights(dfin, wt1, wt2, wt3): dfin = pd.DataFrame(dfin.iloc[:,::-1]) dfout = pd.DataFrame() conditions = [1,2,3] choices = [wt1,wt2,wt3] dfout=np.select(conditions, choices, default=np.nan) return(dfout.iloc[:,::-1]) inc_weights = inc_weights(flags, .2, .4, 0) print(inc_weights) </code></pre> <p><a href="https://i.stack.imgur.com/MygjM.png" rel="nofollow noreferrer">Input and Output</a></p>
<p><code>np.select</code> is unnecessary here; a simple solution uses <code>df.replace</code> with a mapping dict.</p>

<pre><code>import pandas as pd
import numpy as np

flags = pd.DataFrame({'Date': ['2020-01-01','2020-02-01','2020-03-01'],
                      'flag_1': [1, 2, 3],
                      'flag_2': [1, 1, 1],
                      'flag_3': [2, 1, 2],
                      'flag_4': [3, 1, 3],
                      'flag_5' : [1, 2, 2],
                      'flag_6': [2, 1, 2],
                      'flag_7': [1, 1, 1],
                      'flag_8': [1, 1, 1],
                      'flag_9': [3, 3, 2]})
flags = flags.set_index('Date')
print(flags)

def inc_weights(dfin, wt1, wt2, wt3):
    dfin = pd.DataFrame(dfin.iloc[:,::-1])
    mapping = {1:wt1, 2:wt2, 3:wt3}
    dfout = dfin.replace(mapping)
    return(dfout.iloc[:,::-1])

inc_weights = inc_weights(flags, .2, .4, 0)
print(inc_weights)
</code></pre>
python-3.x|pandas|loops|cumulative-sum
1
5,558
16,212,232
How to merge and split numpy array along the axis?
<p>I have data in the following form; the shape of the array is</p>

<pre><code> (10,4,4,3)
</code></pre>

<p>First I want to create an array with shape (merging, or flattening)</p>

<pre><code> (10,48)
</code></pre>

<p>such that the data (4,4,3) is converted to one row.</p>

<p>Secondly I want to go back to the original shape of the data (splitting) such that each element is again placed at the same location. </p>

<p>Thanks </p>
<pre><code>b = a.reshape(10,48)      # flatten each (4, 4, 3) block into one row of 48 values
a = b.reshape(10,4,4,3)   # split back; the element order is preserved
</code></pre>
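<p>If you'd rather not hard-code the 48, let numpy infer it with <code>-1</code>:</p>

<pre><code>b = a.reshape(a.shape[0], -1)   # numpy infers 48 from the remaining dimensions
a = b.reshape(10, 4, 4, 3)
</code></pre>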
python|arrays|numpy|merge|split
2
5,559
16,548,560
check if numpy array is subset of another array
<p>Similar questions have already been asked on SO, but they have more specific constraints and their answers don't apply to my question.</p> <p>Generally speaking, what is the most pythonic way to determine if an arbitrary numpy array is a subset of another array? More specifically, I have a roughly 20000x3 array and I need to know the indices of the 1x3 elements that are entirely contained within a set. More generally, is there a more pythonic way of writing the following:</p> <pre><code>master = [12, 155, 179, 234, 670, 981, 1054, 1209, 1526, 1667, 1853] # some indices of interest triangles = np.random.randint(2000, size=(20000, 3)) # some data for i, x in enumerate(triangles): if x[0] in master and x[1] in master and x[2] in master: print i </code></pre> <p>For my use case, I can safely assume that len(master) &lt;&lt; 20000. (Consequently, it is also safe to assume that master is sorted because this is cheap).</p>
<p>One can also use <code>np.isin</code>, which might be more efficient than the list comprehension in <a href="https://stackoverflow.com/a/16548813/1534017">@petrichor's answer</a>. Using the same set up:</p>

<pre><code>import numpy as np

x = np.arange(30).reshape(10, 3)
searchKey = [4, 5, 8]
x[[0, 3, 7], :] = searchKey

array([[ 4,  5,  8],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 4,  5,  8],
       [12, 13, 14],
       [15, 16, 17],
       [18, 19, 20],
       [ 4,  5,  8],
       [24, 25, 26],
       [27, 28, 29]])
</code></pre>

<p>Now one can use <code>np.isin</code>; by default, it will work element-wise:</p>

<pre><code>np.isin(x, searchKey)

array([[ True,  True,  True],
       [False,  True,  True],
       [False, False,  True],
       [ True,  True,  True],
       [False, False, False],
       [False, False, False],
       [False, False, False],
       [ True,  True,  True],
       [False, False, False],
       [False, False, False]])
</code></pre>

<p>We now have to filter the rows where all entries evaluate to <code>True</code>, for which we could use <code>all</code>:</p>

<pre><code>np.isin(x, searchKey).all(1)

array([ True, False, False,  True, False, False, False,  True, False, False])
</code></pre>

<p>If one now wants the corresponding indices, one can use <code>np.where</code>:</p>

<pre><code>np.where(np.isin(x, searchKey).all(1))

(array([0, 3, 7]),)
</code></pre>

<p>EDIT:</p>

<p>Just realized that one has to be careful though. For example, if I do</p>

<pre><code>x[4, :] = [8, 4, 5]
</code></pre>

<p>so that the assignment uses the same values as in <code>searchKey</code> but in a different order, the row will still be returned when doing</p>

<pre><code>np.where(np.isin(x, searchKey).all(1))
</code></pre>

<p>which prints</p>

<pre><code>(array([0, 3, 4, 7]),)
</code></pre>

<p>That can be undesired.</p>
python|numpy|set
4
5,560
57,556,182
Generating heatmap from frames
<p>I have an issue as follows: I have coordinates x, y, z and r, and each point is a frame. Based on these frames I want to generate a heat-map with Python. What I did so far is import the following frames:</p>

<pre><code>-1.52588e-05 -1.52588e-05 8.17212e-06 300
-220.414 -220.305 217.847 79.5859
-220.899 220.54 -219.881 79.1004
219.275 218.495 -221.124 78.8756
-216.911 220.674 218.582 78.848
218.126 -219.362 221.977 78.0233
-222.961 -224.281 -204.107 75.7191
225.267 222.614 221.81 74.7329
</code></pre>

<p>and parse them. Beyond that I honestly don't know how to proceed after importing the frames; I'm really lost here. Could someone give tips or outline the steps? Thanks. The code below does not work either:</p>

<pre><code>import csv
import seaborn as sns

result = [[]]
with open("data.csv") as csvfile:
    reader = csv.reader(csvfile, quoting=csv.QUOTE_NONNUMERIC)
    for row in reader:
        result.append(row)

print(result)
</code></pre>
<p>Try the following code. Passing <code>sep='\s+'</code> lets <code>read_csv</code> handle any run of whitespace between the values, so no manual cleanup of the file is needed:</p>

<pre><code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv('data.txt', sep='\s+', header=None)

sns.heatmap(df, annot=True)
plt.show()
</code></pre>

<p>Output:</p>

<p><a href="https://i.stack.imgur.com/x6nIZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x6nIZ.png" alt="enter image description here"></a></p>
python|pandas|heatmap
1
5,561
57,323,598
Is Tensorflow Federated-Learning only for simulating federated learning on one machine?
<p>I read multiple guides on <a href="https://www.tensorflow.org/federated/federated_learning" rel="nofollow noreferrer">https://www.tensorflow.org/federated/federated_learning</a> e.g. the image classification or text generation example.</p> <p>From what I have read I can not see how to use tensorflow federated-learning (tff) for a real world application: datasets on multiple hardware clients. It all looks like its meant only for simulating federated learning.</p> <p>I want to use tff on multiple machines and not simulate it on only one. I would appreciate it when someone knows if it's even possible with tff or found a guide on how to do it.</p> <p>thank you.</p>
<p>As of today TFF only provides a simulation environment for use in Federated Learning (FL) research.</p>

<p>There is work being done on supporting a multi-machine simulation environment, but this is still ongoing work (see <a href="https://github.com/tensorflow/federated/blob/master/tensorflow_federated/python/core/impl/remote_executor.py" rel="nofollow noreferrer">https://github.com/tensorflow/federated/blob/master/tensorflow_federated/python/core/impl/remote_executor.py</a>)</p>

<p>There is not yet a "real world" FL deployment platform.</p>
tensorflow|tensorflow-federated
4
5,562
57,377,894
Pandas, count if time difference is within x seconds
<p>I want to group values, if they are within the same x amount of seconds. e.g. I got this by doing this:</p> <pre><code>m_failed = df[(df[&quot;Signal&quot;] == &quot;Alarm&quot;) &amp; (df[&quot;State&quot;] == &quot;Active&quot;)] dd_failed = m_failed.groupby(['Country', 'Lane', 'Unit', 'Datetime']).size().to_frame('count').reset_index() </code></pre> <p>UPDATE: Sorry, but my question was very vague, and I even forgot to include important data, so I have updated the question and added part of a log. I have changed city to lane, as that it is more true to the real data. (Sorry for the obscurity)</p> <pre><code>Sign Descr State Country Lane Unit Datetime Alarm Active USA Lane1 00003 2019-08-03 13:32:43 Alarm Active USA Lane1 00005 2019-08-03 13:32:43 Alarm Active USA Lane1 00006 2019-08-03 13:32:43 Alarm Active USA Lane1 00004 2019-08-03 13:32:43 Alarm Active USA Lane1 00002 2019-08-03 13:32:43 Alarm Active USA Lane1 00007 2019-08-03 13:32:43 Alarm Active Spain Lane1 00003 2019-08-03 07:47:54 Alarm Active Spain Lane1 00002 2019-08-03 07:47:54 Alarm Active Spain Lane1 00005 2019-08-03 07:47:54 Alarm Active Spain Lane1 00007 2019-08-03 07:47:54 Alarm Active Spain Lane1 00004 2019-08-03 07:47:53 Alarm Active Spain Lane1 00006 2019-08-03 07:47:53 Alarm Active Spain Lane1 00004 2019-08-03 07:26:16 Alarm Active Spain Lane1 00003 2019-08-03 07:26:16 Alarm Active Italy Lane2 00002 2019-08-03 12:09:34 Alarm Active Italy Lane2 00004 2019-08-03 09:50:32 Alarm Active Italy Lane2 00006 2019-08-03 09:50:32 Alarm Active Italy Lane2 00002 2019-08-03 09:50:32 Alarm Active Italy Lane1 00007 2019-08-03 07:58:43 Alarm Active Italy Lane2 00002 2019-08-03 07:58:01 Alarm Active Germany Lane1 00007 2019-08-03 12:36:48 Alarm Active Germany Lane1 00007 2019-08-03 12:31:19 Alarm Active Sweden Lane1 00007 2019-08-03 12:27:33 Alarm Active Norway Lane1 00007 2019-08-03 12:35:21 Alarm Active Norway Lane1 00005 2019-08-03 12:35:21 Alarm Active Norway Lane1 00002 2019-08-03 12:35:21 Alarm Active Norway Lane1 00007 2019-08-03 12:28:50 Alarm Active Norway Lane2 00007 2019-08-03 12:27:31 Alarm Active Norway Lane2 00003 2019-08-03 12:27:31 Alarm Active Norway Lane2 00006 2019-08-03 12:27:31 Alarm Active Norway Lane2 00005 2019-08-03 09:24:53 Alarm Active Denmark Lane2 00003 2019-08-03 09:46:23 Alarm Active UK Lane2 00003 2019-08-03 09:56:08 Alarm Active UK Lane2 00004 2019-08-03 09:56:08 Alarm Active Brazil Lane2 00002 2019-08-03 09:47:19 Alarm Active Brazil Lane2 00003 2019-08-03 09:47:19 </code></pre> <p>and I want the results to be like this:</p> <pre><code>Sign Descr State Country Lane Unit Datetime Count Alarm Active USA Lane1 2019-08-03 13:32:43 1 Alarm Active Spain Lane1 2019-08-03 07:47:54 1 Alarm Active Spain Lane1 00004 2019-08-03 07:26:16 1 Alarm Active Spain Lane1 00003 2019-08-03 07:26:16 1 Alarm Active Italy Lane2 00002 2019-08-03 12:09:34 3 Alarm Active Italy Lane2 00004 2019-08-03 09:50:32 1 Alarm Active Italy Lane2 00006 2019-08-03 09:50:32 1 Alarm Active Italy Lane1 00007 2019-08-03 07:58:43 1 Alarm Active Germany Lane1 00007 2019-08-03 12:36:48 2 Alarm Active Sweden Lane1 00007 2019-08-03 12:27:33 1 Alarm Active Norway Lane1 00007 2019-08-03 12:35:21 1 Alarm Active Norway Lane1 00005 2019-08-03 12:35:21 1 Alarm Active Norway Lane1 00002 2019-08-03 12:35:21 1 Alarm Active Norway Lane2 00007 2019-08-03 12:27:31 2 Alarm Active Norway Lane2 00003 2019-08-03 12:27:31 1 Alarm Active Norway Lane2 00006 2019-08-03 12:27:31 1 Alarm Active Norway Lane2 00005 2019-08-03 09:24:53 1 Alarm Active Denmark Lane2 00003 
2019-08-03 09:46:23 1 Alarm Active UK Lane2 00003 2019-08-03 09:56:08 1 Alarm Active UK Lane2 00004 2019-08-03 09:56:08 1 Alarm Active Brazil Lane2 00002 2019-08-03 09:47:19 1 Alarm Active Brazil Lane2 00003 2019-08-03 09:47:19 1 </code></pre> <p>The units can be from 00002 to 00007 The lanes can be either lane 1 or lane 2, while the &quot;country&quot; can be -anything- Log created is from 00:00 -&gt; 23:59</p> <p>If the country and lane are the same, and if all units failed within the same 1-2 minutes, then group them and count them as 1, as it's the lane that failed. If the same lane fails several times during the day, then count the amount of times the whole lane failed.</p> <p>while if not all units failed, then show the unit and count the amount of times this unit failed during the day.</p> <h2>??What is the best way to add tables in stack overflow??</h2>
<p>Use <code>pd.Grouper</code> along with <code>Country</code> and <code>City</code> as your <code>groupby</code> keys. I chose <code>60S</code> as the frequency, but change this as needed.</p> <hr> <pre><code>keys = ['Country', 'City', pd.Grouper(key='Datetime', freq='60S')] df.groupby(keys, sort=False).agg(Unit=('Unit', 'first'), count=('count', 'sum')) </code></pre> <p></p> <pre><code> Unit count Country City Datetime USA NY 2019-08-03 13:32:00 00002 6 ITALY Roma 2019-08-03 07:47:00 00002 1 2019-08-03 07:26:00 00003 1 Spain Madrid 2019-08-03 07:47:00 00004 4 2019-08-03 07:58:00 00007 1 </code></pre>
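<p>Note that <code>count=('count', 'sum')</code> assumes the frame already has a precomputed <code>count</code> column; if you only have the raw event rows, a hedged variant is to count the rows per group instead:</p>

<pre><code>df.groupby(keys, sort=False).agg(Unit=('Unit', 'first'), count=('Unit', 'size'))
</code></pre>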
python-3.x|pandas
2
5,563
57,317,159
doing math operations using numpy in python3
<p>I have a <code>numpy</code> array like this example:</p>

<pre><code>arr = array([[31, 18],
       [ 27, 9],
       [21, 20]])
</code></pre>

<p>and I want to get the <code>mean</code> of every inner list separately and also the <code>standard deviation</code> of every inner list separately. Up to here I would have 2 lists (for mean and std) and every list would have 3 items (one per inner list in <code>arr</code>). Then I will multiply every item of the std list by 2 and then add the mean list and the new std list item by item. So at the end the result would be a list with 3 items. Here are the steps for the example:</p>

<pre><code>std = [9.19238815542512, 12.7279220613579, 0.707106781186548]
std2 = [18.3847763108502, 25.4558441227157, 1.4142135623731]
mean = [24.5, 18, 20.5]
</code></pre>

<p>and here is the expected output:</p>

<pre><code>final = [42.8847763108502, 43.4558441227157, 21.9142135623731]
</code></pre>

<p>To get such results I wrote the following code in Python:</p>

<pre><code>import numpy as np
for item in arr:
    mean, std = [np.mean(), np.std()*2]
    results = mean + std
</code></pre>

<p>but it does not return the expected output. Do you know how to fix it?</p>
<p>There are two issues in your code. First, you are calling <code>np.mean</code> without an argument, which should result in an error. Instead, you want to call either <code>arr.mean(...)</code> or <code>np.mean(arr, ...)</code>. Second, you are overwriting the <code>result</code> variable in every iteration of the loop. You probably wanted to declare the result arrays outside the loop, and use <code>list.append</code> to add to them.</p> <p>However, there is a specialized solution to your question built in to Numpy: many Numpy functions have an <code>axis</code> parameter that lets you take a mean along one axis of an array.</p> <pre><code>import numpy as np arr = np.array([[0, 100], [1, 101], [2, 102]]) arr.mean(axis=0) # =&gt; [1, 101] arr.mean(1) # =&gt; [50, 51, 52] </code></pre> <p>To tell which axis to use, remember that the given axis will be deleted. So for a 3 by 2 array, operating over axis 0 will leave you with a length-2 array, and using axis 1 will leave length 3.</p> <p>Numpy also lets you perform elementwise arithmetic on arrays<sup>1</sup> of the same shape, or between arrays and numbers.<sup>2</sup></p> <pre><code>np.array([1, 2, 3]) + np.array([4, 5, 6]) # =&gt; [5, 7, 9] </code></pre> <p>Using those hints, it should be pretty straightforward to get the results you want.</p> <hr> <p>The <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.mean.html" rel="nofollow noreferrer">Numpy documentation</a> is a great place to start if you want to learn the ins and outs of Numpy's features.</p> <p><sup>1</sup> You can also add arrays to lists, and add lists using <code>np.add</code>. This applies to anything Numpy considers "array-like," such as tuples.</p> <p><sup>2</sup> Numpy also allows certain operations between different shaped arrays, using its <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow noreferrer">broadcasting rule</a>, but that's a bit off-topic here.</p>
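<p>For reference, a minimal sketch for the example in the question (hedged: the expected numbers there match the <em>sample</em> standard deviation, hence <code>ddof=1</code>):</p>

<pre><code>import numpy as np

arr = np.array([[31, 18], [27, 9], [21, 20]])

mean = arr.mean(axis=1)              # [24.5, 18. , 20.5]
std2 = arr.std(axis=1, ddof=1) * 2   # doubled per-row standard deviation
final = mean + std2                  # [42.884..., 43.455..., 21.914...]
</code></pre>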
numpy|math
2
5,564
57,684,367
In Pandas, is there an elegant way to assign a category to an item based on whether it contains particular strings?
<p>I have a .csv file with the following column headers:</p> <p>Identifier, Date, Task, Category, Person</p> <p>I want to assign a category or categories to each task, based on whether the task string contains any of a number of substrings, such as:</p> <p>“met”, “fix, “corresp”, “particip”, “update”, “sent”, “attend”, “help”, “assist”, “research”, “create”, “meet, “send”, “devel”, “source”</p> <p>I want to write the assigned category(ies) in the ‘Category’ column for each task in each row.</p> <p>I’ve tried several different approaches. For example, I can get the script to indicate whether a substring exists in the task item and return either a Boolean or binary result:</p> <p><code>df['Task'].str.contains('work', case=False).fillna(0).astype(int)</code></p> <p>or </p> <p><code>df['Task'].str.contains('work', case=False).fillna(0)</code></p> <p>I can also get it to return a list of the tasks that contain a substring:</p> <p><code>df[df &gt; 0]</code></p> <p>But I can’t get the code to write the category into the Category column. I’ve tried every approach I could find, but I think I’m missing something straightforward. I was optimistic about the numpy np.where function, but no dice. </p> <p>Many thanks in advance for your guidance.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np df = pd.read_csv('CAPA Tasks.csv') df.head() df['Identifier'].is_unique df = df.set_index('Identifier') df.head() df['Task'] = df['Task'].astype(str) df['Category'] = np.where(['Task'].str.contains('work', case=False), "Work", np.where(['Task'].str.contains('corresp', case=False), "Correspond", np.where(['Task'].str.contains('order', case=False), "Order", np.where(['Task'].str.contains('met with', case=False), "Meet”, ... np.where(['Task'].str.contains('receive', case=False), "Administration")))))))))))))))))))) </code></pre> <p>I think I’m failing to convert the task items to strings correctly and am starting to make a mess of my code.</p> <p>I’ve also tried iterating through each row with if and elseif, but that didn’t work either.</p> <p>UPDATE: Here’s the functioning code, using the second approach suggested by @mohanys:</p> <pre><code> import pandas as pd import matplotlib.pyplot as plt import numpy as np df = pd.read_csv('CAPA Tasks.csv') df['Identifier'].is_unique df = df.set_index('Identifier') df['Task'] = df['Task'].astype(str) df['Category'] = np.select([df['Task'].str.contains('work', case=False), df['Task'].str.contains('corresp', case=False), df['Task'].str.contains('met ', case=False), df['Task'].str.contains('share', case=False), df['Task'].str.contains('made', case=False), df['Task'].str.contains('fix', case=False), df['Task'].str.contains('sent', case=False), df['Task'].str.contains('update', case=False), df['Task'].str.contains('set ', case=False), df['Task'].str.contains('stood up', case=False), df['Task'].str.contains('file', case=False), df['Task'].str.contains('worked with', case=False), df['Task'].str.contains('help', case=False), df['Task'].str.contains('print', case=False), df['Task'].str.contains('develop', case=False), df['Task'].str.contains('partici', case=False), df['Task'].str.contains('attend', case=False), df['Task'].str.contains('talk', case=False), df['Task'].str.contains('plan', case=False), df['Task'].str.contains('order', case=False), df['Task'].str.contains('discuss', case=False), df['Task'].str.contains('taught', case=False), df['Task'].str.contains('teach', case=False), df['Task'].str.contains('writ', case=False), 
df['Task'].str.contains('research', case=False)],["Develop","Correspond","Meet","Provide","Create","Problem Solve", "Provide", "Maintain &amp; Enhance", "Develop", "Meet", "Administer &amp; Document", "Assist", "Assist", "Produce", "Develop", "Participate", "Meet", "Correspond", "Plan", "Order", "Correspond", "Teach", "Teach", "Write", "Research"]) ```` </code></pre>
<pre><code>df.loc[masked_df, 'Category'] = 'whatever_you_want'
</code></pre>

<p>where <code>masked_df</code> is your boolean result. Note the single <code>=</code>: this is an assignment, not a comparison.</p>
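<p>Tied to the question's own pattern, a hedged sketch for one category:</p>

<pre><code>mask = df['Task'].str.contains('work', case=False, na=False)
df.loc[mask, 'Category'] = 'Work'
</code></pre>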
python|pandas|csv|dataframe|categories
0
5,565
57,674,270
Appending a dataframe row with specific values of other dataframes - python
<p>I am working on implementing the Connexion Scan Algorithm in Python because I need to have access to the shortest public transport paths. So I am trying to create a connexion table from gtfs files.</p>

<p>I have a dataframe (stop_times) that contains the following columns:</p>

<pre class="lang-py prettyprint-override"><code>  trip_id  arrival_time  departure_time  stop_sequence  stop_id
0     id1      06:02:00        06:02:00              0 stop_id1
1     id1      06:05:00        06:05:00              1 stop_id2
2     id1      06:06:00        06:06:00              2 stop_id3
3     id1      06:08:00        06:08:00              3 stop_id4
</code></pre>

<p>The original file is much longer and contains the data of many trips which are defined by their trip_id.</p>

<p>I want to save some of the values contained in that first dataframe in a second one that would list the connexions between stations and basically have four columns:</p>

<pre class="lang-py prettyprint-override"><code>  departure_station  arrival_station  departure_time  arrival_time
</code></pre>

<p>My goal is to extract values from the stop_times dataframe and insert them in the right rows in the empty one I created. However, I encounter problems with that and I have been stuck for quite a while now.</p>

<hr>

<p>I need to iterate over the stop_times dataframe 2 "rows at a time", starting each new iteration at the previous row. The first iteration would be on indexes 0-1, the second on 1-2, the third on 2-3 etc.</p>

<p>For now I was only able to make the iterations on rows 0-1, 2-3 etc. with the following code, but that is not what I am trying to do here. </p>

<pre class="lang-py prettyprint-override"><code>for i, g in course.groupby(np.arange(len(course)) // 2):
</code></pre>

<p>Any idea how I could manage that?</p>

<hr>

<p>Now let's consider the first iteration on rows 0-1: I need to fill the empty dataframe's first row with:</p>

<ul>
<li>the departure_time of the stop_times first row </li>
<li>the arrival_time of the stop_times second row </li>
<li>the stop_sequence of the stop_times first row (corresponding to the departure_station column) </li>
<li>the stop_sequence of the stop_times second row (corresponding to the arrival_station column)</li>
</ul>

<p>That would give me the following:</p>

<pre class="lang-py prettyprint-override"><code>  departure_station  arrival_station  departure_time  arrival_time
0                 0                1        06:02:00      06:05:00
</code></pre>

<p>And then repeat that for the rest of the dataframe:</p>

<pre class="lang-py prettyprint-override"><code>  departure_station  arrival_station  departure_time  arrival_time
0                 0                1        06:02:00      06:05:00
1                 1                2        06:05:00      06:06:00
2                 2                3        06:06:00      06:08:00
</code></pre>

<hr>

<p>This is what I tried so far:</p>

<pre class="lang-py prettyprint-override"><code>stop_time = pd.read_csv('/Users/im/Downloads/IDFM_gtfs/stop_times.txt')
stop_time = stop_time[:30]
course = stop_time.loc[stop_time['trip_id'] == 'id1']

for i, g in course.groupby(np.arange(len(course)) // 2):
    connexion = g.reset_index()
    connexion = connexion[['trip_id', 'arrival_time', 'departure_time', 'stop_id', 'stop_sequence']]
    dep_hor = connexion.loc[connexion.index == 0, ['departure_time']]
    arriv_hor = connexion.loc[connexion.index == 1, ['arrival_time']]
    table_horaire = table_horaire.append(dep_hor)
    table_horaire = table_horaire.append(arriv_hor)
</code></pre>

<p>Which gives me the following dataframe:</p>

<pre class="lang-py prettyprint-override"><code>  arrival_time departure_time arrival_station departure_station
0          NaN       06:02:00             NaN               NaN
1     06:05:00            NaN             NaN               NaN
0          NaN       06:06:00             NaN               NaN
1     06:08:00            NaN             NaN               NaN
0          NaN       06:10:00             NaN               NaN
1     06:12:00            NaN             NaN               NaN
0          NaN       06:14:00             NaN
NaN 1 06:16:00 NaN NaN NaN </code></pre> <p>Any help would be greatly appreciated and please do tell me if some parts are not explained well, I am still quite new at programming and don't know all the right terms yet.</p>
<p>If I got your question right, you don't need <code>groupby</code> at all and can use a combination of <code>shift(1)</code> and concat to get what you want:</p>

<pre><code>import io
import pandas as pd
import numpy as np

# make sure the dataframe is sorted by trip_id and arrival_time
# please choose what is better according to your data: arrival_time
# or stop_sequence (in case your public transport goes near the
# speed of light :-)
df.sort_values(['trip_id', 'arrival_time'], inplace=True)

# shift the columns we need for the departure part
# by one row and rename the columns
df_departure= df[['trip_id', 'stop_id', 'arrival_time']].shift(1)
df_departure.columns= ['departure_trip_id', 'departure_station', 'departure_time']

# create a subset of the dataframe with the arrival-columns
df_arrival= df[['trip_id', 'arrival_time', 'stop_id']].copy()
df_arrival.columns= ['trip_id', 'arrival_time', 'arrival_station']

# concat both together
df_combined= pd.concat([df_departure, df_arrival], axis='columns')

# now take care of the rows at the beginning of each group
# of rows that belong to the same trip_id and delete the
# departure values of these rows since they belong to another
# trip
df_combined.loc[df_combined['trip_id'] != df_combined['departure_trip_id'], ['departure_station', 'departure_time']]= (np.NaN, np.NaN)
df_combined.drop(['departure_trip_id'], axis='columns', inplace=True)
</code></pre>

<p>With the following test data:</p>

<pre><code>raw="""
  trip_id  arrival_time  departure_time  stop_sequence  stop_id
0     id1      06:02:00        06:02:30              0 stop_id1
1     id1      06:05:00        06:05:30              1 stop_id2
2     id1      06:06:00        06:06:30              2 stop_id3
3     id1      06:08:00        06:08:30              3 stop_id4
4     id2      06:12:00        06:12:30              4 stop_id5
5     id2      06:15:00        06:15:30              5 stop_id6
6     id2      06:16:00        06:16:30              6 stop_id7
7     id2      06:18:00        06:18:30              7 stop_id8
"""

df= pd.read_csv(io.StringIO(raw), index_col=0, sep='\s+')
</code></pre>

<p>The code above outputs:</p>

<pre><code>Out[65]:
  departure_station departure_time trip_id arrival_time arrival_station
0               NaN            NaN     id1     06:02:00        stop_id1
1          stop_id1       06:02:00     id1     06:05:00        stop_id2
2          stop_id2       06:05:00     id1     06:06:00        stop_id3
3          stop_id3       06:06:00     id1     06:08:00        stop_id4
4               NaN            NaN     id2     06:12:00        stop_id5
5          stop_id5       06:12:00     id2     06:15:00        stop_id6
6          stop_id6       06:15:00     id2     06:16:00        stop_id7
7          stop_id7       06:16:00     id2     06:18:00        stop_id8
</code></pre>

<p>If <code>stop_id</code> is not a synonym for <code>station</code>, you can just do a <code>merge</code> (or <code>map</code>) to translate it just before you do the <code>shift</code>.</p>

<p>Hope that's what you were searching for.</p>
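<p>As a side note, a shorter hedged variant of the same idea: shifting <em>within</em> each <code>trip_id</code> group makes the manual NaN fix-up at trip boundaries unnecessary, because <code>groupby(...).shift()</code> already leaves NaN in the first row of every group:</p>

<pre><code># continuing with the df defined above
g = df.groupby('trip_id')
df['departure_station'] = g['stop_id'].shift(1)
df['departure_time'] = g['arrival_time'].shift(1)
df = df.rename(columns={'stop_id': 'arrival_station'})
</code></pre>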
python|pandas|loops|dataframe|gtfs
0
5,566
73,151,593
Iterative time difference between two column entries in a data frame column
<p>I have a pandas data frame that has a column of Date that is in this format as an example, 2022-07-22. The table is also below for a better understanding. I would like to get the time elapsed between each entry in hours. So far I have managed to get the elapsed time using this code:</p> <pre><code>startTime = data.Date.loc[1] endTime = data.Date.loc[2] T= endTime-startTime seconds = T.total_seconds() hours = seconds / 3600 print('Difference in hours: ', hours) </code></pre> <p>Now I would like to do this iteratively over the entire column. Any help with this will be appreciated. Here is a small section of the table to see what I mean:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">Date</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">2022-07-22 15:35:13</td> </tr> <tr> <td style="text-align: left;">2022-07-22 15:35:18</td> </tr> <tr> <td style="text-align: left;">2022-07-22 15:35:23</td> </tr> <tr> <td style="text-align: left;">2022-07-22 15:35:28</td> </tr> </tbody> </table> </div>
<p>The 'Date' column is converted from string to <code>datetime64[ns]</code>. Then the difference function <code>diff</code> is used to calculate the 'dif' column, and the differences are divided by a <code>pd.Timedelta</code> of one hour to express them in hours. I think a loop is redundant here. (I also spaced the sample timestamps further apart so the hour differences are visible.)</p>

<pre><code>import pandas as pd

df = pd.DataFrame(
    {'Date': ['2022-07-22 15:35:13', '2022-07-22 17:35:18', '2022-07-22 19:35:18', '2022-07-22 20:35:28']})

df['Date'] = pd.to_datetime(df['Date'], errors='raise')
df['dif'] = df['Date'].diff()
df['h'] = df['dif'] / pd.Timedelta('1 hour')
print(df)
</code></pre>

<p>Output</p>

<pre><code>                 Date             dif         h
0 2022-07-22 15:35:13             NaT       NaN
1 2022-07-22 17:35:18 0 days 02:00:05  2.001389
2 2022-07-22 19:35:18 0 days 02:00:00  2.000000
3 2022-07-22 20:35:28 0 days 01:00:10  1.002778
</code></pre>

<p>But if you still need to do it iteratively, you can do something like this:</p>

<pre><code>a = 0
for i in range(1, len(df)):
    a = df.loc[i, 'Date'] - df.loc[i-1, 'Date']
    a = a / pd.Timedelta('1 hour')
    print(a)
</code></pre>
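<p>For the hours column there is also a more direct accessor, equivalent to the division by <code>pd.Timedelta</code> above:</p>

<pre><code>df['h'] = df['Date'].diff().dt.total_seconds() / 3600
</code></pre>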
python|pandas
1
5,567
72,892,050
Mark repeated id with a-b relationship in dataframe
<p>I'm trying to create a relationship between repeated ID's in dataframe. For example take 91, so 91 is repeated 4 times so for first 91 entry <strong>first</strong> column row value will be updated to <strong>A</strong> and <strong>second</strong> will be updated to <strong>B</strong> then for next row of 91, first will be updated to <strong>B</strong> and second will updated to <strong>C</strong> then for next first will be <strong>C</strong> and second will be <strong>D</strong> and so on and this same relationship will be there for all duplicated ID's. For ID's that are not repeated first will marked as <strong>A</strong>.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">id</th> <th style="text-align: center;">first</th> <th style="text-align: right;">other</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">11</td> <td style="text-align: center;">0</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">09</td> <td style="text-align: center;">0</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">91</td> <td style="text-align: center;">0</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">91</td> <td style="text-align: center;">0</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">91</td> <td style="text-align: center;">0</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">91</td> <td style="text-align: center;">0</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">15</td> <td style="text-align: center;">0</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">15</td> <td style="text-align: center;">0</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">12</td> <td style="text-align: center;">0</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">01</td> <td style="text-align: center;">0</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">01</td> <td style="text-align: center;">0</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">01</td> <td style="text-align: center;">0</td> <td style="text-align: right;">0</td> </tr> </tbody> </table> </div> <p><strong>Expected output:</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">id</th> <th style="text-align: center;">first</th> <th style="text-align: right;">other</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">11</td> <td style="text-align: center;">A</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">09</td> <td style="text-align: center;">A</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">91</td> <td style="text-align: center;">A</td> <td style="text-align: right;">B</td> </tr> <tr> <td style="text-align: left;">91</td> <td style="text-align: center;">B</td> <td style="text-align: right;">C</td> </tr> <tr> <td style="text-align: left;">91</td> <td style="text-align: center;">C</td> <td style="text-align: right;">D</td> </tr> <tr> <td style="text-align: left;">91</td> <td style="text-align: center;">D</td> <td style="text-align: right;">E</td> </tr> <tr> <td style="text-align: left;">15</td> <td style="text-align: center;">A</td> <td style="text-align: right;">B</td> </tr> <tr> <td 
style="text-align: left;">15</td> <td style="text-align: center;">B</td> <td style="text-align: right;">C</td> </tr> <tr> <td style="text-align: left;">12</td> <td style="text-align: center;">A</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">01</td> <td style="text-align: center;">A</td> <td style="text-align: right;">B</td> </tr> <tr> <td style="text-align: left;">01</td> <td style="text-align: center;">B</td> <td style="text-align: right;">C</td> </tr> <tr> <td style="text-align: left;">01</td> <td style="text-align: center;">C</td> <td style="text-align: right;">D</td> </tr> </tbody> </table> </div> <p>I using <code>df.iterrows()</code> for this but that's becoming very messy code and will be slow if dataset increases is there any easy way of doing it.</p>
<p>You can perform a mapping using a <code>cumcount</code> per group as source:</p> <pre><code>from string import ascii_uppercase # mapping dictionary # this is an example, you can use any mapping d = dict(enumerate(ascii_uppercase)) # {0: 'A', 1: 'B', 2: 'C'...} g = df.groupby('id') c = g.cumcount() m = g['id'].transform('size').gt(1) df['first'] = c.map(d) df.loc[m, 'other'] = c[m].add(1).map(d) </code></pre> <p>Output:</p> <pre><code> id first other 0 11 A 0 1 9 A 0 2 91 A B 3 91 B C 4 91 C D 5 91 D E 6 15 A B 7 15 B C 8 12 A 0 9 1 A B 10 1 B C 11 1 C D </code></pre>
python|pandas
2
5,568
73,042,918
Converting MATLAB random function to python
<p>My task is to convert one big MATLAB file into python.</p> <p>There is a line in MATLAB</p> <pre><code>weightsEI_slow = random('binom',1,0.2,[EneuronNum_slow,IneuronNum_slow]); </code></pre> <p>I am trying to convert this into python code, I am not quite finding the right documentation. I looked for numpy library too. Does any one have any suggestions?</p>
<p>It looks like you generate a random number that follows the Binomial distribution with probability <code>p=0.2</code> and sample size <code>n=1</code>. You can leverage numpy for this:</p>

<pre><code>import numpy as np
np.random.binomial(n=1, p=0.2)
# -&gt; 0
</code></pre>

<p>If you require replicability, add <code>np.random.seed(3408)</code> before the number is sampled. Otherwise, the output might be <code>0</code> or <code>1</code> depending on the execution. Of course, you can switch in another integer value as the seed instead of <code>3408</code>.</p>
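<p>The MATLAB call also specifies a matrix shape; assuming <code>EneuronNum_slow</code> and <code>IneuronNum_slow</code> are defined as in your script, the <code>size</code> argument reproduces it:</p>

<pre><code>weightsEI_slow = np.random.binomial(n=1, p=0.2, size=(EneuronNum_slow, IneuronNum_slow))
</code></pre>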
python|numpy|matlab
0
5,569
73,009,611
How to drop specific pandas rows by value
<p>Is there any way that I can drop specific rows based on the value in one of the columns?</p> <p>I mean, this is my toy dataframe:</p>

<pre class="lang-py prettyprint-override"><code>d = {'Non': [1, 2,4,5,2,7], 'Schzerando': [3, 4,8,4,7,7], 'cc': [1,2,0.75,0.25,0.3,1]}
df = pd.DataFrame(data=d)
df
</code></pre>

<p>Then I just want to keep the rows where <code>df[&quot;cc&quot;]</code> is 1 or 2, like this <a href="https://i.stack.imgur.com/zosuS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zosuS.png" alt="enter image description here" /></a></p>

<p>Toy dataframe to try.</p>
<p>You can filter the rows with a boolean mask. Note that <code>|</code> binds tighter than <code>==</code> in Python, so each comparison needs its own parentheses, and there is no need to convert the <code>cc</code> column first (an <code>astype('Int64')</code> cast would actually fail on fractional values like <code>0.75</code>):</p>

<pre><code>df = df[(df['cc'] == 1) | (df['cc'] == 2)]
</code></pre>

<p>or you can declare a list with all the values you want to keep, then use pandas <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.isin.html" rel="nofollow noreferrer"><code>isin</code></a>:</p>

<pre><code>f_list = [1, 2]
df[df['cc'].isin(f_list)]
</code></pre>
python|pandas
1
5,570
42,938,994
Inconsistent loading of keras backend between theano and tensorflow
<p>My <code>keras.json</code> has the backend specified as <code>tensorflow</code>, and if I open the Spyder or Jupyter IDE then <code>tensorflow</code> is used as the backend.</p>

<p>Strangely, if I open a <code>python</code> or <code>ipython</code> shell within my WinPython installation, the backend defaults to <code>theano</code>. Has anyone seen this behaviour before, and if so, what was the solution?</p>

<p>I have tried playing with environment variables to no effect. </p>
<p>Looks like existing notebooks still say Theano, but if I create a new one and enter the following, the backend is correctly reported as tensorflow. Note that <code>KERAS_BACKEND</code> has to be set before <code>keras</code> is imported for the first time:</p>

<pre><code>import os
os.environ['KERAS_BACKEND'] = 'tensorflow'
import keras
keras.backend.backend()
</code></pre>

<p>Output:</p>

<pre><code>Using TensorFlow backend.
Out[1]: 'tensorflow'
</code></pre>
python|windows|tensorflow|theano|keras
0
5,571
42,728,532
List of Lists in dataframe, need to access a single value for all
<p>I have a dataframe with lists in the cells.</p>

<pre><code>  player1                               player2                      player3
0 ['PF/C', 'DeMarcus Cousins', 11000]   ['PG', 'John Wall', 10700]   ['SF', 'LeBron James', 10600]
1 ['PF/C', 'DeMarcus Cousins', 11000]   ['PG', 'John Wall', 10700]   ['PG/SF', 'Giannis Antetokounmpo', 10200]
2 ['PF/C', 'DeMarcus Cousins', 11000]   ['PG', 'John Wall', 10700]   ['PG', 'Isaiah Thomas', 10100]
3 ['PF/C', 'DeMarcus Cousins', 11000]   ['PG', 'John Wall', 10700]   ['PG', 'Stephen Curry', 10000]
</code></pre>

<p>For each row, I want to get the int (the player's salary) for all 3 listed players in a row and add them up into a new column - df['total salary']. I can loop through and turn each row_x, column_y into a list, select the salary, store it, then do the same thing for the other two players, then store the sum of the salaries back into the dataframe. But I know that isn't pythonic. Any help appreciated.</p>
<pre><code>df['total_salary'] = df.apply(lambda x: x['player1'][2] + \
                                        x['player2'][2] + \
                                        x['player3'][2], axis=1)
</code></pre>

<p>That said, I would put in a vote for splitting each of those lists into separate columns, as that is generally more pandas-like.</p>
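<p>If you go the splitting route, a hedged sketch (the new column names are made up; it assumes every cell is a 3-element list of position, name and salary as in the question):</p>

<pre><code>players = ['player1', 'player2', 'player3']
for col in players:
    df[[col + '_pos', col + '_name', col + '_salary']] = pd.DataFrame(
        df[col].tolist(), index=df.index)

df['total_salary'] = df[[c + '_salary' for c in players]].sum(axis=1)
</code></pre>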
pandas|dataframe
0
5,572
43,025,963
Pandas foreach row multiplication - Speedup
<p>I have extremely slow code:</p>

<p>A DataFrame called <code>tmp</code> with a MultiIndex (date and id), around 2.000.000 lines and 2 columns (V1, V2). </p>

<pre><code>                 V1   V2
Date        ID
2000-01-01  1   0.3  0.1
2000-01-01  2   0.3  0.1
2000-01-02  1   0.1  0.1
.....
</code></pre>

<p>and <code>ref</code> contains around 5.000 lines with 250 columns</p>

<pre><code>     C1   C2  ...  C250
ID
1   0.2  0.3  ...   0.1
2   1.2  1.3  ...   0.0
</code></pre>

<p>The expected result should have the following form:</p>

<pre><code>             C1  C2  ...  C250
Date
2000-01-01   xx  xx  ...    xx
</code></pre>

<p>I've tried it with:</p>

<pre><code> sum1 = pd.DataFrame(0, index=idx1, columns=idx2)
 sum2 = pd.DataFrame(0, index=idx1, columns=idx2)

 def gen(row):
     i1 = row.name[0] # date
     i2 = row.name[1] # id
     sum1.loc[i1] += ref.loc[i2] * row['V1']
     sum2.loc[i1] += ref.loc[i2] * row['V2']

 tmp.apply( gen , axis=1)
</code></pre>

<p>Is it possible to speed this up - I've tried it with Cython but killed the app after 3 hours ...</p>
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mul.html" rel="nofollow noreferrer"><code>mul</code></a>, then remove the level <code>id</code> of the <code>MultiIndex</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>reset_index</code></a>:</p>

<pre><code>tmp = pd.DataFrame({'date':pd.date_range('2000-01-01', periods=3),
                    'id':[1,2,1],
                    'V1':[.3,.3,.1],
                    'V2':[.1,.1,.1]}).set_index(['date','id'])
print (tmp)
                V1   V2
date       id
2000-01-01 1   0.3  0.1
2000-01-02 2   0.3  0.1
2000-01-03 1   0.1  0.1

ref = pd.DataFrame({'C1':[.2,1.2],'C2':[.3,1.3], 'C250':[.1,0.0]}, index=[1,2])
ref.index.name = 'id'
print (ref)
     C1   C2  C250
id
1   0.2  0.3   0.1
2   1.2  1.3   0.0
</code></pre>

<hr>

<pre><code>sum1 = ref.mul(tmp['V1'], axis=0).reset_index(level=1, drop=True)
sum2 = ref.mul(tmp['V2'], axis=0).reset_index(level=1, drop=True)
print (sum1)
              C1    C2  C250
date
2000-01-01  0.06  0.09  0.03
2000-01-02  0.36  0.39  0.00
2000-01-03  0.02  0.03  0.01

print (sum2)
              C1    C2  C250
date
2000-01-01  0.02  0.03  0.01
2000-01-02  0.12  0.13  0.00
2000-01-03  0.02  0.03  0.01
</code></pre>

<p>and then, if needed, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sum.html" rel="nofollow noreferrer"><code>sum</code></a> the columns:</p>

<pre><code>sum1 = ref.mul(tmp['V1'], axis=0).reset_index(level=1, drop=True).sum(axis=1).to_frame('SUM')
sum2 = ref.mul(tmp['V2'], axis=0).reset_index(level=1, drop=True).sum(axis=1).to_frame('SUM')
print (sum1)
             SUM
date
2000-01-01  0.18
2000-01-02  0.75
2000-01-03  0.06

print (sum2)
             SUM
date
2000-01-01  0.06
2000-01-02  0.25
2000-01-03  0.06
</code></pre>
python|pandas
0
5,573
42,701,863
Pandas: cross dataset column matching
<p>There are multiple datasets and I would like to find out how they are potentially connected with each other. E.g. if string columns in datasets A and B have lots of values in common, that might be a link. Is it possible to do this kind of analysis automatically?</p>
<p>You could always make them into dataframes and check that way. It might be slow depending on the size of your data, but it is a very basic approach; the code below creates extra dataframes for learning purposes, so it is not the best code, but I wanted you to see the progression.</p> <pre><code>import pandas as pd
import numpy as np

df = pd.DataFrame({'A' : [np.NaN,np.NaN,3,4,5,5,3,1,5,np.NaN], 
                   'B' : [1,0,3,5,0,0,np.NaN,9,0,0], 
                   'C' : ['Pharmacy of IDAHO','Access medicare arkansas','NJ Pharmacy','Idaho Rx','CA Herbals','Florida Pharma','AK RX','Ohio Drugs','PA Rx','USA Pharma'], 
                   'D' : [123456,123456,1234567,12345678,12345,12345,12345678,123456789,1234567,np.NaN],
                   'E' : ['Assign','Unassign','Assign','Ugly','Appreciate','Undo','Assign','Unicycle','Assign','Unicorn',]})

df2 = pd.DataFrame({'A' : [np.NaN,np.NaN,3,4,5,5,3,1,5,np.NaN], 
                   'B' : [1,0,3,5,0,0,np.NaN,9,0,0], 
                   'C' : ['Pharmacy of IDAHO','Arkansas','NJ Pharmacy','Idaho Rockies?','CA Herbals','blah blah','AK RX','test_test','PA Rx','USA4Lyfe'], 
                   'D' : [123456,123456,1234567,12345678,12345,12345,12345678,123456789,1234567,np.NaN]})

#Creates a Column in DF2 If Matching
df2['Values']= df['C'] == df2['C']
#Creates another dataframe where the values are only True
df3 = df2[df2['Values']== True]
#Prints the length of the DataFrame which actually gives you the amount of common values
print("There are",len(df3), "Occurences")
</code></pre> <p>output: <code>There are 5 Occurences</code></p>
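<p>If you want something closer to automatic, here is a minimal sketch (my own generalization, untested against your data) that counts overlapping values for every pair of string columns across two dataframes; large counts suggest a candidate link:</p> <pre><code>overlaps = {}
for c1 in df.select_dtypes(include='object'):
    for c2 in df2.select_dtypes(include='object'):
        # number of distinct values the two columns share
        overlaps[(c1, c2)] = len(set(df[c1].dropna()) &amp; set(df2[c2].dropna()))
print(overlaps)
</code></pre>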
python|pandas
0
5,574
27,277,381
Plot arrays same extension Matplotlib
<p>I have several time series stored in numpy arrays with the same extension (*.corr.npy). I would like to plot them in the same figure with matplotlib.</p> <p>Now I'm plotting like this:</p> <pre><code>import pylab as plt
import numpy as num

a=num.load('100.corr.npy')
b=num.load('2345.corr.npy')
...
plt.plot(a)
plt.plot(b)
...
plt.savefig('corr', papertype='a4', orientation='portrait', format='ps')
</code></pre> <p>But as I have a lot of arrays I would like to make a loop for plotting. Can anyone help me with that?</p>
<p>This is where the <a href="https://docs.python.org/2/library/glob.html" rel="nofollow"><code>glob</code></a> standard module shines! It will generate lists of files matching simple format rules.</p> <p>In your case:</p> <pre><code>import glob
import numpy as np
import pylab as plt  # as in the question

array_files = glob.glob('*.corr.npy')

for fname in array_files:
    x = np.load(fname)
    plt.plot(x)
</code></pre> <p><code>glob.glob</code> will operate in the current working directory, so you might want to use the absolute path instead:</p> <pre><code>import os

ROOT_DIR = '/some/path/to/array/files/'
array_files = glob.glob(os.path.join(ROOT_DIR, '*.corr.npy'))
</code></pre> <hr> <p>I see you use <code>num</code> as an alias for <code>numpy</code>. I think <code>np</code> is the de-facto standard of numpy aliasing, so you could consider using that instead.</p>
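<p>Putting it together with the <code>savefig</code> call from the question, a sketch that also labels each curve by its filename:</p> <pre><code>import glob
import os
import numpy as np
import pylab as plt

for fname in sorted(glob.glob('*.corr.npy')):
    plt.plot(np.load(fname), label=os.path.basename(fname))
plt.legend()
plt.savefig('corr', papertype='a4', orientation='portrait', format='ps')
</code></pre>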
python|numpy|matplotlib
4
5,575
27,346,930
Remove quotations from numpy array
<p>I have a numpy array which includes unnecessary quotations ("):</p> <pre><code>array(["'sf64user_Number__c':'tKey'", "'PreferredFirstName__c':'tPreferredFirstName'"], dtype=object)
</code></pre> <p>How can I go about removing the opening and closing "s so my numpy result would read as follows:</p> <pre><code>['sf64user_Number__c':'tKey', 'PreferredFirstName__c':'tPreferredFirstName']
</code></pre> <p>BTW, my list includes 30 entries but I'm showing only two entries here.</p> <p>Any help would be appreciated.</p>
<pre><code>array(["'sf64user_Number__c':'tKey'", "'PreferredFirstName__c':'tPreferredFirstName'"], dtype=object) </code></pre> <p>is an array of 'objects', though the objects look like strings.</p> <pre><code>['sf64user_Number__c':'tKey', 'PreferredFirstName__c':'tPreferredFirstName'] </code></pre> <p>does not look like a valid array, or list. But a dictionary might print as:</p> <pre><code>{'sf64user_Number__c':'tKey', 'PreferredFirstName__c':'tPreferredFirstName'} </code></pre> <p>A dictionary wrapped in an array (with shape <code>()</code>) might print as</p> <pre><code>array({'sf64user_Number__c':'tKey', 'PreferredFirstName__c':'tPreferredFirstName'}, dtype=object) </code></pre> <p>while an array with 2 dictionaries as:</p> <pre><code>array([{'sf64user_Number__c':'tKey'}, {'PreferredFirstName__c':'tPreferredFirstName'}], dtype=object) </code></pre> <p>You may need to elaborate on how this array was generated.</p>
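<p>If the goal is instead to turn those strings into an actual dictionary, one possible sketch, assuming every entry has the <code>'key':'value'</code> shape shown:</p> <pre><code># strip the inner quotes and split each entry on the first colon
parsed = dict(s.replace("'", "").split(':', 1) for s in arr)
# {'sf64user_Number__c': 'tKey', 'PreferredFirstName__c': 'tPreferredFirstName'}
</code></pre>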
python|arrays|numpy
1
5,576
25,321,357
Counting qualitative values based on the date range in Pandas
<p>I am learning to use the Pandas library and need to perform analysis and plot the crime data set below. Each row represents one occurrence of crime. The date_rep column contains daily dates for a year. </p> <p><img src="https://i.stack.imgur.com/Eshxx.png" alt="enter image description here"></p> <p>Data needs to be grouped by month, and instances of each specific crime need to be added up per month, like in the table below.</p> <p><img src="https://i.stack.imgur.com/pwzqF.png" alt="enter image description here"></p> <p>The problem I am running into is that the data in the crime column is qualitative and I just can't find resources online that can help me solve this!</p> <p>I have been reading up on groupby and different methods of sorting, but what is the most efficient way of accomplishing this? Thank you in advance!</p>
<p>To replicate something of your data:</p> <pre><code>In [29]: df = pd.DataFrame({'date_rep':pd.date_range('2012-01-01', periods=100), ...: 'crm_cd_desc':np.random.choice(['robbery', 'traffic', 'assault'], size=100)}) In [30]: df.head() Out[30]: crm_cd_desc date_rep 0 traffic 2012-01-01 1 traffic 2012-01-02 2 assault 2012-01-03 3 robbery 2012-01-04 </code></pre> <p>In essence, what you want to do is a <strong>value counts</strong>:</p> <pre><code>In [31]: df['crm_cd_desc'].value_counts() Out[31]: assault 36 traffic 34 robbery 30 dtype: int64 </code></pre> <p>However, you want to do this for each month seperately. To group by month, you can use <code>pd.Grouper</code> inside <code>groupby</code> to specify the month:</p> <pre><code>In [34]: df.groupby(pd.Grouper(key='date_rep', freq='M'))['crm_cd_desc'].value_counts() Out[34]: date_rep 2012-01-31 traffic 12 robbery 10 assault 9 2012-02-29 assault 13 traffic 11 robbery 5 2012-03-31 assault 12 robbery 10 traffic 9 2012-04-30 robbery 5 assault 2 traffic 2 dtype: int64 </code></pre> <p>And then <code>unstack</code> to get the result:</p> <pre><code>In [35]: df.groupby(pd.Grouper(key='date_rep', freq='M'))['crm_cd_desc'].value_counts().unstack() Out[35]: assault robbery traffic date_rep 2012-01-31 9 10 12 2012-02-29 13 5 11 2012-03-31 12 10 9 2012-04-30 2 5 2 </code></pre> <p>Instead of using <code>value_counts</code>, you can also group by both the month and the crime type and then calculate the length of each group:</p> <pre><code>In [46]: df.groupby([pd.Grouper(key='date_rep', freq='M'), 'crm_cd_desc']).size().unstack() Out[46]: crm_cd_desc assault robbery traffic date_rep 2012-01-31 9 10 12 2012-02-29 13 5 11 2012-03-31 12 10 9 2012-04-30 2 5 2 </code></pre>
python|pandas|data-analysis
3
5,577
26,571,741
Splitting one NumPy array into two arrays
<p>Suppose I have a <code>NumPy</code> <code>2D</code> array <code>A</code>: </p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; A=np.arange(30).reshape(3,10) &gt;&gt;&gt; A array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [10, 11, 12, 13, 14, 15, 16, 17, 18, 19], [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]]) </code></pre> <p>I need to get two arrays <code>B</code> and <code>C</code> with the following properties:</p> <pre><code>B = array([[ 0, 3, 4, 5, 6, 7, 8, 9], [10, 13, 14, 15, 16, 17, 18, 19], [20, 23, 24, 25, 26, 27, 28, 29]]) C = array([[ 1, 2], [11, 12], [21, 22]]) </code></pre> <p><strong>What is the easiest way to accomplish this?</strong></p> <p>Note that I have to get all sets of <code>C</code> (2 adjacent columns) and <code>B</code> (which is <code>A</code> without <code>C</code>). I tried different <code>NumPy</code> constructs like <code>np.delete</code>, <code>np.hstack</code> but nothing seem to work at the corner conditions like in the above example. </p>
<p>One of the simplest ways is to use indexing to select the appropriate columns:</p> <pre><code>&gt;&gt;&gt; A[:, [1, 2]] # choose all rows from columns 1-2 (gives C) array([[ 1, 2], [11, 12], [21, 22]]) &gt;&gt;&gt; A[:, np.r_[0, 3:10]] # choose all rows from columns 0, 3-9 (gives B) array([[ 0, 3, 4, 5, 6, 7, 8, 9], [10, 13, 14, 15, 16, 17, 18, 19], [20, 23, 24, 25, 26, 27, 28, 29]]) </code></pre> <p>Alternatively, you could try <code>hsplit</code> break up <code>A</code> and then concatenate bits back together. This feels less efficient than the indexing method above though:</p> <pre><code>&gt;&gt;&gt; splits = np.hsplit(A, [1, 3]) &gt;&gt;&gt; B = np.hstack((splits[0], splits[2])) &gt;&gt;&gt; C = splits[1] </code></pre>
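<p>If you prefer <code>np.delete</code> for <code>B</code>, an equivalent route (hard-coding columns 1-2 as the <code>C</code> block):</p> <pre><code>C = A[:, 1:3]                     # the two adjacent columns
B = np.delete(A, [1, 2], axis=1)  # A with those columns removed
</code></pre>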
python|arrays|numpy
14
5,578
39,064,212
BinaryNet implementation in TensorFlow
<p>I recently read a very interesting paper (<a href="http://arxiv.org/pdf/1602.02830v3.pdf" rel="noreferrer">http://arxiv.org/pdf/1602.02830v3.pdf</a>) suggesting a method for training a CNN with weights and activations constrained to [-1,1]. This is highly beneficial from power/speed perspective.</p> <p>There are implementations for the method in Torch and Theano publicly available in github: <a href="https://github.com/MatthieuCourbariaux/BinaryNet" rel="noreferrer">https://github.com/MatthieuCourbariaux/BinaryNet</a> (Theano) <a href="https://github.com/itayhubara/BinaryNet" rel="noreferrer">https://github.com/itayhubara/BinaryNet</a> (Torch)</p> <p>I was wondering if the above method can be implemented in TensorFlow ? Has anyone tried implementing this?</p>
<p>Take a look at the TensorFlow <a href="http://github.com/tensorflow/tensorflow/issues/1592" rel="nofollow" title="GitHub issue 1592">GitHub issue #1592</a>. It tracks the progress of the current attempt to add support for binary networks in TensorFlow.</p>
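<p>As a very rough sketch (not the paper authors' code), the usual straight-through-estimator trick can be written in plain TensorFlow, with a forward pass that binarizes and a backward pass that behaves like the identity:</p> <pre><code>import tensorflow as tf

def binarize(x):
    # forward: tf.sign(x); backward: gradient flows through as if binarize were the identity
    return x + tf.stop_gradient(tf.sign(x) - x)
</code></pre>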
tensorflow|conv-neural-network|quantization
3
5,579
39,335,535
Label smoothing (soft targets) in Pandas
<p>In Pandas there is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html" rel="noreferrer"><code>get_dummies</code></a> method that one-hot encodes categorical variable. Now I want to do label smoothing as described in section 7.5.1 of <a href="http://www.deeplearningbook.org/" rel="noreferrer">Deep Learning</a> book:</p> <blockquote> <p>Label smoothing regularizes a model based on a softmax with <em>k</em> output values by replacing the hard <em>0</em> and <em>1</em> classification targets with targets of <code>eps / k</code> and <code>1 - (k - 1) / k * eps</code>, respectively. </p> </blockquote> <p>What would be the most efficient and/or elegant way to do label smothing in Pandas dataframe? </p>
<p>First, lets use much simpler equation (<code>ϵ</code> denotes how much probability mass you move from "true label" and distribute to all remaining ones).</p> <pre><code>1 -&gt; 1 - ϵ 0 -&gt; ϵ / (k-1) </code></pre> <p>You can simply use nice mathematical property of the above, since all you have to do is</p> <pre><code>x -&gt; x * (1 - ϵ) + (1-x) * ϵ / (k-1) </code></pre> <p>thus if your dummy columns are <code>a, b, c, d</code> just do</p> <pre><code>indices = ['a', 'b', 'c', 'd'] eps = 0.1 df[indices] = df[indices] * (1 - eps) + (1-df[indices]) * eps / (len(indices) - 1) </code></pre> <p>which for</p> <pre><code>&gt;&gt;&gt; df a b c d 0 1 0 0 0 1 0 1 0 0 2 0 0 0 1 3 1 0 0 0 4 0 1 0 0 5 0 0 1 0 </code></pre> <p>returns</p> <pre><code> a b c d 0 0.900000 0.033333 0.033333 0.033333 1 0.033333 0.900000 0.033333 0.033333 2 0.033333 0.033333 0.033333 0.900000 3 0.900000 0.033333 0.033333 0.033333 4 0.033333 0.900000 0.033333 0.033333 5 0.033333 0.033333 0.900000 0.033333 </code></pre> <p>as expected.</p>
python|pandas|machine-learning
10
5,580
39,150,020
Pandas select columns and data dependent on header
<p>I have a large .csv file. I want to select only the column with the time/date and 20 other columns which I know by header. </p> <p>As a test I try to take only the column with the header 'TIMESTAMP'. I know this is 4207823 rows long in the .csv and it only contains dates and times. The code below selects the TIMESTAMP column but also carries on to take values from other columns as shown below:</p> <pre><code>import csv
import numpy as np
import pandas
low_memory=False

f = pandas.read_csv('C:\Users\mmso2\Google Drive\MABL Wind\_Semester 2 2016\Wind Farm Info\DataB\DataB - NaN2.csv', dtype = object)#convert file to variable so it can be edited

time = f[['TIMESTAMP']]
time = time[0:4207823]#test to see if this stops time taking other data
print time
</code></pre> <p>output</p> <pre><code>                  TIMESTAMP
0       2007-08-15 21:10:00
1       2007-08-15 21:20:00
2       2007-08-15 21:30:00
3       2007-08-15 21:40:00
4       2007-08-15 21:50:00
5       2007-08-15 22:00:00
6       2007-08-15 22:10:00
7       2007-08-15 22:20:00
8       2007-08-15 22:30:00
9       2007-08-15 22:40:00
10      2007-08-15 22:50:00
11      2007-08-15 23:00:00
12      2007-08-15 23:10:00
13      2007-08-15 23:20:00
14      2007-08-15 23:30:00
15      2007-08-15 23:40:00
16      2007-08-15 23:50:00
17      2007-08-16 00:00:00
18      2007-08-16 00:10:00
19      2007-08-16 00:20:00
20      2007-08-16 00:30:00
21      2007-08-16 00:40:00
22      2007-08-16 00:50:00
23      2007-08-16 01:00:00
24      2007-08-16 01:10:00
25      2007-08-16 01:20:00
26      2007-08-16 01:30:00
27      2007-08-16 01:40:00
28      2007-08-16 01:50:00
29      2007-08-16 02:00:00   #these are from the TIMESTAMP column
...                     ...
679302              221.484   #This is from another column
679303                  NaN
679304  2015-09-23 06:40:00
679305                  NaN
679306                  NaN
679307  2015-09-23 06:50:00
679308                  NaN
679309                  NaN
679310  2015-09-23 07:00:00
</code></pre>
<p>The problem was due to an error in the input file, so a simple use of <code>usecols</code> in <code>pandas.read_csv</code> worked.</p> <p>The code below demonstrates the selection of a few columns of data:</p> <pre><code>import csv
import pandas
low_memory=False

#read only the selected columns
df = pandas.read_csv('DataB - Copy - Copy.csv',delimiter=',', dtype = object,
                     usecols=['TIMESTAMP', 'igmmx_U_77m', 'igmmx_U_58m'])

print df   # see what the data looks like

outfile = open('DataB_GreaterGabbardOnly.csv','wb')#somewhere to write the data to
df.to_csv(outfile)#save selection to the blank .csv created above
</code></pre>
python|python-2.7|csv|pandas|columnheader
0
5,581
39,377,229
csv file to numpy array via Python
<p>I have a csv file of the following format that I am trying to normalise. The numbers represent the counts for associated strings. The file contains close to 100K entries.</p> <pre><code>159028,CASSVDGSYEQYFGPG
86832,CASSLQLYFGEG
74720,CASSQDQDTQYFGPG
71701,CASSRVGSDYTFGSG
69360,CARNVTPPKSYAVFFGKG
52458,CAAEQFFGPG
51406,CASSSGDQDTQYFGPG
50305,CASQLYFGEG
38745,CAYFGPG
32565,CASSPDWGENTLYFGAG
</code></pre> <p>I have tried to create a dictionary using the following:</p> <pre><code>import csv
input = csv.DictReader(open("data.csv"))

for row in input:
    print(row)
</code></pre> <p>Result:</p> <pre><code>{'159028': '86832', 'CASSVDGSYEQYFGPG': 'CASSLQLYFGEG'}
{'159028': '74720', 'CASSVDGSYEQYFGPG': 'CASSQDQDTQYFGPG'}
{'159028': '71701', 'CASSVDGSYEQYFGPG': 'CASSRVGSDYTFGSG'}
{'159028': '69360', 'CASSVDGSYEQYFGPG': 'CARNVTPPKSYAVFFGKG'}
{'159028': '52458', 'CASSVDGSYEQYFGPG': 'CAAEQFFGPG'}
{'159028': '51406', 'CASSVDGSYEQYFGPG': 'CASSSGDQDTQYFGPG'}
{'159028': '50305', 'CASSVDGSYEQYFGPG': 'CASQLYFGEG'}
{'159028': '38745', 'CASSVDGSYEQYFGPG': 'CAYFGPG'}
{'159028': '32565', 'CASSVDGSYEQYFGPG': 'CASSPDWGENTLYFGAG'}
...
</code></pre> <p>Instead of:</p> <pre><code> {'CASSVDGSYEQYFGPG': 159028}
 {'CASSLQLYFGEG': '86832'}
 {'CASSQDQDTQYFGPG': '74720'}
 {'CASSRVGSDYTFGSG': '71701'}
 {'CARNVTPPKSYAVFFGKG': '69360'}
 {'CAAEQFFGPG': '52458'}
 {'CASSSGDQDTQYFGPG': '51406'}
 {'CASQLYFGEG': '50305'}
 {'CAYFGPG': '38745'}
 {'CASSPDWGENTLYFGAG': '32565'}
 ...
</code></pre> <p>I also tried converting the csv file into a numpy array, but I get the following:</p> <pre><code>&gt;&gt;&gt;from numpy import genfromtxt
&gt;&gt;&gt;data = genfromtxt('data.csv', delimiter=',')
&gt;&gt;&gt;data
array([[  1.59028000e+05,              nan],
       [  8.68320000e+04,              nan],
       [  7.47200000e+04,              nan],
       ...,
       [  1.00000000e+00,              nan],
       [  1.00000000e+00,              nan],
       [  1.00000000e+00,              nan]])
</code></pre> <p>There may be other, better ways of normalising and processing this data in Python.</p>
<p>Use Numpy loadtxt to import, then use a dict comprehension if you need it as a dict.</p> <pre><code>import numpy as np

arr = np.loadtxt('data.csv', dtype=str, delimiter=",")
b = {y: x for (x, y) in arr}
</code></pre>
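<p>If by normalising you mean turning the counts into frequencies, a possible follow-up (my interpretation of the question):</p> <pre><code>counts = arr[:, 0].astype(float)
freqs = dict(zip(arr[:, 1], counts / counts.sum()))  # string -&gt; relative frequency
</code></pre>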
python|list|csv|numpy|dictionary
1
5,582
39,345,624
Can pandas dataframe have dtype of list?
<p>I'm new to Pandas. I am processing a dataset where one of the columns is a string with pipe (<code>|</code>) separated values. Now I have a task to remove any text in this |-separated field that's not fulfilling certain criteria.</p> <p>My naive approach is to iterate the dataframe row by row, explode the field into a list, and validate it that way, then write the modified row back to the original dataframe. See this meta-sample:</p> <pre><code>for index, row in dataframe.iterrows():
    fixed = [x[:29] for x in row['field'].split('|')]
    dataframe.loc[index, 'field'] = "|".join(fixed)
</code></pre> <p>Is there a better, and more importantly faster, way to do this?</p>
<p>IIUC you can use:</p> <pre><code>dataframe = pd.DataFrame({'field':['aasd|bbuu|cccc|ddde|e','ffff|gggg|hhhh|i|j','cccc|u|k'], 'G':[4,5,6]}) print (dataframe) G field 0 4 aasd|bbuu|cccc|ddde|e 1 5 ffff|gggg|hhhh|i|j 2 6 cccc|u|k print (dataframe.field.str.split('|', expand=True) .stack() .str[:2] #change to 29 .groupby(level=0) .apply('|'.join)) 0 aa|bb|cc|dd|e 1 ff|gg|hh|i|j 2 cc|u|k dtype: object </code></pre> <p>Another solution via list comprehension:</p> <pre><code>dataframe['new'] = pd.Series([[x[:2] for x in y] for y in dataframe.field.str.split('|')], index=dataframe.index) .apply('|'.join) print (dataframe) G field new 0 4 aasd|bbuu|cccc|ddde|e aa|bb|cc|dd|e 1 5 ffff|gggg|hhhh|i|j ff|gg|hh|i|j 2 6 cccc|u|k cc|u|k </code></pre> <hr> <pre><code>dataframe = pd.DataFrame({'field':['aasd|bbuu|cc|ddde|e','ffff|gggg|hhhh|i|j','cccc|u|k'], 'G':[4,5,6]}) print (dataframe) G field 0 4 aasd|bbuu|cc|ddde|e 1 5 ffff|gggg|hhhh|i|j 2 6 cccc|u|k </code></pre> <p>If need filter all values with values longer as <code>2</code>:</p> <pre><code>s = dataframe.field.str.split('|', expand=True).stack() print (s) 0 0 aasd 1 bbuu 2 cc 3 ddde 4 e 1 0 ffff 1 gggg 2 hhhh 3 i 4 j 2 0 cccc 1 u 2 k dtype: object dataframe['new'] = s[s.str.len() &lt; 3].groupby(level=0).apply('|'.join) print (dataframe) G field new 0 4 aasd|bbuu|cc|ddde|e cc|e 1 5 ffff|gggg|hhhh|i|j i|j 2 6 cccc|u|k u|k </code></pre> <p>Another solution:</p> <pre><code>dataframe['new'] = pd.Series([[x for x in y if len(x) &lt; 3] for y in dataframe.field.str.split('|')], index=dataframe.index) .apply('|'.join) print (dataframe) G field new 0 4 aasd|bbuu|cc|ddde|e cc|e 1 5 ffff|gggg|hhhh|i|j i|j 2 6 cccc|u|k u|k </code></pre>
python|string|list|pandas|list-comprehension
1
5,583
13,213,039
Reshape a group of Pandas Series into a DataFrame and fill in missing values
<p>I have several Pandas Series objects that look like this:</p> <pre><code>r = pd.Series({'a': [1,2,3,4]})
s = pd.Series({'b': [2,4,1]})
u = pd.Series({'c': [8,6]})
v = pd.Series({'d': [4,3,1]})
</code></pre> <p>I'd like to convert these Series objects into a data frame with the dictionary keys as column names and the values as columns. My desired output is:</p> <pre><code>   'r'  's'  'u'  'v'
0   1    2    8    4
1   2    4    6    3
2   3    1   NaN   1
3   4   NaN  NaN  NaN
</code></pre> <p>How can I create a data frame object as depicted above? I'm aware of the <code>.fillna</code> method, but I could not get this to work with my data. The missing values should be NaN. Thanks for the help.</p>
<p>I think the easiest way to do this is to <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index" rel="nofollow"><code>join</code> on index</a>. I've tweaked the original variables to DataFrames to enable this <em>(Note: they ought to be DataFrames rather than Series anyway)</em>:</p> <pre><code>r = pd.DataFrame({'r': [1,2,3,4]}) s = pd.DataFrame({'s': [2,4,1]}) u = pd.DataFrame({'v': [8,6]}) v = pd.DataFrame({'u': [4,3,1]}) r.join([s, u, v], how='outer') # r s v u # 0 1 2 8 4 # 1 2 4 6 3 # 2 3 1 NaN 1 # 3 4 NaN NaN NaN </code></pre>
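<p>With those same DataFrames, <code>pd.concat</code> is an equivalent one-liner; outer alignment on the index fills the gaps with NaN:</p> <pre><code>pd.concat([r, s, u, v], axis=1)
#    r   s    v   u
# 0  1   2    8   4
# 1  2   4    6   3
# 2  3   1  NaN   1
# 3  4 NaN  NaN NaN
</code></pre>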
scipy|pandas
2
5,584
33,885,051
Using Spyder / Python to Open .npy File
<p>Sorry. I'm just now learning Python and everything there is to do with data analysis. </p> <p>How on earth do I open a .npy file with Spyder? Or do I have to use another program? I'm using a Mac, if that is at all relevant.</p>
<p><code>*.npy</code> files are binary files to store numpy arrays. They are created with</p> <pre><code>import numpy as np data = np.random.normal(0, 1, 100) np.save('data.npy', data) </code></pre> <p>And read in like</p> <pre><code>import numpy as np data = np.load('data.npy') </code></pre>
python|file|numpy
45
5,585
23,645,484
Elements arrangement in a numpy array
<pre><code>import numpy as np

data = np.array([[0, 0, 1, 1, 2, 2],
                 [1, 0, 0, 1, 2, 2],
                 [1, 0, 1, 0, 0, 0],
                 [1, 1, 0, 0, 2, 0]])
</code></pre> <p>How can I do the following?</p> <p>Within each 2-by-2 patch:</p> <pre><code>if any element is 2: put 2
if any element is 1: put 1
if all elements are 0: put 0
</code></pre> <p>The expected result is:</p> <pre><code>np.array([[1, 1, 2],
          [1, 1, 2]])
</code></pre>
<p>Using <code>extract_patches</code> from scikit-learn you can write this as follows (copy and paste-able code):</p> <pre><code>import numpy as np from sklearn.feature_extraction.image import extract_patches data = np.array([[0, 0, 1, 1, 2, 2], [1, 0, 0, 1, 2, 2], [1, 0, 1, 0, 0, 0], [1, 1, 0, 0, 2, 0]]) patches = extract_patches(data, patch_shape=(2, 2), extraction_step=(2, 2)) output = patches.max(axis=-1).max(axis=-1) </code></pre> <p><strong>Explanation</strong>: <code>extract_patches</code> gives you a view on patches of your array, of size <code>patch_shape</code> and lying on a grid of <code>extraction_step</code>. The result is a 4D array where the first two axes index the patch and the last two axes index the pixels within the patch. We then evaluate the maximum over the last two axes to obtain the maximum per patch.</p> <p><strong>EDIT</strong> This is actually very much related to <a href="https://stackoverflow.com/questions/23472366/changing-structure-of-numpy-array-enforcing-given-value/">this question</a></p>
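<p>For completeness, a dependency-free sketch of the same 2-by-2 block maximum in plain NumPy; it relies on both array dimensions being divisible by 2:</p> <pre><code>h, w = data.shape
output = data.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
# array([[1, 1, 2],
#        [1, 1, 2]])
</code></pre>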
python|arrays|numpy|scipy|scikit-learn
4
5,586
29,745,968
Finding Given Coordinate Positions in NumPy Array
<pre><code>import numpy as np

ref_cols = [11, 5, 12, 13, 15]
ref_rows = [1, 11, 2, 3, 5]
rows, cols = np.mgrid[1:6, 11:16]

print cols
[[11 12 13 14 15]
 [11 12 13 14 15]
 [11 12 13 14 15]
 [11 12 13 14 15]
 [11 12 13 14 15]]

print rows
[[1 1 1 1 1]
 [2 2 2 2 2]
 [3 3 3 3 3]
 [4 4 4 4 4]
 [5 5 5 5 5]]
</code></pre> <p>I want to find where the given (col, row) pairs (11,1), (5,11), (12,2), (13,3), (15,5) exist. So the expected answer is as follows:</p> <pre><code>[[True, False, False, False, False],
 [False, True, False, False, False],
 [False, False, True, False, False],
 [False, False, False, False, False],
 [False, False, False, False, True]]
</code></pre> <p>I tried this:</p> <pre><code>rows_indices = np.in1d(rows, ref_rows).reshape(rows.shape)
cols_indices = np.in1d(cols, ref_cols).reshape(cols.shape)
answers = (rows_indices &amp; cols_indices)
print answers
</code></pre> <p>But the answer is wrong.</p> <p>How can I do this?</p>
<p>Your attempt goes wrong because each (row, col) pair has to be evaluated together: you cannot first test all rows and all columns separately and then combine the two results with a logical operation.</p> <p>Here is one way to fix it:</p> <pre><code>out = np.zeros(rows.shape, dtype=bool)

for r, c in zip(ref_rows, ref_cols):
    out |= (r == rows) &amp; (c == cols)

print out
</code></pre>
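<p>A broadcasting variant that avoids the Python-level loop, built on the same arrays:</p> <pre><code># compare every grid cell against all reference (row, col) pairs at once
pairs = (rows[..., None] == np.array(ref_rows)) &amp; (cols[..., None] == np.array(ref_cols))
out = pairs.any(axis=-1)
</code></pre>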
python|numpy|scipy
2
5,587
62,387,274
Create CSV with referencing in pandas
<p>I am working on creating a csv file using <code>to_csv()</code> in Python pandas. Basically I wanted to put the csv file name in a variable rather than hard-coding it into the <code>to_csv</code> function. Any lead would be really appreciated : )</p> <pre><code>outfile = CNR_FCS_MAY_T_2020_FX
df.to_csv('outfile.csv')
print("query export ran")
</code></pre>
<p>The part below is working for me now : )</p> <pre><code> outfile = 'CNR_FCS_MAY_T_2020_FX.csv'
 df.to_csv(outfile)
 print("query export ran")
</code></pre>
python|pandas|export-to-csv
0
5,588
62,285,072
How to replace certain values of a multiindex?
<p>I have a dataframe looking like <a href="https://i.stack.imgur.com/XPnQ4.png" rel="nofollow noreferrer">this</a> with a multiindex. Now I want to replace all values in the cluster column equal to 1 with a 4, if the date in the row before is a Saturday.</p> <p>I managed to get a boolean array which is True for all values that need to be changed and False for all others, like this:</p> <pre><code>a = pd.to_datetime(data_pivot.index.get_level_values(0)).dayofweek==6
b = data_pivot.index.get_level_values(1)==1
np.logical_and(a, b)
</code></pre> <p>But now I don't know how to change all the values in my multiindex accordingly.</p>
<p>It's not easy. A MultiIndex is made of tuples, which are immutable, and the Index itself is immutable, so we need to recreate the entire MultiIndex. </p> <p>Re-create the entire MultiIndex from the arrays, where we modify the second level using <code>np.where</code> to change it to 4 when your condition is satisfied.</p> <h3>Sample data</h3> <pre><code>import pandas as pd import numpy as np idx = pd.MultiIndex.from_arrays([pd.date_range('2010-01-01', freq='d', periods=5), [2,2,1,1,2]]) df = pd.DataFrame({'data': 1}, index=idx) # data #2010-01-01 2 1 #2010-01-02 2 1 #2010-01-03 1 1 #2010-01-04 1 1 #2010-01-05 2 1 </code></pre> <hr> <pre><code>m = ((df.index.get_level_values(0).dayofweek == 6) &amp; (df.index.get_level_values(1) == 1)) df.index = pd.MultiIndex.from_arrays([df.index.get_level_values(0), np.where(m, 4, df.index.get_level_values(1))]) print(df) # data #2010-01-01 2 1 #2010-01-02 2 1 #2010-01-03 4 1 &lt;- Index level 1 is changed to 4 #2010-01-04 1 1 #2010-01-05 2 1 </code></pre>
python|pandas|dataframe|indexing|replace
0
5,589
62,414,658
TensorFlow dataset .map() method not working for built-in tf.keras.preprocessing.image functions
<p>I load in a dataset as such:</p> <pre><code>import tensorflow_datasets as tfds
ds = tfds.load(
    'caltech_birds2010',
    split='train',
    as_supervised=False)
</code></pre> <p>And this function works fine:</p> <pre><code>import tensorflow as tf

@tf.function
def pad(image,label):
    return (tf.image.resize_with_pad(image,32,32),label)

ds = ds.map(pad)
</code></pre> <p>But when I try mapping a different built-in function</p> <pre><code>from tensorflow.keras.preprocessing.image import random_rotation

@tf.function
def rotate(image,label):
    return (random_rotation(image,90), label)

ds = ds.map(rotate)
</code></pre> <p>I get the following error:</p> <blockquote> <p>AttributeError: 'Tensor' object has no attribute 'ndim'</p> </blockquote> <p>This is not the only function giving me issues, and it happens with or without the <code>@tf.function</code> decorator.</p>
<p>I would try using <code>tf.py_function</code> here to wrap <code>random_rotation</code>. For example:</p> <pre><code>def rotate(image, label):
    im_shape = image.shape
    # wrap the numpy-based helper; only the image tensor needs to go through py_function
    [image] = tf.py_function(lambda img: random_rotation(img.numpy(), 90),
                             [image], [tf.float32])
    image.set_shape(im_shape)
    return image, label

ds = ds.map(rotate)
</code></pre> <p>Although I think they do similar things here according to <a href="https://stackoverflow.com/questions/61564748/what-is-the-difference-in-purpose-between-tf-py-function-and-tf-function">What is the difference in purpose between tf.py_function and tf.function?</a>, tf.py_function is more straightforward for executing Python code through TensorFlow, even though tf.function has a performance advantage.</p>
python|tensorflow|keras|tensorflow-datasets
2
5,590
62,156,753
Plotly express line chart - get default colors (how to color lines as specified by a dictionary object?)
<p>I have a data frame of a multivariate time series, for which I've created an interactive plotly express plot. I'm adding vertical lines at particular locations specified by a dictionary, each line associated to one of the time series, and wish to set the line color to agree with that of the corresponding variable. In essence, in the picture below, each vertical segment can be identified with one of Fp1 or Fp2 and I want to color it as red or black accordingly:</p> <p><a href="https://i.stack.imgur.com/TC5vQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TC5vQ.png" alt="enter image description here"></a></p> <p>First I plot the data frame, where X is my time series matrix, plotly.express has been imported as px, pandas imported as pd, and channels=['Fp1','Fp2']:</p> <pre><code>df=pd.DataFrame(X,columns=channels)
df['id'] = df.index
df = pd.melt(df, id_vars='id', value_vars=df.columns[:-1])
fig = px.line(df, x='id', y='value', color='variable',labels = {'id':'time (20K~100 sec)',
    'value':'millivolts','variable':'channel'},title='Patient')
</code></pre> <p>Subsequent calculations yield a dictionary, HFOs, where each key is one of my two channels, and each value is a list of times, e.g. something of the form</p> <pre><code>HFOs={'Fp1':[500,....,10500],'Fp2':[800,...11000]}
</code></pre> <p>I then created my lines and added them to the figure:</p> <pre><code>for channel,times in HFOs.items():
    for t in times:
        fig.add_shape(type='line',yref="y",xref="x",
            x0=t,y0=df['value'].min()*1.2,x1=t,y1=df['value'].max()*1.2,line=dict(color='black', width=.25))
        fig.add_trace(go.Scatter(x=[t],y= [df['value'].max()*1.5],mode='text',showlegend=False))
fig.show()
</code></pre> <p>This creates the image shown above. How do I modify line=dict(color='black',width=.25) to change the color to what I want? I wish for the vertical lines at times [500,...,10500] to be blue and times [800,...,11000] to be red. (Of course, in the future, there will be many more channels.)</p> <p>I tried replacing 'black' with 'variable' but that, not surprisingly, just resulted in an error message. I feel there must be a very simple way to achieve my goal.</p>
<p>Cool question. There might be a better solution, but here's the one I found. Replace the code that creates the vertical lines with the following: </p> <pre><code># fetch the colors of the traces from the figure. colors = [trace.line["color"] for trace in fig.data] for inx, (channel,times) in enumerate(HFOs.items()): for t in times: fig.add_shape(type='line',yref="y",xref="x", x0=t,y0=df['value'].min()*1.2,x1=t,y1=df['value'].max()*2, line=dict(color=colors[inx], width=3)) fig.show() </code></pre> <p>The resulting figure looks as follows (Random data, made the vertical lines wider for visibility):</p> <p><a href="https://i.stack.imgur.com/Dyigl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dyigl.png" alt="enter image description here"></a> </p> <p>An alternative way to get the list of default colors is to use </p> <p><code>px.colors.qualitative.Plotly</code>, which produces a list of 10 hex color codes. My understanding is that these colors would be used for the first 10 series, and then used again for traces 11-20, etc. </p>
python|pandas|plotly
2
5,591
62,300,836
KeyError when using non-default models in Huggingface transformers pipeline
<p>I have no problems using the default model in the sentiment analysis pipeline. </p> <pre class="lang-py prettyprint-override"><code># Allocate a pipeline for sentiment-analysis nlp = pipeline('sentiment-analysis') nlp('I am a black man.') &gt;&gt;&gt;[{'label': 'NEGATIVE', 'score': 0.5723695158958435}] </code></pre> <p>But, when I try to customise the pipeline a little by adding a specific model. It throws a KeyError. </p> <pre class="lang-py prettyprint-override"><code>nlp = pipeline('sentiment-analysis', tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/bert-base-cased-conversational"), model = AutoModelWithLMHead.from_pretrained("DeepPavlov/bert-base-cased-conversational")) nlp('I am a black man.') &gt;&gt;&gt;--------------------------------------------------------------------------- KeyError Traceback (most recent call last) &lt;ipython-input-55-af7e46d6c6c9&gt; in &lt;module&gt; 3 tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/bert-base-cased-conversational"), 4 model = AutoModelWithLMHead.from_pretrained("DeepPavlov/bert-base-cased-conversational")) ----&gt; 5 nlp('I am a black man.') 6 7 ~/opt/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs) 721 outputs = super().__call__(*args, **kwargs) 722 scores = np.exp(outputs) / np.exp(outputs).sum(-1, keepdims=True) --&gt; 723 return [{"label": self.model.config.id2label[item.argmax()], "score": item.max().item()} for item in scores] 724 725 ~/opt/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py in &lt;listcomp&gt;(.0) 721 outputs = super().__call__(*args, **kwargs) 722 scores = np.exp(outputs) / np.exp(outputs).sum(-1, keepdims=True) --&gt; 723 return [{"label": self.model.config.id2label[item.argmax()], "score": item.max().item()} for item in scores] 724 725 KeyError: 58129 </code></pre>
<p>I am facing the same problem. I am working with a model from XML-R fine-tuned with squadv2 data set (&quot;a-ware/xlmroberta-squadv2&quot;). In my case, the KeyError is 16.</p> <p><strong>Link</strong></p> <p>Looking for help on the issue I have found this information: <a href="https://www.gitmemory.com/issue/huggingface/transformers/5711/660463344" rel="nofollow noreferrer">link</a> I hope you find it helpful.</p> <p><strong>Answer (from the link)</strong></p> <blockquote> <p>The pipeline throws an exception when the model predicts a token that is not part of the document (e.g. final special token [SEP])</p> </blockquote> <p><strong>My problem</strong>:</p> <pre><code>from transformers import XLMRobertaTokenizer, XLMRobertaForQuestionAnswering from transformers import pipeline nlp = pipeline('question-answering', model = XLMRobertaForQuestionAnswering.from_pretrained('a-ware/xlmroberta-squadv2'), tokenizer= XLMRobertaTokenizer.from_pretrained('a-ware/xlmroberta-squadv2')) </code></pre> <pre><code>nlp(question = &quot;Who was Jim Henson?&quot;, context =&quot;Jim Henson was a nice puppet&quot;) --------------------------------------------------------------------------- KeyError Traceback (most recent call last) &lt;ipython-input-15-b5a8ece5e525&gt; in &lt;module&gt;() 1 context = &quot;Jim Henson was a nice puppet&quot; 2 # --------------- CON INTERROGACIONES ----&gt; 3 nlp(question = &quot;Who was Jim Henson?&quot;, context =context) 1 frames /usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in &lt;listcomp&gt;(.0) 1745 ), 1746 } -&gt; 1747 for s, e, score in zip(starts, ends, scores) 1748 ] 1749 KeyError: 16 </code></pre> <p><strong>Solution 1: Adding punctuation at the end of the context</strong></p> <p>In order to avoid the bug of trying to extract the final token (which may be an special one as [SEP]) I added an element (in this case a punctuation mark) at the end of the context:</p> <pre><code>nlp(question = &quot;Who was Jim Henson?&quot;, context =&quot;Jim Henson was a nice puppet.&quot;) [OUT] {'answer': 'nice puppet.', 'end': 28, 'score': 0.5742837190628052, 'start': 17} </code></pre> <p><strong>Solution 2: Do not use pipeline()</strong></p> <p>The original model can handle itself to retrieve the correct token`s index.</p> <pre><code>from transformers import XLMRobertaTokenizer, XLMRobertaForQuestionAnswering import torch tokenizer = XLMRobertaTokenizer.from_pretrained('a-ware/xlmroberta-squadv2') model = XLMRobertaForQuestionAnswering.from_pretrained('a-ware/xlmroberta-squadv2') question, text = &quot;Who was Jim Henson?&quot;, &quot;Jim Henson was a nice puppet&quot; encoding = tokenizer(question, text, return_tensors='pt') input_ids = encoding['input_ids'] attention_mask = encoding['attention_mask'] start_scores, end_scores = model(input_ids, attention_mask=attention_mask, output_attentions=False)[:2] all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0]) answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]) answer = tokenizer.convert_tokens_to_ids(answer.split()) answer = tokenizer.decode(answer) </code></pre> <p><strong>Update</strong></p> <p>Looking in more detail your case, I found that the default model for Conversational task in the pipeline is <code>distilbert-base-cased</code> (<a href="https://huggingface.co/transformers/_modules/transformers/pipelines.html#ConversationalPipeline" rel="nofollow noreferrer">source code</a>).</p> <p>The first solution I posted is not a good solution indeed. 
Trying other questions, I got the same error. However, the model itself works fine outside the pipeline (as I showed in solution 2). Thus, I believe that not all models can be used in the pipeline. If anyone has more information about this, please help us out. Thanks.</p>
huggingface-transformers
2
5,592
51,503,717
Alternative to groupby for generating a summary table from tidy pandas DataFrame
<p>I want to generate a summary table from a <a href="https://en.wikipedia.org/wiki/Tidy_data" rel="nofollow noreferrer">tidy</a> pandas DataFrame. I now use <code>groupby</code> and two <code>for</code> loops, which does not seem efficient. Seems stacking and unstacking would get me there, but I have failed. </p> <p><strong>Sample data</strong> </p> <pre><code>import pandas as pd import numpy as np import copy import random df_tidy = pd.DataFrame(columns = ['Stage', 'Exc', 'Cat', 'Score']) for _ in range(10): df_tidy = df_tidy.append( { 'Stage': random.choice(['OP', 'FUEL', 'EOL']), 'Exc': str(np.random.randint(low=0, high=1000)), 'Cat': random.choice(['CC', 'HT', 'PM']), 'Score': np.random.random(), }, ignore_index=True ) df_tidy </code></pre> <p>returns</p> <pre><code> Stage Exc Cat Score 0 OP 929 HT 0.946234 1 OP 813 CC 0.829522 2 FUEL 114 PM 0.868605 3 OP 896 CC 0.382077 4 FUEL 10 CC 0.832246 5 FUEL 515 HT 0.632220 6 EOL 970 PM 0.532310 7 FUEL 198 CC 0.209856 8 FUEL 848 CC 0.479470 9 OP 968 HT 0.348093 </code></pre> <p>I would like a new DataFrame with Stages as columns, Cats as rows and sum of Scores as values. I achieve it this way: </p> <p><strong>Working but probably inefficient approach</strong></p> <pre><code>new_df = pd.DataFrame(columns=list(df_tidy['Stage'].unique())) for cat, small_df in df_tidy.groupby('Cat'): for lcs, smaller_df in small_df.groupby('Stage'): new_df.loc[cat, lcs] = smaller_df['Score'].sum() new_df['Total'] = new_df.sum(axis=1) new_df </code></pre> <p>Which returns what I want:</p> <pre><code> OP FUEL EOL Total CC 1.2116 1.52157 NaN 2.733170 HT 1.29433 0.63222 NaN 1.926548 PM NaN 0.868605 0.53231 1.400915 </code></pre> <p>But I cannot believe this is the simplest or most efficient path. </p> <p><strong>Question</strong></p> <p>What pandas magic am I missing out on? </p> <p><strong>Update - Timing the proposed solutions</strong></p> <p>To understand the differences between <code>pivot_table</code> and <code>crosstab</code> proposed below, I timed the three solutions with a 100,000 row dataframe built exactly as above: </p> <p><em>groupby solution</em>, that I thought was inefficient: </p> <pre><code>%%timeit new_df = pd.DataFrame(columns=list(df_tidy['Stage'].unique())) for cat, small_df in df_tidy.groupby('Cat'): for lcs, smaller_df in small_df.groupby('Stage'): new_df.loc[cat, lcs] = smaller_df['Score'].sum() new_df['Total'] = new_df.sum(axis=1) 41.2 ms ± 3.18 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) </code></pre> <p><em><code>crosstab</code> solution</em>, that requires a creation of a DataFrame in the background, even if the passed data is already in DataFrame format:</p> <pre><code>%%timeit pd.crosstab(index=df_tidy.Cat,columns=df_tidy.Stage, values=df_tidy.Score, aggfunc='sum', margins = True, margins_name = 'Total').iloc[:-1,:] 67.8 ms ± 1.08 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) </code></pre> <p><em><code>pivot_table</code> solution</em>:</p> <pre><code>%%timeit pd.pivot_table(df_tidy, index=['Cat'], columns=["Stage"], margins=True, margins_name='Total', aggfunc=np.sum).iloc[:-1,:] 713 ms ± 20.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) </code></pre> <p>So, it would appear that the clunky <code>groupby</code>solution is the quickest.</p>
<p>A simple solution using <code>crosstab</code>:</p> <pre><code>pd.crosstab(index=df.Cat,columns=df.Stage,values=df.Score,aggfunc='sum',
            margins = True, margins_name = 'Total').iloc[:-1,:]
Out[342]: 
Stage      EOL      FUEL        OP     Total
Cat                                         
CC         NaN  1.521572  1.211599  2.733171
HT         NaN  0.632220  1.294327  1.926547
PM     0.53231  0.868605       NaN  1.400915
</code></pre>
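<p>For completeness, the plain <code>groupby</code>/<code>unstack</code> version gives the same table without nested loops and should be competitive with the timings in the question:</p> <pre><code>new_df = df_tidy.groupby(['Cat', 'Stage'])['Score'].sum().unstack()
new_df['Total'] = new_df.sum(axis=1)
# columns come out in alphabetical order; reindex them if you need OP, FUEL, EOL
</code></pre>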
python|pandas
3
5,593
51,382,253
Pandas - InvalidIndexError: Reindexing only valid with uniquely valued Index objects
<p>I have 2 dataframes, with one that contains company data and the other that has some employee count details as shown below:</p> <p><strong>df1:</strong></p> <pre><code>cust_id,name,count
1,abc,
2,def,
</code></pre> <p><strong>df2:</strong></p> <pre><code>account,count
abc,4
klm,1
</code></pre> <p><strong>I am trying to generate the below output (Expected output)</strong>:</p> <pre><code>cust_id,name,count
1,abc,4
</code></pre> <p><strong>Given below is what I have thus far</strong>:</p> <pre><code>df2_updated = df2.reset_index()
df1['count'] = df1['cust_id'].map(df2_updated.set_index('name')['count'])
</code></pre> <p>On running the above I keep getting the below error:</p> <pre><code>InvalidIndexError: Reindexing only valid with uniquely valued Index objects
</code></pre> <p>On running the below I find no duplicates. Could anyone assist?</p> <pre><code>df2_updated.index.get_duplicates()
</code></pre>
<p>This should work: </p> <pre><code>result = pd.merge(df1, df2, left_on='name', right_on='account', how='left', sort=False) </code></pre>
python|python-3.x|pandas|duplicates
0
5,594
51,256,020
How to save data from `console.log` to MySQL database?
<pre><code>var imageScaleFactor = 0.5;
var outputStride = 16;
var flipHorizontal = false;
var imageElement = document.getElementById('video');

posenet.load().then(function(net){
  return net.estimateSinglePose(imageElement, imageScaleFactor, flipHorizontal, outputStride)
}).then(function(pose){
  console.log(pose);
})
</code></pre> <p>How do I save the data printed by <code>console.log</code> (the pose object above) to a MySQL database? I use PHP and JavaScript. What should I do?</p>
<p>You could create a PHP handler and send the <code>console.log()</code> JS dump to it via an AJAX POST. That's the only way I can think of to bridge the gap.</p> <p>In your JavaScript you could override the built-in <code>console.log</code> function to accomplish this:</p> <pre><code>console.log = function(value){
    ...
    // do your ajax post here, sending value to your php script that writes it to the db
    ...
};
</code></pre>
javascript|php|mysql|tensorflow
-2
5,595
48,316,149
Replace row value by comparing dates
<p>I have a date in a list:</p> <pre><code>[datetime.date(2017, 8, 9)]
</code></pre> <p>I want to replace the value of a dataframe matching that date with zero.</p> <p>Dataframe:</p> <pre><code>          Date  Amplitude    Magnitude  Peaks  Crests
0   2017-06-21   6.953356  1046.656154      4       3
1   2017-06-27   7.015520  1185.221306      5       4
2   2017-06-28   6.947471   908.115055      2       2
3   2017-06-29   6.921587   938.175153      3       3
4   2017-07-02   6.906078   938.273547      3       2
5   2017-07-03   6.898809   955.718452      6       5
6   2017-07-04   6.876283   846.514852      5       5
7   2017-07-26   6.862897   870.610086      6       5
8   2017-07-27   6.846426   824.403786      7       7
9   2017-07-28   6.831949   813.753420      7       7
10  2017-07-29   6.823125   841.245427      4       3
11  2017-07-30   6.816301   846.603427      5       4
12  2017-07-31   6.810133   842.287006      5       4
13  2017-08-01   6.800645   794.167590      3       3
14  2017-08-02   6.793034   801.505774      4       3
15  2017-08-03   6.790814   860.497395      7       6
16  2017-08-04   6.785664   815.055002      4       4
17  2017-08-05   6.782069   829.607640      5       4
18  2017-08-06   6.778176   819.014799      4       3
19  2017-08-07   6.774587   817.624203      5       5
20  2017-08-08   6.771193   815.101641      4       3
21  2017-08-09   6.765695   772.970000      1       1
22  2017-08-10   6.769422   945.207554      1       1
23  2017-08-11   6.773154   952.422598      4       3
24  2017-08-12   6.770926   826.700122      4       4
25  2017-08-13   6.772816   916.046905      5       5
26  2017-08-14   6.771130   834.881662      5       5
27  2017-08-15   6.769183   826.009391      5       5
28  2017-08-16   6.767313   824.650882      5       4
29  2017-08-17   6.765894   832.752100      5       5
30  2017-08-18   6.766861   894.165751      5       5
31  2017-08-19   6.768392   912.200274      4       3
</code></pre> <p>I have tried this:</p> <pre><code>for x in range(len(all_details)):
    for y in selected_day:
        m = all_details['Date'] &gt; y
        all_details.loc[m, 'Peaks'] = 0
</code></pre> <p>But I am getting an error:</p> <blockquote> <p>ValueError: Arrays were different lengths: 32 vs 1</p> </blockquote> <p>Can anybody suggest the correct way to do it? Any help would be appreciated.</p>
<p>First, your solution works fine with your sample data.</p> <p>Another, faster solution is to create each mask in a loop and then reduce them with logical <code>or</code> (or <code>and</code>, as needed). It is explained in more detail <a href="https://stackoverflow.com/q/20528328/2901002">here</a>.</p> <pre><code>L = [datetime.date(2017, 8, 9)]
m = np.logical_or.reduce([all_details['Date'] &gt; x for x in L])
all_details.loc[m, 'Peaks'] = 0
</code></pre> <p>In your solution it is better to compare only against the minimal date from the <code>list</code>:</p> <pre><code>all_details.loc[all_details['Date'] &gt; min(L), 'Peaks'] = 0
</code></pre>
python|pandas|datetime|dataframe
1
5,596
48,285,891
For loops with Dask arrays and/or h5py
<p>I have a time series with over a hundred million rows of data. I am trying to reshape it to include a time window. My sample data is of shape (79499, 9) and I am trying to reshape it to (79979, 10, 9). The following for loop works fine in numpy. </p> <pre><code>def munge(data, backprop_window): result = [] for index in range(len(data) - backprop_window): result.append(data[index: index + backprop_window]) return np.array(result) X_train = munge(X_train, backprop_window) </code></pre> <p>I have tried a few variations with dask, but all of them seem to hang without giving any error messages, including this one:</p> <pre><code>import h5py import dask.array as da f1 = h5py.File("data.hdf5") X_train = f1.create_dataset('X_train',data = X_train, dtype='float32') x = da.from_array(X_train, chunks=(10000, d.shape[1])) result = x.compute(munge(x, backprop_window)) </code></pre> <p>Any wise thoughts appreciated. </p>
<p>This doesn't necessarily solve your dask issue, but as a much faster alternative to <code>munge</code>, you could instead use numpy's <code>stride_tricks</code> to create a rolling view into your data (based on example <a href="http://arogozhnikov.github.io/2015/09/30/NumpyTipsAndTricks2.html#Rolling-window,--strided-tricks" rel="nofollow noreferrer">here</a>). </p> <pre><code>def munge_strides(data, backprop_window): """ take a rolling view into array by manipulating strides """ from numpy.lib.stride_tricks import as_strided new_shape = (data.shape[0] - backprop_window, backprop_window, data.shape[1]) new_strides = (data.strides[0], data.strides[0], data.strides[1]) return as_strided(data, shape=new_shape, strides=new_strides) X_train = np.arange(100).reshape(20, 5) np.array_equal(munge(X_train, backprop_window=3), munge_strides(X_train, backprop_window=3)) Out[112]: True </code></pre> <p><code>as_strided</code> needs to be used very carefully - it is an 'advanced' feature and incorrect parameters can easily lead you into segfaults - see <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.lib.stride_tricks.as_strided.html" rel="nofollow noreferrer">docstring</a></p>
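<p>On NumPy 1.20 or newer there is also a built-in, bounds-checked equivalent (a sketch; the final slice drops one window to match <code>munge</code>, whose loop stops at <code>len(data) - backprop_window</code>):</p> <pre><code>from numpy.lib.stride_tricks import sliding_window_view

windows = sliding_window_view(X_train, (backprop_window, X_train.shape[1]))[:-1, 0]
</code></pre>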
python|numpy|dask|h5py
2
5,597
48,160,197
Adding a column from the original data frame to a groupby data frame?
<p>I have a data frame df1 with data that looks like this: </p> <pre><code> Item Store Sales Dept 0 1 1 5 A 1 1 2 3 A 2 1 3 4 A 3 2 1 3 A 4 2 2 3 A </code></pre> <p>I then want to use group by to see the total sales by item: </p> <pre><code>df2 = df1.groupby(['Item']).agg({'Item':'first','Sales':'sum'}) </code></pre> <p>Which gives me: </p> <pre><code> Item Sales 0 1 12 1 2 6 </code></pre> <p>And then I add a column with the rank of the item in terms of number of sales: </p> <pre><code> df2['Item Rank'] = df2['Sales'].rank(ascending=False,method='min').astype(int) </code></pre> <p>So that I get: </p> <pre><code> Item Sales Item Rank 0 1 12 1 1 2 6 2 </code></pre> <p>I now want to add the Dept column to df2, so that I have </p> <pre><code> Item Sales Item Rank Dept 0 1 12 1 A 1 2 6 2 A </code></pre> <p>But everything I have tried has failed. I either get an empty column, when I try to add the column in from the beginning, or a df with the wrong size if I try to concatenate the new df with the column from the original df. </p>
<pre><code># the rank must be computed on the aggregated Sales, hence the lambda inside assign
df.groupby('Item').agg({'Item': 'first', 'Sales': 'sum', 'Dept': 'first'})\
  .assign(Itemrank=lambda d: d.Sales.rank(ascending=False, method='min').astype(int))

Out[64]: 
      Item Dept  Sales  Itemrank
Item                            
1        1    A     12         1
2        2    A      6         2
</code></pre>
python|pandas|pandas-groupby
2
5,598
48,449,087
Imputing means using pivot table in pandas?
<p>I am working on the Titanic data set right now, which as some missing values in the "Age" feature. I have the following Pivot Table created by pandas using the non-missing data:</p> <pre><code>+---------+-----------+-----------+ | Pclass | Sex | Age | +---------+-----------+-----------+ | 1 | female | 34.240964 | | | male | 41.281386 | | 2 | female | 28.722973 | | | male | 30.740707 | | 3 | female | 21.750000 | +---------+-----------+-----------+ </code></pre> <p>I would now like to impute the missing values in the 'Age' feature with the corresponding values in the pivot table but since I am new to Pivot tables I have no clue how to properly hint pandas which values to use for imputing </p> <p>My first thought would be to do an if elif else approach but that seems rather unidiomatic... </p> <p>any hints or pointers on where to go?</p>
<p>Using <code>fillna</code>, aligning on the <code>(Pclass, Sex)</code> MultiIndex shared with the pivot table (<code>piv</code>):</p> <pre><code># each missing Age picks up its group mean from piv via index alignment
Originaldf = Originaldf.set_index(['Pclass','Sex']).Age.fillna(piv.Age).reset_index()
</code></pre>
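<p>Here <code>piv</code> is assumed to be the pivot table from the question, e.g. built as:</p> <pre><code>piv = Originaldf.groupby(['Pclass', 'Sex'])[['Age']].mean()  # mean Age of the non-missing rows
</code></pre>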
python|pandas
2
5,599
48,865,554
using dynamic_rnn with multiRNN gives error
<p>I want to create a dynamic_rnn using TensorFlow in Python with multiple LSTM cells. From searches on the internet I have found this code:</p> <pre><code>import tensorflow as tf

batch_size = 30
truncated_series_length = 4
num_layers = 3
state_size = 300

x_input = tf.placeholder(tf.float32, [batch_size, truncated_series_length, 1])

cell = tf.nn.rnn_cell.LSTMCell(state_size, state_is_tuple=True)
cell = tf.nn.rnn_cell.MultiRNNCell([cell]*num_layers, state_is_tuple=True)
outputs, state = tf.nn.dynamic_rnn(cell, x_input, dtype=tf.float32)
</code></pre> <p>but when I run this code an error happens:</p> <blockquote> <p>ValueError: Dimensions must be equal, but are 600 and 301 for 'rnn/while/rnn/multi_rnn_cell/cell_0/cell_0/lstm_cell/MatMul_1' (op: 'MatMul') with input shapes: [30,600], [301,1200].</p> </blockquote> <p>But when I set num_layers = 1, there isn't any error. Do you have any idea where this error comes from?</p>
<p>I think the problem is roughly addressed here: <a href="https://github.com/tensorflow/tensorflow/issues/16186" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/16186</a></p> <p>Basically, to elaborate on the answer from that issue: using <code>[cell]*num_layers</code> creates a list of <code>num_layers</code> references to the same cell instance, not separate instances. The correct behavior is that the LSTM cell at the first layer should have a weight shape like (dimensionality_of_feature + cell_state_size, 4*cell_state_size), while the LSTM cells at consecutive layers should have a weight shape like (2*cell_state_size, 4*cell_state_size), because they no longer take the original input but the input from the previous layer.</p> <p>The problem in the shared reference code is that both layers use the same kernel, because "they" are actually the same cell instance. Therefore, even if you have more than one layer, all layers except the first one will have the wrong weight shape (always referring to the weight of the first layer).</p> <p>Better code, as indicated in that issue, uses a list comprehension or a for loop to create multiple separate instances and avoid shared references; see the sketch below.</p>
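<p>A minimal sketch of that fix, reusing the variable names from the question:</p> <pre><code># one independent LSTMCell instance per layer, so each layer gets its own weights
cells = [tf.nn.rnn_cell.LSTMCell(state_size, state_is_tuple=True)
         for _ in range(num_layers)]
cell = tf.nn.rnn_cell.MultiRNNCell(cells, state_is_tuple=True)
outputs, state = tf.nn.dynamic_rnn(cell, x_input, dtype=tf.float32)
</code></pre>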
python|tensorflow|lstm|rnn
2