Unnamed: 0
int64
0
378k
id
int64
49.9k
73.8M
title
stringlengths
15
150
question
stringlengths
37
64.2k
answer
stringlengths
37
44.1k
tags
stringlengths
5
106
score
int64
-10
5.87k
3,900
66,047,311
Slow numpy and pandas imports on Google Cloud Run
<p>I'm developing an API and deploying it on Google Cloud Run.</p> <p>There is a prestart Python script that imports pandas and numpy. When I time the imports, numpy takes about 2 seconds and pandas about 4 seconds on Cloud Run, as opposed to less than 0.5 seconds on my local machine.</p> <p>I'm using <code>python:3.8-alpine</code> as my base image in order to build my docker container. (Though I have tried several non-Alpine images... )</p> <p>Here is the Dockerfile:</p> <pre><code>FROM python:3.8-alpine COPY requirements.txt ./ RUN apk add --no-cache --virtual build-deps g++ gcc gfortran make libffi-dev openssl-dev file build-base \ &amp;&amp; apk add --no-cache libstdc++ openblas-dev lapack-dev \ &amp;&amp; pip install --no-cache-dir uvicorn gunicorn fastapi \ &amp;&amp; CFLAGS=&quot;-g0 -Wl,--strip-all -I/usr/include:/usr/local/include -L/usr/lib:/usr/local/lib&quot; \ &amp;&amp; pip install --no-cache-dir --compile --global-option=build_ext --global-option=&quot;-j 16&quot; -r requirements.txt \ &amp;&amp; rm -r /root/.cache \ &amp;&amp; find /usr/local/lib/python3.*/ -name 'tests' -exec rm -r '{}' + \ &amp;&amp; find /usr/local/lib/python3.*/site-packages/ \( -type d -a -name test -o -name tests \) -o \( -type f -a -name '*.pyc' -o -name '*.pyo' \) -exec rm -r '{}' + \ &amp;&amp; find /usr/local/lib/python3.*/site-packages/ -name '*.so' -print -exec /bin/sh -c 'file &quot;{}&quot; | grep -q &quot;not stripped&quot; &amp;&amp; strip -s &quot;{}&quot;' \; \ &amp;&amp; find /usr/lib/ -name '*.so' -print -exec /bin/sh -c 'file &quot;{}&quot; | grep -q &quot;not stripped&quot; &amp;&amp; strip -s &quot;{}&quot;' \; \ &amp;&amp; find /usr/local/lib/ -name '*.so' -print -exec /bin/sh -c 'file &quot;{}&quot; | grep -q &quot;not stripped&quot; &amp;&amp; strip -s &quot;{}&quot;' \; \ &amp;&amp; rm -rf /usr/local/lib/python*/ensurepip \ &amp;&amp; rm -rf /usr/local/lib/python*/idlelib \ &amp;&amp; rm -rf /usr/local/lib/python*/distutils/command \ &amp;&amp; rm -rf /usr/local/lib/python*/lib2to2 \ &amp;&amp; rm -rf /usr/local/lib/python*/__pycache__/* \ &amp;&amp; rm -r /requirements.txt /databases.zip \ &amp;&amp; rm -rf /tmp/* \ &amp;&amp; rm -rf /var/cache/apk/* \ &amp;&amp; apk del build-deps g++ gcc make libffi-dev openssl-dev file build-base CMD [&quot;python&quot;,&quot;script.py&quot;] </code></pre> <p>requirements.txt:</p> <pre><code>numpy==1.2.0 pandas==1.2.1 </code></pre> <p>and the execution Python file script.py:</p> <pre><code>import time ts = time.time() import pandas te = time.time() print(te-ts) </code></pre> <p>Are these slow imports to be expected? Or is there perhaps some Python import trick?</p> <p>I have been looking all over Stack Overflow and GitHub issues but found nothing similar to this &quot;issue&quot;/&quot;behavior&quot;.</p> <p>Thanks in advance.</p>
<p>This is a known issue in the Python ecosystem.</p> <blockquote> <p>all modules are imported at runtime, and some modules are 300-500MB large in size</p> </blockquote> <p>There are tons of complaints about slow import times. The best thread is this one: <a href="https://stackoverflow.com/questions/16373510/improving-speed-of-python-module-import">improving speed of Python module import</a></p> <p>Regarding Cloud Run, I have experimented with various approaches, and nothing was able to reduce the slowness drastically.</p> <p><strong>If you want to use this in a serverless environment</strong>, or in another cold-start ecosystem,<br /> be aware <strong>the cold start can be on the order of 10 seconds</strong> because of the &quot;imports&quot;.</p> <pre><code> importing pandas took 1.42 seconds importing numpy took 1.90 seconds importing torch took 2.84 seconds importing torchvision took 0.78 seconds importing IPython took 1.22 seconds importing sklearn took 1.51 seconds importing dask took 0.74 seconds </code></pre> <p><strong>Try 1:</strong></p> <p>No improvement from bumping the CPU up to the maximum possible.</p> <p><strong>Try 2:</strong></p> <p>No speed improvement from rewriting imports as:</p> <pre><code>pd = imp.load_module(&quot;pandas&quot;,None,&quot;/usr/local/lib/python3.10/site-packages/pandas&quot;,('','',5)) </code></pre> <p>This way the interpreter skips the &quot;finding&quot; phase, but the timing is still the same, so there is no speed improvement here.</p> <p><strong>Try 3:</strong></p> <p>No benefit from installing requirements with <code>--compile</code>:</p> <pre><code>RUN python -m pip install --no-cache-dir --compile -r requirements-prod.txt RUN python -m compileall . </code></pre> <p>I even explored the container and the <code>__pycache__</code> was built for all the modules and the app code as well, but no improvement in the cold start time.</p> <p><strong>Summary:</strong></p> <p>A good read about a <a href="https://scientific-python.org/specs/spec-0001/" rel="nofollow noreferrer">lazy load proposal is here</a>.</p>
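<p>One mitigation worth trying (it does not remove the import cost, it only moves it off the startup path) is to defer the heavy imports until the first request that actually needs them. A minimal sketch, assuming a FastAPI app like the one in the question; the endpoint and helper names are illustrative only:</p> <pre><code>from fastapi import FastAPI

app = FastAPI()
_pandas = None  # populated lazily on first use

def get_pandas():
    # Import pandas only when a request actually needs it,
    # so the container can start listening before paying the import cost.
    global _pandas
    if _pandas is None:
        import pandas as pd
        _pandas = pd
    return _pandas

@app.get("/stats")
def stats():
    pd = get_pandas()
    return {"pandas_version": pd.__version__}
</code></pre> <p>The latency is still paid, but on the first request that needs pandas rather than at container boot, which can help when some endpoints (for example a health check) do not need it at all.</p>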
python-3.x|pandas|docker|google-cloud-platform|alpine-linux
2
3,901
52,669,198
Mapping two arrays containing strings to the same integer values
<p>I have 2 arrays like:</p> <pre><code>['16.37.235.200','17.37.235.200','16.37.235.200', '18.37.235.200'] ['17.37.235.200','17.37.235.200','16.37.235.200', '17.37.235.200'] </code></pre> <p>And I want to map (injectively) every IP address to an integer value.<br> For the instance above, e.g.:</p> <pre><code>[0,1,0,3] [1,1,0,1] </code></pre> <p>Is there an existing function (in NumPy or anything else) for that?</p>
<p>OK, I found this solution for separate mapping of the lists: <a href="https://stackoverflow.com/questions/9206609/python-map-list-of-strings-to-integer-list">Python Map List of Strings to Integer List</a></p> <p>It works like I want for separate mapping of the 2 lists.</p>
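<p>If the two lists should instead share a single mapping (the same IP always getting the same integer in both arrays), one way is to build the codes over the concatenation with numpy; a small sketch, not part of the linked answer:</p> <pre><code>import numpy as np

a = np.array(['16.37.235.200', '17.37.235.200', '16.37.235.200', '18.37.235.200'])
b = np.array(['17.37.235.200', '17.37.235.200', '16.37.235.200', '17.37.235.200'])

# Factorize both arrays against one shared vocabulary
uniques, codes = np.unique(np.concatenate([a, b]), return_inverse=True)
codes_a, codes_b = codes[:len(a)], codes[len(a):]
# codes_a is [0 1 0 2] and codes_b is [1 1 0 1]
</code></pre>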
python|arrays|string|numpy|integer
0
3,902
52,897,402
How to iterate through two numpy arrays of different dimensions
<p>I am working with the MNIST dataset, <code>x_test</code> has a dimension of (10000,784) and <code>y_test</code> has a dimension of (10000,10). I need to iterate through each sample of these two numpy arrays at the same time, as I need to pass them individually to <code>score.evaluate()</code>.</p> <p>I tried <code>nditer</code>, but it throws an error saying operands could not be broadcast together since they have different shapes.</p> <pre><code> score=[] for x_sample, y_sample in np.nditer ([x_test,y_test]): a=x_sample.reshape(784,1) a=np.transpose(a) b=y_sample.reshape(10,1) b=np.transpose(b) s=model.evaluate(a,b,verbose=0) score.append(s) </code></pre>
<p><em>Assuming</em> that what you are actually trying to do here is to get the individual loss per sample in your test set, here is a way to do it (in your approach, even if you get past the iteration part, you will have issues with <code>model.evaluate</code>, which was not designed for <em>single</em> sample pairs)...</p> <p>To make the example reproducible, here I also assume we have first run the <a href="https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py" rel="nofollow noreferrer">Keras MNIST CNN example</a> for only 2 epochs; so, the shape of our data is:</p> <pre><code>x_test.shape # (10000, 28, 28, 1) y_test.shape # (10000, 10) </code></pre> <p>Given that, here is a way to get the individual loss per sample:</p> <pre><code>from keras import backend as K y_pred = model.predict(x_test) y_test = y_test.astype('float32') # necessary, as y_pred.dtype is 'float32' y_test_tensor = K.constant(y_test) y_pred_tensor = K.constant(y_pred) g = K.categorical_crossentropy(target=y_test_tensor, output=y_pred_tensor) ce = K.eval(g) # 'ce' for cross-entropy ce # array([1.1563368e-05, 2.0206178e-05, 5.4946734e-04, ..., 1.7662416e-04, # 2.4232995e-03, 1.8954457e-05], dtype=float32) ce.shape # (10000,) </code></pre> <p>i.e. <code>ce</code> now contains what the <code>score</code> list in your question was supposed to contain.</p> <p>For <strong>confirmation</strong>, let's calculate the loss for all test samples using <code>model.evaluate</code>:</p> <pre><code>score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) # Test loss: 0.050856544668227435 </code></pre> <p>and again manually, averaging the values of the <code>ce</code> we have just calculated:</p> <pre><code>import numpy as np log_loss = np.sum(ce)/ce.shape[0] log_loss # 0.05085654296875 </code></pre> <p>which, although not exactly equal (due to different numeric precision involved in the two ways of calculation), they are <em>practically</em> equal indeed:</p> <pre><code>log_loss == score[0] # False np.isclose(log_loss, score[0]) # True </code></pre> <p>Now, the adaptation of this to your own case, where the shape of <code>x_test</code> is <code>(10000, 784)</code>, is arguably straighforward...</p>
arrays|numpy|keras
1
3,903
52,597,757
Outer product with Numpy and Ndarray
<p>In one of my codes, I use numpy for matrix calculations.</p> <p>At one point, I have to do the outer product between 2 vectors to get a matrix. That's where I'm stuck. At first, I tried numpy.dot, or other matrix products, but when the arguments are both 1D, it only does the scalar product, which is not what I want. Then I found that numpy.outer does exactly what I want: a column * a line.</p> <p>The thing is, my vectors are not arrays. Since they result from a numpy.dot operation, they are ndarray objects. But ndarrays do not have an outer method. I have tried everything I found on the Internet to convert my ndarrays to simple arrays. But nothing works, I still have an ndarray and the same attribute error again and again.</p> <p>Now I don't know what to try, so I wanted to check if you knew another way to do this outer product, before I do some nasty things involving cloning the values into an array.</p> <p>Thank you very much for your help.</p>
<p><code>outer</code> is not a method of any class, it is just a plain old function found in the <code>numpy</code> module.</p> <p>Here is an example of how to use it:</p> <pre><code>import numpy x = numpy.array([1, 2, 3]) y = numpy.array([4, 5, 6]) # x.__class__ and y.__class__ are both 'numpy.ndarray' outer_product = numpy.outer(x, y) # outer_product has the value: # array([[ 4, 5, 6], # [ 8, 10, 12], # [12, 15, 18]]) </code></pre>
python|numpy|multidimensional-array
3
3,904
52,525,361
Convert a list of vectors to DataFrame in PySpark
<p>Firstly, I have load the data by:</p> <pre><code>import urllib.request f = urllib.request.urlretrieve("https://www.dropbox.com/s/qz62t2oyllkl32s/kddcup.data_10_percent.gz?dl=1", "kddcup.data_10_percent.gz") data_file = "./kddcup.data_10_percent.gz" raw_data = sc.textFile(data_file) </code></pre> <p>Then, I created a list of required data by:</p> <pre><code>import numpy as np import pandas as pd def parse_interaction(line): line_split = line.split(",") # keep just numeric and logical values symbolic_indexes = [1,2,3,41] # in the above sample would be: tcp,http,SF,normal clean_line_split = [item for i,item in enumerate(line_split) if i not in symbolic_indexes] return np.array([x for x in clean_line_split], dtype=float) vector_data = raw_data.map(parse_interaction) </code></pre> <p>Now, I can see data by <code>vector_data.take(2)</code>:</p> <pre><code>[array([0.00e+00, 1.81e+02, 5.45e+03, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 1.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 8.00e+00, 8.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 1.00e+00, 0.00e+00, 0.00e+00, 9.00e+00, 9.00e+00, 1.00e+00, 0.00e+00, 1.10e-01, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00]), array([0.00e+00, 2.39e+02, 4.86e+02, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 1.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 8.00e+00, 8.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 1.00e+00, 0.00e+00, 0.00e+00, 1.90e+01, 1.90e+01, 1.00e+00, 0.00e+00, 5.00e-02, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00, 0.00e+00])] </code></pre> <p>I want to convert it into DataFrame with vector_data = <code>pd.DataFrame(vector_data)</code>, but the commands are not working and I am getting error, as:</p> <pre><code>ValueError Traceback (most recent call last) &lt;ipython-input-112-6a2dcc5bdb85&gt; in &lt;module&gt;() 10 11 vector_data = raw_data.map(parse_interaction) ---&gt; 12 vector_data = pd.DataFrame(vector_data) ~/anaconda3/lib/python3.6/site-packages/pandas/core/frame.py in __init__(self, data, index, columns, dtype, copy) 420 dtype=values.dtype, copy=False) 421 else: --&gt; 422 raise ValueError('DataFrame constructor not properly called!') 423 424 NDFrame.__init__(self, mgr, fastpath=True) ValueError: DataFrame constructor not properly called! </code></pre> <p>I know that the input vector is in special format and I need to add something into DataFrame command to work properly. Please, guide me on that how can I make a DataFrame on that. </p>
<p>You can use <code>from_records()</code>:</p> <pre><code>vector_data = [np.array(...), np.array(...)] pd.DataFrame.from_records(vector_data) </code></pre>
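<p>Note that <code>vector_data</code> in the question is a Spark RDD, not a Python list, so it has to be collected to the driver first; a minimal sketch (assuming the data fits in driver memory):</p> <pre><code>import pandas as pd

# Bring the RDD of numpy arrays back to the driver as a list of rows
rows = vector_data.collect()
df = pd.DataFrame.from_records(rows)
print(df.shape)  # one row per interaction, 38 numeric columns for this dataset
</code></pre> <p>Alternatively, <code>spark.createDataFrame</code> could build a Spark DataFrame from the RDD instead, if staying distributed is preferred.</p>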
python|pandas|pyspark|jupyter-notebook
0
3,905
58,254,885
Error in build while using keras custom layer
<p>I am trying to train an unsupervised classification model for which i am using deep clustering with my model on Keras. The code I am referring for clustering is <a href="https://github.com/XifengGuo/DCEC/blob/master/DCEC.py" rel="nofollow noreferrer">this</a>. While running the code i am getting an error in the cutom layer while adding weights. Below you can see the Code and the error. </p> <pre><code>import metrics import numpy as np from tensorflow.keras.layers import Layer, InputSpec import tensorflow.keras.backend as K from tensorflow.keras.models import Model from sklearn.cluster import KMeans class ClusteringLayer(Layer): """ Clustering layer converts input sample (feature) to soft label, i.e. a vector that represents the probability of the sample belonging to each cluster. The probability is calculated with student's t-distribution. # Example ``` model.add(ClusteringLayer(n_clusters=10)) ``` # Arguments n_clusters: number of clusters. weights: list of Numpy array with shape `(n_clusters, n_features)` witch represents the initial cluster centers. alpha: parameter in Student's t-distribution. Default to 1.0. # Input shape 2D tensor with shape: `(n_samples, n_features)`. # Output shape 2D tensor with shape: `(n_samples, n_clusters)`. """ def __init__(self, n_clusters, weights=None, alpha=1.0, **kwargs): if 'input_shape' not in kwargs and 'input_dim' in kwargs: kwargs['input_shape'] = (kwargs.pop('input_dim'),) super(ClusteringLayer, self).__init__(**kwargs) self.n_clusters = n_clusters self.alpha = alpha self.initial_weights = weights self.input_spec = InputSpec(ndim=2) def build(self, input_shape): assert len(input_shape) == 2 input_dim = input_shape[1] self.input_spec = InputSpec(dtype=K.floatx(), shape=(None, input_dim)) self.clusters = self.add_weight(shape=(self.n_clusters, input_dim), initializer='glorot_uniform', name='clusters') if self.initial_weights is not None: self.set_weights(self.initial_weights) del self.initial_weights self.built = True def call(self, inputs, **kwargs): """ student t-distribution, as same as used in t-SNE algorithm. q_ij = 1/(1+dist(x_i, u_j)^2), then normalize it. Arguments: inputs: the variable containing data, shape=(n_samples, n_features) Return: q: student's t-distribution, or soft labels for each sample. 
shape=(n_samples, n_clusters) """ q = 1.0 / (1.0 + (K.sum(K.square(K.expand_dims(inputs, axis=1) - self.clusters), axis=2) / self.alpha)) q **= (self.alpha + 1.0) / 2.0 q = K.transpose(K.transpose(q) / K.sum(q, axis=1)) return q def compute_output_shape(self, input_shape): assert input_shape and len(input_shape) == 2 return input_shape[0], self.n_clusters def get_config(self): config = {'n_clusters': self.n_clusters} base_config = super(ClusteringLayer, self).get_config() return dict(list(base_config.items()) + list(config.items())) class Inf: def __init__(self, D1, D2, n_clusters): from tensorflow.keras.models import model_from_json self.n_clusters = n_clusters json_file = open(D1, 'r') loaded_model_json = json_file.read() json_file.close() loaded_model = model_from_json(loaded_model_json) # load weights into new model loaded_model.load_weights(D2) print("Loaded model from disk") loaded_model.summary() self.model = loaded_model def create_model(self): hidden = self.model.get_layer(name='encoded').output self.encoder = Model(inputs = self.model.input, outputs = hidden) clustering_layer = ClusteringLayer(n_clusters=self.n_clusters)(hidden) self.model = Model(inputs = self.model.input, outputs = clustering_layer) self.model = model def compile(self, loss='kld', optimizer='adam'): self.model.compile(loss=loss, optimizer=optimizer) def fit(self, x, y=None, batch_size=16, maxiter=2e4, tol=1e-3, update_interval=140, save_dir='./results/temp'): print('Update interval', update_interval) save_interval = x.shape[0] / batch_size * 5 print('Save interval', save_interval) print('Initializing cluster centers with k-means.') kmeans = KMeans(n_clusters=self.n_clusters, n_init=20) self.y_pred = kmeans.fit_predict(self.encoder.predict(x)) y_pred_last = np.copy(self.y_pred) self.model.get_layer(name='clustering').set_weights([kmeans.cluster_centers_]) # Step : deep clustering # logging file import csv, os if not os.path.exists(save_dir): os.makedirs(save_dir) logfile = open(save_dir + '/dcec_log.csv', 'w') logwriter = csv.DictWriter(logfile, fieldnames=['iter', 'acc', 'nmi', 'ari', 'L', 'Lc', 'Lr']) logwriter.writeheader() loss = [0, 0, 0] index = 0 for ite in range(int(maxiter)): if ite % update_interval == 0: q, _ = self.model.predict(x, verbose=0) p = self.target_distribution(q) # update the auxiliary target distribution p # evaluate the clustering performance self.y_pred = q.argmax(1) if y is not None: acc = np.round(metrics.acc(y, self.y_pred), 5) nmi = np.round(metrics.nmi(y, self.y_pred), 5) ari = np.round(metrics.ari(y, self.y_pred), 5) loss = np.round(loss, 5) logdict = dict(iter=ite, acc=acc, nmi=nmi, ari=ari, L=loss[0], Lc=loss[1], Lr=loss[2]) logwriter.writerow(logdict) print('Iter', ite, ': Acc', acc, ', nmi', nmi, ', ari', ari, '; loss=', loss) # check stop criterion delta_label = np.sum(self.y_pred != y_pred_last).astype(np.float32) / self.y_pred.shape[0] y_pred_last = np.copy(self.y_pred) if ite &gt; 0 and delta_label &lt; tol: print('delta_label ', delta_label, '&lt; tol ', tol) print('Reached tolerance threshold. 
Stopping training.') logfile.close() break # train on batch if (index + 1) * batch_size &gt; x.shape[0]: loss = self.model.train_on_batch(x=x[index * batch_size::], y=[p[index * batch_size::], x[index * batch_size::]]) index = 0 else: loss = self.model.train_on_batch(x=x[index * batch_size:(index + 1) * batch_size], y=[p[index * batch_size:(index + 1) * batch_size], x[index * batch_size:(index + 1) * batch_size]]) index += 1 # save intermediate model if ite % save_interval == 0: # save DCEC model checkpoints print('saving model to:', save_dir + '/dcec_model_' + str(ite) + '.h5') self.model.save_weights(save_dir + '/dcec_model_' + str(ite) + '.h5') ite += 1 # save the trained model logfile.close() print('saving model to:', save_dir + '/dcec_model_final.h5') self.model.save_weights(save_dir + '/dcec_model_final.h5') </code></pre> <p>My Output layer is a dense layer with output dimension(?,128). I am getting a following error in the clustering layer.</p> <pre><code> File "C:/Users/u/Desktop/trained/inference.py", line 45, in build self.clusters = self.add_weight(shape=(self.n_clusters, input_dim), initializer='glorot_uniform', name='clusters') File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 384, in add_weight aggregation=aggregation) File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\training\tracking\base.py", line 663, in _add_variable_with_custom_getter **kwargs_for_getter) File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\keras\engine\base_layer_utils.py", line 155, in make_variable shape=variable_shape if variable_shape.rank else None) File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\ops\variables.py", line 259, in __call__ return cls._variable_v1_call(*args, **kwargs) File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\ops\variables.py", line 220, in _variable_v1_call shape=shape) File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\ops\variables.py", line 198, in &lt;lambda&gt; previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs) File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 2495, in default_variable_creator shape=shape) File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\ops\variables.py", line 263, in __call__ return super(VariableMetaclass, cls).__call__(*args, **kwargs) File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 460, in __init__ shape=shape) File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 604, in _init_from_args initial_value() if init_from_fn else initial_value, File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\keras\engine\base_layer_utils.py", line 135, in &lt;lambda&gt; init_val = lambda: initializer(shape, dtype=dtype) File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\ops\init_ops.py", line 533, in __call__ shape, -limit, limit, dtype, seed=self.seed) File 
"C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\ops\random_ops.py", line 239, in random_uniform shape = _ShapeTensor(shape) File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\ops\random_ops.py", line 44, in _ShapeTensor return ops.convert_to_tensor(shape, dtype=dtype, name="shape") File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\framework\ops.py", line 1087, in convert_to_tensor return convert_to_tensor_v2(value, dtype, preferred_dtype, name) File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\framework\ops.py", line 1145, in convert_to_tensor_v2 as_ref=False) File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\framework\ops.py", line 1224, in internal_convert_to_tensor ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref) File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\framework\constant_op.py", line 305, in _constant_tensor_conversion_function return constant(v, dtype=dtype, name=name) File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\framework\constant_op.py", line 246, in constant allow_broadcast=True) File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\framework\constant_op.py", line 284, in _constant_impl allow_broadcast=allow_broadcast)) File "C:\Users\u\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 562, in make_tensor_proto "supported type." % (type(values), values)) TypeError: Failed to convert object of type &lt;class 'tuple'&gt; to Tensor. Contents: (17, Dimension(128)). Consider casting elements to a supported type. </code></pre> <p>I have used an autoencoder's encoder past as an input. Following is the encoder part of the autoencoder.</p> <pre><code>ip = Input(shape=(256,256,1)) x = Conv2D(16, (3,3), padding='same')(ip) x = BatchNormalization()(x) x = Activation('relu')(x) x = Dropout(0.2)(x) x = MaxPooling2D((2,2), padding='same')(x) x = Flatten()(x) x = Dense(128, name="encoded")(x) </code></pre>
<p>Replace</p> <pre><code>input_dim = input_shape[1] </code></pre> <p>with </p> <pre><code>input_dim = input_shape[1].value </code></pre> <p>in the <code>build()</code> method of <code>ClusteringLayer</code>, so that input_dim will be 128 instead of Dimension(128).</p>
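<p>For context, a sketch of the corrected <code>build()</code>, which is simply the question's method with that one line changed. This assumes the TF 1.x behaviour where <code>input_shape</code> holds <code>Dimension</code> objects; on TF 2.x the shape entries are already plain ints and the original line works as written:</p> <pre><code>def build(self, input_shape):
    assert len(input_shape) == 2
    input_dim = input_shape[1].value  # plain int instead of Dimension(128)
    self.input_spec = InputSpec(dtype=K.floatx(), shape=(None, input_dim))
    self.clusters = self.add_weight(shape=(self.n_clusters, input_dim),
                                    initializer='glorot_uniform', name='clusters')
    if self.initial_weights is not None:
        self.set_weights(self.initial_weights)
        del self.initial_weights
    self.built = True
</code></pre>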
python|tensorflow|keras|deep-learning|unsupervised-learning
2
3,906
44,393,358
How to compress numpy arrays before inserting into LMDB dataset?
<p>I have tensors of size [82,3,780,1024] (a merge of 82 different image frames) in uint8 format. LMDB goes wild in terms of size once I start to insert these. Is there any way to compress these tensors before inserting?</p> <p>For inserting, I follow the question <a href="https://stackoverflow.com/questions/44266384/write-numpy-arrays-to-lmdb">here</a>.</p> <p>I found a solution with <code>cv2.encode</code> and <code>cv2.decode</code>, but it is not applicable to such tensors AFAIK.</p>
<p>You could use one of the many fast in-memory compression algorithms. One very good option would be to use <a href="http://blosc.org/" rel="nofollow noreferrer">blosc</a> library, which itself allows you to use quite a few algorithms specialized (or performing well) in this scenarios.</p> <p>You can get the list of supported compression algorithms by invoking (in blosc version 1.4.4)</p> <pre><code>import blosc blosc.compressor_list() ['blosclz', 'lz4', 'lz4hc', 'snappy', 'zlib', 'zstd'] </code></pre> <p>and you can compress/decompress any binary data or string using the usual <code>blosc.compress(bytesobj, typesize=8, clevel=9, shuffle=1, cname='blosclz')</code> and <code>blosc.decompress(bytesobj)</code> methods.</p> <p>I typically use one of <code>blosc</code> variants if I need speed, and <code>bz2</code> library if I want very good compression ratios (but slower run time).</p>
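<p>For numpy arrays specifically, blosc also ships helpers that keep the dtype and shape together with the compressed bytes, which fits the LMDB use case nicely; a small sketch (API as of blosc 1.x, worth double-checking against the installed version):</p> <pre><code>import numpy as np
import blosc

# Stand-in for the [82, 3, 780, 1024] uint8 tensor from the question
frames = np.random.randint(0, 255, size=(82, 3, 780, 1024), dtype=np.uint8)

packed = blosc.pack_array(frames, cname='zstd', clevel=5, shuffle=blosc.SHUFFLE)
# 'packed' is a bytes object that can be written directly as the LMDB value

restored = blosc.unpack_array(packed)
assert restored.shape == frames.shape and restored.dtype == np.uint8
</code></pre>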
numpy|lmdb
1
3,907
61,179,299
Pandas transform: assign result to each element of group
<p>I am currently using pandas groupby and transform to calculate smth for each group (once) and then assign the result to each row of the group. If the result of calculations is scalar it can be obtained like:</p> <pre><code>df['some_col'] = df.groupby('id')['some_col'].transform(lambda x:process(x)) </code></pre> <p>The problem is that the result of my calculations is <strong>vector</strong>, and pd tries to make element-wise assignment of result vector to the group (quote from <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html" rel="nofollow noreferrer">pandas docs</a>):</p> <blockquote> <p>The transform function must: Return a result that is either the same size as the group chunk or broadcastable to the size of the group chunk (e.g., a scalar, grouped.transform(lambda x: x.iloc[-1])).</p> </blockquote> <p>I could hardcode external function, creating a group-sized list, that will contain copies of result (currently on python 3.6, so it's not possible to use assignment inside lambda):</p> <pre><code>def return_group(x): result = process(x) return [result for item in x] </code></pre> <p>But I think that it's possible to solve this somehow "smarter". Remember that it's <strong>necessary to make calculations only once</strong> for each group.</p> <p>Is it possible to force pd.transform work with array-like result of lambda function like with scalars (just copy it n-times)?</p> <p>Would be grateful for any advices.</p> <p>P. S. I understand, that it's possible to use combination of apply and join to solve the original requirement, but the solution with transform has more priority in my case.</p>
<p>Sometimes transform is a pain to work with. If that's not a problem for you, I'd suggest using <code>groupby</code> + a <code>left</code> <code>pd.merge</code>, as in this example:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.DataFrame({"id":[1,1,2,2,2], "col":[1,2,3,4,5]}) # this returns a list for every group grp = df.groupby("id")["col"]\ .apply(lambda x: list(x))\ .reset_index(name="out") # Then you merge it to the original df df = pd.merge(df, grp, how="left") </code></pre> <p>And <code>print(df)</code> returns:</p> <pre class="lang-sh prettyprint-override"><code> id col out 0 1 1 [1, 2] 1 1 2 [1, 2] 2 2 3 [3, 4, 5] 3 2 4 [3, 4, 5] 4 2 5 [3, 4, 5] </code></pre>
python|pandas|numpy|data-structures
0
3,908
61,159,317
How to count occurrence of each element in pandas series of lists?
<p>I'm a newbie and quite stuck with my python project. I have a pandas series containing lists, like this:</p> <pre><code>&gt;&gt; df.head() &gt;&gt; column1 ['A', 'B'] ['A'] ['A', 'C'] ['A', 'B', 'C'] ['B'] </code></pre> <p>The desired output should be like this:</p> <pre><code>&gt;&gt; column1 column2 'A' 4 'B' 3 'C' 2 </code></pre> <p>It doesn't matter whether <strong>column1</strong> is a string or a list with one element.</p> <p>I tried these:</p> <p><code>df.groupby('column1').count()</code></p> <p><code>df['column1'].value_counts()</code></p> <p>But both gave me:</p> <p><code>TypeError: unhashable type: 'list'</code></p> <p>Also tried:</p> <p><code>df.groupby('column1')</code></p> <p>But it does not display results.</p> <p>Tried solutions here (<a href="https://stackoverflow.com/questions/22691010/how-to-print-a-groupby-object">How to print a groupby object</a>) but none worked :(</p>
<p>Try: </p> <pre><code>df1['column1'].explode().value_counts() </code></pre> <p>or </p> <pre><code>df1.explode('column1').groupby('column1').size() </code></pre>
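<p>A quick check on the sample data from the question (the variable name is assumed), showing the expected counts:</p> <pre><code>import pandas as pd

df1 = pd.DataFrame({'column1': [['A', 'B'], ['A'], ['A', 'C'], ['A', 'B', 'C'], ['B']]})

counts = df1['column1'].explode().value_counts()
print(counts)
# A    4
# B    3
# C    2
</code></pre> <p>Note that <code>Series.explode</code> requires pandas 0.25 or newer.</p>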
python|pandas|list|series
2
3,909
61,013,644
take numeric value without label in line, regex
<p>Input:</p> <pre><code>df=pd.DataFrame({'text':['value 123* 333','122* 666','722 888*']}) print(df) text 0 value 123* 333 1 122* 666 2 722 888* </code></pre> <p>I need to extract only the numeric values from <code>df['text']</code>, but without the <code>*</code> label. My code:</p> <pre><code>df.text.str.extract(r'([0-9]+|[0-9]+\.[0-9]+)') </code></pre> <p>But with this code, values with the <code>*</code> char on the right are returned.</p> <p>Expected output:</p> <pre><code>text 333 666 722 </code></pre>
<p>You may use</p> <pre><code>df['text'].str.extract(r'(?=([0-9]+(?:\.[0-9]+)?))\1(?!\*)') </code></pre> <p>See the <a href="https://regex101.com/r/upKQeA/1" rel="nofollow noreferrer">regex demo</a>. Or, you may also require a word boundary on the left with <code>r'\b(?=([0-9]+(?:\.[0-9]+)?))\1(?!\*)'</code>. See <a href="https://regex101.com/r/upKQeA/2" rel="nofollow noreferrer">this regex demo</a>.</p> <p><strong>Regex details</strong></p> <ul> <li><code>(?=([0-9]+(?:\.[0-9]+)?))</code> - a positive lookahead that requires and captures into Group 1 the following sequence of patterns immediately on the right: <ul> <li><code>[0-9]+</code> - 1+ digits</li> <li><code>(?:\.[0-9]+)?</code> - an optional sequence of <code>.</code> and 1+ digits.</li> </ul></li> <li><code>\1</code> - the value of Group 1</li> <li><code>(?!\*)</code> - a negative lookahead that fails the match if, immediately to the right, there is a <code>*</code> char.</li> </ul> <p>See the Python test:</p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; df=pd.DataFrame({'text':['value 123* 333','122* 666','722 888*']}) &gt;&gt;&gt; df['text'].str.extract(r'(?=([0-9]+(?:\.[0-9]+)?))\1(?!\*)') 0 333 1 666 2 722 Name: text, dtype: object &gt;&gt;&gt; </code></pre>
python|regex|pandas
2
3,910
42,562,633
What is a 2D float tensor?
<p><strong>Disclamer:</strong> I know <strong>nothing</strong> about CNN and deep learning and I don't know <a href="http://torch.ch/" rel="nofollow noreferrer">Torch</a>.</p> <p>I'm using <a href="http://docs.opencv.org/3.1.0/da/df5/tutorial_py_sift_intro.html" rel="nofollow noreferrer">SIFT</a> for my object recognition application. I found this paper <a href="http://www.cv-foundation.org/openaccess/content_iccv_2015/html/Simo-Serra_Discriminative_Learning_of_ICCV_2015_paper.html" rel="nofollow noreferrer">Discriminative Learning of Deep Convolutional Feature Point Descriptors</a> which is particularly interesting because it's CNN based, which are more precise than classic image descripion methods (e.g. SIFT, SURF etc.), but (quoting the abstract):</p> <blockquote> <p>using the L2 distance during both training and testing we develop 128-D descriptors whose euclidean distances reflect patch similarity, and which can be used as a drop-in replacement for any task involving SIFT</p> </blockquote> <p>Wow, that's fantastic: that means that we can continue to use any SIFT based approach but with more precise descriptors! </p> <p>However, quoting the <a href="https://github.com/cvlab-epfl/deepdesc-release" rel="nofollow noreferrer">github code repository</a> README:</p> <blockquote> <p>Note the output will be a Nx128 2D float tensor where each row is a descriptor.</p> </blockquote> <p>Well, what is a "2D float tensor"? SIFT descriptors matrix is Nx128 floats, is there something that I am missing?</p>
<p>2D float tensor = 2D float matrix.</p> <p>FYI: <a href="https://stats.stackexchange.com/q/181556/12359">The meaning of tensors in the neural network community</a></p>
tensorflow|computer-vision|deep-learning|torch|tensor
1
3,911
42,476,129
parsing tab delimited values from text file to variables
<p>Hello I've been struggling with this problem, I'm trying to iterate over rows and select data from them and then assign them to variables. this is the first time I'm using pandas and I'm not sure how to select the data</p> <pre><code>reader = pd.read_csv(file_path, sep="\t" ,lineterminator='\r', usecols=[0,1,2,9,10],) for row in reader: print(row) #id_number = row[0] #name = row[2] #ip_address = row[1] #latitude = row[9] </code></pre> <p>and this is the output from the row that I want to assign to the variables:</p> <pre><code>050000 129.240.228.138 planetlab2.simula.no 59.93 </code></pre> <p>Edit: Perhaps this is not a problem for pandas but for general Python. I am fairly new to python and what I'm trying to achieve is to parse tab separated file line by line and assign data to the variables and print them in one loop.</p> <p>this is the input file sample:</p> <pre><code>050263 128.2.211.113 planetlab-1.cmcl.cs.cmu.edu NA US Allegheny County Pittsburgh http://www.cs.cmu.edu/ Carnegie Mellon University 40.4446 -79.9427 unknown 050264 128.2.211.115 planetlab-3.cmcl.cs.cmu.edu NA US Allegheny County Pittsburgh http://www.cs.cmu.edu/ Carnegie Mellon University 40.4446 -79.9427 unknown </code></pre>
<p>The general workflow you're describing is: you want to read in a csv, find a row in the file with a certain ID, and unpack all the values from that row into variables. This is simple to do with pandas.</p> <p>It looks like the CSV file has at least 10 columns in it. Providing the usecols arg should filter out the columns that you're not interested in, and read_csv will ignore them when loading into the pandas DataFrame object (which you've called reader).</p> <p>Steps to do what you want: </p> <ol> <li>Read the data file using <code>pd.read_csv()</code>. You've already done this, but I recommend calling this variable df instead of reader, as read_csv returns a DataFrame object, not a Reader object. You'll also find it convenient to use the names argument to read_csv to assign column names to the dataframe. It looks like you want <code>names=['id', 'ip_address', 'name', 'latitude','longitude']</code> to get those as columns. (Assuming col10 is longitude, which makes sense given that 9,10 would be a lat/long pair)</li> <li>Query the dataframe object for the row with that ID that you're interested in. There are a variety of ways to do this. One is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html" rel="nofollow noreferrer">using the query syntax</a>. Hard to know why you want that specific row without more details, but you can look up more information about index lookups in pandas. Example: <code>row = df.query("id == 50000")</code></li> <li>Given a single row, you want to extract the row values into variables. This is easy if you've assigned column names to your dataframe. You can treat the row as a dictionary of values. E.g. <code>lat = row['latitude']</code> <code>lon = row['longitude']</code></li> </ol>
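<p>Putting the three steps together, a minimal sketch; the column names, the absence of a header row, and the looked-up id are assumptions based on the sample rows in the question:</p> <pre><code>import pandas as pd

df = pd.read_csv(file_path, sep='\t', lineterminator='\r',
                 usecols=[0, 1, 2, 9, 10], header=None, dtype={0: str})
df.columns = ['id', 'ip_address', 'name', 'latitude', 'longitude']

# Look up one record and unpack its fields into variables
row = df.loc[df['id'] == '050263'].iloc[0]
id_number = row['id']
ip_address = row['ip_address']
name = row['name']
latitude = row['latitude']
print(id_number, ip_address, name, latitude)
</code></pre> <p>Keeping <code>id</code> as a string avoids losing the leading zeros in values like <code>050263</code>.</p>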
python|parsing|pandas|csv
1
3,912
43,043,805
Converting Theano Euclidean distance to keras engine format
<p>I have the following code in <code>theano</code> in order to calculate <code>L2</code> distance </p> <pre><code>def distance(square=False): X = T.fmatrix('X') Y = T.fmatrix('Y') squared_euclidean_distances = (X ** 2).sum(1).reshape((X.shape[0], 1)) + (Y ** 2).sum(1).reshape \ ((1, Y.shape[0])) - 2 * X.dot(Y.T) if square: return theano.function([X, Y], T.sqrt(squared_euclidean_distances)) else: return theano.function([X, Y], squared_euclidean_distances) </code></pre> <p><a href="https://stackoverflow.com/a/26065509/1211174">source</a></p> <pre><code>print(distance()([[1, 0], [1, 1]], [[1, 0]])) </code></pre> <p>result with: [[ 0.] [ 1.]]</p> <p>which is the distance matrix between the the left set(two vectors - [1, 0], [1, 1]) and the right set which contains single vector [1,0].</p> <p>This works well with theano even if X and Y has different dim as above. I would like to get a general <code>keras</code> function to produce the same result. I tried:</p> <pre><code>def distance_matrix(vects): x, y = vects # &lt;x,x&gt; + &lt;y,y&gt; - 2&lt;x,y&gt; x_shape = K.int_shape(x) y_shape = K.int_shape(y) return K.reshape(K.sum(K.square(x), axis=1), (x_shape[0], 1)) + \ K.reshape(K.sum(K.square(y), axis=1), (1, y_shape[0])) - \ 2 * K.dot(x, y) </code></pre> <p>but the following code does not produce the right result:</p> <pre><code>x = K.variable(np.array([[1, 0], [1, 1]])) y = K.variable(np.array([[1, 0]])) obj = distance_matrix objective_output = obj((x, y)) print (K.eval(objective_output)) </code></pre> <p>result with </p> <pre><code>ValueError: Shape mismatch: x has 2 cols (and 4 rows) but y has 4 rows (and 2 cols) Apply node that caused the error: Dot22Scalar(/variable, /variable, TensorConstant{2.0}) Toposort index: 0 Inputs types: [TensorType(float32, matrix), TensorType(float32, matrix), TensorType(float32, scalar)] Inputs shapes: [(4, 2), (4, 2), ()] Inputs strides: [(8, 4), (8, 4), ()] Inputs values: ['not shown', 'not shown', array(2.0, dtype=float32)] Outputs clients: [[Elemwise{Composite{((i0 + i1) - i2)}}[(0, 2)](InplaceDimShuffle{0,x}.0, InplaceDimShuffle{x,0}.0, Dot22Scalar.0)]] </code></pre> <p><strong>Edit</strong>: added outputs to code </p>
<p>I found the mistake. I forgot to <code>transpose</code> Y:</p> <pre><code>def distance_matrix(vects): x, y = vects # &lt;x,x&gt; + &lt;y,y&gt; - 2&lt;x,y&gt; x_shape = K.int_shape(x) y_shape = K.int_shape(y) return K.reshape(K.sum(K.square(x), axis=1), (x_shape[0], 1)) +\ K.reshape(K.sum(K.square(y), axis=1), (1, y_shape[0])) - \ 2 * K.dot(x,K.transpose(y)) </code></pre>
tensorflow|theano|keras|keras-layer
1
3,913
72,311,770
Pytorch: tensor normalization giving bad result
<p>I have a tensor of longitudes/latitudes that I want to normalize. I want to use this tensor to perform a neural network algorithm on it that returns the best trip between these different long/lat points. I used this function:</p> <pre><code>from torch.nn.functional import normalize t=normalize(locations) </code></pre> <p>These are lines in my tensor: [ 0.0000, 36.4672, 36.4735, 36.4705, 36.4638, 36.4671], [ 0.0000, 10.7637, 10.7849, 10.7822, 10.7821, 10.7637]],</p> <p>This is after normalization: [0.0000, 0.2181, 0.2181, 0.2181, 0.2179, 0.2179], [0.0000, 0.2186, 0.2194, 0.2194, 0.2196, 0.2188]],</p> <p>As you can see the result is not good, because many values repeat and this is affecting my results. Is there another way to normalize my tensor? I'm using pytorch in this project.</p>
<p><a href="https://pytorch.org/docs/stable/generated/torch.nn.functional.normalize.html?highlight=functional%20norm" rel="nofollow noreferrer">This</a> is how <code>torch.nn.functional.normalize</code> works.</p> <p>In my opinion, you should divide your original tensor value with the maximum value of longitudes/latitudes can have, making the tensor to have values range of <code>[0, 1]</code>.</p> <hr /> <p>Additionally, I've tried:</p> <pre><code>import torch import torch.nn.functional as F a = torch.tensor([[0.0000, 36.4672, 36.4735, 36.4705, 36.4638, 36.4671], [ 0.0000, 10.7637, 10.7849, 10.7822, 10.7821, 10.7637]]) res = F.normalize(a) </code></pre> <p>and the results was:</p> <pre><code>tensor([[0.0000, 0.4472, 0.4473, 0.4472, 0.4472, 0.4472], [0.0000, 0.4467, 0.4476, 0.4475, 0.4475, 0.4467]]) </code></pre> <p>How did you get your results?</p>
python|pytorch|tensor|normalize
0
3,914
72,458,259
Changing date column of csv with Python Pandas
<p>I have a csv file like this:</p> <p>Tarih, Şimdi, Açılış, Yüksek, Düşük, Hac., Fark %<bR> 31.05.2022, 8,28, 8,25, 8,38, 8,23, 108,84M, 0,61%</p> <p>(more than a thousand lines)</p> <p>I want to change it like this:</p> <p>Tarih, Şimdi, Açılış, Yüksek, Düşük, Hac., Fark %<bR> 5/31/2022, 8.28, 8.25, 8.38, 8.23, 108.84M, 0.61%</p> <p>Especially &quot;Date&quot; format is Day.Month.Year and I need to put it in Month/Day/Year format.</p> <p>i write the code like this:</p> <pre><code>import pandas as pd import numpy as np import datetime data=pd.read_csv(&quot;qwe.csv&quot;, encoding= 'utf-8') df.Tarih=df.Tarih.str.replace(&quot;.&quot;,&quot;/&quot;) df.Şimdi=df.Şimdi.str.replace(&quot;,&quot;,&quot;.&quot;) df.Açılış=df.Açılış.str.replace(&quot;,&quot;,&quot;.&quot;) df.Yüksek=df.Yüksek.str.replace(&quot;,&quot;,&quot;.&quot;) df.Düşük=df.Düşük.str.replace(&quot;,&quot;,&quot;.&quot;) for i in df['Tarih']: q = 1 datetime_obj = datetime.datetime.strptime(i, &quot;%d/%m/%Y&quot;) df['Tarih'].loc[df['Tarih'].values == q] = datetime_obj </code></pre> <p>But the &quot;for&quot; loop in my code doesn't work. I need help on this. Thank you</p>
<p>Just looking at converting the date, you can import to a datetime object with arguments for <code>pd.read_csv</code>, then convert to your desired format by applying <code>strftime</code> to each entry.</p> <p>If I have the following <code>tmp.csv</code>:</p> <pre><code>date, value 30.05.2022, 4.2 31.05.2022, 42 01.06.2022, 420 </code></pre> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.read_csv('tmp.csv', parse_dates=['date'], dayfirst=True) df['date'] = df['date'].dt.strftime('%m/%d/%Y') print(df) </code></pre> <p>output:</p> <pre class="lang-none prettyprint-override"><code> date value 0 05/30/2022 4.2 1 05/31/2022 42.0 2 06/01/2022 420.0 </code></pre>
python|pandas|dataframe|datetime
2
3,915
72,169,307
Normalize a list with jsons to a dataframe in steps
<p>I have a problem. I have a <code>list</code> with <code>JSONs</code>. I want to create a complete dataframe in steps. My idea is, for example: My list contains 100 elements. I want to say the size of the steps should be 25. So I say <code>len(list) / size = 4 = 100 / 25</code>. I have 4 runs of the for loop and 4 times of concat the small dataframe to the complete. For the MVP I have build a list, with 4 elements and with a step of 2. So every loops it should contactened two elements.</p> <p>At the end my <code>dataframe_complete</code> contains only two rows. What is the problem for that?</p> <p>The first loop should contain <code>my_Dict</code> and <code>my_Dict2</code> the second <code>my_Dict2</code> and <code>my_Dict2</code>. So the list should go from 0-1 and from 2-3. So every loop run should contain two elements.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd my_Dict = { '_key': '1', 'group': 'test', 'data': {}, 'type': '', 'code': '007', 'conType': '1', 'flag': None, 'createdAt': '2021', 'currency': 'EUR', 'detail': { 'selector': { 'number': '12312', 'isTrue': True, 'requirements': [{ 'type': 'customer', 'requirement': '1'}] } } } my_Dict2 = { '_key': '2', 'group': 'test', 'data2': {}, 'type': '', 'code': '007', 'conType': '1', 'flag': None, 'createdAt': '2021', 'currency': 'EUR', 'detail2': { 'selector': { 'number': '12312', 'isTrue': True, 'requirements': [{ 'type': 'customer', 'requirement': '1'}] } } } </code></pre> <pre class="lang-py prettyprint-override"><code>list_dictionaries = [my_Dict, my_Dict2, my_Dict2, my_Dict2] df_complete = pd.DataFrame() size= 1 for i in range((len(list_dictionaries) // size)): print(i) df = pd.json_normalize(list_dictionaries[i], sep='_') df_complete= pd.concat([df_complete, df]) print(df_complete) [OUT] _key group type code conType flag createdAt currency detail_selector_number detail_selector_isTrue detail_selector_requirements detail2_selector_number detail2_selector_isTrue detail2_selector_requirements 0 1 test 007 1 None 2021 EUR 12312 True [{'type': 'customer', 'requirement': '1'}] NaN NaN NaN 0 2 test 007 1 None 2021 EUR NaN NaN NaN 12312 True [{'type': 'customer', 'requirement': '1'}] </code></pre> <p>Expected output</p> <pre class="lang-py prettyprint-override"><code> _key group type code conType flag createdAt currency detail_selector_number detail_selector_isTrue detail_selector_requirements detail2_selector_number detail2_selector_isTrue detail2_selector_requirements 0 1 test 007 1 None 2021 EUR 12312 True [{'type': 'customer', 'requirement': '1'}] NaN NaN NaN 1 2 test 007 1 None 2021 EUR NaN NaN NaN 12312 True [{'type': 'customer', 'requirement': '1'}] 2 2 test 007 1 None 2021 EUR NaN NaN NaN 12312 True [{'type': 'customer', 'requirement': '1'}] 3 2 test 007 1 None 2021 EUR NaN NaN NaN 12312 True [{'type': 'customer', 'requirement': '1'}] </code></pre>
<p>The problem most likely arose because an empty dataframe was obtained at the first iteration, since the <code>list_dictionaries[:0]</code> slice was used. Try the code below.</p> <pre><code>list_dictionaries = [my_Dict, my_Dict2, my_Dict2, my_Dict2] df_complete = pd.DataFrame() for i in range(0, len(list_dictionaries)): df = pd.json_normalize(list_dictionaries[i], sep='_') df_complete = pd.concat([df_complete, df]) print(df_complete.reset_index()) </code></pre> <p>Is that what you need?</p> <p>If you need two dictionaries at each iteration:</p> <pre><code>for i in range(0, len(list_dictionaries), 2): print(list_dictionaries[i:i+2]) </code></pre> <p>If you want to concatenate all normalized frames in two iterations:</p> <pre><code>for i in range(0, len(list_dictionaries), 2): df1 = pd.json_normalize(list_dictionaries[i], sep='_') df2 = pd.json_normalize(list_dictionaries[i+1], sep='_') df_complete = pd.concat([df_complete, df1, df2]) df_complete = df_complete.reset_index() print(df_complete) </code></pre> <p>Or, in general, to avoid creating four unnecessary dictionaries in 'list_dictionaries', pass a list of the needed element indexes for each iteration and take the indexes from it. The first iteration uses the first and second dictionaries [0, 1], the second uses the second dictionary twice [1, 1].</p> <pre><code>list_dictionaries = [my_Dict, my_Dict2] df_complete = pd.DataFrame() for i in [[0, 1], [1, 1]]: df1 = pd.json_normalize(list_dictionaries[i[0]], sep='_') df2 = pd.json_normalize(list_dictionaries[i[1]], sep='_') df_complete = pd.concat([df_complete, df1, df2]) df_complete = df_complete.reset_index() </code></pre>
python|json|pandas|loops|for-loop
1
3,916
50,636,635
resample a pandas df within each group
<p>I have a df that has a <code>MultiIndex</code> of <code>(id, date)</code> and I would like to do 2 things:</p> <ol> <li><p>convert the <code>DateTimeIndex</code> named <code>date</code> to a <code>PeriodIndex</code> within each <code>id</code> group</p></li> <li><p><code>resample</code> the frequency of the <code>PeriodIndex</code> to monthly from daily</p></li> </ol> <p>My current (non-working) method is to (even before converting to <code>PeriodIndex</code>):</p> <pre><code>df = pd.DataFrame(data = {"val": np.arange(30), "id": np.tile([1,2], 15), "date": np.repeat(pd.date_range(start = "2000-01-01", periods = 15, name="date"), 2) }) df = df.set_index(["id", "date"]).sort_index() df.groupby("id")["val"].resample(rule = "M", closed = "right", label = "right").apply(lambda x: np.sqrt(sum(x)/10)) </code></pre> <p>This raises:</p> <pre><code>TypeError: Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex, but got an instance of 'MultiIndex' </code></pre> <p>What's the right way to do the whole procedure? I'm a bit confused about how to think about <code>groupby</code>: my mental model is that anything that follows a <code>groupby</code> operation will only receive the subframe corresponding to that group (ie the <code>MultiIndex</code> becomes a single index of just <code>date</code> within that particular group). Is this not correct?</p>
<p>If using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.resample.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.resample</code></a>, the <code>DatetimeIndex</code> has to be set before the <code>groupby</code>. Also, <code>apply</code> is not necessary; it is faster to <code>resample</code> with <code>sum</code>, then divide the final <code>Series</code> by 10 and apply <code>np.sqrt</code>:</p> <pre><code>df = df.set_index(["date"]).sort_index() df1 = (np.sqrt(df.groupby("id")["val"] .resample(rule = "M", closed = "right", label = "right") .sum() .div(10))) print (df1) id date 1 2000-01-31 4.582576 2 2000-01-31 4.743416 Name: val, dtype: float64 </code></pre>
python|python-3.x|pandas
2
3,917
50,562,684
separate list dictionary into columns and values
<p>I have data frame which contains a column "ExtData" with values <code>[{"key":"title","value":"activation"},{"key":"remarks","value":"activation"}]</code></p> <p>I have to separate this data and create a new data frame with "title" and "remarks" column name and their value "activation" i.e "key" is column name and their "value" as value.</p> <p>I have data frame like this</p> <pre><code>partner ExtData xyz [{"key":"title","value":"activation"}, {"key":"remarks","value":"activation"}] abc [{"key":"title","value":"activation"}, {"key":"remarks","value":"activation"}] </code></pre> <p>I need output as new data frame with</p> <pre><code>**partner** **title** **remarks** xyz activation activation abc activation activation </code></pre> <p>using pandas and python.</p> <hr> <p><img src="https://i.stack.imgur.com/ijYIj.png" alt=""></p>
<p>Here is solution, using <a href="https://pandas.pydata.org/pandas-docs/version/0.21.0/generated/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>DataFrame.apply</code></a> method:</p> <pre><code>def separate_extdata(row): for d in row['ExtData']: row[d['key']] = d['value'] return row.drop('ExtData') df = pd.DataFrame( [ ('xyz', [{"key": "title", "value": "activation"}, {"key":"remarks","value":"activation"}]), ('abc', [{"key":"title","value":"activation"}, {"key":"remarks","value":"activation"}])], columns=['partner', 'ExtData'] ) df.apply(separate_extdata, axis=1) # partner title remarks # 0 xyz activation activation # 1 abc activation activation </code></pre>
python|pandas
1
3,918
50,627,481
Create multiple dataframe columns containing calculated values from an existing column
<p>I have a dataframe, <code>sega_df</code>: </p> <pre><code>Month 2016-11-01 2016-12-01 Character Sonic 12.0 3.0 Shadow 5.0 23.0 </code></pre> <p>I would like to create multiple new columns, by applying a formula for each already existing column within my dataframe (to put it shortly, pretty much double the number of columns). That formula is <code>(100 - [5*eachcell])*0.2</code>. </p> <p>For example, for November for Sonic, <code>(100-[5*12.0])*0.2 = 8.0</code>, and December for Sonic, <code>(100-[5*3.0])*0.2 = 17.0</code> My ideal output is:</p> <pre><code>Month 2016-11-01 2016-12-01 Weighted_2016-11-01 Weighted_2016-12-01 Character Sonic 12.0 3.0 8.0 17.0 Shadow 5.0 23.0 15.0 -3.0 </code></pre> <p>I know how to create a for loop to create one column. This is for if only <em>one</em> month was in consideration:</p> <pre><code>for w in range(1,len(sega_df.index)): sega_df['Weighted'] = (100 - 5*sega_df)*0.2 sega_df[sega_df &lt; 0] = 0 </code></pre> <p>I haven't gotten the skills or experience yet to create <em>multiple</em> columns. I've looked for other questions that may answer what exactly I am doing but haven't gotten anything to work yet. Thanks in advance.</p>
<p>One vectorised approach is to drop down to <code>numpy</code>:</p> <pre><code>A = sega_df.values A = (100 - 5*A) * 0.2 res = pd.DataFrame(A, index=sega_df.index, columns=('Weighted_'+sega_df.columns)) </code></pre> <p>Then join the result to your original dataframe:</p> <pre><code>sega_df = sega_df.join(res) </code></pre>
python|pandas|for-loop|dataframe
1
3,919
45,609,903
np.where on multiple variables
<p>I have a data frame with:</p> <pre><code>customer_id [1,2,3,4,5,6,7,8,9,10] feature1 [0,0,1,1,0,0,1,1,0,0] feature2 [1,0,1,0,1,0,1,0,1,0] feature3 [0,0,1,0,0,0,1,0,0,0] </code></pre> <p>Using this I want to create a new variable (say new_var) to say when feature 1 is 1 then the new_var=1, if feature_2=1 then new_var=2, feature3=1 then new_var=3 else 4. I was trying np.where but though it doesn't give me an error, it doesn't do the right thing - so I guess a nested np.where works on a single variable only. In which case, what's the best way to perform a nested if/case when in pandas?</p> <p>My np.where code was something like this:</p> <pre><code>df[new_var]=np.where(df['feature1']==1,'1', np.where(df['feature2']==1,'2', np.where(df[feature3']==1,'3','4'))) </code></pre>
<p>I think you need <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.select.html" rel="nofollow noreferrer"><code>numpy.select</code></a> - it select first <code>True</code> values and all another are not important:</p> <pre><code>m1 = df['feature1']==1 m2 = df['feature2']==1 m3 = df['feature3']==1 df['new_var'] = np.select([m1, m2, m3], ['1', '2', '3'], default='4') </code></pre> <p><strong>Sample</strong>:</p> <pre><code>customer_id = [1,2,3,4,5,6,7,8,9,10] feature1 = [0,0,1,1,0,0,1,1,0,0] feature2 = [1,0,1,0,1,0,1,0,1,0] feature3 = [0,0,1,0,0,0,1,0,0,0] df = pd.DataFrame({'customer_id':customer_id, 'feature1':feature1, 'feature2':feature2, 'feature3':feature3}) m1 = df['feature1']==1 m2 = df['feature2']==1 m3 = df['feature3']==1 df['new_var'] = np.select([m1, m2, m3], ['1', '2', '3'], default='4') print (df) customer_id feature1 feature2 feature3 new_var 0 1 0 1 0 2 1 2 0 0 0 4 2 3 1 1 1 1 3 4 1 0 0 1 4 5 0 1 0 2 5 6 0 0 0 4 6 7 1 1 1 1 7 8 1 0 0 1 8 9 0 1 0 2 9 10 0 0 0 4 </code></pre> <p>If in <code>features</code> only <code>1</code> and <code>0</code> is possible convert <code>0</code> to <code>False</code> and <code>1</code> to <code>True</code>:</p> <pre><code>m1 = df['feature1'].astype(bool) m2 = df['feature2'].astype(bool) m3 = df['feature3'].astype(bool) df['new_var'] = np.select([m1, m2, m3], ['1', '2', '3'], default='4') print (df) customer_id feature1 feature2 feature3 new_var 0 1 0 1 0 2 1 2 0 0 0 4 2 3 1 1 1 1 3 4 1 0 0 1 4 5 0 1 0 2 5 6 0 0 0 4 6 7 1 1 1 1 7 8 1 0 0 1 8 9 0 1 0 2 9 10 0 0 0 4 </code></pre>
python|pandas|if-statement|case-when|np
1
3,920
45,633,591
What does this groupby function mean I have done?
<p>I have 3 columns: topic, year and country. I am trying to find the maximum number of topics per country over the years and want it to group by the maximum project count for each country. </p> <p>I've used this groupby function below: </p> <pre><code>df.groupby(['topic', 'year', 'country']).count() </code></pre> <p>However I can't really tell what it is showing me as it has just grouped each topic with each year and the country. I don't have another column for count also as I would like to see how many per topic so I have numbers to plot later also. How do I do this?</p> <pre><code>topic year country Mechanics 2014.0 FI IT 2015.0 NL UK 2016.0 FR UK biochemistry 2014.0 DE 2015.0 AT BE DE DK ES FI 2016.0 AT </code></pre>
<p>That groupby should give you the count of each unique tuple of <code>('topic', 'year', 'country')</code>.</p>
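<p>To actually get a count column to work with, and then the year/topic with the highest count per country, something along these lines could be used; a sketch, assuming <code>df</code> and the column names from the question:</p> <pre><code>counts = (df.groupby(['country', 'year', 'topic'])
            .size()
            .reset_index(name='count'))

# For each country, keep the row with the largest count
top_per_country = counts.loc[counts.groupby('country')['count'].idxmax()]
</code></pre>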
python|pandas|plot|group-by
0
3,921
45,556,099
Error when placing markers at specific points using matplotlib and markevery
<p>I'd like to place markers at specific points on a line. I have a TRUE/FALSE list that says where on the x-axis the markers are wanted. Here is the snippet I am using:</p> <pre><code>markers_on = list(compress(self.x, self.titanic1)) a0.plot(self.x, self.nya, marker='v', markevery=markers_on) </code></pre> <p>self.titanic1 is a list of Boolean values, self.x is a list of our x-axis values and self.nya is a list of y values. I get the following error message.</p> <p><a href="https://i.stack.imgur.com/zsvx4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zsvx4.png" alt="error message"></a></p> <p>The interesting bit is that the list in the error message is correct, those are the correct x-axis values for the markers. Does anybody know what this message means? The markevery doc clearly says it'll take a list of integers.</p> <pre><code>markevery [None | int | length-2 tuple of int | slice | list/array of int | float | length-2 tuple of float] </code></pre>
<p>If the list supplied to <code>markevery</code> is a list of integers, it is interpreted as a list of <strong>indices</strong> of the values to mark. </p> <pre><code>import matplotlib.pyplot as plt import numpy as np x = np.arange(0,1,0.2) y = np.random.rand(len(x)) boolean= [True, False, False, True, True] mark = list(np.arange(len(x))[boolean]) plt.plot(x,y, marker="o", markevery=mark) plt.show() </code></pre> <p>However, note that you may directly supply a list of <strong>booleans</strong> to the <code>markevery</code></p> <pre><code>import matplotlib.pyplot as plt import numpy as np x = np.arange(0,1,0.2) y = np.random.rand(len(x)) boolean= [True, False, False, True, True] plt.plot(x,y, marker="o", markevery=boolean) plt.show() </code></pre>
python|numpy|matplotlib
2
3,922
62,473,828
Identical random crop on two images Pytorch transforms
<p>I am trying to feed two images into a network and I want to do identical transform between these two images. <code>transforms.Compose()</code> takes one image at a time and produces output independent to each other but I want same transformation. I did my own coding for <code>hflip()</code> now I am interested to get the random crop. Is there any way to do that without writing custom functions? </p>
<p>I would use a workaround like this: make my own crop class inherited from RandomCrop, redefining <code>__call__</code> with</p> <pre class="lang-py prettyprint-override"><code> …
if self.call_is_even :
    self.ijhw = self.get_params(img, self.size)
i, j, h, w = self.ijhw
self.call_is_even = not self.call_is_even
</code></pre> <p>instead of</p> <pre class="lang-py prettyprint-override"><code> i, j, h, w = self.get_params(img, self.size)
</code></pre> <p>The idea is to suppress the randomizer on odd calls, so the second image of a pair reuses the crop parameters drawn for the first.</p>
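<p>A minimal, self-contained sketch of that idea (the class name and usage are illustrative, not part of torchvision, and it assumes both images have the same size):</p>

<pre class="lang-py prettyprint-override"><code>import torchvision.transforms as T
import torchvision.transforms.functional as F

class PairedRandomCrop:
    # draws crop parameters on even calls and reuses them on odd calls,
    # so two images passed one after the other get the identical crop
    def __init__(self, size):
        self.size = (size, size) if isinstance(size, int) else size
        self.call_is_even = True
        self.ijhw = None

    def __call__(self, img):
        if self.call_is_even:
            # get_params returns (top, left, height, width)
            self.ijhw = T.RandomCrop.get_params(img, output_size=self.size)
        self.call_is_even = not self.call_is_even
        i, j, h, w = self.ijhw
        return F.crop(img, i, j, h, w)

# usage: call the same instance on both images of a pair
# img1, img2: two images (PIL or tensor) of identical size
crop = PairedRandomCrop(224)
out1 = crop(img1)   # draws new random parameters
out2 = crop(img2)   # reuses the parameters drawn for img1
</code></pre>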
deep-learning|pytorch|vgg-net|spacy-transformers
1
3,923
62,575,164
Creating a 'vocabulary' to group words having same meaning for word frequency
<p>I have this output from n-grams analysis by using CountVectorizer (texts are stored in pandas dataframe):</p> <pre><code> Frequency Words playstation 5 106 hours app 32 app store 20 5 playstation 17 hour app 16 ... ... </code></pre> <p>I would like to know if it is possible to create a 'vocabulary' of synonymous where I can set:</p> <pre><code>playstation 5 = 5 playstation </code></pre> <p>in order to sum 106 + 17 in the final frequency list. It is not about lemmatising but rather order. I can do it manually, but I would like to know how to do it.</p> <p>Many thanks</p>
<p>How about using the Levenshtein distance to check how close the two phrases are, for example:</p>

<pre><code>from fuzzywuzzy import fuzz

fuzz.token_sort_ratio('playstation 5','5 playstation')
&gt;&gt; 100

fuzz.token_sort_ratio('playstation 5','4 playstation')
&gt;&gt; 92
</code></pre>

<p>I have used the <a href="https://pypi.org/project/fuzzywuzzy/" rel="nofollow noreferrer">fuzzywuzzy</a> Python module for this.</p>
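<p>One way to apply that idea to the frequency table is to group rows whose sorted tokens match, which is what a <code>token_sort_ratio</code> of 100 indicates. A sketch, assuming the counts live in a DataFrame shaped like the one in the question:</p>

<pre><code>import pandas as pd

freq = pd.DataFrame({'Words': ['playstation 5', 'hours app', 'app store', '5 playstation', 'hour app'],
                     'Frequency': [106, 32, 20, 17, 16]})

# key built from the alphabetically sorted tokens of each phrase
freq['key'] = freq['Words'].apply(lambda w: ' '.join(sorted(w.split())))

merged = freq.groupby('key', as_index=False)['Frequency'].sum()
print(merged)  # 'playstation 5' and '5 playstation' collapse into one row with 123
</code></pre>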
python|pandas|n-gram
1
3,924
62,616,253
numpy where output - how can I use the value?
<p>I have a list <code>some_list = [[1, 2], [3, 4], [3, 6]]</code> I want to find some index where expression was evaluated as true:</p> <pre><code>np.where([3 in sublist for sublist in some_list]) </code></pre> <p>The output is <code>(array([1, 2], dtype=int64),)</code>.</p> <p>Since I'd like to remove sublist with <code>3</code> inside, how can I access such array (in an elegant way)? With the array I can do <code>[some_list.pop(index) for index in array]</code>.</p> <p>Edit: seems like it works using <code>for index in np.where([3 in sublist for sublist in some_list])[0]</code></p>
<p><code>where</code> just returns a <strong>tuple of arrays</strong> that index where the element values True.</p> <pre><code>In [447]: some_list = [[1, 2], [3, 4], [3, 6]] </code></pre> <p>Your list test:</p> <pre><code>In [448]: [3 in sublist for sublist in some_list] Out[448]: [False, True, True] In [449]: np.where([3 in sublist for sublist in some_list]) Out[449]: (array([1, 2]),) </code></pre> <p>That's a one element tuple, for a 1 dimensional list [448]. We can extract that array with simple indexing:</p> <pre><code>In [450]: _[0] Out[450]: array([1, 2]) </code></pre> <p>and use it to select sublists from <code>some_list</code>:</p> <pre><code>In [451]: [some_list[i] for i in _] Out[451]: [[3, 4], [3, 6]] </code></pre> <p>If the list was an array:</p> <pre><code>In [455]: arr = np.array(some_list) In [456]: arr Out[456]: array([[1, 2], [3, 4], [3, 6]]) </code></pre> <p>we could do the same search for 3 with:</p> <pre><code>In [457]: arr==3 Out[457]: array([[False, False], [ True, False], [ True, False]]) In [458]: (arr==3).any(axis=1) Out[458]: array([False, True, True]) In [459]: np.where(_) Out[459]: (array([1, 2]),) </code></pre> <p>That [459] tuple can be used to index the [458] array directly. In this case it can also be used to index rows of <code>arr</code>:</p> <pre><code>In [460]: arr[_] Out[460]: array([[3, 4], [3, 6]]) </code></pre> <p>Here that tuple derived from 1d [458] works, but if it didn't we could (again) extract the array with indexing, and use that:</p> <pre><code>In [461]: np.where((arr==3).any(axis=1))[0] Out[461]: array([1, 2]) In [462]: arr[_, :] Out[462]: array([[3, 4], [3, 6]]) </code></pre> <p>===</p> <p>A pure-list way of doing this:</p> <pre><code>In [476]: [i for i,sublist in enumerate(some_list) if 3 in sublist] Out[476]: [1, 2] </code></pre> <p>It could well be faster, since <code>np.where</code> converts list inputs to arrays, and that takes time.</p>
python|python-3.x|numpy|tuples
2
3,925
62,781,228
Efficient way to extend a numpy array to n*length and duplicate its elements?
<p>I'm looking for a fast way to take a numpy array, such as <code>[[1,2,3,4]]</code> and turn it into an extended version of itself with its elements repeated <code>N</code> times. I.E. if <code>N = 2</code>, then <code>[[1,2,3,4]]</code> -&gt; <code>[[1,1,2,2,3,3,4,4]]</code></p> <p>Obviously I can brute force it with an explicit for, but I'm wondering if this can be vectorized somehow to speed it up?</p> <p>edit: i'm using [[1,2,3,4]] as shorthand for np.array([1,2,3,4]), sorry for the confusion. Also, thanks to those of you who mentioned np.repeat! That's just what I needed.</p>
<p>Try this:</p> <pre><code>from numpy import repeat x = [[1,2,3,4]] N = 3 y = repeat(x, N).reshape((1,-1)) print(y) </code></pre> <p><strong>Edit:</strong> Quang's solution is shorter, I admit ...</p> <pre><code>y = repeat(x, N, axis=-1) </code></pre>
python|arrays|numpy
1
3,926
62,741,673
Unable to install tensorflow module in Python3
<p>I'm installing tensorflow. My system has the following specifications:</p> <pre><code>py --version Python 3.8.2 </code></pre> <p>I tried the following commands to install tensorflow module</p> <pre><code>py -m pip install --upgrade tensorflow ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none) ERROR: No matching distribution found for tensorflow </code></pre> <pre><code>py -m pip install tensorflow==2.2 ERROR: Could not find a version that satisfies the requirement tensorflow==2.2 (from versions: none) ERROR: No matching distribution found for tensorflow==2.2 </code></pre> <pre><code>py -m pip install tensorflow==2.2.0 ERROR: Could not find a version that satisfies the requirement tensorflow==2.2.0 (from versions: none) ERROR: No matching distribution found for tensorflow==2.2.0 </code></pre> <p>What can I do to install this successfully on my machine?</p>
<p>TensorFlow only ships wheels for 64-bit Python. On a 32-bit interpreter pip finds no compatible build, which is why it reports &quot;No matching distribution found&quot;.</p>
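<p>A quick way to check which interpreter you are running (a small sketch, not part of the original answer):</p>

<pre><code>import platform
import struct

print(platform.architecture()[0])   # '64bit' or '32bit'
print(struct.calcsize('P') * 8)     # 64 for a 64-bit Python, 32 otherwise
</code></pre>

<p>If this reports 32-bit, installing a 64-bit Python build should resolve the error.</p>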
python|tensorflow
0
3,927
54,557,767
Combinations in python with different elements
<p>I'm struggling when calculating different types of combinations.</p> <p>Let's explain with an example, I have this array or it could be a dataframe and I want different combinations of some columns from it.</p> <p>As I will then multiply this matrix by the combination to sum the numbers.</p> <pre><code>test = np.array ([[10,11,12,21,22,31,32,33], [10,11,12,21,22,31,32,33], [10,11,12,21,22,31,32,33], [10,11,12,21,22,31,32,33], [10,11,12,21,22,31,32,33], [10,11,12,21,22,31,32,33], [10,11,12,21,22,31,32,33]]) </code></pre> <p>The possible combinations for the first three columns are [1,0,0],[0,1,0],[0,0,1], so I need 10 or 11 or 12 Following columns, 21 or 22, therefore, combinations [1,0], [0,1] And the last three columns, 31,3 2, 33, then it will be [1,0,0],[0,1,0],[0,0,1]</p> <p>So, I get the possible combination by using this function i found in another question.</p> <pre><code>n=3 for i in range(2**n): s = bin(i)[2:] s = "0" * (n-len(s)) + s print (list(s)) </code></pre> <p>Which gives me:</p> <pre><code>['0', '0', '0'] ['0', '0', '1'] ['0', '1', '0'] ['0', '1', '1'] ['1', '0', '0'] ['1', '0', '1'] ['1', '1', '0'] ['1', '1', '1'] </code></pre> <p>All possible combinations, including the zeros. Although I managed to delete those.</p> <p>It calculates more than the combinations I need, and I find myself eliminating too many combinations I do not need.</p> <p>When I only need these cases:</p> <pre><code>[1,0,0, 1,0, 1,0,0] [0,1,0, 1,0, 1,0,0] [0,0,1, 1,0, 1,0,0] [1,0,0, 0,1, 1,0,0] [0,1,0, 0,1, 1,0,0] [0,0,1, 0,0, 1,0,0] etc.... </code></pre> <p>I need to delete many rows which are not relevant on the 8 cases, and delete rows where I find more than three 1's, and select where the 1's are positioned correctly,etc... not efficient at all. I'm a bit lost.</p>
<p>I really don't understand the logic behind the example, but does this solve your problem?</p> <pre><code>from itertools import product,permutations a = set(permutations([0,0,1])) b = set(permutations([0,1])) comb = [] for t1,t2,t3 in product(a,b,a): comb.append([*t1,*t2,*t3]) print(comb) # [[1, 0, 0, 0, 1, 1, 0, 0], # [1, 0, 0, 0, 1, 0, 1, 0], # [1, 0, 0, 0, 1, 0, 0, 1], # [1, 0, 0, 1, 0, 1, 0, 0], # [1, 0, 0, 1, 0, 0, 1, 0], # [1, 0, 0, 1, 0, 0, 0, 1], # [0, 1, 0, 0, 1, 1, 0, 0], # [0, 1, 0, 0, 1, 0, 1, 0], # [0, 1, 0, 0, 1, 0, 0, 1], # [0, 1, 0, 1, 0, 1, 0, 0], # [0, 1, 0, 1, 0, 0, 1, 0], # [0, 1, 0, 1, 0, 0, 0, 1], # [0, 0, 1, 0, 1, 1, 0, 0], # [0, 0, 1, 0, 1, 0, 1, 0], # [0, 0, 1, 0, 1, 0, 0, 1], # [0, 0, 1, 1, 0, 1, 0, 0], # [0, 0, 1, 1, 0, 0, 1, 0], # [0, 0, 1, 1, 0, 0, 0, 1]] </code></pre>
python|python-3.x|numpy
0
3,928
54,462,351
Running session failed due to tensors datatype and shape Tensorflow
<p>I tried to load the model and graph using the following methodology: </p> <pre><code>saver = tf.train.import_meta_graph(tf.train.latest_checkpoint(model_path)+".meta") graph = tf.get_default_graph() outputs = graph.get_tensor_by_name('output:0') outputs = tf.cast(outputs,dtype=tf.float32) X = graph.get_tensor_by_name('input:0') sess = tf.Session() sess.run(tf.global_variables_initializer()) sess.run(tf.local_variables_initializer()) if(tf.train.checkpoint_exists(tf.train.latest_checkpoint(model_path))): saver.restore(sess, tf.train.latest_checkpoint(model_path)) print(tf.train.latest_checkpoint(model_path) + "Session Loaded for Testing") </code></pre> <p>It worked!...<br> But when I tried to run the session, I got the following error: </p> <pre><code>y_test_output= sess.run(outputs, feed_dict={X: x_test}) </code></pre> <p>The error is: </p> <pre><code>Caused by op 'output', defined at: File "testing_reality.py", line 21, in &lt;module&gt; saver = tf.train.import_meta_graph(tf.train.latest_checkpoint(model_path)+".meta") File "C:\Python35\lib\site-packages\tensorflow\python\training\saver.py", line 1674, in import_meta_graph meta_graph_or_file, clear_devices, import_scope, **kwargs)[0] File "C:\Python35\lib\site-packages\tensorflow\python\training\saver.py", line 1696, in _import_meta_graph_with_return_elements **kwargs)) File "C:\Python35\lib\site-packages\tensorflow\python\framework\meta_graph.py", line 806, in import_scoped_meta_graph_with_return_elements return_elements=return_elements) File "C:\Python35\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func return func(*args, **kwargs) File "C:\Python35\lib\site-packages\tensorflow\python\framework\importer.py", line 442, in import_graph_def _ProcessNewOps(graph) File "C:\Python35\lib\site-packages\tensorflow\python\framework\importer.py", line 234, in _ProcessNewOps for new_op in graph._add_new_tf_operations(compute_devices=False): # pylint: disable=protected-access File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 3440, in _add_new_tf_operations for c_op in c_api_util.new_tf_operations(self) File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 3440, in &lt;listcomp&gt; for c_op in c_api_util.new_tf_operations(self) File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 3299, in _create_op_from_tf_operation ret = Operation(c_op, self) File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__ self._traceback = tf_stack.extract_stack() InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'output' with dtype float and shape [?,1] [[node output (defined at testing_reality.py:21) = Placeholder[dtype=DT_FLOAT, shape=[?,1], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]] </code></pre> <p>Not getting what is the issue that caused this problem to me.<br> Please help me get the missing link. </p> <p>I have checked: </p> <pre><code>&gt;&gt;&gt; outputs &lt;tf.Tensor 'output:0' shape=(?, 1) dtype=float32&gt; </code></pre> <p>Still could not understand the reason for the error. </p> <p>I am using the latest version Tensorflow '1.12.0' on Windows 10 OS. 
</p> <p>This is how I am creating the graph:</p> <pre><code>X = tf.placeholder(tf.float32, [None, n_steps, n_inputs],name="input")
y = tf.placeholder(tf.float32, [None, n_outputs],name="output")

layers = [tf.contrib.rnn.LSTMCell(num_units=n_neurons,activation=tf.nn.relu6, use_peepholes = True,name="layer"+str(layer))
         for layer in range(n_layers)]

multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)

stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
outputs = outputs[:,n_steps-1,:] # keep only last output of sequence

loss = tf.reduce_mean(tf.square(outputs - y)) # loss function = mean squared error
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
</code></pre>
<p>This happens when you are trying to evaluate a node in the graph which is dependent on a value from a placeholder. Because of that, you are getting an error that states that you must feed a value for the placeholder. Have a look at this example:</p> <pre><code>tf.reset_default_graph() a = tf.placeholder(tf.float32) b = tf.placeholder(tf.float32) c = a + b d = a with tf.Session() as sess: print(c.eval(feed_dict={a:1.0})) # Error because in order to evaluate c we must have the value for b. with tf.Session() as sess: print(d.eval(feed_dict={a:1.0})) # It works because d is not dependent on b. </code></pre> <p>Now, in your case, you should not execute the <code>outputs</code> placeholder. What you should execute is the operation you use to do predictions with your model, while feeding a value in the <code>X</code> placeholder (Assuming that you are using that one to feed the input in the model). On the other hand, I guess that you use the <code>output</code> placeholder to feed the labels while training, so there is no need to feed data in that placeholder.</p> <p>Based on your latest update:</p> <p>By doing: <code>outputs = graph.get_tensor_by_name('output:0')</code> you are loading the placeholder named output. You don't need that, you need the operation that slices the output. In the part of the code where the graph creation is, do:</p> <pre><code>outputs = tf.identity(outputs[:,n_steps-1,:], name="prediction") </code></pre> <p>Then, when loading the model, load these two tensors:</p> <pre><code>X = graph.get_tensor_by_name('input:0') prediction = graph.get_tensor_by_name('prediction:0') </code></pre> <p>Lastly, to get predictions on the input you want:</p> <pre><code>sess = tf.Session() sess.run(tf.global_variables_initializer()) sess.run(prediction, feed_dict={X: x_test}) </code></pre>
python|python-3.x|tensorflow|machine-learning
1
3,929
54,549,519
Save arrays from loop in one txt file in columns
<p>I'm creating an array of zeros (587x1) in which I want to replace a zero with a one in a specific line of the array given as an index from another file. This part works fine in my code so far. </p> <p>Afterwards, I want to save all these newly created arrays in one txt file as columns next to each other, separated by tabs. What my code does, however, is overwriting the arrays. How do I manage to append them next to each other in one file?</p> <p>Thanks a lot for your help!</p> <p>Update: I managed to write all the arrays to one file, however, they are now simply printed on top of each other - how do I write them to columns appearing next to each other?</p> <pre><code>import os import numpy as np participants = ['001'] for vp in participants: with open('file.txt') as f: content = f.readlines() content = [x.strip() for x in content] content = map(int, content) f = open(outfile.txt, 'w') for line in content: with open('outfile.txt', 'a') as f: arr = np.zeros((587, 1), dtype = int) np.put(arr, [line], [1]) np.savetxt(f, arr, fmt='%i') f.close() </code></pre>
<p>You can try this code, that adds a given <code>(n,1)</code> shaped array as a column to a textfile that contains a <code>(n,m)</code> shaped matrix of integers:</p> <pre><code>def appendAsColumn(arr): fileContent = np.loadtxt('outfile.txt', dtype = int, ndmin = 2) fileContent = np.hstack((fileContent, arr.astype(int))) np.savetxt('outfile.txt', fileContent, fmt='%i') </code></pre> <p>Note that this will not work for the very first entry of the file, you will need to call</p> <pre><code>np.savetxt('outfile.txt', arr, fmt='%i') </code></pre> <p>for the first column.</p>
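<p>A small wrapper that handles the first column as well could look like this (a sketch; the helper name is made up, and the file name follows the question):</p>

<pre><code>import os
import numpy as np

def saveOrAppendColumn(arr):
    if not os.path.exists('outfile.txt'):
        np.savetxt('outfile.txt', arr, fmt='%i')   # very first column: create the file
    else:
        appendAsColumn(arr)                        # later columns: append with the function above
</code></pre>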
python|arrays|numpy
0
3,930
73,625,640
GUI for editing and saving a python pandas dataframe
<p>In a python function I want to show the user a pandas dataframe and let the user edit the cells in the dataframe. My function should use the edited values in that dataframe (i.e. they should be saved).</p> <p>I've tried pandasgui, but it does not seem to return the edits to the function.</p> <p>Is there a function/library I can use for this?</p>
<p>I recently solved this problem with <a href="https://github.com/man-group/dtale" rel="nofollow noreferrer">dtale</a></p>

<pre><code>import pandas as pd
import dtale

df = pd.read_csv('table_data.csv')

dt = dtale.show(df)   # create a dtale instance from our df
dt.open_browser()     # bring it up in a new tab (optional)

df = dt.data          # pull the data, including any cells edited in the dtale GUI, back into df
</code></pre>

<p>Yesterday I came across some bugs while using dtale. Filtering broke my changes and created some new rows I don't need. Usually I use dtale and pandasgui together. Hope it helps!</p>
python|pandas|user-interface
0
3,931
60,609,917
How can I use a generator to run my Neural Network model?
<p>I have a neural network model that run perfectly, but I am using a very large data and I try to use a generator to run the model, but it gives me the following error: </p> <pre><code>"UnimplementedError: File "&lt;ipython-input-63-352f4097b60f&gt;", line 146, in &lt;module&gt; validation_steps = len(df_valid)/batch_size, shuffle=True) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 1297, in fit_generator steps_name='steps_per_epoch') File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_generator.py", line 265, in model_iteration batch_outs = batch_function(*batch_data) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 973, in train_on_batch class_weight=class_weight, reset_metrics=reset_metrics) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2_utils.py", line 264, in train_on_batch output_loss_metrics=model._output_loss_metrics) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_eager.py", line 311, in train_on_batch output_loss_metrics=output_loss_metrics)) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_eager.py", line 252, in _process_single_batch training=training)) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_eager.py", line 166, in _model_loss per_sample_losses = loss_fn.call(targets[i], outs[i]) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\losses.py", line 221, in call return self.fn(y_true, y_pred, **self._fn_kwargs) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\losses.py", line 978, in sparse_categorical_crossentropy y_true, y_pred, from_logits=from_logits, axis=axis) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\backend.py", line 4530, in sparse_categorical_crossentropy target = cast(target, 'int64') File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\backend.py", line 1571, in cast return math_ops.cast(x, dtype) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\util\dispatch.py", line 180, in wrapper return target(*args, **kwargs) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\ops\math_ops.py", line 704, in cast x = gen_math_ops.cast(x, base_type, name=name) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\ops\gen_math_ops.py", line 2211, in cast _six.raise_from(_core._status_to_exception(e.code, message), None) File "&lt;string&gt;", line 3, in raise_from UnimplementedError: Cast string to int64 is not supported [Op:Cast] name: loss/dense_38_loss/Cast/ </code></pre> <p>What is wrong with the generator?</p> <pre><code>import numpy as np from sklearn.preprocessing import LabelEncoder, OneHotEncoder from keras.preprocessing.text import Tokenizer from keras.preprocessing import sequence from keras.utils import np_utils import pandas as pd batch_size = 64 # Read in the training and validation data df_train = pd.read_csv(r'C:\train.txt', sep='|', encoding='latin-1', low_memory=False) df_valid = pd.read_csv(r'C:\valid.txt', sep='|', encoding='latin-1', low_memory=False) print('training rows:', len(df_train)) print('validation rows:', len(df_valid)) def generator(df,vocab_size,batch_size, tokenizer,input_encoder,onehot): n_examples = len(df) number_of_batches = 
n_examples / batch_size counter = 0 while 1: start_index = counter * batch_size end_index = start_index + batch_size X_out1 = np.array(([batch_size, n_examples, vocab_size]), dtype=int) if counter &gt; number_of_batches + 1: # reshuffle dataframe and start over df.sample(frac=1).reset_index(drop=True) counter = 0 counter += 1 X_out1 = tokenizer.texts_to_sequences(df.iloc[start_index: end_index]['var1']) X_out1 = sequence.pad_sequences(X_out1, maxlen=200) X_out2 = df.iloc[start_index: end_index][['var2','var3']] X_out2 = input_encoder.transform(df.iloc[start_index: end_index][[ 'var2','var3']]) X_out2 = onehot.transform(df.iloc[start_index: end_index][[ 'var2','var3']]) Y_out = df.iloc[start_index: end_index]['code'] yield [X_out1, X_out2], [Y_out] tokenizer = Tokenizer() tokenizer.fit_on_texts(df_train['var1']) input_encoder = MultiColumnLabelEncoder() train_X2=df_train[['var2','var3']] valid_X2 =df_valid[['var2','var3']] input_encoder.fit(train_X2) onehot = OneHotEncoder(sparse=False,categories='auto') onehot.fit(train_X2) code_type = 'code' train_labels = df_train[code_type] valid_labels = df_valid[code_type] label_encoder = LabelEncoder() labels = set(df_train[code_type].tolist() + df_valid[code_type].tolist()) label_encoder.fit(list(labels)) n_classes = len(set(labels)) print('n_classes = %s' % n_classes) input_text = Input(shape=(200,), dtype='int32', name='input_text') meta_input = Input(shape=(2,), name='meta_input') embedding = Embedding(input_dim=len(tokenizer.word_index) + 1, output_dim=300, input_length=200)(input_text) lstm = Bidirectional(LSTM(units=128, dropout=0.5, recurrent_dropout=0.5, return_sequences=True), merge_mode='concat')(embedding) pool = GlobalMaxPooling1D()(lstm) dropout = Dropout(0.5)(pool) text_output = Dense(n_codes, activation='sigmoid', name='aux_output')(dropout) output = concatenate([text_output, meta_input]) output = Dense(n_codes, activation='relu')(output) main_output = Dense(n_codes, activation='softmax', name='main_output')(output) model = Model(inputs=[input_text,meta_input], outputs=[output]) optimer = Adam(lr=.001) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) # Generators train_generator = generator(df_train,vocab_size,batch_size, tokenizer,input_encoder,onehot) validation_generator = generator(df_valid,vocab_size,batch_size, tokenizer,input_encoder,onehot) model.summary() model.fit_generator(generator=train_generator, validation_data=validation_generator, epochs=20,steps_per_epoch = len(df_train)/batch_size, validation_steps = len(df_valid)/batch_size, shuffle=True) </code></pre>
<p>From what I can see the problem is a type mismatch due to the use of a CSV file. A number (the labels) is read from the CSV as a string, and it is not converted to an int automatically.</p> <p>These are the expected outputs (labels) which you read from the CSV:</p> <pre><code>Y_out = df.iloc[start_index: end_index]['code'] </code></pre> <p>It is probably a string; try printing <code>Y_out.dtypes</code> to confirm.</p> <p>The loss function then tries to cast the target to <code>int64</code> (the <code>cast</code> call in your traceback), and that cast fails because the target is still a string.</p>
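<p>A minimal fix, assuming the <code>code</code> column holds integer class labels stored as strings, is to cast the labels to a numeric type inside the generator before yielding (sketch):</p>

<pre><code>import numpy as np

# inside generator(), replacing the original Y_out line
Y_out = df.iloc[start_index: end_index]['code'].astype(np.int64).to_numpy()

# or, using the LabelEncoder already fitted in the question:
# Y_out = label_encoder.transform(df.iloc[start_index: end_index]['code'])
</code></pre>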
python|pandas
0
3,932
60,567,183
'tf.data()' throwing Your input ran out of data; interrupting training
<p>I'm seeing weird issues when trying to use <code>tf.data()</code> to generate data in batches with keras api. It keeps throwing errors saying it's running out of training_data.</p> <p><strong>TensorFlow 2.1</strong></p> <pre class="lang-py prettyprint-override"><code>import numpy as np import nibabel import tensorflow as tf from tensorflow.keras.layers import Conv3D, MaxPooling3D from tensorflow.keras.layers import Dense from tensorflow.keras.layers import Dropout from tensorflow.keras.layers import Flatten from tensorflow.keras import Model import os import random """Configure GPUs to prevent OOM errors""" gpus = tf.config.experimental.list_physical_devices('GPU') for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) """Retrieve file names""" ad_files = os.listdir("/home/asdf/OASIS/3D/ad/") cn_files = os.listdir("/home/asdf/OASIS/3D/cn/") sub_id_ad = [] sub_id_cn = [] """OASIS AD: 178 Subjects, 278 3T MRIs""" """OASIS CN: 588 Subjects, 1640 3T MRIs""" """Down-sampling CN to 278 MRIs""" random.Random(129).shuffle(ad_files) random.Random(129).shuffle(cn_files) """Split files for training""" ad_train = ad_files[0:276] cn_train = cn_files[0:276] """Shuffle Train data and Train labels""" train = ad_train + cn_train labels = np.concatenate((np.ones(len(ad_train)), np.zeros(len(cn_train))), axis=None) random.Random(129).shuffle(train) random.Random(129).shuffle(labels) print(len(train)) print(len(labels)) """Change working directory to OASIS/3D/all/""" os.chdir("/home/asdf/OASIS/3D/all/") """Create tf data pipeline""" def load_image(file, label): nifti = np.asarray(nibabel.load(file.numpy().decode('utf-8')).get_fdata()) xs, ys, zs = np.where(nifti != 0) nifti = nifti[min(xs):max(xs) + 1, min(ys):max(ys) + 1, min(zs):max(zs) + 1] nifti = nifti[0:100, 0:100, 0:100] nifti = np.reshape(nifti, (100, 100, 100, 1)) nifti = tf.convert_to_tensor(nifti, np.float64) return nifti, label @tf.autograph.experimental.do_not_convert def load_image_wrapper(file, labels): return tf.py_function(load_image, [file, labels], [tf.float64, tf.float64]) dataset = tf.data.Dataset.from_tensor_slices((train, labels)) dataset = dataset.shuffle(6, 129) dataset = dataset.repeat(50) dataset = dataset.map(load_image_wrapper, num_parallel_calls=6) dataset = dataset.batch(6) dataset = dataset.prefetch(buffer_size=1) iterator = iter(dataset) batch_images, batch_labels = iterator.get_next() ######################################################################################## with tf.device("/cpu:0"): with tf.device("/gpu:0"): model = tf.keras.Sequential() model.add(Conv3D(64, input_shape=(100, 100, 100, 1), data_format='channels_last', kernel_size=(7, 7, 7), strides=(2, 2, 2), padding='valid', activation='relu')) with tf.device("/gpu:1"): model.add(Conv3D(64, kernel_size=(3, 3, 3), padding='valid', activation='relu')) with tf.device("/gpu:2"): model.add(Conv3D(128, kernel_size=(3, 3, 3), padding='valid', activation='relu')) model.add(MaxPooling3D(pool_size=(2, 2, 2), padding='valid')) model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss=tf.keras.losses.binary_crossentropy, optimizer=tf.keras.optimizers.Adagrad(0.01), metrics=['accuracy']) ######################################################################################## model.fit(batch_images, batch_labels, steps_per_epoch=92, epochs=50) </code></pre> <p>After creating the dataset, I'm shuffling and adding the repeat parameter to the <code>num_of_epochs</code>, i.e. 50 in this case. 
This works, but it crashes after the 3rd epoch, and I can't seem to figure out what I'm doing wrong in this particular instance. Am I supossed to declare the repeat and shuffle statements at the top of the pipeline?</p> <p>Here is the error:</p> <pre><code>Epoch 3/50 92/6 [============================================================================================================================================================================================================================================================================================================================================================================================================================================================================] - 3s 36ms/sample - loss: 0.1902 - accuracy: 0.8043 Epoch 4/50 5/6 [========================&gt;.....] - ETA: 0s - loss: 0.2216 - accuracy: 0.80002020-03-06 15:18:17.804126: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] [[BiasAddGrad_3/_54]] 2020-03-06 15:18:17.804137: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] [[sequential/conv3d_3/Conv3D/ReadVariableOp/_21]] 2020-03-06 15:18:17.804140: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] [[Conv3DBackpropFilterV2_3/_68]] 2020-03-06 15:18:17.804263: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] [[sequential/dense/MatMul/ReadVariableOp/_30]] 2020-03-06 15:18:17.804364: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] [[BiasAddGrad_5/_62]] 2020-03-06 15:18:17.804561: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 4600 batches). 
You may need to use the repeat() f24/6 [========================================================================================================================] - 1s 36ms/sample - loss: 0.1673 - accuracy: 0.8750 Traceback (most recent call last): File "python_scripts/gpu_farm/tf_data_generator/3D_tf_data_generator.py", line 181, in &lt;module&gt; evaluation_ad = model.evaluate(ad_test, ad_test_labels, verbose=0) File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 930, in evaluate use_multiprocessing=use_multiprocessing) File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py", line 490, in evaluate use_multiprocessing=use_multiprocessing, **kwargs) File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py", line 426, in _model_iteration use_multiprocessing=use_multiprocessing) File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py", line 646, in _process_inputs x, y, sample_weight=sample_weights) File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 2383, in _standardize_user_data batch_size=batch_size) File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 2489, in _standardize_tensors y, self._feed_loss_fns, feed_output_shapes) File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_utils.py", line 810, in check_loss_and_target_compatibility ' while using as loss `' + loss_name + '`. ' ValueError: A target array with shape (5, 2) was passed for an output of shape (None, 1) while using as loss `binary_crossentropy`. This loss expects targets to have the same shape as the output. </code></pre> <p><strong>Update:</strong> So <code>model.fit()</code> should be supplied with <code>model.fit(x=data, y=labels)</code>, when using <code>tf.data()</code> because of a weird problem. This removes the <code>list out of index</code> error. And now I'm back to my original error. However it looks like this could be a tensorflow problem: <a href="https://github.com/tensorflow/tensorflow/issues/32" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/32</a></p> <p>So when I increase the batch size from 6 to higher numbers, and decrease the <code>steps_per_epoch</code>, it goes through more epochs without throwing the <code>StartAbort: Out of range</code> errors</p> <p><strong>Update2:</strong> As per @jkjung13 suggestion, <code>model.fit()</code> takes one parameter when using a dataset, <code>model.fit(x=batch)</code>. This is the correct implementation.</p> <p>But, you are supposed to supply the <code>dataset</code> instead of an iterable object if you're only using the <code>x</code> parameter in <code>model.fit()</code>.</p> <p>So, it should be: <code>model.fit(dataset, epochs=50, steps_per_epoch=46, validation_data=(v, v_labels))</code></p> <p>And with that I get a new error: <a href="https://github.com/tensorflow/tensorflow/issues/24520" rel="nofollow noreferrer">GitHub Issue</a></p> <p>Now to overcome this, I'm converting the dataset to a numpy_iterator(): <code>model.fit(dataset.as_numpy_iterator(), epochs=50, steps_per_epoch=46, validation_data=(v, v_labels))</code></p> <p>This solves the problem, however, the performance is appaling, similar to the old keras <code>model.fit_generator</code> without multiprocessing. So this defeats the whole purpose of 'tf.data'.</p>
<p><strong>TF 2.1</strong></p> <p>This is now working with the following parameters:</p> <pre><code>def load_image(file, label): nifti = np.asarray(nibabel.load(file.numpy().decode('utf-8')).get_fdata()).astype(np.float32) xs, ys, zs = np.where(nifti != 0) nifti = nifti[min(xs):max(xs) + 1, min(ys):max(ys) + 1, min(zs):max(zs) + 1] nifti = nifti[0:100, 0:100, 0:100] nifti = np.reshape(nifti, (100, 100, 100, 1)) return nifti, label @tf.autograph.experimental.do_not_convert def load_image_wrapper(file, label): return tf.py_function(load_image, [file, label], [tf.float64, tf.float64]) dataset = tf.data.Dataset.from_tensor_slices((train, labels)) dataset = dataset.map(load_image_wrapper, num_parallel_calls=32) dataset = dataset.prefetch(buffer_size=1) dataset = dataset.apply(tf.data.experimental.prefetch_to_device('/device:GPU:0', 1)) # So, my dataset size is 522, i.e. 522 MRI images. # I need to load the entire dataset as a batch. # This should exceed 60GiBs of RAM, but it doesn't go over 12GiB of RAM. # I'm not sure how tf.data batch() stores the data, maybe a custom file? # And also add a repeat parameter to iterate with each epoch. dataset = dataset.batch(522, drop_remainder=True).repeat() # Now initialise an iterator iterator = iter(dataset) # Create two objects, x &amp; y, from batch batch_image, batch_label = iterator.get_next() ################################################################################## with tf.device("/cpu:0"): with tf.device("/gpu:0"): model = tf.keras.Sequential() model.add(Conv3D(64, input_shape=(100, 100, 100, 1), data_format='channels_last', kernel_size=(7, 7, 7), strides=(2, 2, 2), padding='valid', activation='relu')) with tf.device("/gpu:1"): model.add(Conv3D(64, kernel_size=(3, 3, 3), padding='valid', activation='relu')) with tf.device("/gpu:2"): model.add(Conv3D(128, kernel_size=(3, 3, 3), padding='valid', activation='relu')) model.add(MaxPooling3D(pool_size=(2, 2, 2), padding='valid')) model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dropout(0.7)) model.add(Dense(1, activation='sigmoid')) model.compile(loss=tf.keras.losses.binary_crossentropy, optimizer=tf.keras.optimizers.Adagrad(0.01), metrics=['accuracy']) ################################################################################## # Now supply x=batch_image, y= batch_label to Keras' model.fit() # And finally, supply your batchs_size here! model.fit(batch_image, batch_label, epochs=100, batch_size=12) ################################################################################## </code></pre> <p>With this, it takes around 8 Minutes for the training to start. 
But once training starts, I'm seeing incredible speeds!</p> <pre><code>Epoch 30/100 522/522 [==============================] - 14s 26ms/sample - loss: 0.3526 - accuracy: 0.8640 Epoch 31/100 522/522 [==============================] - 15s 28ms/sample - loss: 0.3334 - accuracy: 0.8448 Epoch 32/100 522/522 [==============================] - 16s 31ms/sample - loss: 0.3308 - accuracy: 0.8697 Epoch 33/100 522/522 [==============================] - 14s 26ms/sample - loss: 0.2936 - accuracy: 0.8755 Epoch 34/100 522/522 [==============================] - 14s 26ms/sample - loss: 0.2935 - accuracy: 0.8851 Epoch 35/100 522/522 [==============================] - 14s 28ms/sample - loss: 0.3157 - accuracy: 0.8889 Epoch 36/100 522/522 [==============================] - 16s 31ms/sample - loss: 0.2910 - accuracy: 0.8851 Epoch 37/100 522/522 [==============================] - 14s 26ms/sample - loss: 0.2810 - accuracy: 0.8697 Epoch 38/100 522/522 [==============================] - 14s 26ms/sample - loss: 0.2536 - accuracy: 0.8966 Epoch 39/100 522/522 [==============================] - 16s 31ms/sample - loss: 0.2506 - accuracy: 0.9004 Epoch 40/100 522/522 [==============================] - 15s 28ms/sample - loss: 0.2353 - accuracy: 0.8927 Epoch 41/100 522/522 [==============================] - 14s 26ms/sample - loss: 0.2336 - accuracy: 0.9042 Epoch 42/100 522/522 [==============================] - 14s 26ms/sample - loss: 0.2243 - accuracy: 0.9234 Epoch 43/100 522/522 [==============================] - 15s 29ms/sample - loss: 0.2181 - accuracy: 0.9176 </code></pre> <p>15 seconds per epoch compared to the old 12 minutes per epoch!</p> <p>I will do further testing to see if this is actually working, and what impact it has on my test data. If there are any errors, I will come back and update this post.</p> <p>Why does this work? I have no idea. I couldn't find anything in the Keras documentation.</p>
python|tensorflow|keras|tensorflow-datasets
0
3,933
72,608,651
How to get the Pearson correlation between matrices
<p>This is the Python analog to <a href="https://stats.stackexchange.com/questions/24980/correlation-between-matrices-in-r">this question</a> asked about R. In summary, I have two numpy matrices of identical shape and I want to get their Pearson correlation. I just need one number. Feeding the matrices to np.corrcoef produces just another matrix for each position. Flattening the matrices into one line array (similar to what was suggested in R) also still returns a matrix rather than a single number.</p> <pre><code>a = np.matrix('1 2 3; 3 4 5; 1 2 4')
b = np.matrix('1 2 3; 4 3 5; 3 4 5')

np.corrcoef(a.flatten(),b.flatten())
np.corrcoef(np.squeeze(np.asarray(a)), np.squeeze(np.asarray(b)))
</code></pre> <p>How can I get the correlation between two matrices as a single number?</p> <p>---EDIT---</p> <p>A more advanced and comprehensive version of this question would be: How to get a correlation matrix containing the correlations of several matrices? For example:</p> <pre><code>a = np.matrix('1 2 3; 3 4 5; 1 2 4')
b = np.matrix('1 2 3; 4 3 5; 3 4 5')
c = np.matrix('1 2 3; 4 3 5; 3 4 5')

To produce something like

matrix([[1.        , 0.72280632, 1.        ],
        [0.72280632, 1.        , 1.        ],
        [1.        , 1.        , 1.        ]])
</code></pre>
<p>IIUC, you can <code>stack</code> and <code>reshape</code>:</p> <pre><code>l = [a, b, c] # only if matrices l = list(map(np.asarray, l)) x = np.stack(l).reshape(len(l), -1) np.corrcoef(x) </code></pre> <p>Output:</p> <pre><code>array([[1. , 0.72280632, 0.72280632], [0.72280632, 1. , 1. ], [0.72280632, 1. , 1. ]]) </code></pre>
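<p>For the original two-matrix case, the single number is simply the off-diagonal entry of the 2x2 correlation matrix (sketch):</p>

<pre><code>import numpy as np

a = np.array([[1, 2, 3], [3, 4, 5], [1, 2, 4]])
b = np.array([[1, 2, 3], [4, 3, 5], [3, 4, 5]])

# Pearson correlation between the two flattened matrices as one number
r = np.corrcoef(a.ravel(), b.ravel())[0, 1]
print(r)  # 0.72280632...
</code></pre>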
python|r|arrays|numpy|matrix
0
3,934
72,690,687
Reseting indexes when I have same name multi index in pandas
<p>I have this dataset:</p> <pre class="lang-py prettyprint-override"><code>mydf = pd.DataFrame({'date':pd.date_range('01/01/2020', periods=48, freq='15D'), 'value':np.random.randint(20,30,48)}) mydf date value 0 2020-01-01 20 1 2020-01-16 28 2 2020-01-31 23 3 2020-02-15 27 4 2020-03-01 25 5 2020-03-16 25 ... </code></pre> <p>And I want to count the values per each month, so I applied this:</p> <pre class="lang-py prettyprint-override"><code>mydf.groupby([mydf['date'].dt.year,mydf['date'].dt.month]).size() date date 2020 1 3 2 1 3 3 4 2 5 2 ... </code></pre> <p>Now, I want to reset the indexes, in order to have this expected result:</p> <pre class="lang-py prettyprint-override"><code>Year Month Size 2020 1 3 2020 2 1 2020 3 3 2020 4 2 2020 5 2 </code></pre> <p>So I tried this:</p> <pre class="lang-py prettyprint-override"><code>mydf.groupby([mydf['date'].dt.year,mydf['date'].dt.month]).size().\ to_frame().reset_index() </code></pre> <p>But I received this error:</p> <pre class="lang-py prettyprint-override"><code>ValueError: cannot insert date, already exists </code></pre> <p>Then I tried to rename the columns to avoid this problem, but I received a multi index list of tuples, which I don't want:</p> <pre class="lang-py prettyprint-override"><code>mydf.groupby([mydf['date'].dt.year,mydf['date'].dt.month]).size().\ to_frame().rename(columns={0:'Size'}).index.rename(['Year','Month']) MultiIndex([(2020, 1), (2020, 2), (2020, 3), (2020, 4), (2020, 5), ... </code></pre> <p>What I am doing wrong? Please, is that my index name assignment is wrong? Do I have to reset levels or drop levels?</p>
<p>A simple trick is that you can rename a Series inline, which avoids having two grouping keys both called <code>date</code> (the cause of the &quot;cannot insert date, already exists&quot; error).</p> <p>Instead of <code>mydf['date'].dt.year</code></p> <p>do <code>mydf['date'].dt.year.rename(&quot;year&quot;)</code>.</p>
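<p>Putting it together with the groupby from the question, a sketch that produces the expected <code>Year</code>/<code>Month</code>/<code>Size</code> frame:</p>

<pre><code>out = (mydf.groupby([mydf['date'].dt.year.rename('Year'),
                     mydf['date'].dt.month.rename('Month')])
           .size()
           .reset_index(name='Size'))
print(out)
</code></pre>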
python|pandas
1
3,935
72,663,020
Numpy - Vectorized calculation of a large csv file
<p>I have a 20 GB <code>trades.csv</code> file. It has two columns (trade_time and price). And the csv file contains 650 million rows.</p> <p><strong>Sample Data</strong></p> <p><a href="https://gist.github.com/dsstex/bc885ed04a6de98afc7102ed08b78608" rel="nofollow noreferrer">https://gist.github.com/dsstex/bc885ed04a6de98afc7102ed08b78608</a></p> <p><strong>Pandas DataFrame</strong></p> <pre><code>df = pd.read_csv(&quot;trades.csv&quot;, index_col=0, parse_dates=True) </code></pre> <p>I would like to check whether price is up or down based on percentage. If the price hits up_value (e.g. +1%) first, the result is 1. If the price hits down_value (e.g. -0.5%) first, then the result is 0. I need to do this for all 650 million rows.</p> <p>At the moment, the dataframe has only two columns. <code>trade_time(index), price</code>. I would like to have a new column named &quot;<strong>result</strong>&quot;.</p> <pre><code>import pandas as pd df = pd.read_csv(&quot;trades.csv&quot;, index_col=0, parse_dates=True) df[&quot;result&quot;] = None print(df) up_percentage = 0.2 down_percentage = 0.1 def calc_value_from_percentage(percentage, whole): return (percentage / 100) * whole def set_result(index): up_value = 0 down_value = 0 for _, current_row_price, _ in df.loc[index:].itertuples(): if up_value == 0 or down_value == 0: up_delta = calc_value_from_percentage(up_percentage, current_row_price) down_delta = calc_value_from_percentage(down_percentage, current_row_price) up_value = current_row_price + up_delta down_value = current_row_price - down_delta if current_row_price &gt; up_value: df.loc[index, &quot;result&quot;] = 1 return if current_row_price &lt; down_value: df.loc[index, &quot;result&quot;] = 0 return for ind, _, _ in df.itertuples(): set_result(ind) df.to_csv(&quot;results.csv&quot;, index=True, header=True) print(df) </code></pre> <p><strong>Results</strong></p> <p><a href="https://gist.github.com/dsstex/fe3759beedbf9c46ace382a7eef3d12c" rel="nofollow noreferrer">https://gist.github.com/dsstex/fe3759beedbf9c46ace382a7eef3d12c</a></p> <p>Note: Due to insufficient data, most of the bottom rows in the above file has the value &quot;None&quot; for &quot;result&quot;. So the value is blank.</p> <hr /> <p>At the moment, I'm using pandas <code>itertuples()</code> to process the file. I would like to have a vectorized solution since I have a huge file.</p> <p><strong>Note</strong>: Last month I asked <a href="https://stackoverflow.com/questions/72335956/python-multiprocessing-multiple-large-size-files-using-pandas">this question</a>. This is a follow-up question. And it is related to <a href="https://stackoverflow.com/a/72337982/18201044">this answer</a>. In that answer, the author is using a fixed size <code>up_value/down_value</code> of <code>200</code>. But I'm after a percentage based vectorized solution.</p> <p>Any help is greatly appreciated.</p> <p>Thanks</p>
<p>The original algorithm is super slow because it is doing a nested loop with iterrows/tuples.</p> <p>If I understood good, for <em>each</em> row, you check if <em>any</em> of the posterior rows reach to the &quot;fixed&quot; percentage. If it reaches <code>up</code>, you tag as 1, if it reaches <code>down</code> you tag 0, otherwise it is not tagged (<code>None</code>)</p> <p>I reached to this code. It is not vectorized, but it runs in my machine much faster than the initial question and the accepted solution.</p> <p>It could be that with 650M rows it becomes slower though.</p> <pre><code>import pandas as pd import numpy as np from time import time df = pd.read_csv(&quot;trades.csv&quot;, index_col=0, parse_dates=True) t0=time() up_percentage = 0.2 down_percentage = 0.1 # Precalculate the percentages df['upper'] = df['price']*(1+up_percentage/100) df['lower'] = df['price']*(1-down_percentage/100) pupper = np.array([np.argmax(df.price.values[n:] &gt; up_value) for n,up_value in enumerate(df.upper)])-1 plower = np.array([np.argmax(df.price.values[n:] &lt; down_value) for n,down_value in enumerate(df.lower)])-1 df[&quot;result&quot;] = None # These two cases occur when the index is not found, but no need to re-set to None. # df.loc[pupper&lt;0,'result']=None # df.loc[plower&lt;0,'result']=None # If the upper value is found and it occurs before the lower, set it to 1 df.loc[(pupper&gt;0)&amp;((plower&lt;0)|(pupper&lt;plower)),'result']=1 # If the upper value is found and it occurs before the lower, set it to 1 df.loc[(pupper&lt;0)&amp;(plower&gt;0),'result']=0 print(f&quot;{1000*(time()-t0):0.2f}ms&quot;) </code></pre> <p><strong>Benchmark:</strong> counting only the time to perform the operations, not loading/saving the CSV.</p> <ul> <li>Original: 19s</li> <li>Crissal's: 6537ms</li> <li>This code: 135ms</li> </ul> <p>Checked for equality running original code + proposed code as <code>df2</code> and comparing:</p> <pre><code>df3 = df.merge(df2, left_index=True, right_index=True).fillna('empty') print(f&quot;The result columns are equal: {(df3.result_x==df3.result_y).all()}&quot;) </code></pre>
python|pandas|dataframe|numpy
1
3,936
59,494,111
Pandas Python - value_counts() or idxmax() returns different value each time
<p>I have a Series which consists of a list of some random products. This is what it looks like if I print the describe:</p> <pre><code>&lt;bound method NDFrame.describe of 176 reversible jacket 231 the north face resolve 2 jacket 234 columbia pike lake jacket 279 girl's 7-16 knitworks skater belted dress faux... 303 flocked quilted jacket ... 7665 tommy hilfiger big boys wayne colorblocked bas... 7685 men's toronto raptors columbia red flash forwa... 7796 the north face uo exclusive topography fanorak... 7809 lauren ralph lauren solid ultraflex classic-fi... 7922 tommy hilfiger sport faux-sherpa colorblocked ... Name: desc, Length: 146, dtype: object&gt; &lt;class 'pandas.core.series.Series'&gt; </code></pre> <p>I have these 2 statements after this</p> <pre><code>max_occurence_prod = prod.where(prod.str.len() &gt; 1) curr_product = max_occurence_prod.value_counts().idxmax() </code></pre> <p>However, every time I run this piece of code, the value of <code>curr_product</code> seems to be different. For instance, the first time I ran this code, the value of <code>curr_product</code> was "<code>diamond quilted packable jacket</code>", the second time it was "<code>boys' logan jacket</code>"</p> <p>From my understanding the <code>value_counts()</code> function should return a Series which contains the count of unique values. If this Series is returned as the exact same each time, then shouldn't the <code>idxmax()</code> return the same corresponding value as well? I can't seem to figure out why it would return a different value each time.</p> <p>Here is the overall code</p> <pre><code>max_occurence_prod = prod.where(prod.str.len() &gt; 1) curr_product = max_occurence_prod.value_counts().idxmax() #new value is printed each time print(max_occurence_prod.value_counts().idxmax()) </code></pre> <p>Apologies if anything is unclear, I'm fairly new to Python and Pandas</p>
<p>It seems there has been <a href="https://stackoverflow.com/questions/51933763/pandas-series-value-counts-returns-inconsistent-order-for-equal-count-strings">previous issues</a> regarding how pandas <code>value_counts()</code> deals with tied values, in an inconsistent way.</p> <p>As for <code>idxmax()</code> the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.idxmax.html#pandas.Series.idxmax" rel="nofollow noreferrer">documentation</a> states clearly:</p> <blockquote> <p>If multiple values equal the maximum, the first row label with that value is returned.</p> </blockquote> <p>I am afraid the amount of information you provide is not enough for me to generate a full example with your data but here is an attempt:</p> <pre><code>import pandas as pd data = {'col_1':['a','a','b','b','c','c'],'col_2':['one','two','three','one','two','three']} df = pd.DataFrame(data) for i in range(3): print(df['col_1'].value_counts().idxmax()) </code></pre> <p>Run once in command:</p> <pre><code>c c c </code></pre> <p>Second time in command:</p> <pre><code>b b b </code></pre> <p>Third time:</p> <pre><code>a a a </code></pre> <p>The conclusion is that you are getting different values each time due to <code>value_counts()</code> and not <code>idxmax()</code>. Some solutions to make it always replicable is to use <code>sort_index()</code> too so that the output is not dependent on a random value. For example:</p> <pre><code>for i in range(3): print(df['col_1'].value_counts().sort_index().idxmax()) </code></pre> <p>Returns always:</p> <pre><code>a a a </code></pre>
python|python-3.x|pandas
1
3,937
32,463,573
converting python pandas column to numpy array in place
<p>I have a csv file in which one of the columns is a semicolon-delimited list of floating point numbers of variable length. For example:</p> <pre><code>Index List 0 900.0;300.0;899.2 1 123.4;887.3;900.1;985.3 </code></pre> <p>when I read this into a pandas DataFrame, the datatype for that column is object. I want to convert it, ideally in place, to a numpy array (or just a regular float array, it doesn't matter too much at this stage). </p> <p>I wrote a little function which takes a single one of those list elements and converts it to a numpy array:</p> <pre><code>def parse_list(data): data_list = data.split(';') return np.array(map(float, data_list)) </code></pre> <p>This works fine, but what I want to do is do this conversion directly in the DataFrame so that I can use pandasql and the like to manipulate the whole data set after the conversion. Can someone point me in the right direction?</p> <p>EDIT: I seem to have asked the question poorly. I would like to convert the following data frame:</p> <pre><code>Index List 0 900.0;300.0;899.2 1 123.4;887.3;900.1;985.3 </code></pre> <p>where the dtype of List is 'object'</p> <p>to the following dataframe:</p> <pre><code>Index List 0 [900.0, 300.0, 899.2] 1 [123.4, 887.3, 900.1, 985.3] </code></pre> <p>where the datatype of List is numpy array of floats</p> <p>EDIT2: some progress, thanks to the first answer. I now have the line:</p> <pre><code>df['List'] = df['List'].str.split(';') </code></pre> <p>which splits the column in place into an array, but the dtypes remain object When I then try to do</p> <pre><code>df['List'] = df['List'].astype(float) </code></pre> <p>I get the error: return arr.astype(dtype) ValueError: setting an array element with a sequence.</p>
<p>If I understand you correctly, you want to transform your data from pandas to numpy arrays. I used this:</p> <pre><code>pandas_DataName.as_matrix(columns=None) </code></pre> <p>And it worked for me. For more information visit <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.as_matrix.html" rel="nofollow noreferrer">here</a></p> <p>I hope this could help you.</p>
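<p>Note that <code>as_matrix()</code> was deprecated and later removed from pandas; on current versions the equivalent call is (sketch):</p>

<pre><code>arr = df.to_numpy()            # whole DataFrame as a numpy array
col = df['List'].to_numpy()    # a single column (df['List'].values also works)
</code></pre>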
python-2.7|csv|numpy|pandas
0
3,938
40,653,937
Numpy: how to find the unique local minimum of sub matrixes in matrix A?
<p>Given a matrix A of dimension MxN (4x4), how would one find the next-best minimum of each 2x2 submatrix?</p> <pre><code>A = array([[ 32673. , 15108.2 , 26767.2 , 9420. ], [ 32944.2 , 14604.01 , 26757.01 , 9127.2 ], [ 26551.2 , 9257.01 , 26595.01 , 9309.2 ], [ 26624. , 8935.2 , 26673.2 , 8982. ]]) </code></pre> <p>The next-best minimum of a set of submatrixes, is the minimum of that submatrix that does not conflict with the local position of other minima:</p> <p>Example Algorithm: </p> <pre class="lang-none prettyprint-override"><code>1. Find the minimum in A: 8935.2 global coords[3,1], local coords [1,1] 2. No other matrix has been evaluated so no conflict yet. 3. Find the next submatrix min: 8982. gc [3,3], lc [1,1] 4. Conflict exists, find next min in same submatrix: 9309.2 gc [2,3], lc [0,1] 5. Find next submatrix min: 9420 gc [0,3] lc[0,1] 6. Conflict exists, find next min: 26757.01 gc [1,2] lc [1,0] 7. Find next submatrix min: 14604 -- conflict with lc[1,1] 8. Find next submatrix min: 15108.2 -- conflict with lc [0,1] 9. Find next submatrix min: 32673. gc [0,0], lc [0,0] </code></pre> <p>one approach I have thought of trying is to follow the algorithm above, but instead of exhaustively searching each submatrix again, I globally update each submatrix local position with a 'high' value (>> max(A)), which is incremented on each successful find of a minima.</p> <p>The expected output would be a list:</p> <pre><code>[((0, 0), (0, 0), 32673), ((0, 1), (1, 0), 26757.01), ((1, 0), (1, 1), 8935.2), ((1, 1), (0, 1), 9309.2)] </code></pre> <p>of the form [((t1), (t2), value) ... ], where t1 is the coordinates of the submatrix in A, and t2 is the coordinates of the selected minimum in the submatrix.</p> <p><strong>Edit:</strong> the submatrices are defined as ZxZ, where MxN modulo ZxZ == 0, and are non-overlapping starting at (0,0), and tiled to match the dimensions of MxN.</p> <p>Edit: Below is a solution I've constructed, but it is slow. I suspect that that if I delete submatrices from the matrix on each iteration, then the performance might be improved, but I am not sure how to do this.</p> <pre><code> def get_mins(self, result): # result is the 2d array dim = 2 # 2x2 submatrix mins = [] count = 0 while count &lt; dim**2: a, b = result.shape M4D = result.reshape(a//dim, dim, b//dim, dim) lidx = M4D.transpose(0, 2, 1, 3).reshape(-1, b//dim, dim**2).argmin(-1) r, c = numpy.unravel_index(lidx, [dim, dim]) yy = M4D.min(axis=(1, 3)) ww = numpy.dstack((r, c)) super_min = numpy.unravel_index(numpy.argmin(yy), (dim, dim)) rows = super_min[0] cols = super_min[1] # ww[rows,cols] g_ves us 2x2 position offset_r, offset_c = ww[rows, cols] # super_min gives us submatrix position mins.append((tuple(super_min), (offset_r, offset_c), yy.min())) if dim &gt; 1: # update all other positions with inf &gt;&gt; max(result) result[numpy.ix_([offset_r + (d * dim) for d in range(dim)], [offset_c + (d * dim) for d in range(dim)])] = numpy.inf # update the submatrix to all == numpy.inf result[rows*dim:((rows*dim)+dim), cols*dim:((cols*dim)+dim)] = numpy.inf count += 1 return mins </code></pre>
<p>Given the dependency between iterations in choosing the global minimum, here's an approach with one-loop -</p> <pre><code>def unq_localmin(A, dim): m, n = A.shape M4D = A.reshape(m//dim, dim, n//dim, dim) M2Dr = M4D.swapaxes(1,2).reshape(-1,dim**2) a = M2Dr.copy() N = M2Dr.shape[0] R = np.empty(N,dtype=int) C = np.empty(N,dtype=int) shp = M2Dr.shape for i in range(N): r,c = np.unravel_index(np.argmin(a),shp) a[r] = np.inf a[:,c] = np.inf R[i], C[i] = r, c out = M2Dr[R,C] idr = np.column_stack(np.unravel_index(R,(dim,dim))) idc = np.column_stack(np.unravel_index(C,(dim,dim))) return zip(map(tuple,idr),map(tuple,idc),out) </code></pre> <p>Let's verify results with a random bigger <code>9x9</code> array and <code>3x3</code> submatrix/subarray to test out variety against OP's implementation <code>get_mins</code> -</p> <pre><code>In [66]: A # Input data array Out[66]: array([[ 927., 852., 18., 949., 933., 558., 519., 118., 82.], [ 939., 782., 178., 987., 534., 981., 879., 895., 407.], [ 968., 187., 539., 986., 506., 499., 529., 978., 567.], [ 767., 272., 881., 858., 621., 301., 675., 151., 670.], [ 874., 221., 72., 210., 273., 823., 784., 289., 425.], [ 621., 510., 303., 935., 88., 970., 278., 125., 669.], [ 702., 722., 620., 51., 845., 414., 154., 154., 635.], [ 600., 928., 540., 462., 772., 487., 196., 499., 208.], [ 654., 335., 258., 297., 649., 712., 292., 767., 819.]]) In [67]: unq_localmin(A, dim = 3) # Using proposed approach Out[67]: [((0, 0), (0, 2), 18.0), ((2, 1), (0, 0), 51.0), ((1, 0), (1, 2), 72.0), ((1, 1), (2, 1), 88.0), ((0, 2), (0, 1), 118.0), ((2, 2), (1, 0), 196.0), ((2, 0), (2, 2), 258.0), ((1, 2), (2, 0), 278.0), ((0, 1), (1, 1), 534.0)] In [68]: out = np.empty((9,9)) In [69]: get_mins(out,A) # Using OP's soln with dim = 3 edited Out[69]: [((0, 0), (0, 2), 18.0), ((2, 1), (0, 0), 51.0), ((1, 0), (1, 2), 72.0), ((1, 1), (2, 1), 88.0), ((0, 2), (0, 1), 118.0), ((2, 2), (1, 0), 196.0), ((2, 0), (2, 2), 258.0), ((1, 2), (2, 0), 278.0), ((0, 1), (1, 1), 534.0)] </code></pre> <p><strong>Simplification</strong></p> <p>The above solution gets us the row and col indices that could be used to construct the indices tuples as printed with <code>get_mins</code>. If you don't need those, we could simplify the proposed approach a bit, like so -</p> <pre><code>def unq_localmin_v2(A, dim): m, n = A.shape M4D = A.reshape(m//dim, dim, n//dim, dim) M2Dr = M4D.swapaxes(1,2).reshape(-1,dim**2) N = M2Dr.shape[0] out = np.empty(N) shp = M2Dr.shape for i in range(N): r,c = np.unravel_index(np.argmin(M2Dr),shp) out[i] = M2Dr[r,c] M2Dr[r] = np.inf M2Dr[:,c] = np.inf return out </code></pre> <p><strong>Runtime test -</strong></p> <pre><code>In [52]: A = np.random.randint(11,999,(9,9)).astype(float) In [53]: %timeit unq_localmin_v2(A, dim=3) 10000 loops, best of 3: 93.1 µs per loop In [54]: out = np.empty((9,9)) In [55]: %timeit get_mins(out,A) 1000 loops, best of 3: 907 µs per loop </code></pre>
python|algorithm|numpy|matrix
2
3,939
18,737,942
Clustering in python(scipy) with space and time variables
<p>The format of my dataset: [x-coordinate, y-coordinate, hour] with hour an integer value from 0 to 23.</p> <p>My question now is how can I cluster this data when I need an euclidean distance metric for the coordinates, but a different one for the hours (since d(23,0) is 23 in the euclidean distance metric). Is it possible to cluster data with different distance metrics for each feature in scipy? How? </p> <p>Thank you</p>
<p>You'll need to define your own metric, which handles "time" in an appropriate way. In the docs for <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.pdist.html" rel="nofollow">scipy.spatial.distance.pdist</a> you can define your own function</p> <pre><code>Y = pdist(X, f) </code></pre> <blockquote> <p>Computes the distance between all pairs of vectors in X using the user supplied 2-arity function f. [...] For example, Euclidean distance between the vectors could be computed as follows:</p> </blockquote> <pre><code>dm = pdist(X, lambda u, v: np.sqrt(((u-v)**2).sum())) </code></pre> <p>The metric can be passed to any scipy clustering algorithm, via the <code>metric</code> keyword. For example, using <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html#scipy.cluster.hierarchy.linkage" rel="nofollow"><code>linkage</code></a>:</p> <pre><code>scipy.cluster.hierarchy.linkage(y, method='single', metric='euclidean') </code></pre>
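<p>As a rough sketch of such a metric — assuming each row of <code>X</code> is <code>[x, y, hour]</code> and that the relative weighting of space versus time (<code>w_time</code> below) is something you would have to tune — the hour difference can be wrapped so that d(23, 0) becomes 1 instead of 23:</p>
<pre><code>import numpy as np
from scipy.spatial.distance import pdist

def space_time_dist(u, v, w_time=1.0):
    d_xy = np.sqrt((u[0] - v[0])**2 + (u[1] - v[1])**2)   # euclidean part on the coordinates
    dh = abs(u[2] - v[2])
    d_hour = min(dh, 24 - dh)                             # circular distance on the hour
    return d_xy + w_time * d_hour

X = np.array([[0.0, 0.0, 23], [1.0, 1.0, 0], [5.0, 5.0, 12]])
D = pdist(X, space_time_dist)
</code></pre>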
python|numpy|scipy|cluster-analysis|euclidean-distance
3
3,940
61,662,711
How can I get columns from Multi-Index?
<p>I have a dataframe called "keytable" which features a Multi-Index composed of 'Month', 'Day' and 'Hour'. I want to keep that multi-index while creating 3 new columns with the values of 'Month', 'Day' and 'Hour'. How can I do it? Here's the dataframe:</p> <pre><code>keytable.head() Out[59]: pp pres rad rh ... ws WeekDay Power_kW Power_kW18 Month Day Hour ... 1 3 0 0.0 1027.6 4.1 78.9 ... 0.0 3 77.303046 117.774419 1 0.0 1027.0 3.3 79.7 ... 0.0 3 72.319602 110.710928 2 0.0 1027.0 3.3 81.8 ... 0.0 3 71.831852 106.067667 3 0.0 1027.0 1.9 86.6 ... 0.0 3 69.555751 106.325955 4 0.0 1027.0 3.8 92.2 ... 0.0 3 69.525780 102.855393 [5 rows x 11 columns] </code></pre>
<p>To make new columns called 'Month', 'Day' and 'Hour', just <code>new_table = keytable.reset_index()</code> will work. Having the index duplicated as columns is a very poor practice, but if you really insist, then </p> <pre><code>newdf = new_table[['Month', 'Day', 'Hour']].set_index(['Month', 'Day', 'Hour']) new_table.set_index(newdf.index, inplace=True) </code></pre> <p>should work.</p>
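<p>Alternatively — a minimal sketch, assuming the index levels really are named <code>Month</code>, <code>Day</code> and <code>Hour</code> — you can copy each level into a column without touching the existing MultiIndex at all:</p>
<pre><code>for level in ['Month', 'Day', 'Hour']:
    keytable[level] = keytable.index.get_level_values(level)
</code></pre>
<p>This keeps the MultiIndex as-is and simply adds the three value columns next to the existing ones.</p>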
python|pandas|dataframe|multi-index
1
3,941
61,998,103
Trying to get consistent time format in Pandas
<p>I’m having an issue getting some timestamps into a consistent format.</p> <p>I have the timestamps ‘00:00:02.285932’ ‘00:00:07’ ‘00:00:11.366717’ ‘00:00:11.367594’ in pandas, read from a CSV file. I would like the second one to be consistent with the others: ‘00:00:07.000000’</p> <p>If I run <code>timestps = pd.to_datetime(timestps)</code> over the timestps data I do get the format with all the decimals, but it adds a date, which I can’t seem to remove without losing the time format. Any help would be appreciated. Thanks</p>
<p>you can either use <code>datetime</code> and simply not use the year/month/day in your code or convert to <code>timedelta</code>. methods available for both types are different so the choice depends on what you want to do... Ex:</p> <pre><code>import pandas as pd # example series: s = pd.Series(['00:00:02.285932', '00:00:07', '00:00:11.366717', '00:00:11.367594']) # cast string to datetime: s_dt = pd.to_datetime(s) ### extract a certain attribute: # s_dt.dt.second # 0 2 # 1 7 # 2 11 # 3 11 ### format to string for display: # s_dt.dt.strftime('%H:%M:%S.%f') # 0 00:00:02.285932 # 1 00:00:07.000000 # 2 00:00:11.366717 # 3 00:00:11.367594 # dtype: object # cast string to timedelta: s_td = pd.to_timedelta(s) # s_td # 0 00:00:02.285932 # 1 00:00:07 # 2 00:00:11.366717 # 3 00:00:11.367594 # dtype: timedelta64[ns] ### extract total seconds as float: # s_td.dt.total_seconds() # 0 2.285932 # 1 7.000000 # 2 11.366717 # 3 11.367594 </code></pre>
python|pandas|formatting|timestamp
0
3,942
58,161,122
How to filter rows in Python pandas dataframe with duplicate values in the columns to be filtered
<p><strong>Overall context:</strong></p> <p>I have a data frame that contains observations for every five minutes starting at 5 AM in the morning and ending at 8 PM in the evening for several days. I need to filter all the observations that start from 9 AM in the morning and end at 5 PM in the evening for every day.</p> <p>The input data frame looks like this:</p> <pre><code>Date Time 2019-09-20 05:00:00,..,.. 2019-09-20 05:05:00,..,.. ... 2019-09-20 09:00:00,..,.. ... 2019-09-20 17:00:00,..,.. 2019-09-20 17:05:00,..,.. ... 2019-09-20 20:00:00,..,.. 2019-09-21 05:00:00,..,.. 2019-09-21 05:05:00,..,.. ... 2019-09-21 09:00:00,..,.. ... 2019-09-21 17:00:00,..,.. 2019-09-21 17:05:00,..,.. ... 2019-09-21 20:00:00,..,.. </code></pre> <p>and the output data frame should look like this:</p> <pre><code>2019-09-20 09:00:00,..,.. ... 2019-09-20 17:00:00,..,.. 2019-09-21 09:00:00,..,.. ... 2019-09-21 17:00:00,..,.. </code></pre> <p><strong>Steps taken so far</strong></p> <p>In order to extract the rows between 9 am and 5 pm, I determined the number of seconds since midnight for every row by extracting the hours, minutes and seconds using vectorized data operations, so the input dataframe will have a column like</p> <pre><code>Date Time, Number of seconds since midnight 2019-09-20 05:00:00,xxxx,..,.. 2019-09-20 05:05:00,yyyy,..,.. ... 2019-09-21,05:00:00,xxxx,..,.. 2019-09-21, 05:05:00,yyyy,..,.. </code></pre> <p>Note that for the same time on every day, the number of seconds will remain the same. Now I was hoping to extract all the rows between 9 am and 5 pm by</p> <pre><code>df[(df['Number of seconds since midnight'] &gt; (nseconds for 9 am from midnight)) &amp; ((df['Number of seconds since midnight'] &lt; (nseconds for 5 pm from midnight)) </code></pre> <p>but I get the rows <strong><em>from only the last date between</em></strong> 9 am and 5 pm. To me, it looks like it is ignoring all the duplicate rows with the same value.</p> <p>Can anyone suggest a possible solution that does not iterate over each row and uses vectorized operations, as the database is very large.</p>
<p>Use the <code>hour</code> attribute of the datetime objects in your data: keep the rows whose hour is greater than or equal to 9 and less than or equal to 17 (5 PM), and add them to your resulting data frame or array.</p> <p>The following piece of code might help you:</p> <pre><code>dummy = [] for d in dt: if d.hour&gt;=9 and d.hour&lt;=17: dummy.append(d) print(dummy) </code></pre> <p>I have created my sample data from the following, and it works on multiple dates too:</p> <pre><code>start = datetime.datetime(2000, 1, 1) dt = np.array([start + datetime.timedelta(hours=i) for i in range(24)]) </code></pre> <p>Any corrections are welcome.</p>
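<p>Since the question asks for a vectorized approach, here is a rough sketch — assuming the timestamps live in a datetime column called <code>Date Time</code> (the exact column name is an assumption taken from the question's header):</p>
<pre><code>import pandas as pd

# parse the column first if it is still a string
df['Date Time'] = pd.to_datetime(df['Date Time'])

# boolean mask over the whole column at once, no explicit loop
hours = df['Date Time'].dt.hour
between_9_and_17 = df[(hours &gt;= 9) &amp; (hours &lt;= 17)]

# alternatively, with the timestamps as the index, between_time() also honours minutes
between_9_and_17 = df.set_index('Date Time').between_time('09:00', '17:00')
</code></pre>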
python|pandas|dataframe
2
3,943
57,745,575
How to update multiple rows and columns in pandas?
<p>I want to update multiple rows and columns in a CSV file, using <code>pandas</code></p> <p>I've tried using <code>iterrows()</code> method but it only works on a single column. </p> <p>here is the logic I want to apply for multiple rows and columns:</p> <pre class="lang-py prettyprint-override"><code>if(value &lt; mean): value += std_dev else: value -= std_dev </code></pre>
<p>Here is another way of doing it.</p> <p>Consider your data looks like this:</p> <pre><code> price strings value 0 1 A a 1 2 B b 2 3 C c 3 4 D d 4 5 E f </code></pre> <p>Now let's make the <code>strings</code> column the index:</p> <pre><code>df.set_index('strings', inplace=True) #Result price value strings A 1 a B 2 b C 3 c D 4 d E 5 f </code></pre> <p>Now set the values of rows <code>C, D, E</code> to <code>0</code>:</p> <pre><code>df.loc[['C', 'D','E']] = 0 #Result price value strings A 1 a B 2 b C 0 0 D 0 0 E 0 0 </code></pre> <p>or, more precisely, you can do</p> <pre><code>df.loc[df.strings.isin(["C", "D", "E"]), df.columns.difference(["strings"])] = 0 df Out[82]: price strings value 0 1 A a 1 2 B b 2 0 C 0 3 0 D 0 4 0 E 0 </code></pre>
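<p>To apply the exact rule from the question (add the standard deviation below the mean, subtract it above) without iterating, a hedged sketch — assuming the numeric column is called <code>value</code>, which is a made-up name — could be:</p>
<pre><code>import numpy as np

# 'value' is a hypothetical numeric column name
mean = df['value'].mean()
std_dev = df['value'].std()

# vectorised version of: value += std_dev if value &lt; mean else value -= std_dev
df['value'] = np.where(df['value'] &lt; mean,
                       df['value'] + std_dev,
                       df['value'] - std_dev)
</code></pre>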
python|pandas|csv|dataframe|data-manipulation
2
3,944
34,241,680
Returning only one boolean statement if two matrices identical
<p>Suppose there is a matrix <code>A</code> and a matrix <code>B</code>. Is there a logical statement that can return only one value, either <code>True</code> or <code>False</code> based on whether all elements of <code>A</code> are identical to all elements in <code>B</code>?</p> <p>For example <code>A = array([[1, 0, 0],[0, 1, 0]])</code> and <code>B = array([[1, 0, 0],[0, 1, 0]])</code>, <code>A == B</code> returns <code>True</code> and <code>False</code> per element of every row and every column</p>
<p>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.array_equal.html" rel="nofollow"><code>np.array_equal</code></a>.</p> <p>Also, you can apply <code>.all()</code> to the equality-bool-array you got by comparing <code>A==B</code>, like this:</p> <pre><code>(A==B).all() </code></pre> <p>The latter is slightly less efficient than the former (creates a temporary bool array), but just as common.</p> <p>If comparing floats, where you typically want the value to be close but not necessarily identical, use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html" rel="nofollow"><code>np.allclose</code></a>.</p>
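<p>For the arrays from the question, a quick usage example:</p>
<pre><code>import numpy as np

A = np.array([[1, 0, 0], [0, 1, 0]])
B = np.array([[1, 0, 0], [0, 1, 0]])

np.array_equal(A, B)   # True, also checks that the shapes match
(A == B).all()         # True
np.allclose(A, B)      # True, with a tolerance (useful for floats)
</code></pre>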
python|python-2.7|numpy|matrix
3
3,945
34,116,402
Tensorflow Convolution Neural Net - Training with a small dataset, applying random changes to Images
<p>Say I have a very small dataset, just 50 Images. I want to re-use the code from the tutorial at <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/tutorials/mnist/pros/index.md" rel="nofollow noreferrer">Red Pill</a>, but apply random transformations to the same set of Images in each Batch of training, say random changes to Brightness, Contrast etc. I added just one function:</p> <pre class="lang-py prettyprint-override"><code>def preprocessImages(x): retValue = numpy.empty_like(x) for i in range(50): image = x[i] image = tf.reshape(image, [28,28,1]) image = tf.image.random_brightness(image, max_delta=63) #image = tf.image.random_contrast(image, lower=0.2, upper=1.8) # Subtract off the mean and divide by the variance of the pixels. float_image = tf.image.per_image_whitening(image) float_image_Mat = sess.run(float_image) retValue[i] = float_image_Mat.reshape((28*28)) return retValue </code></pre> <p>Small change to the old code:</p> <pre class="lang-py prettyprint-override"><code>batch = mnist.train.next_batch(50) for i in range(1000): #batch = mnist.train.next_batch(50) if i%100 == 0: train_accuracy = accuracy.eval(feed_dict={ x:preprocessImages(batch[0]), y_: batch[1], keep_prob: 1.0}) print("step %d, training accuracy %g"%(i, train_accuracy)) train_step.run(feed_dict={x: preprocessImages(batch[0]), y_: batch[1], keep_prob: 0.5}) </code></pre> <p>First iteration is successful, thereafter it crashes:</p> <pre class="lang-py prettyprint-override"><code>step 0, training accuracy 0.02 W tensorflow/core/common_runtime/executor.cc:1027] 0x117e76c0 Compute status: Invalid argument: ReluGrad input is not finite. : Tensor had NaN values [[Node: gradients_4/Relu_12_grad/Relu_12/CheckNumerics = CheckNumerics[T=DT_FLOAT, message="ReluGrad input is not finite.", _device="/job:localhost/replica:0/task:0/cpu:0"](add_16)]] W tensorflow/core/common_runtime/executor.cc:1027] 0x117e76c0 Compute status: Invalid argument: ReluGrad input is not finite. : Tensor had NaN values [[Node: gradients_4/Relu_13_grad/Relu_13/CheckNumerics = CheckNumerics[T=DT_FLOAT, message="ReluGrad input is not finite.", _device="/job:localhost/replica:0/task:0/cpu:0"](add_17)]] W tensorflow/core/common_runtime/executor.cc:1027] 0x117e76c0 Compute status: Invalid argument: ReluGrad input is not finite. : Tensor had NaN values [[Node: gradients_4/Relu_14_grad/Relu_14/CheckNumerics = CheckNumerics[T=DT_FLOAT, message="ReluGrad input is not finite.", _device="/job:localhost/replica:0/task:0/cpu:0"](add_18)]] Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/media/sf_Data/mnistConv.py", line 69, in &lt;module&gt; train_step.run(feed_dict={x: preprocessImages(batch[0]), y_: batch[1], keep_prob: 0.5}) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1267, in run _run_using_default_session(self, feed_dict, self.graph, session) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2763, in _run_using_default_session session.run(operation, feed_dict) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 345, in run results = self._do_run(target_list, unique_fetch_targets, feed_dict_string) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 419, in _do_run e.code) tensorflow.python.framework.errors.InvalidArgumentError: ReluGrad input is not finite. 
: Tensor had NaN values [[Node: gradients_4/Relu_12_grad/Relu_12/CheckNumerics = CheckNumerics[T=DT_FLOAT, message="ReluGrad input is not finite.", _device="/job:localhost/replica:0/task:0/cpu:0"](add_16)]] Caused by op u'gradients_4/Relu_12_grad/Relu_12/CheckNumerics', defined at: File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/media/sf_Data/mnistConv.py", line 58, in &lt;module&gt; train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 165, in minimize gate_gradients=gate_gradients) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 205, in compute_gradients loss, var_list, gate_gradients=(gate_gradients == Optimizer.GATE_OP)) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients.py", line 414, in gradients in_grads = _AsList(grad_fn(op_wrapper, *out_grads)) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_grad.py", line 107, in _ReluGrad t = _VerifyTensor(op.inputs[0], op.name, "ReluGrad input is not finite.") File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_grad.py", line 100, in _VerifyTensor verify_input = array_ops.check_numerics(t, message=msg) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 48, in check_numerics name=name) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 633, in apply_op op_def=op_def) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1710, in create_op original_op=self._default_original_op, op_def=op_def) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 988, in __init__ self._traceback = _extract_stack() ...which was originally created as op u'Relu_12', defined at: File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/media/sf_Data/mnistConv.py", line 34, in &lt;module&gt; h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 506, in relu return _op_def_lib.apply_op("Relu", features=features, name=name) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 633, in apply_op op_def=op_def) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1710, in create_op original_op=self._default_original_op, op_def=op_def) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 988, in __init__ self._traceback = _extract_stack() </code></pre> <p>This is exactly the same error that I get with my personal dataset with 50 training examples. </p>
<p>One thing to start with: Instead of computing y_conv and then the cross-entropy, use the merged <code>tf.nn.softmax_cross_entropy_with_logits</code> operator. This may not solve your problem, but it's more numerically stable than the naive version in the Red Pill example.</p> <p>Second, try printing out the cross_entropy at every iteration.</p> <pre><code>cross_entropy = .... (previous code here) cross_entropy = tf.Print(cross_entropy, [cross_entropy], "Cross-entropy: ") </code></pre> <p>to get an idea if it's going to infinity as the model progresses, or if it just jumps to inf or NaN. If it progressively blows up, then it's probably the learning rate. If it jumps, it could be a numerical boundary condition that could be solved as above. If it's there from the get-go, you may have an error in the way you're applying distortions that ends up feeding in horribly broken data in some way.</p>
tensorflow|training-data|conv-neural-network
4
3,946
34,360,375
Python Pandas Dataframe filter and replace
<p>I constructed a dataframe which looks like this:</p> <pre><code>title category1 category2 category3 category4 'a' 0.44214 NAN 0.99 0.35 'b' NAN NAN NAN NAN 'c' 0.31 0.41 0.5 0.53 </code></pre> <p>For each row, I want to indicate the two highest values with 1 and all others with 0. </p> <p>Result should look like this:</p> <pre><code> title category1 category2 category3 category4 'a' 1 0 1 0 'b' 0 0 0 0 'c' 0 0 1 1 </code></pre> <p>Is there a built-in function to solve this, or how could it be implemented otherwise?</p>
<p>You can rank the rows (setting <code>axis=1</code>) in descending order all numeric values in the dataframe. Then do a boolean comparison to find the rank values less than or equal to two (<code>le(2)</code>), which would be rank values 1 and 2. Finally, convert the boolean mask to integers.</p> <pre><code>&gt;&gt;&gt; df.rank(axis=1, ascending=False, numeric_only=True).le(2).astype(int) category1 category2 category3 category4 title 'a' 1 0 1 0 'b' 0 0 0 0 'c' 0 0 1 1 </code></pre>
python|pandas|filter
2
3,947
73,195,941
Extract data and add to new column based on value id
<p>I am trying to extract elevation data from my stations information dataframe and add it to my rides dataframe.</p> <p>Take df1 and df2 for example:</p> <pre><code>df1 = pd.DataFrame( { &quot;Ride ID&quot;: [&quot;100&quot;, &quot;101&quot;, &quot;102&quot;, &quot;103&quot;], &quot;StartStation ID&quot;: [&quot;2&quot;, &quot;3&quot;, &quot;4&quot;, &quot;1&quot;], &quot;Endstation ID&quot;: [&quot;3&quot;, &quot;1&quot;, &quot;2&quot;, &quot;4&quot;], }) </code></pre> <pre><code>df2 = pd.DataFrame( { &quot;Station ID&quot;: [&quot;1&quot;, &quot;2&quot;, &quot;3&quot;, &quot;4&quot;], &quot;Elevation&quot;: [&quot;24&quot;, &quot;13&quot;, &quot;10&quot;, &quot;20&quot;], }) </code></pre> <p>I want to extract the elevation per station (based on ID number) and add this data to the main dataset, so I end up with this:</p> <pre><code>df3 = pd.DataFrame( { &quot;Ride ID&quot;: [&quot;100&quot;, &quot;101&quot;, &quot;102&quot;, &quot;103&quot;], &quot;StartStation ID&quot;: [&quot;2&quot;, &quot;3&quot;, &quot;4&quot;, &quot;1&quot;], &quot;StartStation Elevation&quot;: [&quot;13&quot;, &quot;10&quot;, &quot;20&quot;, &quot;24&quot;], &quot;Endstation ID&quot;: [&quot;3&quot;, &quot;1&quot;, &quot;2&quot;, &quot;4&quot;], &quot;Endstation Elevation&quot;: [&quot;10&quot;, &quot;24&quot;, &quot;13&quot;, &quot;20&quot;], }) </code></pre> <p>Should I use a loop or write a function to do this? I was thinking about a for loop with an if statement but I have not managed to make it work. Thank you.</p>
<p>A simple merge on the Station ID column will do. Try this:</p> <pre><code>pd.merge(df1, df2, left_on=['StartStation ID'], right_on=['Station ID']) </code></pre> <p>O/P:</p> <pre><code> Ride ID StartStation ID Endstation ID Station ID Elevation 0 100 2 3 2 13 1 101 3 1 3 10 2 102 4 2 4 20 3 103 1 4 1 24 </code></pre> <p>Note: rearrange the columns as you wish.</p>
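<p>The merge above only fills in the start-station elevation. To get both elevation columns of the desired <code>df3</code>, a hedged sketch is to merge <code>df2</code> twice, renaming its columns to match each key (the output column names are taken from the question):</p>
<pre><code>df3 = (df1
       .merge(df2.rename(columns={'Station ID': 'StartStation ID',
                                  'Elevation': 'StartStation Elevation'}),
              on='StartStation ID', how='left')
       .merge(df2.rename(columns={'Station ID': 'Endstation ID',
                                  'Elevation': 'Endstation Elevation'}),
              on='Endstation ID', how='left'))
</code></pre>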
python|pandas
0
3,948
73,259,957
create a dataframe from multiple JSON file with unique keys
<p>I have a JSON that looks something like this:</p> <pre><code>translation_map: str_empty: nl: {} bn: {} str_6df066da34e6: nl: value: &quot;value 1&quot; publishedAt: 16438 audio: &quot;value1474.mp3&quot; bn: value: &quot;value2&quot; publishedAt: 164322907 str_9036dfe313457: nl: value: &quot;value3&quot; publishedAt: 1647611912 audio: &quot;value3615.mp3&quot; bn: value: &quot;value4&quot; publishedAt: 1238641456 </code></pre> <p>I am trying to take some of the fields and put them into a dataframe that I can later export to a CSV, however I am having trouble with the unique keys I have this code which works for one unique value:</p> <pre><code>import os, json import pandas as pd # json files path_to_json = 'C:\\Users\\bob\\Videos\\Captures' json_files = [pos_json for pos_json in os.listdir(path_to_json) if pos_json.endswith('.json')] print(json_files) # define my pandas Dataframe columns jsons_data = pd.DataFrame(columns=['transcription', 'meaning', 'sound']) for index, js in enumerate(json_files): with open(os.path.join(path_to_json, js)) as json_file: json_text = json.load(json_file) transcription = json_text['translation_map']['str_6df066da34e6']['nl']['value'] sound = json_text['translation_map']['str_6df066da34e6']['nl']['audio'] meaning = json_text['translation_map']['str_6df066da34e6']['bn']['value'] jsons_data.loc[index] = [transcription, meaning, sound] # look at json data in our DataFrame print(jsons_data) </code></pre> <p>However, I am not sure how to loop through the unique values with this.</p>
<p>Use a <a href="https://www.w3schools.com/python/gloss_python_for_nested.asp" rel="nofollow noreferrer">nested loop</a> and <a href="https://docs.python.org/3.8/library/stdtypes.html#dict.values" rel="nofollow noreferrer"><code>dict.values()</code></a> like so (the sample data below also includes an empty <code>str_empty</code> entry, as in your JSON, to exercise the fallback branch):</p> <pre><code>json_text = { &quot;translation_map&quot;: { &quot;str_9asihdu7dcb&quot;: { &quot;nl&quot;: { &quot;value&quot;: &quot;value2&quot;, &quot;audio&quot;: &quot;8007.mp3&quot; }, &quot;bn&quot;: { &quot;value&quot;: &quot;value4&quot; } }, &quot;str_f4c8ashuh524&quot;: { &quot;nl&quot;: { &quot;value&quot;: &quot;value1&quot;, &quot;audio&quot;: &quot;8026.mp3&quot; }, &quot;bn&quot;: { &quot;value&quot;: &quot;Maet.&quot; } }, &quot;str_39asjashfk6&quot;: { &quot;nl&quot;: { &quot;value&quot;: &quot;value5&quot;, &quot;audio&quot;: &quot;40.mp3&quot; }, &quot;bn&quot;: { &quot;value&quot;: &quot;value4&quot; } }, &quot;str_empty&quot;: { &quot;nl&quot;: {}, &quot;bn&quot;: {} } } } for translation_map in json_text: for v in json_text[translation_map].values(): if v[&quot;nl&quot;]: transcription = v[&quot;nl&quot;][&quot;value&quot;] sound = v[&quot;nl&quot;][&quot;audio&quot;] else: transcription = &quot;empty&quot; sound = &quot;empty&quot; if v[&quot;bn&quot;]: meaning = v[&quot;bn&quot;][&quot;value&quot;] else: meaning = &quot;empty&quot; print(transcription, sound, meaning) </code></pre> <p>Output</p> <pre><code>value2 8007.mp3 value4 value1 8026.mp3 Maet. value5 40.mp3 value4 empty empty empty </code></pre>
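<p>Since the end goal was a DataFrame/CSV rather than prints, a possible follow-up — assuming <code>pandas</code> is imported as <code>pd</code>, as in the question, and with an arbitrary output file name — is to collect the rows and build the frame in one go:</p>
<pre><code>rows = []
for translation_map in json_text:
    for v in json_text[translation_map].values():
        rows.append({
            'transcription': v['nl'].get('value', 'empty') if v['nl'] else 'empty',
            'sound': v['nl'].get('audio', 'empty') if v['nl'] else 'empty',
            'meaning': v['bn'].get('value', 'empty') if v['bn'] else 'empty',
        })

jsons_data = pd.DataFrame(rows, columns=['transcription', 'meaning', 'sound'])
jsons_data.to_csv('output.csv', index=False)
</code></pre>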
python|json|pandas|dataframe
1
3,949
73,312,440
Pandas append/concat two values from a dictionary object into a data frame
<p>I am trying to combine a set of two values from a dictionary object into a data frame column I am replacing. This is what my fruits column data frame looks like:</p> <pre><code>Fruits ------ {'type': 'Apples - Green', 'storeNum': '123456', 'kind': 'Granny Smith'} {'type': 'Banana', 'storeNum': '246810', 'kind': 'Organic'} {'type': 'Orange', 'storeNum': '36912', 'kind': 'Cuties - Small'} </code></pre> <p>What I would like is this:</p> <pre><code>Fruits ------ Apples - Green | Granny Smith Banana | Organic Orange | Cuties - Small </code></pre> <p>I have this so far but I only get the types. Is there anyway I can combine the 'type' and 'kind' together and replace that df? I have this code so far:</p> <pre><code>def funct(df): fruits_list = [] for i in range(len(df['Fruits'])): fruits_list.append(list(df['Fruits'][i].values())[0]) df['Fruits'] = fruits_list return df dataframe = funct(df) </code></pre>
<p>You can concatenate string columns with <code>+</code>.</p> <pre class="lang-py prettyprint-override"><code>data = [{'type': 'Apples - Green', 'storeNum': '123456', 'kind': 'Granny Smith'}, {'type': 'Banana', 'storeNum': '246810', 'kind': 'Organic'}, {'type': 'Orange', 'storeNum': '36912', 'kind': 'Cuties - Small'}] df = pd.DataFrame({&quot;Fruits&quot;: data}) fruits = pd.DataFrame.from_records(df.Fruits) print(fruits.type + &quot; | &quot; + fruits.kind) </code></pre> <p>Returns</p> <pre class="lang-py prettyprint-override"><code>0 Apples - Green | Granny Smith 1 Banana | Organic 2 Orange | Cuties - Small dtype: object </code></pre> <p>To assign it to the dataframe, you need to do</p> <pre class="lang-py prettyprint-override"><code>df['Fruits'] = fruits.type + &quot; | &quot; + fruits.kind </code></pre>
python|pandas
1
3,950
35,119,310
How to sort a dataframe based on idxmax?
<p>I have a dataframe like this:</p> <pre><code> A B C 0 1 2 1 1 3 -8 10 2 10 3 -20 3 50 7 1 </code></pre> <p>I would like to rearrange its columns based on the index of the maximal absolute value in each column. In column <code>A</code>, the maximal absolute value is in row 3, in <code>B</code> it is row 1 and in <code>C</code> it is row 2 which means that my new dataframe should be in the order <code>B C A</code>.</p> <p>Currently I do this as follows:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'A': [1, 3, 10, 50], 'B': [2, -8, 3, 7], 'C': [1, 10, -20, 1]}) indMax = abs(df).idxmax(axis=0) df = df[np.argsort(indMax)] </code></pre> <p>So I first determine the indices of the maximal value per column which are stored in <code>indMax</code>, then I sort it and rearrange the dataframe accordingly which gives me the desired output:</p> <pre><code> B C A 0 2 1 1 1 -8 10 3 2 3 -20 10 3 7 1 50 </code></pre> <p>My question is whether there is the possibility to pass the function <code>idxmax</code> directly to a <code>sort</code> function and change the dataframe <code>inplace</code>.</p>
<p>This is ugly, but it seems to work using <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.reindex_axis.html" rel="nofollow"><code>reindex_axis</code></a>:</p> <pre><code>import numpy as np &gt;&gt;&gt; df.reindex_axis(df.columns[list(np.argsort(abs(df).idxmax(axis=0)))], axis=1) B C A 0 2 1 1 1 -8 10 3 2 3 -20 10 3 7 1 50 </code></pre>
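<p>Note that <code>reindex_axis</code> was deprecated and later removed from pandas; on current versions the same idea can be written with plain <code>reindex</code> — a sketch:</p>
<pre><code>import numpy as np

order = np.argsort(abs(df).idxmax(axis=0).to_numpy())   # column order by row of each max
df = df.reindex(columns=df.columns[order])
</code></pre>
<p>This still builds a new frame rather than sorting in place, which (as far as I know) is unavoidable when reordering columns.</p>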
python|sorting|pandas
1
3,951
35,273,135
Get a value in a numpy array from a index in a variable
<p>I am trying to access a value in a multi-dimensional numpy array. This can be easily done when you know everything up front, for example:</p> <p><code>T = numpy.arange(9).reshape(3, 3) T[2, 2]</code></p> <p>And it returns 8, which is what I want. Now, let's assume <code>[2, 2]</code> is stored in the <code>index</code> variable. How can I take the value in T with the index stored in <code>index</code>? I would like to do <code>T[index]</code> but it returns the last row twice (pretty logical but not what I want).</p> <p>Thank you!</p>
<p>Try </p> <pre><code>ind = (2, 2) T[ind] </code></pre> <p><code>T[2,2]</code> is the same as <code>T[(2,2)]</code> which is translated into a method call: <code>T.__getitem__((2,2))</code>.</p> <p>Some <code>numpy</code> functions build an index as a list or array, then convert it to a <code>tuple</code> for use in the index.</p>
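<p>A short usage example of the difference (nothing here beyond what the question already sets up):</p>
<pre><code>import numpy as np

T = np.arange(9).reshape(3, 3)
index = [2, 2]          # the index held in a variable, as a list

T[tuple(index)]         # 8 -- a single element, same as T[2, 2]
T[index]                # fancy indexing: rows 2 and 2, i.e. the last row twice
</code></pre>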
python|arrays|numpy|indexing
2
3,952
30,818,367
How to present numpy array into pygame surface?
<p>I'm writing code, part of which reads an image source and displays it on the screen for the user to interact with. I also need the sharpened image data. I use the following to read the data and display it in <code>pyGame</code></p> <pre><code>def image_and_sharpen_array(file_name): #read the image data and return it, with the sharpened image image = misc.imread(file_name) blurred = ndimage.gaussian_filter(image,3) edge = ndimage.gaussian_filter(blurred,1) alpha = 20 out = blurred + alpha*(blurred - edge) return image,out #get image data scan,sharpen = image_and_sharpen_array('foo.jpg') w,h,c = scan.shape #setting up pygame pygame.init() screen = pygame.display.set_mode((w,h)) pygame.surfarray.blit_array(screen,scan) pygame.display.update() </code></pre> <p>And the image is displayed on the screen only rotated and inverted. Is this due to differences between <code>misc.imread</code> and <code>pyGame</code>? Or is this due to something wrong in my code? </p> <p>Is there another way to do this? The majority of solutions I read involved saving the figure and then reading it with pyGame. </p>
<p>I often use the numpy <code>swapaxes()</code> method: In this case we only need to invert x and y axis (axis number 0 and 1) before displaying our array :</p> <blockquote> <pre><code>return image.swapaxes(0,1),out </code></pre> </blockquote>
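<p>Another option — a sketch, assuming <code>scan</code> is the uint8 array returned by <code>misc.imread</code> as in the question — is to build a Surface from the swapped array and blit that, instead of writing into the display surface directly:</p>
<pre><code>import pygame

# make_surface expects a (width, height, 3) array, hence the swapaxes
surface = pygame.surfarray.make_surface(scan.swapaxes(0, 1))
screen.blit(surface, (0, 0))
pygame.display.update()
</code></pre>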
python|numpy|pygame|pygame-surface
4
3,953
67,249,850
Pandas: how to select columns to be displayed in groupby results?
<p>I have a dataframe with 10 columns from which I want to list some columns of rows where <code>ROUGE_L</code> is maximum grouped by <code>Model</code>, I tried:</p> <pre><code>sdf = df.groupby(['Model','Checkpoint'])['ROUGE_L'].max() </code></pre> <p>It prints:</p> <pre><code>Model Checkpoint ROUGE_L 4 1005100 0.204 1010200 0.202 1015300 0.205 1020400 0.203 1025500 0.204 ... 16000 1030600 0.396 1035700 0.396 1040800 0.408 </code></pre> <p>But I look for this:</p> <pre><code>Model Checkpoint ROUGE_L 4 1005300 0.205 16000 1040800 0.408 </code></pre> <p>I didn't find a statement that does above in similar questions.</p>
<p>Need more precision about your original dataframe but the code below should work:</p> <pre><code>&gt;&gt;&gt; df.loc[df.groupby(&quot;Model&quot;)[&quot;ROUGE_L&quot;].idxmax()] Model Checkpoint ROUGE_L 2 4 1015300 0.205 7 16000 1040800 0.408 </code></pre> <p>To select columns, append <code>[[&quot;Model&quot;, &quot;Checkpoint&quot;, &quot;ROUGE_L&quot;]]</code> at the previous instruction:</p> <pre><code>&gt;&gt;&gt; df.loc[df.groupby(&quot;Model&quot;)[&quot;ROUGE_L&quot;].idxmax()][[&quot;Model&quot;, &quot;Checkpoint&quot;, &quot;ROUGE_L&quot;]] Model Checkpoint ROUGE_L 2 4 1015300 0.205 7 16000 1040800 0.408 </code></pre>
python|pandas
1
3,954
67,309,104
BertForTokenClassification Has Extra Output
<p>I am using PyTorch's BertForTokenClassification pretrained model to do custom word tagging (not NER or POS, but essentially the same). There are 19 different possible tags (using BIO scheme): 9 B's, 9 I's, and an O. Despite there being 19 possible tags, the feed-forward layer that is added on top of BERT has 20 tags. I have used other datasets, too, and the result is the same: there is always one more output than the number of classes. Can anyone tell me why this is?</p>
<p>I figured it out. The reason is that I was not accounting for the <code>PAD</code> token.</p>
pytorch|bert-language-model|huggingface-transformers|named-entity-recognition
0
3,955
67,500,461
How does NumPy compute the inverse of a matrix?
<p>Given a square matrix <code>A</code> as a <a href="https://numpy.org" rel="nofollow noreferrer">NumPy</a> array, like</p> <pre class="lang-py prettyprint-override"><code>import numpy as np A = np.array( [ [1, 2, 3], [3, 4, 6], [7, 8, 9], ] ) </code></pre> <p>which algorithm does NumPy's <a href="https://numpy.org/doc/stable/reference/generated/numpy.linalg.inv.html" rel="nofollow noreferrer"><code>linalg.inv</code></a> use internally when</p> <pre><code>np.linalg.inv(A) </code></pre> <p>is invoked to compute the matrix inverse of <code>A</code>?</p> <p>Particularly, as matrix inversion may be numerically unstable (depending on the <a href="https://en.wikipedia.org/wiki/Condition_number" rel="nofollow noreferrer">condition number</a> of the matrix), are there any special cases considered depending on certain matrix properties?</p>
<p>Following @projjal 's comment, all of these are equivalent to compute the inverse of a square matrix:</p> <pre><code>import numpy as np from scipy.linalg import lu_factor, lu_solve A = np.array([[1, 2, 3],[3, 4, 6],[7, 8, 9]]) A_inv_1 = np.linalg.inv(A) A_inv_2 = np.linalg.solve(A,np.eye(A.shape[0])) A_LU = lu_factor(A) # this way, you can potentially reuse the factorization for different RHS A_inv_3 = lu_solve(A_LU,np.eye(A.shape[0])) # check np.allclose(A_inv_1,A_inv_2) &gt;&gt;&gt; True np.allclose(A_inv_1,A_inv_3) &gt;&gt;&gt; True </code></pre>
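<p>Regarding the stability concern in the question, a quick sanity check before inverting is to look at the condition number; this is just a sketch, and the threshold below is an arbitrary choice:</p>
<pre><code>cond = np.linalg.cond(A)
if cond &gt; 1e12:
    print('A is ill-conditioned (cond = %.3g); the computed inverse may be inaccurate' % cond)
</code></pre>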
python|numpy|linear-algebra
1
3,956
59,909,904
How to reshape numpy array of shape (4, 1, 1) into (4, 2, 1)?
<p>Suppose I have a numpy array</p> <pre><code>arr = np.array([1, 4, 4, 5]).reshape((4, 1, 1)) </code></pre> <p>Now I want to reshape <code>arr</code> into <code>arr1</code> such that</p> <pre><code>&gt;&gt;&gt; print(arr1) [[[1] [1]] [[4] [4]] [[4] [4]] [[5] [5]]] &gt;&gt;&gt; arr1.shape (4, 2, 1) </code></pre> <p>Please help I really got stuck at this.</p>
<pre><code>In [484]: arr = np.array([1, 4, 4, 5]).reshape((4, 1, 1)) In [485]: np.concatenate([arr,arr],axis=1) Out[485]: array([[[1], [1]], [[4], [4]], [[4], [4]], [[5], [5]]]) In [486]: np.repeat(arr,2,1) Out[486]: array([[[1], [1]], [[4], [4]], [[4], [4]], [[5], [5]]]) </code></pre> <p>Speeds are similar; with a slight edge for the <code>repeat</code>, but not enough to fight over. <code>np.hstack</code> is a concatenate on axis 1.</p>
numpy
0
3,957
59,907,842
Explode List containing many dictionaries in Pandas dataframe
<p>I am having a dataset which look like follows(in dataframe):</p> <pre><code>**_id** **paper_title** **references** **full_text** 1 XYZ [{'abc':'something','def':'something'},{'def':'something'},...many others] something 2 XYZ [{'abc':'something','def':'something'},{'def':'something'},...many others] something 3 XYZ [{'abc':'something'},{'def':'something'},...many others] something </code></pre> <p>Expected:</p> <pre><code>**_id** **paper_title** **abc** **def** **full_text** 1 XYZ something something something something something . . (all the dic in list with respect to_id column) 2 XYZ something something something something something . . (all the dic in list with respect to_id column) </code></pre> <p>I have tried <code>df['column_name'].apply(pd.Series).apply(pd.Series)</code> to split the list and dictionaries into columns of dataframe but doesn't help as it didn't split dictionaries.</p> <p><strong>First row of my dataframe:</strong> <a href="https://i.stack.imgur.com/SbquL.png" rel="nofollow noreferrer">df.head(1)</a></p>
<p>Assuming your original DataFrame is a list of dictionaries with one key:value pair and a key named 'reference':</p> <pre><code>print(df) id paper_title references full_text 0 1 xyz [{'reference': 'description1'}, {'reference': ... some text 1 2 xyz [{'reference': 'descriptiona'}, {'reference': ... more text 2 3 xyz [{'reference': 'descriptioni'}, {'reference': ... even more text </code></pre> <p>Then you can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> to separate out your references with their index:</p> <pre><code>df1 = pd.concat([pd.DataFrame(i) for i in df['references']], keys = df.index).reset_index(level=1,drop=True) print(df1) reference 0 description1 0 description2 0 description3 1 descriptiona 1 descriptionb 1 descriptionc 2 descriptioni 2 descriptionii 2 descriptioniii </code></pre> <p>Then use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a> to join the columns back together on their index:</p> <pre><code>df = df.drop('references', axis=1).join(df1).reset_index(drop=True) print(df) id paper_title full_text reference 0 1 xyz some text description1 1 1 xyz some text description2 2 1 xyz some text description3 3 2 xyz more text descriptiona 4 2 xyz more text descriptionb 5 2 xyz more text descriptionc 6 3 xyz even more text descriptioni 7 3 xyz even more text descriptionii 8 3 xyz even more text descriptioniii </code></pre>
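<p>On more recent pandas (0.25+ for <code>DataFrame.explode</code>, 1.0+ for <code>pd.json_normalize</code>), a shorter sketch of the same idea could be:</p>
<pre><code>tmp = df.explode('references').reset_index(drop=True)
refs = pd.json_normalize(tmp['references'].tolist())   # one column per dict key
out = pd.concat([tmp.drop(columns='references'), refs], axis=1)
</code></pre>
<p>This assumes every entry of <code>references</code> is a non-empty list of dicts; missing values would need a <code>dropna</code> or a fill step first.</p>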
python|pandas|dataframe|machine-learning|data-cleaning
2
3,958
60,032,032
python Keyword matching(keyword list - column)
<p>Suppose a dataset like this,</p> <pre><code> Name Value 0 K Ieatapple 1 Y bananaisdelicious 2 B orangelikesomething 3 Q bluegrape 4 C appleislike </code></pre> <p>and I have a keyword list like</p> <pre><code>[apple, banana] </code></pre> <p>In this dataset, I want to match the column 'Value' against the [keyword list] </p> <p>*I mean a match is when a keyword from the list appears in 'Value'</p> <p>I would like to see how the keywords in the list match the column, so I want to find out what the matching rate is.</p> <p>Ultimately, what I want to know is the match rate (percentage) between the keywords and the column, and, if possible, the filtered dataframe.</p> <p>Thank you.</p> <p><strong>Edit</strong></p> <p>In my real dataset, the keywords appear inside a sentence, </p> <p>Ex, </p> <pre><code>Ilikeapplethanbananaandorange </code></pre> <p>so it doesn't work if I use keyword-to-keyword matching (1:1).</p>
<p>Use <code>str.contains</code> to match words to your sentences:</p> <pre><code>keywords = ['apple', 'banana'] df['Value'].str.contains("|".join(keywords)).sum() / len(df) # 0.6 </code></pre> <p>Or if you want to keep the rows:</p> <pre><code> df[df['Value'].str.contains("|".join(keywords))] Name Value 0 K I eat apple 1 Y banana is delicious 4 C appleislike </code></pre> <hr> <p><strong>More details</strong></p> <p>The pipe <code>|</code> is the <code>or</code> operator in regular expression:</p> <p>So we join our list of words with a pipe to match one of these words:</p> <pre><code>&gt;&gt;&gt; keywords = ['apple', 'banana'] &gt;&gt;&gt; "|".join(keywords) 'apple|banana' </code></pre> <p>So in regular expression we have the statement now:</p> <blockquote> <p><em>match rows where the sentence contains "apple" OR "banana"</em></p> </blockquote>
python|pandas|dataframe|match|keyword
2
3,959
60,128,513
sum function issue
<p>I am not good at Python yet; I have been trying something for a long time but couldn't get it to work. I want to sum the values in a column, but I get an error like this:</p> <pre><code>&lt;lambda&gt;() missing 2 required positional arguments: 'y' and 'z' </code></pre> <p>This is the code:</p> <pre><code>threshold = sum(data2.budget)/len(data2.budget) print(threshold) data2["budget_level"] = ["high" if i &gt; threshold else "low" for i in data2.budget] data2.loc[:10,["budget_level","budget"]] </code></pre> <p>This is the full traceback:</p> <pre><code>TypeError Traceback (most recent call last) &lt;ipython-input-240-f1303a50f5b0&gt; in &lt;module&gt; ----&gt; 1 threshold = sum(data2.budget)/len(data2.budget) 2 print(threshold) 3 data2["budget_level"] = ["high" if i &gt; threshold else "low" for i in data.budget] 4 data2.loc[:10,["budget_level","budget"]] TypeError: &lt;lambda&gt;() missing 2 required positional arguments: 'y' and 'z' </code></pre> <p>I am writing this code from a source, but that person did not get an error and I did. What can I do? Thanks. </p>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.sum.html" rel="nofollow noreferrer"><code>pd.Series.sum</code></a> and check threshold with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.gt.html" rel="nofollow noreferrer"><code>pd.Series.gt</code></a>. Then we use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where</code></a> to get <code>high</code> or <code>low</code> instead <code>True</code> or <code>False</code> in <code>np.array</code> that we assign to <code>data2["budget_level"]</code></p> <pre><code>threshold = data2['budget'].sum()/len(data2) data2["budget_level"] = np.where(data2['budget'].gt(threshold),'high','low') # if you are really checking data and not data2 #data2["budget_level"] = np.where(data['budget'].gt(threshold),'high','low') </code></pre> <p>or</p> <pre><code>data2["budget_level"] = data2['budget'].gt(threshold).map({True:'high',False:'low'}) </code></pre> <p><strong>Example</strong></p> <pre><code>df = pd.DataFrame({'A':[1,2,3,4,5]}) import numpy as np threshold = 2 df['B'] = np.where(df['A'].gt(threshold),'high','low') print(df) A B 0 1 low 1 2 low 2 3 high 3 4 high 4 5 high </code></pre>
python|pandas
0
3,960
59,989,080
How to convert 3D string to numpy array which is originated after saving 3D image in CSV
<p>I have a CSV file, which has one column with image data. Before saving to CSV each image was a 3D numpy array. So each cell of this column was a 3D array. After saving to CSV and reading using pandas they converted to string. Now I want to recreate an array from them. Below you can find a sample of string which I want to convert to 3D numpy array.</p> <pre><code>import numpy as np my_string_array = str(np.random.randint(0, high=255, size=(51, 52, 3))) </code></pre> <p>I tried the staff described here <a href="https://stackoverflow.com/questions/35612235/how-to-read-numpy-2d-array-from-string">how to read numpy 2D array from string?</a>, but seems that I need to have something different, since I have 3D array.</p> <p>I know that if the arrays were converted to <code>list</code> before saving to CSV, then </p> <pre><code>import ast my_array = np.array(ast.literal_eval(my_string_array)) </code></pre> <p>would work, but unfortunately this is not the case. After running this I got an error:</p> <pre><code>Traceback (most recent call last): File "/opt/lyp-venv/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3319, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "&lt;ipython-input-25-3e5a6dae7682&gt;", line 2, in &lt;module&gt; my_array = np.array(ast.literal_eval(my_string_array)) File "/usr/lib/python3.7/ast.py", line 46, in literal_eval node_or_string = parse(node_or_string, mode='eval') File "/usr/lib/python3.7/ast.py", line 35, in parse return compile(source, filename, mode, PyCF_ONLY_AST) File "&lt;unknown&gt;", line 1 [[[205 60 145] ^ SyntaxError: invalid syntax </code></pre>
<p>Regarding the error that you added:</p> <pre><code>ast.literal_eval(my_string_array) .... [[[205 60 145] ^ SyntaxError: invalid syntax </code></pre> <p><code>literal_eval</code> works on a limited subset of Python syntax. For example it will work on a valid list input, e.g. <code>"[[205, 60, 145]]"</code>. But the string in the error message does not match that; it's missing the commas. The <code>str(an_array)</code> omits the commas. <code>str(an_array.tolist())</code> does not.</p> <p>Most of the answers that deal with loading <code>csv</code> files like this stress that you will need to replace the spaces (or blank delimiters) with commas. </p> <p>So in this case the error has nothing to do with the array being 3d.</p> <p>Let me illustrate:</p> <p>make 3d array:</p> <pre><code>In [720]: arr = np.arange(24).reshape(2,3,4) In [722]: arr Out[722]: array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]) </code></pre> <p>It's <code>str</code> representation, which is probably what <code>pandas</code> writes to the csv:</p> <pre><code>In [723]: str(arr) Out[723]: '[[[ 0 1 2 3]\n [ 4 5 6 7]\n [ 8 9 10 11]]\n\n [[12 13 14 15]\n [16 17 18 19]\n [20 21 22 23]]]' </code></pre> <p>Compare that with what a list str looks like:</p> <pre><code>In [724]: arr.tolist() Out[724]: [[[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]] In [725]: str(arr.tolist()) Out[725]: '[[[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]' </code></pre> <p><code>literal_eval</code> has no problem with this triple nested list string:</p> <pre><code>In [726]: ast.literal_eval(_) Out[726]: [[[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]] </code></pre> <p><code>literal_eval</code> applied to the array string produces your error:</p> <pre><code>In [727]: ast.literal_eval(Out[721]) Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 3319, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "&lt;ipython-input-727-700e3f960e29&gt;", line 1, in &lt;module&gt; ast.literal_eval(Out[721]) File "/usr/lib/python3.6/ast.py", line 48, in literal_eval node_or_string = parse(node_or_string, mode='eval') File "/usr/lib/python3.6/ast.py", line 35, in parse return compile(source, filename, mode, PyCF_ONLY_AST) File "&lt;unknown&gt;", line 1 [[[ 0 1 2 3] ^ SyntaxError: invalid syntax </code></pre> <p>I might be able to fix that with a couple of string substitutions, effectively converting <code>Out[721]</code> to <code>Out[725]</code>.</p> <p>@Mad pointed out that if the array is large enough (over 1000 elements) <code>str</code> will produce a condensed version, replacing a lot of the values with '...'. You can verify that yourself. If that is the case, no amount of string editing will fix the problem. That string is useless.</p> <p>In <a href="https://stackoverflow.com/questions/35612235/how-to-read-numpy-2d-array-from-string">how to read numpy 2D array from string?</a>, my answer has limited values since you already have string. <a href="https://stackoverflow.com/a/44323021/901925">https://stackoverflow.com/a/44323021/901925</a> is better. I've also SO questions that deal specifically with the strings that appear in <code>pandas</code> csv. 
In any case you need to pay attention to the details of the string, especially delimiters and special characters. </p>
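<p>For completeness, a rough sketch of the kind of string substitutions mentioned above — it assumes the saved string is the plain (non-truncated, no <code>...</code>) <code>str(arr)</code> output, so treat it as illustrative only:</p>
<pre><code>import re
import ast
import numpy as np

s = str(np.arange(24).reshape(2, 3, 4))          # stand-in for the CSV cell
s = re.sub(r'(?&lt;=[\d.])\s+(?=[\d-])', ', ', s)   # commas between adjacent numbers
s = re.sub(r'\]\s+\[', '], [', s)                # commas between sub-arrays
s = re.sub(r'\[\s+', '[', s)                     # drop the padding after '['
arr = np.array(ast.literal_eval(s))              # back to a (2, 3, 4) array
</code></pre>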
python|numpy
0
3,961
65,411,383
saved model can not load layer which contains custom method
<p>I have a model which applies a custom function in the output layer. But the path to this function is static. Whenever I try to load the model on a different system it can not find the function because it searches the wrong path. Actually it uses the path in which the function was located at on the system I saved the model in the first place.</p> <p>Here a example of the simplyfied Model:</p> <pre><code> from tensorflow.keras.models import Model from tensorflow.keras.losses import mse, mean_squared_error from tensorflow.keras.layers import Input, LSTM, Dense, Lambda from tensorflow.keras.optimizers import RMSprop from helper_functions import poly_transfer Input_layer = Input(shape=(x_train.shape[1],x_train.shape[2])) hidden_layer1 = LSTM(units=45, return_sequences=False,stateful=False)(Input_layer) hidden_layer3 = Dense(25,activation='relu')(hidden_layer1) speed_out = Lambda(poly_transfer)(hidden_layer3 ) model = Model(inputs=[Input_layer], outputs=[speed_out]) model.compile(loss=mse, optimizer=RMSprop(lr= 0.0005), metrics=['mae','mse']) </code></pre> <p>The function I am speaking of is <code>poly_transfer</code> in the outpul layer. If I load my model with <code>tensorflow.keras.models.load_model</code> it searches as described in the wrong dir for <code>poly_transfer</code> and I get the error <code>SystemError: unknown opcode</code>. Is there a way to tell <code>tensorflow.keras.models.load_model</code> where <code>helper_function.py</code> (the skript of <code>poly_transfer</code>) lyes on a different system?</p> <p>I use tensorflow 2.0.0.</p> <p><strong>Edit</strong> This is the error. Please note that the path <code>d:/test_data_pros/restructured/helper_functions.py</code> did only exist on the system the model was trained on. The system on which I load the model has the same skript with the same function but naturally on a different path.</p> <pre><code>2020-12-22 19:43:10.841197: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2020-12-22 19:43:10.844407: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2699905000 Hz 2020-12-22 19:43:10.844874: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x475d920 executing computations on platform Host. 
Devices: 2020-12-22 19:43:10.844906: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version XXX lineno: 11, opcode: 160 Traceback (most recent call last): File &quot;/home/ebike/workspaces/ebike2x_ws/src/pred_trajectory_pkg/src/trajectory_prediction_node.py&quot;, line 122, in &lt;module&gt; LSTM = lstm_s_g_model(t_pred) File &quot;/home/ebike/workspaces/ebike2x_ws/src/pred_trajectory_pkg/src/vehicle_models.py&quot;, line 126, in __init__ self.model = load_model('/home/ebike/workspaces/ebike2x_ws/src/pred_trajectory_pkg/ml_models/model_test_vivek_150ep.h5') File &quot;/home/ebike/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/saving/save.py&quot;, line 146, in load_model return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile) File &quot;/home/ebike/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py&quot;, line 168, in load_model_from_hdf5 custom_objects=custom_objects) File &quot;/home/ebike/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/saving/model_config.py&quot;, line 55, in model_from_config return deserialize(config, custom_objects=custom_objects) File &quot;/home/ebike/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/layers/serialization.py&quot;, line 102, in deserialize printable_module_name='layer') File &quot;/home/ebike/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/utils/generic_utils.py&quot;, line 191, in deserialize_keras_object list(custom_objects.items()))) File &quot;/home/ebike/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py&quot;, line 906, in from_config config, custom_objects) File &quot;/home/ebike/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py&quot;, line 1852, in reconstruct_from_config process_node(layer, node_data) File &quot;/home/ebike/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py&quot;, line 1799, in process_node output_tensors = layer(input_tensors, **kwargs) File &quot;/home/ebike/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py&quot;, line 842, in __call__ outputs = call_fn(cast_inputs, *args, **kwargs) File &quot;/home/ebike/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/layers/core.py&quot;, line 795, in call return self.function(inputs, **arguments) File &quot;d:/test_data_pros/restructured/helper_functions.py&quot;, line 11, in poly_transfer from pyproj import Proj, transform SystemError: unknown opcode </code></pre>
<p>The problem has nothing to do with paths, when you saved your model, your custom function was serialized and saved inside the HDF5 by Keras, but this format is specific to a python version, so the file can only be loaded with the same python version (it could work with newer versions, but not with older versions of python).</p> <p>So if you load your model on the same version of python, it should work fine.</p>
python|tensorflow|keras
2
3,962
49,983,957
Not getting top5 values for each month using grouper and groupby in pandas
<p>I'm trying to get the top 5 values of amount for each month along with the text column. I've tried <strong>resampling</strong> and a <strong>group by</strong> statement.</p> <p><strong>Dataset:</strong></p> <pre><code>text amount date 123… 11.00 11-05-17 123abc… 10.00 11-08-17 Xyzzy… 22.00 12-07-17 Xyzzy… 221.00 11-08-17 Xyzzy… 212.00 10-08-17 Xyzzy… 242.00 18-08-17 </code></pre> <p><strong>Code:</strong></p> <pre><code>df1 = df.groupby(['text', pd.Grouper(key='date', freq='M')])['amount'].apply(lambda x: x.nlargest(5)) </code></pre> <p>I get groups of text but not arranged by month or with the largest values sorted in descending order.</p> <pre><code>df1 = df.groupby([pd.Grouper(key='date', freq='M')])['amount'].apply(lambda x: x.nlargest(5)) </code></pre> <p>This code works fine but does not give the <strong>text</strong> column.</p>
<p>assuming that <code>amount</code> is a numeric column:</p> <pre><code>In [8]: df.groupby(['text', pd.Grouper(key='date', freq='M')]).apply(lambda x: x.nlargest(2, 'amount')) Out[8]: text amount date text date 123abc… 2017-11-30 1 123abc… 10.0 2017-11-08 123… 2017-11-30 0 123… 11.0 2017-11-05 Xyzzy… 2017-08-31 5 Xyzzy… 242.0 2017-08-18 2017-10-31 4 Xyzzy… 212.0 2017-10-08 2017-11-30 3 Xyzzy… 221.0 2017-11-08 2017-12-31 2 Xyzzy… 22.0 2017-12-07 </code></pre>
pandas
2
3,963
63,756,716
Pandas .apply with conditional if in different columns
<p>I have a dataframe as below. I am trying to check if there is a <code>nan</code> in the <code>Liq_Factor</code>, if yes, put 1 otherwise divide <code>use/TW</code>. Result in column Testing.</p> <pre><code>+---+------------+------------+--------+--------+--------+ | 1 | | Liq_Factor | Zscire | Use | Tw | | 2 | 01/10/2020 | 36.5 | 44 | 43.875 | 11.625 | | 3 | 02/10/2020 | Nan | 43.625 | 13.625 | 33.25 | | 4 | 03/10/2020 | 6.125 | 47.875 | 22.5 | 4.625 | | 5 | 04/10/2020 | Nan | 34.25 | 37.125 | 36 | | 6 | 05/10/2020 | 43.875 | 17.375 | 5.5 | 36.25 | | 7 | 06/10/2020 | 40 | 14.125 | 21.125 | 14.875 | | 8 | 07/10/2020 | 42.25 | 44.75 | 21.25 | 31.75 | +---+------------+------------+--------+--------+--------+ </code></pre> <p>I was wondering if i can use <code>.apply</code> in the sense of</p> <pre><code>DF1['Testing']=(DF1['Liq_Factor'].apply(lambda x: x=1 if pd.isna(DF1['Zscore']) else DF1['Use']/DF1['Tw']) </code></pre> <p>Can you please help?</p> <p>Thanks, H</p>
<p>You can use apply, or an alternative is the where function from numpy (note that comparing against NaN with <code>==</code> never matches, so use <code>isna()</code>):</p> <pre><code>df['Testing'] = np.where(df['Liq_Factor'].isna(), 1, df['Use']/df['Tw']) </code></pre> <p>Following your comments below you can do:</p> <pre><code># create another column with the calculation df['calc'] = (1/3)* df['ATV']/df['TW']*100000000 # create two rules (you can use one rule and then the opposite) mask_0 = (df['calc'] &lt; 1) mask_1 = (df['calc'] &gt; 1) # change result value by condition df.loc[mask_0, 'Liq Factor'] = df['calc'] df.loc[mask_1, 'Liq Factor'] = 1 </code></pre>
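<p>Since the question specifically asked about <code>.apply</code>, a row-wise version is also possible (slower than <code>np.where</code>, but closer to the attempted code; the column names <code>Liq_Factor</code>, <code>Use</code> and <code>Tw</code> are taken from the question's table):</p>
<pre><code>df['Testing'] = df.apply(
    lambda r: 1 if pd.isna(r['Liq_Factor']) else r['Use'] / r['Tw'],
    axis=1)
</code></pre>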
python|pandas|apply
2
3,964
64,049,115
Pandas group by and count by date. Then transpose the count to column name
<p>I have this dataframe</p>

<pre><code>import pandas as pd
from datetime import datetime

df = pd.DataFrame([
    {&quot;_id&quot;: &quot;1&quot;, &quot;date&quot;: datetime.strptime(&quot;2020-09-29 07:00:00&quot;, '%Y-%m-%d %H:%M:%S'), &quot;status&quot;: &quot;started&quot;},
    {&quot;_id&quot;: &quot;2&quot;, &quot;date&quot;: datetime.strptime(&quot;2020-09-29 14:00:00&quot;, '%Y-%m-%d %H:%M:%S'), &quot;status&quot;: &quot;end&quot;},
    {&quot;_id&quot;: &quot;3&quot;, &quot;date&quot;: datetime.strptime(&quot;2020-09-25 17:00:00&quot;, '%Y-%m-%d %H:%M:%S'), &quot;status&quot;: &quot;started&quot;},
    {&quot;_id&quot;: &quot;4&quot;, &quot;date&quot;: datetime.strptime(&quot;2020-09-17 09:00:00&quot;, '%Y-%m-%d %H:%M:%S'), &quot;status&quot;: &quot;end&quot;},
    {&quot;_id&quot;: &quot;5&quot;, &quot;date&quot;: datetime.strptime(&quot;2020-09-19 07:00:00&quot;, '%Y-%m-%d %H:%M:%S'), &quot;status&quot;: &quot;end&quot;},
    {&quot;_id&quot;: &quot;6&quot;, &quot;date&quot;: datetime.strptime(&quot;2020-09-19 08:00:00&quot;, '%Y-%m-%d %H:%M:%S'), &quot;status&quot;: &quot;end&quot;},
]).set_index('date')
</code></pre>

<p>Which looks like this:</p>

<pre><code>                    _id   status
date
2020-09-29 07:00:00   1  started
2020-09-29 14:00:00   2      end
2020-09-25 17:00:00   3  started
2020-09-17 09:00:00   4      end
2020-09-19 07:00:00   5      end
</code></pre>

<hr />

<p>I'm trying to group by day and to count each status, but I would like to have the name of the status in the column name.</p>

<p>Here is the desired output:</p>

<pre><code>                     status_started  status_end
date
2020-09-29 07:00:00               1           1
2020-09-25 17:00:00               1           0
2020-09-17 09:00:00               0           1
2020-09-19 07:00:00               0           2
</code></pre>

<hr />

<p>I've tried this:</p>

<pre><code>df = df.groupby([pd.Grouper(freq='d'), 'status']).agg({'status': &quot;count&quot;})
df = df.reset_index(level=&quot;status&quot;)

out:
                    status
date       status
2020-09-17 end           1
2020-09-19 end           2
2020-09-25 started       1
2020-09-29 end           1
2020-09-29 started       1
</code></pre>

<p>but it did not successfully transform the df.</p>
<p>You only need <code>unstack</code>:</p> <pre><code>df.groupby([pd.Grouper(freq='d'), 'status']).size().unstack('status', fill_value=0) </code></pre> <p>Output:</p> <pre><code>status end started date 2020-09-17 1 0 2020-09-19 2 0 2020-09-25 0 1 2020-09-29 1 1 </code></pre>
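<p>If you also want the exact column names from the desired output (<code>status_started</code>, <code>status_end</code>), you can add a prefix afterwards; a small sketch:</p>

<pre><code>out = (df.groupby([pd.Grouper(freq='d'), 'status'])
         .size()
         .unstack('status', fill_value=0)
         .add_prefix('status_'))
</code></pre>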
python|pandas
1
3,965
63,852,176
Set manual location of legend with matplotlib and GetDistTool
<p>I am trying to manually set the location of the main legend of a plot produced by the <a href="https://getdist.readthedocs.io/en/latest/" rel="nofollow noreferrer">Getdist tool</a>.</p>

<p>The plot below represents the 1/2 sigma confidence levels coming from a covariance matrix with joint distributions. It is produced by the <a href="https://getdist.readthedocs.io/en/latest/" rel="nofollow noreferrer">Getdist tool</a>.</p>

<p><a href="https://i.stack.imgur.com/4iNU5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4iNU5.png" alt="issue with legend" /></a></p>

<p>The <a href="https://getdist.readthedocs.io/en/latest/" rel="nofollow noreferrer">Getdist tool</a> allows you to specify the location of legends: I tried <code>legend_loc = 'upper right'</code>, but as you can see, there is an overlap at the top of the figure. I want to shift the legend to the right to avoid this overlap: is it possible? If yes, how?</p>

<p>The main routine that generates this plot is:</p>

<pre><code>g.triangle_plot([matrix1, matrix2],
                names,
                filled = True,
                legend_labels = ['Opt. Flat. No Gamma. - optimistic case - cross - standard situation - Criterion taking into accound a = 200',
                                 'Pess. Flat. No Gamma. - pessimistic case - cross - standard situation - Criterion taking into account a = 300'
                                ],
                legend_loc = 'upper right',
                contour_colors = ['darkblue','red'],
                line_args = [{'lw':2, 'color':'darkblue'}, {'lw':2, 'color':'red'}]
                )
</code></pre>

<h2>Update 1</h2>

<p>I can't manage to apply shifts to the lower left corner of the legend by doing:</p>

<pre><code>g.triangle_plot([matrix1, matrix2],
                names,
                filled = True,
                legend_labels = ['Opt. Flat. No Gamma. - optimistic case - cross - standard situation - Criterion taking into accound a = 200',
                                 'Pess. Flat. No Gamma. - pessimistic case - cross - standard situation - Criterion taking into account a = 300'
                                ],
                legend_loc = 'center right',
                contour_colors = ['darkblue','red'],
                line_args = [{'lw':2, 'color':'darkblue'}, {'lw':2, 'color':'red'}],
                bbox_to_anchor = [0.1, 0.5]
                )
</code></pre>

<p>As you can see, I tried to put the legend at 0.1 from the top and in the middle.</p>

<p>But unfortunately, this doesn't change anything; with these parameters I get the following plot:</p>

<p><a href="https://i.stack.imgur.com/4oKT9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4oKT9.png" alt="offsets not applied" /></a></p>

<p>You can see that with these parameters a simple shift towards the top would be enough, but for the moment I don't know how to perform this.</p>

<p>I have also tried to remove <code>loc</code> and <code>bbox_to_anchor</code> from <code>g.triangle_plot</code> and instead do directly:</p>

<pre><code>g.fig.legend(loc='center right', bbox_to_anchor=[0.1, 0.1])
</code></pre>

<p>But I get the same issue: the offset is not applied in the final figure.</p>
<p>There's an argument in pyplot's <code>legend</code> function called <code>bbox_to_anchor</code>. It specifies the anchor point of the legend box, in axes (or figure) coordinates, to which the corner or edge named by <code>loc</code> is pinned, so you can set horizontal and vertical values and then adjust them to fit your desired position.</p>

<p>For example you can write <code>plt.legend(bbox_to_anchor=[2.32, 0.5], loc='center right')</code></p>
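<p>For reference, here is a small plain-matplotlib sketch of how <code>loc</code> and <code>bbox_to_anchor</code> interact; getdist builds its legend with matplotlib, so the same idea applies. The coordinates are axes fractions by default ((0, 0) is the lower-left corner, (1, 1) the upper-right), and values outside that range push the legend outside the axes:</p>

<pre><code>import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1], label='example line')

# loc names the corner/edge of the legend box that gets pinned,
# bbox_to_anchor gives the point it is pinned to (axes coordinates here)
ax.legend(loc='upper left', bbox_to_anchor=(1.02, 1.0))   # just outside the right edge

plt.tight_layout()
plt.show()
</code></pre>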
python|numpy|matplotlib|legend
1
3,966
63,836,589
How many days does it take to accumulate x inches for everyday?
<p>Here is a snapshot of my dataframe. It goes on for another 60+ years. The only thing I done is set my index as the <strong>DATE</strong> column.</p> <pre><code> PRCP DATE 1950-01-01 0.00 1950-01-02 0.00 1950-01-03 0.08 1950-01-04 0.00 1950-01-05 0.00 1950-01-06 0.00 1950-01-07 0.21 1950-01-08 0.00 1950-01-09 0.00 1950-01-10 0.55 1950-01-11 0.00 1950-01-12 0.00 1950-01-13 0.15 1950-01-14 0.00 1950-01-15 0.00 1950-01-16 0.00 1950-01-17 0.00 1950-01-18 0.20 1950-01-19 0.00 </code></pre> <p>What I would like to do is accumulate the <strong>PRCP</strong> column until it reaches a value greater than or equal to 1.0. Once it reaches this I want it to do to the same for the next date.</p> <p>For example it would look something like this witht the date in one column and the number of days it took to reach 1.0 in the second column. The numbers I am using below are not exact (aside from the first date), but the pattern would be along these lines.:</p> <pre><code> Days to Reach 1.0 DATE 1950-01-01 18 1950-01-02 6 1950-01-03 2 1950-01-04 20 1950-01-05 5 1950-01-06 1 1950-01-07 14 </code></pre> <p>Once I have this, I will then do a simple...</p> <pre><code>groupby(df.index.dayofyear).mean() </code></pre> <p>so the final product would be...</p> <pre><code>DayOfYear Days to Reach 1.0 01 9 02 20 03 12 04 14 ... 365 14 366 12 </code></pre>
<p>For anyone curious, I tried a different, more basic approach. It may not be efficient but here it is.</p> <pre><code> depth_list = [1.0,4.0,10.0,20.0] # various threshold depths to reach. df_j = pd.DataFrame(df['DATE']) # creating an empty dataframe with DATE as the index for d in depth_list: j_list = [] # starting an empty list for each threshold depth count1 = 0 # adding in a 365 day counter assuming it does not take more than a year to accumulate. count2 = 366 for i in df['DATE']: # looping through i values after looping through Precipitation values from count1 to count2 j_sum = 0 # reset sums j_count = 0 for j in df['PRCP'][count1:count2]: j_count = j_count + 1 j_sum = j_sum + j if j_sum &gt;= d: # Checking if value is greater than threshold. j_list.append(j_count) break # break out of loop when value is reached. count1 += 1 count2 += 1 df_j[d] = pd.DataFrame(j_list) # Putting list into a dataframe with dates as the index knowing they are the same length df_join = df_j.set_index('DATE') df_daymean = df_join.groupby(df_join.index.dayofyear).mean() # Grouping by day of the year and taking the mean. print(df_daymean) </code></pre> <p>For a dataframe that looks like this...</p> <pre><code> 1.0 4.0 10.0 20.0 DATE 1 11.928571 39.385714 91.371429 168.985714 2 11.971429 39.400000 91.185714 168.800000 3 11.728571 39.314286 91.100000 168.300000 4 11.871429 38.771429 90.528571 168.442857 5 11.900000 39.014286 90.528571 168.371429 ... ... ... ... 362 11.485714 40.115942 90.956522 170.159420 363 11.314286 40.101449 91.217391 169.927536 364 12.257143 40.318841 91.956522 169.637681 365 12.681159 39.913043 92.202899 169.260870 366 14.941176 44.176471 97.352941 174.529412 </code></pre>
python|pandas
0
3,967
63,846,510
pandas print full column values
<p>Pandas prints the full ID column when I convert it to a string:</p> <p><code>&quot;RiversideCA_&quot; + str(df_clark_county['ID'])</code></p> <p>I only want to get the ID that is associated with each particular row. <a href="https://i.stack.imgur.com/sDdkr.png" rel="nofollow noreferrer">Please view the picture for more clarity</a></p>
<p>You need to change type of column to string use <code>astype</code></p> <pre><code>&quot;RiversideCA_&quot; + df_clark_county['ID'].astype(str) </code></pre>
pandas|dataframe|data-analysis
0
3,968
46,967,581
Adding values to all rows of dataframe
<p>I have two pandas dataframes <strong><em>df1</em></strong> (of length 2) and <strong><em>df2</em></strong> (of length about 30 rows). Index values of df1 are always different and never occur in df2. I would like to add the average of columns from <strong><em>df1</em></strong> to corresponding columns of <strong><em>df2</em></strong>. Example: add 0.6 to all rows of c1 and 0.9 to all rows of c2 etc ...</p> <pre><code>df1: Date c1 c2 c3 c4 c5 c6 ... c10 2017-09-10 0.5 0.6 1.2 0.7 1.3 1.8 ... 1.3 2017-09-11 0.7 1.2 1.3 0.4 0.7 0.4 ... 1.5 df2: Date c1 c2 c3 c4 c5 c6 ... c10 2017-09-12 0.9 0.1 1.4 0.9 1.5 1.9 ... 1.9 2017-09-13 0.2 1.8 1.2 1.4 2.7 0.8 ... 1.1 : : : : 2017-10-10 1.5 0.9 1.5 0.9 1.6 1.8 ... 1.7 2017-10-11 2.7 1.1 1.9 0.4 0.8 0.8 ... 1.3 </code></pre> <p>How can I do that ?</p>
<p>When using <code>mean</code> on <code>df1</code>, it calculates over each column by default and produces a <code>pd.Series</code>. </p> <p>When adding adding a <code>pd.Series</code> to a <code>pd.DataFrame</code> it aligns the index of the <code>pd.Series</code> with the columns of the <code>pd.DataFrame</code> and broadcasts along the index of the <code>pd.DataFrame</code>... by default. </p> <p>The only tricky bit is handling the <code>Date</code> column. </p> <p><strong>Option 1</strong> </p> <pre><code>m = df1.mean() df2.loc[:, m.index] += m df2 Date c1 c2 c3 c4 c5 c6 c10 0 2017-09-12 1.5 1.0 2.65 1.45 2.5 3.0 3.3 1 2017-09-13 0.8 2.7 2.45 1.95 3.7 1.9 2.5 2 2017-10-10 2.1 1.8 2.75 1.45 2.6 2.9 3.1 3 2017-10-11 3.3 2.0 3.15 0.95 1.8 1.9 2.7 </code></pre> <p>If I know that <code>'Date'</code> is always in the first column, I can: </p> <pre><code>df2.iloc[:, 1:] += df1.mean() df2 Date c1 c2 c3 c4 c5 c6 c10 0 2017-09-12 1.5 1.0 2.65 1.45 2.5 3.0 3.3 1 2017-09-13 0.8 2.7 2.45 1.95 3.7 1.9 2.5 2 2017-10-10 2.1 1.8 2.75 1.45 2.6 2.9 3.1 3 2017-10-11 3.3 2.0 3.15 0.95 1.8 1.9 2.7 </code></pre> <hr> <p><strong>Option 2</strong><br> Notice that I use the <code>append=True</code> parameter in the <code>set_index</code> just incase there are things in the index you don't want to mess up. </p> <pre><code>df2.set_index('Date', append=True).add(df1.mean()).reset_index('Date') Date c1 c2 c3 c4 c5 c6 c10 0 2017-09-12 1.5 1.0 2.65 1.45 2.5 3.0 3.3 1 2017-09-13 0.8 2.7 2.45 1.95 3.7 1.9 2.5 2 2017-10-10 2.1 1.8 2.75 1.45 2.6 2.9 3.1 3 2017-10-11 3.3 2.0 3.15 0.95 1.8 1.9 2.7 </code></pre> <p>If you don't care about the index, you can shorten this to</p> <pre><code>df2.set_index('Date').add(df1.mean()).reset_index() Date c1 c2 c3 c4 c5 c6 c10 0 2017-09-12 1.5 1.0 2.65 1.45 2.5 3.0 3.3 1 2017-09-13 0.8 2.7 2.45 1.95 3.7 1.9 2.5 2 2017-10-10 2.1 1.8 2.75 1.45 2.6 2.9 3.1 3 2017-10-11 3.3 2.0 3.15 0.95 1.8 1.9 2.7 </code></pre>
python|pandas|dataframe|addition
4
3,969
46,923,541
Python Dataframe set value by position and not Index
<p><a href="https://i.stack.imgur.com/g2UpZ.png" rel="nofollow noreferrer">What I tried</a></p> <p><a href="https://i.stack.imgur.com/g2UpZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g2UpZ.png" alt="so far"></a></p> <p>So I'm trying to change a single cell in a <code>dataframe</code>, but by using <code>set_value</code> I can't use the position and need to use the index, and if I have two equal index values it will change both.</p> <p>How can I change a single cell while avoiding this? Thank you.</p>
<p>To set by position, use <code>.iloc</code>, e.g.</p> <pre><code>df.iloc[0,0] = 2. </code></pre> <p>This modifies your dataframe in-place.</p>
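<p>A small sketch (with made-up data) of why position-based access avoids the duplicate-index problem:</p>

<pre><code>import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]}, index=['x', 'x', 'y'])

df.loc['x', 'a'] = 9    # label-based: changes BOTH rows labelled 'x'
df.iloc[0, 0] = 7       # position-based: changes only the first row
</code></pre>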
python|pandas|dataframe|cell
0
3,970
46,841,269
What does the String mean in numpy.r_?
<p>in numpy's <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.r_.html" rel="nofollow noreferrer">documents</a>:</p> <pre><code>&gt;&gt;&gt; np.r_['0,2,0', [1,2,3], [4,5,6]] array([[1], [2], [3], [4], [5], [6]]) </code></pre> <p>what does the third number mean in the string '0,2,0'?</p>
<p>I haven't used the string parameter of <code>r_</code> much; it's easier, for me, to work directly with <code>concatanate</code> and its variantes.</p> <p>But looking at the docs:</p> <blockquote> <p>A string with three comma-separated integers allows specification of the axis to concatenate along, the minimum number of dimensions to force the entries to, and which axis should contain the start of the arrays which are less than the specified number of dimensions. </p> </blockquote> <pre><code>'0.2.0' axis = 0 make it 2d start with 0d In [79]: np.r_['0,2,0', [1,2,3], [4,5,6]] Out[79]: array([[1], [2], [3], [4], [5], [6]]) </code></pre> <p>A concatenate equivalent</p> <pre><code>In [80]: np.concatenate(([1,2,3], [4,5,6])) Out[80]: array([1, 2, 3, 4, 5, 6]) In [81]: np.concatenate(([1,2,3], [4,5,6]))[:,None] Out[81]: array([[1], [2], [3], [4], [5], [6]]) </code></pre> <p>Here I've concatenated on axis=0, and expanded to 2d after concatenate. But it sounds like <code>r_</code> expands the dimensions of the elements first (but we can double check in the code).</p> <pre><code>In [83]: alist = ([1,2,3], [4,5,6]) In [86]: [np.expand_dims(a,1) for a in alist] Out[86]: [array([[1], [2], [3]]), array([[4], [5], [6]])] In [87]: np.concatenate(_, axis=0) Out[87]: array([[1], [2], [3], [4], [5], [6]]) </code></pre> <p>I'm using <code>expand_dims</code> to make the inputs 2 d, and to add the new dimension after the first. Having done that I can concatenate on axis 0.</p> <p>Note that the inputs to <code>r_</code> could already be 2d, as in:</p> <pre><code>np.r_['0,2,0',[1,2,3], [[4],[5],[6]]] np.r_['0,2,0',[1,2,3], np.expand_dims([4,5,6],1)] np.r_['0,2,0',[1,2,3], np.atleast_2d([4,5,6]).T] </code></pre> <p>The 3d number, if 1, turns the components into </p> <pre><code>In [105]: np.atleast_2d([4,5,6]) Out[105]: array([[4, 5, 6]]) In [103]: np.r_['0,2,1',[1,2,3],[4,5,6]] Out[103]: array([[1, 2, 3], [4, 5, 6]]) </code></pre> <p>Often if documentation is unclear I like to either dig into the code, or experiment with alternative inputs.</p> <pre><code>In [107]: np.r_['1,2,1',[1,2,3], [4,5,6]] Out[107]: array([[1, 2, 3, 4, 5, 6]]) In [108]: np.r_['1,2,0',[1,2,3], [4,5,6]] Out[108]: array([[1, 4], [2, 5], [3, 6]]) </code></pre> <hr> <p>Looking at the code, I see it uses</p> <pre><code>array(newobj, copy=False, subok=True, ndmin=ndmin) </code></pre> <p>to expand the components to the desired <code>ndmin</code>. The 3d number is used to construct a <code>transpose</code> parameter. Details are messy, but the effect is something like:</p> <pre><code>In [111]: np.array([1,2,3], ndmin=2) Out[111]: array([[1, 2, 3]]) In [112]: np.array([1,2,3], ndmin=2).transpose(1,0) Out[112]: array([[1], [2], [3]]) </code></pre>
python|numpy
1
3,971
46,961,952
How to make a tuple including a numpy array hashable?
<p>One way to make a numpy array hashable is setting it to read-only. This has worked for me in the past. But when I use such a numpy array in a tuple, the whole tuple is no longer hashable, which I do not understand. Here is the sample code I put together to illustrate the problem:</p> <pre><code>import numpy as np npArray = np.ones((1,1)) npArray.flags.writeable = False print(npArray.flags.writeable) keySet = (0, npArray) print(keySet[1].flags.writeable) myDict = {keySet : 1} </code></pre> <p>First I create a simple numpy array and set it to read-only. Then I add it to a tuple and check if it is still read-only (which it is).</p> <p>When I want to use the tuple as key in a dictionary, I get the error <code>TypeError: unhashable type: 'numpy.ndarray'</code>.</p> <p>Here is the output of my sample code:</p> <pre><code>False False Traceback (most recent call last): File "test.py", line 10, in &lt;module&gt; myDict = {keySet : 1} TypeError: unhashable type: 'numpy.ndarray' </code></pre> <p>What can I do to make my tuple hashable and why does Python show this behavior in the first place?</p>
<p>You claim that</p> <blockquote> <p>One way to make a numpy array hashable is setting it to read-only</p> </blockquote> <p>but that's not actually true. Setting an array to read-only just makes it read-only. It doesn't make the array hashable, for multiple reasons.</p> <p>The first reason is that an array with the <code>writeable</code> flag set to <code>False</code> is still mutable. First, you can always set <code>writeable=True</code> again and resume writing to it, or do more exotic things like reassign its <code>shape</code> even while <code>writeable</code> is <code>False</code>. Second, even without touching the array itself, you could mutate its data through another view that has <code>writeable=True</code>.</p> <pre><code>&gt;&gt;&gt; x = numpy.arange(5) &gt;&gt;&gt; y = x[:] &gt;&gt;&gt; x.flags.writeable = False &gt;&gt;&gt; x array([0, 1, 2, 3, 4]) &gt;&gt;&gt; y[0] = 5 &gt;&gt;&gt; x array([5, 1, 2, 3, 4]) </code></pre> <p>Second, for hashability to be meaningful, objects must first be <em>equatable</em> - <code>==</code> must return a boolean, and must be an equivalence relation. NumPy arrays don't do that. The purpose of hash values is to quickly locate equal objects, but when your objects don't even have a built-in notion of equality, there's not much point to providing hashes.</p> <hr> <p>You're not going to get hashable tuples with arrays inside. You're not even going to get hashable arrays. The closest you can get is to put some other representation of the array's data in the tuple.</p>
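<p>As a concrete illustration of that last point, a sketch of one possible immutable representation (with the caveat that it snapshots the data, and that you may want to include the dtype as well so that equal bytes with different dtypes do not collide):</p>

<pre><code>import numpy as np

npArray = np.ones((1, 1))

# bytes are immutable and hashable; include the shape so that arrays with the
# same raw bytes but different shapes get different keys
keySet = (0, npArray.tobytes(), npArray.shape)

myDict = {keySet: 1}
print(myDict[(0, np.ones((1, 1)).tobytes(), (1, 1))])   # 1
</code></pre>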
python-3.x|numpy|tuples|hashable
14
3,972
38,615,121
Limit Tensorflow CPU and Memory usage
<p>I've seen several questions about GPU Memory with Tensorflow but I've installed it on a Pine64 with no GPU support.</p> <p>That means I'm running it with very limited resources (CPU and RAM only) and Tensorflow seems to want it all, completely freezing my machine.</p> <p><br></p> <p>Is there a way to limit the amount of processing power and memory allocated to Tensorflow? Something similar to bazel's own <code>--local_resources</code> flag?</p>
<p>This will create a session that runs one op at a time, and only one thread per op</p> <pre><code>sess = tf.Session(config= tf.ConfigProto(inter_op_parallelism_threads=1, intra_op_parallelism_threads=1)) </code></pre> <p>Not sure about limiting memory, it seems to be allocated on demand, I've had TensorFlow freeze my machine when my network wanted 100GB of RAM, so my solution was to make networks that need less RAM</p>
python|memory-management|tensorflow|cpu-usage
16
3,973
67,943,218
Pandas Plot Multiple Lines Based on Per Column Trend
<p>So I have the following data below; basically every column after <code>tree</code> is a step in the progression of that row's value (e.g. for Tree_0, Tree_1, etc.).</p>

<pre><code>tree,ave_1-2021-06-12,ave_2-2021-06-12,ave_3-2021-06-12
Tree_0,290.7,248.7,247.8
Tree_1,261.1,258.7,221.5
Tree_2,220.0,251.9,233.5
Tree_3,246.3,242.1,275.4
Tree_4,248.3,254.1,243.8
Tree_5,251.4,251.1,261.4
</code></pre>

<p>I want to be able to make a plot that shows all Tree_* values and shows each tree's trend based on the ave_* values in the following columns. How do I do that with matplotlib and pandas?</p>

<p>For example:</p>

<pre><code>Tree_01, linechart starts at 290, goes lower to 248, then goes one point lower to 247
Tree_02 starts at 261, goes down to 258, goes much lower to 221.
</code></pre>

<p>I want to represent them all in one chart. Like this: <a href="https://i.stack.imgur.com/zELiB.png" rel="nofollow noreferrer">linechart</a>. The trees will be the legend below and the dates will be the ave_* columns. But I don't know how to do this with pandas and matplotlib.</p>
<p>You can first set the Tree column aside as the index and then take the <code>T</code>ranspose to put the trees in the legend and <code>ave_*</code> to the x-axis:</p> <pre><code>df.set_index(&quot;tree&quot;).T.plot() </code></pre> <p>to get</p> <p><a href="https://i.stack.imgur.com/kOir0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kOir0.png" alt="enter image description here" /></a></p>
pandas|matplotlib|data-visualization
0
3,974
67,979,101
Pandas JSON_normalize (nested json) - int object error
<p>i have tried the below code to normalize JSON, but getting error - &quot; AttributeError: 'int' object has no attribute 'values'&quot;</p> <p>Code:</p> <pre><code>import pandas as pd import http.client import json conn = http.client.HTTPSConnection(&quot;api.buyucoin.com&quot;) payload = '' headers = { 'Content-Type': 'application/json' } conn.request(&quot;GET&quot;, &quot;/ticker/v1.0/liveData?symbol=USDT-INR&quot;, payload, headers) res = conn.getresponse() data1 = res.read() print(data1.decode(&quot;utf-8&quot;)) df = pd.json_normalize(data1) </code></pre> <p>It works fine when i use df=pd.read_json(data1)</p> <p>Below is JSON Data:</p> <p>'''</p> <pre><code>{&quot;status&quot;:&quot;success&quot;,&quot;sub_status&quot;:null,&quot;data&quot;:[{&quot;bid&quot;:&quot;76.5&quot;,&quot;ask&quot;:&quot;78.9&quot;,&quot;sprd&quot;:&quot;3.041&quot;,&quot;tVolAsk&quot;:&quot;123783.0199&quot;,&quot;tVolBid&quot;:&quot;265729.9668&quot;,&quot;h24&quot;:&quot;99.92&quot;,&quot;l24&quot;:&quot;76.5&quot;,&quot;v24&quot;:&quot;33942.2532&quot;,&quot;tp24&quot;:&quot;2603097.335376&quot;,&quot;LTRate&quot;:&quot;76.5&quot;,&quot;LTVol&quot;:&quot;3178.9228&quot;,&quot;LBRate&quot;:&quot;76.5&quot;,&quot;LBVol&quot;:&quot;3178.9228&quot;,&quot;LSRate&quot;:&quot;76.5&quot;,&quot;LSVol&quot;:&quot;3178.9228&quot;,&quot;c24&quot;:&quot;-2.5&quot;,&quot;c24p&quot;:&quot;-3.16&quot;,&quot;marketName&quot;:&quot;INR-USDT&quot;,&quot;currToName&quot;:&quot;TETHER&quot;}]} </code></pre> <p>'''</p> <p>Please advise a solution to avoid that int error.</p> <p>thanks in advance</p>
<p>If you check the help of <code>pd.json_normalize(...)</code>, it says</p> <pre><code>Parameters ---------- data : dict or list of dicts Unserialized JSON objects. </code></pre> <p>It means that you have to parse it first. Since your <code>data1['data']</code> is a list of jsons, you need to specify <code>data</code> as a key and two upper level fields as metadata fields.</p> <pre><code>&gt;&gt;&gt; import json &gt;&gt;&gt; df = pd.json_normalize(json.loads(data1), 'data', ['status', 'sub_status']) &gt;&gt;&gt; df bid ask sprd tVolAsk tVolBid h24 l24 v24 tp24 LTRate LTVol LBRate LBVol LSRate LSVol c24 c24p marketName currToName status sub_status 0 76.5 78.9 3.041 123783.0199 263083.9668 99.92 76.5 36584.2532 2805200.335376 76.5 2646 76.5 2646 76.5 2646 -0.53 -0.68 INR-USDT TETHER success None </code></pre>
python|json|pandas
0
3,975
67,788,061
How can I create a line plot with plotly_express, where a pandas dataframe can be selected over a drop down menu?
<p>I want to create a line plot in which the underlying data can be selected over a drop down menu. The data is in a pandas dataframe and I am using plotly_express.</p> <p>I tried to use this <a href="https://stackoverflow.com/questions/46410738/plotly-how-to-select-graph-source-using-dropdown">post</a> as a basis but it does not use plotly_express and the data is not in a pandas dataframe.</p> <p>I have this code in which I define a <code>data1</code> and <code>data2</code> and then put those into the buttons. I am converting those dataframes into a dictionnary because if not I will have the error that dataframes were not &quot;json-able&quot;.</p> <pre><code># making two new dataframes out of the all-data dataframe (for drop down select) dfe_deworming=dfe.loc['Deworming needed'].reset_index() dfe_anemia=dfe.loc['Anemia'].reset_index() # making the parameters for each button #button 1 data1=dict(dfe_deworming) x1=dfe_deworming.Month y1=dfe_deworming.Count color1=dfe_deworming.Facility #button2 data2=dict(dfe_anemia) x2=dfe_anemia.Month y2=dfe_anemia.Count color2=dfe_anemia.Facility #initial plot fig_deworming = px.line(data_frame=data1,x=x1,y=y1,color=color1) # update menus updatemenus = [ { 'buttons': [ { 'method': 'restyle', 'label': 'Deworming needed', 'args': [ {'data_frame':[data1],'x': [x1],'y':[y1],'color':[color1]}, ] }, { 'method': 'restyle', 'label': 'Anemia', 'args': [ {'data_frame':[data2],'x': [x2],'y':[y2],'color':[color2]}, ] } ], 'direction': 'down', 'showactive': True, } ] fig_deworming.update_layout( updatemenus=updatemenus ) fig_deworming.update_traces(mode='markers+lines') fig_deworming.show() </code></pre> <p>In its initial state it looks good. However if I try to select an option, all lines get exactly the same dataset. It could be the combination of all the different datasets.</p> <p>Those pictures illustrate the problem:</p> <p><a href="https://i.stack.imgur.com/Hrswl.png" rel="nofollow noreferrer">First option of the drop down menu after first selection</a></p> <p><a href="https://i.stack.imgur.com/LwZmN.png" rel="nofollow noreferrer">Second option of the drop down menu after second selection</a></p>
<p>fundamentally you need to use <strong>graph object</strong> parameter structure for <strong>updatemenus</strong></p> <ul> <li>have generated a dataframe that appears to match your structure</li> <li>create the graph using <strong>plotly express</strong></li> <li>generate <strong>updatemenu</strong> which are parameters you would pass to <strong>go.Scatter</strong></li> <li>used a <strong>list comprehension</strong> as each menu is really the same</li> <li>finally fix an issue with trace generated by <strong>plotly express</strong> for <code>hovertemplate</code></li> </ul> <pre><code>import numpy as np import pandas as pd import plotly.express as px # generate a dataframe that matches structure in question dfe = ( pd.DataFrame( { &quot;Month&quot;: pd.date_range(&quot;1-jan-2020&quot;, freq=&quot;M&quot;, periods=50).month, &quot;Facility&quot;: np.random.choice([&quot;Deworming needed&quot;, &quot;Anemia&quot;], 50), &quot;Count&quot;: np.random.randint(5, 20, 50), } ) .groupby([&quot;Facility&quot;, &quot;Month&quot;], as_index=False) .agg({&quot;Count&quot;: &quot;sum&quot;}) ) # the line plot with px... fig = px.line( dfe.loc[dfe.Facility.eq(&quot;Deworming needed&quot;)], x=&quot;Month&quot;, y=&quot;Count&quot;, color=&quot;Facility&quot; ) # fundametally need to be working with graph object parameters not express parameters updatemenus = [ { &quot;buttons&quot;: [ { &quot;method&quot;: &quot;restyle&quot;, &quot;label&quot;: f, &quot;args&quot;: [ { &quot;x&quot;: [dfe.loc[dfe.Facility.eq(f), &quot;Month&quot;]], &quot;y&quot;: [dfe.loc[dfe.Facility.eq(f), &quot;Count&quot;]], &quot;name&quot;: f, &quot;meta&quot;: f, }, ], } for f in [&quot;Deworming needed&quot;, &quot;Anemia&quot;] # dfe[&quot;Facility&quot;].unique() ], &quot;direction&quot;: &quot;down&quot;, &quot;showactive&quot;: True, } ] fig = fig.update_layout(updatemenus=updatemenus) # px does not set an appropriate hovertemplate.... fig.update_traces( hovertemplate=&quot;Facility=%{meta}&lt;br&gt;Month=%{x}&lt;br&gt;Count=%{y}&lt;extra&gt;&lt;/extra&gt;&quot;, meta=&quot;Deworming needed&quot;, ) </code></pre>
python|pandas|drop-down-menu|plotly|plotly-express
0
3,976
67,969,133
Block matrix with optimization variables in CVXPY
<p>I want to build a block matrix that has the form</p>

<pre><code>Q = [[A, B], [C, D]]
</code></pre>

<p>where each of the blocks <code>A,B,C,D</code> is the following matrix:</p>

<ul>
<li><code>A</code> is simply the 2x2 identity matrix</li>
<li><code>B</code> is the embedding of a 2x1 vector <code>b=(b_1,b_2)</code> in the diagonal of the identity, <code>B = diag(b_1, b_2)</code></li>
<li><code>C</code> is the transpose of <code>B</code></li>
<li><code>D</code> is the 2x2 identity multiplied by some constant <code>d</code>, <code>D = d*A</code></li>
</ul>

<p>My optimization problem is <code>min d</code> such that <code>Q &gt;&gt; 0</code>, that is, the PSD condition. <strong>I need help with this problem.</strong></p>

<p>Confusions:</p>

<ol>
<li>I know how to make a 2x2 identity matrix using NumPy, <code>np.eye(2)</code>. I am not sure if it makes sense to use it within my matrix <code>Q</code>.</li>
<li>Furthermore, for <code>b</code> I define <code>b = cvxpy.Variable((2,1))</code>. Then, I can also define its transpose as <code>b_t = b.T</code>, but how do I turn this into the matrices <code>B</code> and <code>C</code>, given there is no outer product? That is, I need to embed the elements of the vector in the diagonal of the matrix.</li>
<li>Since <code>d</code> is the variable I minimize over, I need to define it as <code>d = cvxpy.Variable(1)</code>. But then I cannot simply multiply it by the identity matrix in order to have <code>diag(d,d)</code>, which I call <code>D = d*np.eye(2)</code>.</li>
</ol>

<p>In general I cannot figure out whether I need to first define the block matrix <code>Q</code> as <code>Q = cvxpy.Variable((4,4))</code> or <code>Q = cvxpy.Parameter((4,4))</code> and then have all the terms from above use <code>cvxpy.bmat</code> to give it the precise form.</p>

<p>Any help appreciated.</p>
<p>You might get a better answer on a maths site than a programming one. However, in this case it's straightforward.</p>

<p>Swapping two rows and (the same) two columns can be done by</p>

<pre><code>M -&gt; Q'*M*Q
</code></pre>

<p>where Q is a permutation matrix (and hence orthogonal), so such a transformed matrix will be PSD iff the original is PSD.</p>

<p>If we write</p>

<pre><code>M = ( 1   0   b1  0  )
    ( 0   1   0   b2 )
    ( b1  0   d   0  )
    ( 0   b2  0   d  )
</code></pre>

<p>and swap the middle two rows and columns, we get</p>

<pre><code>M = ( 1   b1  0   0  )
    ( b1  d   0   0  )
    ( 0   0   1   b2 )
    ( 0   0   b2  d  )
</code></pre>

<p>And this matrix will be PSD iff the two blocks on the diagonal are PSD, i.e. iff</p>

<pre><code>d &gt; max( b1*b1, b2*b2)
</code></pre>
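<p>A quick numerical check of that claim; a small sketch with made-up values for <code>b</code>:</p>

<pre><code>import numpy as np

b = np.array([0.7, -1.3])          # hypothetical values of the vector b
d = np.max(b**2)                   # the threshold from the argument above

Q = np.block([[np.eye(2),  np.diag(b)],
              [np.diag(b), d * np.eye(2)]])

print(np.linalg.eigvalsh(Q))       # the smallest eigenvalue is 0 at d = max(b_i**2)
                                   # and strictly positive for any larger d
</code></pre>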
python|numpy|mathematical-optimization|cvxpy
0
3,977
67,927,011
Python SQLAlchemy Importing Table Names in Lowercase (Snowflake)
<p>Using both pandas.read_sql as well as pandas.read_sql_table, I keep getting the entire table back with all the column names in lowercase. Is there anyway around this?</p> <p>I wanted to do some transformations on the data then replace the table in the DB, but it's a pain if doing so changes all the column names to lowercase.</p> <pre><code>#both of these produce the same lowercase columns sql = 'SELECT * from &quot;DB&quot;.&quot;SCHEMA&quot;.&quot;'+&quot;tablename&quot;+'&quot;; ' df = pd.read_sql( sql, con=engine ) df = pd.read_sql_table( &quot;tablename&quot;, con=engine ) </code></pre> <p><a href="https://i.stack.imgur.com/2mhSI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2mhSI.png" alt="enter image description here" /></a></p>
<blockquote>
<p>Snowflake stores all case-insensitive object names in uppercase text. In contrast, SQLAlchemy considers all lowercase object names to be case-insensitive. Snowflake SQLAlchemy converts the object name case during schema-level communication, i.e. during table and index reflection. If you use uppercase object names, SQLAlchemy assumes they are case-sensitive and encloses the names with quotes. This behavior will cause mismatches against data dictionary data received from Snowflake, so unless identifier names have been truly created as case sensitive using quotes, e.g., &quot;TestDb&quot;, all lowercase names should be used on the SQLAlchemy side.</p>
</blockquote>

<p><a href="https://github.com/snowflakedb/snowflake-sqlalchemy" rel="nofollow noreferrer">https://github.com/snowflakedb/snowflake-sqlalchemy</a></p>

<p>Lowercase is the default behavior in SQLAlchemy. You should not use uppercase in SQLAlchemy or pandas unless it is really necessary. You can either use <code>&quot;...&quot;</code> or <code>quote_names</code> in SQLAlchemy to mark identifiers as case-sensitive. If you insist on getting uppercase columns everywhere, this post about using an events listener could be helpful: <a href="https://stackoverflow.com/a/34322171/12032355">https://stackoverflow.com/a/34322171/12032355</a></p>
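<p>If all you need is the uppercase column names back on the pandas side, a small post-processing sketch (assuming the Snowflake identifiers are case-insensitive, i.e. stored in uppercase, and <code>engine</code> is the engine from the question):</p>

<pre><code>df = pd.read_sql_table('tablename', con=engine)
df.columns = [c.upper() for c in df.columns]   # restore the uppercase names Snowflake stores
</code></pre>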
python|sql|pandas|sqlalchemy|snowflake-cloud-data-platform
1
3,978
31,845,258
multi index plotting
<p>I have some data where I've manipulated the dataframe using the following code:</p> <pre><code>import pandas as pd import numpy as np data = pd.DataFrame([[0,0,0,3,6,5,6,1],[1,1,1,3,4,5,2,0],[2,1,0,3,6,5,6,1],[3,0,0,2,9,4,2,1],[4,0,1,3,4,8,1,1],[5,1,1,3,3,5,9,1],[6,1,0,3,3,5,6,1],[7,0,1,3,4,8,9,1]], columns=["id", "sex", "split", "group0Low", "group0High", "group1Low", "group1High", "trim"]) data #remove all where trim == 0 trimmed = data[(data.trim == 1)] trimmed #create df with columns to be split columns = ['group0Low', 'group0High', 'group1Low', 'group1High'] to_split = trimmed[columns] to_split level_group = np.where(to_split.columns.str.contains('0'), 0, 1) # output: array([0, 0, 1, 1]) level_low_high = np.where(to_split.columns.str.contains('Low'), 'low', 'high') # output: array(['low', 'high', 'low', 'high'], dtype='&lt;U4') multi_level_columns = pd.MultiIndex.from_arrays([level_group, level_low_high], names=['group', 'val']) to_split.columns = multi_level_columns to_split.stack(level='group') sex = trimmed['sex'] split = trimmed['split'] horizontalStack = pd.concat([sex, split, to_split], axis=1) horizontalStack finalData = horizontalStack.groupby(['split', 'sex', 'group']) finalData.mean() </code></pre> <p>My question is, how do I plot the mean data using ggplot or seaborn such that for each "split" level I get a graph that looks like this:</p> <p><a href="https://i.stack.imgur.com/j7uNA.png" rel="noreferrer"><img src="https://i.stack.imgur.com/j7uNA.png" alt="enter image description here"></a></p> <p>At the bottom of the code you can see I've tried to split up the group factor so I can separate the bars, but that resulted in an error (KeyError: 'group') and I think that is related to the way I used multi indexing</p>
<p>I would use a factor plot from seaborn.</p> <p>Say you have data like this:</p> <pre><code>import numpy as np import pandas import seaborn seaborn.set(style='ticks') np.random.seed(0) groups = ('Group 1', 'Group 2') sexes = ('Male', 'Female') means = ('Low', 'High') index = pandas.MultiIndex.from_product( [groups, sexes, means], names=['Group', 'Sex', 'Mean'] ) values = np.random.randint(low=20, high=100, size=len(index)) data = pandas.DataFrame(data={'val': values}, index=index).reset_index() print(data) Group Sex Mean val 0 Group 1 Male Low 64 1 Group 1 Male High 67 2 Group 1 Female Low 84 3 Group 1 Female High 87 4 Group 2 Male Low 87 5 Group 2 Male High 29 6 Group 2 Female Low 41 7 Group 2 Female High 56 </code></pre> <p>You can then create the factor plot with one command + plus an extra line to remove some redundant (for your data) x-labels:</p> <pre><code>fg = seaborn.factorplot(x='Group', y='val', hue='Mean', col='Sex', data=data, kind='bar') fg.set_xlabels('') </code></pre> <p>Which gives me:</p> <p><a href="https://i.stack.imgur.com/KRBBq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KRBBq.png" alt="enter image description here"></a></p>
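<p>Note that in more recent seaborn releases <code>factorplot</code> was renamed; if the call above raises an error on your version, the equivalent (to the best of my knowledge) is:</p>

<pre><code>fg = seaborn.catplot(x='Group', y='val', hue='Mean', col='Sex',
                     data=data, kind='bar')
fg.set_xlabels('')
</code></pre>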
python|pandas|matplotlib|seaborn
33
3,979
41,336,576
What are the input/output tensors, for the translation(RNN) tutorial?
<p>As per <a href="https://stackoverflow.com/questions/39781946/unable-to-deploy-a-cloud-ml-model">Unable to deploy a Cloud ML model</a> if I want to deploy my model to the Google Cloud ML I need explicitly set the "input"/"output" collections that will store the references to the input/output tensors, like this:</p> <blockquote> <p>This collection should name all the input tensors for your graph. Similarly, a collection named “outputs” is required to name the output tensors for your graph. Assuming your graph has two input tensors x and y, and one output tensor scores, this can be done as follows:</p> <p>tf.add_to_collection(“inputs”, json.dumps({“x” : x.name, “y”: y.name})) tf.add_to_collection(“outputs”, json.dumps({“scores”: scores.name})) </p> <p>Here “x”, “y” and “scores” become aliases to the actual tensor names (x.name, y.name and scores.name)</p> </blockquote> <p>However, I do not know what are the input/output tensors in the translation(RNN) <a href="https://github.com/tensorflow/models/blob/master/tutorials/rnn/translate/translate.py" rel="nofollow noreferrer">tutorial</a>. Without this knowledge, I can't refactor the code and deploy my models to the Google Cloud ML.</p>
<p>According to the code below, the inputs are: encoder_inputs, decoder_inputs and target_weights, and the output is the third element of the return value of step().</p>

<p><a href="https://github.com/petewarden/tensorflow_makefile/blob/master/tensorflow/models/rnn/translate/seq2seq_model.py#L170" rel="nofollow noreferrer">https://github.com/petewarden/tensorflow_makefile/blob/master/tensorflow/models/rnn/translate/seq2seq_model.py#L170</a></p>
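<p>As a rough, hedged sketch of how the collections could be filled in: the attribute names below come from the linked seq2seq_model code, but the aliasing scheme is just one possible choice, since <code>encoder_inputs</code>, <code>decoder_inputs</code> and <code>target_weights</code> are Python lists of placeholders rather than single tensors, and <code>output_logits</code> is a hypothetical name for whichever output tensors you actually take from <code>step()</code>:</p>

<pre><code>import json
import tensorflow as tf

inputs = {}
for i, t in enumerate(model.encoder_inputs):
    inputs['encoder_input_%d' % i] = t.name
for i, t in enumerate(model.decoder_inputs):
    inputs['decoder_input_%d' % i] = t.name
for i, t in enumerate(model.target_weights):
    inputs['target_weight_%d' % i] = t.name

outputs = {'output_logit_%d' % i: t.name for i, t in enumerate(output_logits)}

tf.add_to_collection('inputs', json.dumps(inputs))
tf.add_to_collection('outputs', json.dumps(outputs))
</code></pre>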
tensorflow
1
3,980
41,337,477
Select non-null rows from a specific column in a DataFrame and take a sub-selection of other columns
<p>I have a DataFrame which has several columns, so I chose some of its columns to create a variable like this: <code>xtrain = df[['Age','Fare', 'Group_Size','deck', 'Pclass', 'Title' ]]</code>. I want to drop from these columns all rows for which the Survive column in the main DataFrame is NaN.</p>
<p>You can pass a boolean mask to your df based on <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.notnull.html" rel="noreferrer"><code>notnull()</code></a> of 'Survive' column and select the cols of interest:</p> <pre><code>In [2]: # make some data df = pd.DataFrame(np.random.randn(5,7), columns= ['Survive', 'Age','Fare', 'Group_Size','deck', 'Pclass', 'Title' ]) df['Survive'].iloc[2] = np.NaN df Out[2]: Survive Age Fare Group_Size deck Pclass Title 0 1.174206 -0.056846 0.454437 0.496695 1.401509 -2.078731 -1.024832 1 0.036843 1.060134 0.770625 -0.114912 0.118991 -0.317909 0.061022 2 NaN -0.132394 -0.236904 -0.324087 0.570660 0.758084 -0.176421 3 -2.145934 -0.020003 -0.777785 0.835467 1.498284 -1.371325 0.661991 4 -0.197144 -0.089806 -0.706548 1.621260 1.754292 0.725897 0.860482 </code></pre> <p>Now pass a mask to <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-label" rel="noreferrer"><code>loc</code></a> to take only non <code>NaN</code> rows:</p> <pre><code>In [3]: xtrain = df.loc[df['Survive'].notnull(), ['Age','Fare', 'Group_Size','deck', 'Pclass', 'Title' ]] xtrain Out[3]: Age Fare Group_Size deck Pclass Title 0 -0.056846 0.454437 0.496695 1.401509 -2.078731 -1.024832 1 1.060134 0.770625 -0.114912 0.118991 -0.317909 0.061022 3 -0.020003 -0.777785 0.835467 1.498284 -1.371325 0.661991 4 -0.089806 -0.706548 1.621260 1.754292 0.725897 0.860482 </code></pre>
python|pandas
52
3,981
41,598,763
How to custom sort pandas multi-index?
<p>The following code generates the pandas table named <code>out</code>. </p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'Book': ['B1', 'B1', 'B2', 'B3', 'B3', 'B3'], 'Trader': ['T1', 'Z2', 'Z2', 'T1', 'U3', 'T2'], 'Position':[10, 33, -34, 87, 43, 99]}) df = df[['Book', 'Trader', 'Position']] table = pd.pivot_table(df, index=['Book', 'Trader'], values=['Position'], aggfunc=np.sum) print(table) tab_tots = table.groupby(level='Book').sum() tab_tots.index = [tab_tots.index, ['Total'] * len(tab_tots)] print(tab_tots) out = pd.concat( [table, tab_tots] ).sort_index().append( table.sum().rename(('Grand', 'Total')) ) </code></pre> <p>The table <code>out</code> look like <a href="https://i.stack.imgur.com/SULHQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SULHQ.png" alt="this."></a></p> <p>But I would like it to look like <a href="https://i.stack.imgur.com/ygZPM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ygZPM.png" alt="this."></a></p> <p>Notice how the second table always puts the 'Total' at the bottom. So basically I still want to sort alphabetically but I would like to always put 'Total' last. Could someone provide an adjustment to my code that gives my desired output?</p>
<p>Pandas has built-in functionality within the <code>pivot_table</code> function to compute the marginal totals.</p> <pre><code>table = pd.pivot_table(df, index='Book', columns='Trader', values='Position', aggfunc=np.sum, margins=True, margins_name='Total').drop('Total').stack() table[('Grand', 'Total')] = table.sum() table.name = 'Position' table.reset_index() Book Trader Position 0 B1 T1 10.0 1 B1 Z2 33.0 2 B1 Total 43.0 3 B2 Z2 -34.0 4 B2 Total -34.0 5 B3 T1 87.0 6 B3 T2 99.0 7 B3 U3 43.0 8 B3 Total 229.0 13 Grand Total 238.0 </code></pre> <h2>Solution based on sorting multi-index</h2> <p>This solution continues off from your analysis by starting from your <code>out</code> DataFrame. You can convert <code>Book</code> and <code>Trader</code> to Pandas categorical type which allows you to custom sort by passing in the argument <code>ordered=True</code> and a list of the <code>categories</code> in the order you want sorted.</p> <pre><code>out = out.reset_index() trader_cats = pd.Categorical(out['Trader'], categories=sorted(df.Trader.unique()) + ['Total'], ordered=True) book_cats = pd.Categorical(out['Book'], categories=sorted(df.Book.unique()) + ['Grand'], ordered=True) out['Trader'] = trader_cats out['Book'] = book_cats out.set_index(['Book', 'Trader'], inplace=True) out.sort_index(level=['Book', 'Trader']) Position Book Trader B1 T1 10 Z2 33 Total 43 B2 Z2 -34 Total -34 B3 T1 87 T2 99 U3 43 Total 229 Grand Total 238 </code></pre>
python|pandas
2
3,982
27,628,765
Apply numpy index to matrix
<p>I have spent the last hour trying to figure this out</p> <p>Suppose we have</p> <pre><code>import numpy as np a = np.random.rand(5, 20) - 0.5 amin_index = np.argmin(np.abs(a), axis=1) print(amin_index) &gt; [ 0 12 5 18 1] # or something similar </code></pre> <p>this does not work:</p> <pre><code>a[amin_index] </code></pre> <p>So, in essence, I need to find the minima along a certain axis for the array np.abs(a), but then extract the values from the array a at these positions. How can I apply an index to just one axis?</p> <p>Probably very simple, but I can't get it figured out. Also, I can't use any loops since I have to do this for arrays with several million entries. thanks </p>
<p>It's because <code>argmin</code> returns the column index for each of the rows (with <code>axis=1</code>); therefore you need to access each row at its particular column:</p>

<p><code>a[range(a.shape[0]), amin_index]</code></p>
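<p>On newer NumPy versions there is also a helper that does this directly; a small sketch:</p>

<pre><code># NumPy &gt;= 1.15
a_min = np.take_along_axis(a, amin_index[:, None], axis=1).ravel()
</code></pre>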
python|arrays|numpy
1
3,983
61,397,388
Calculate grouped rolling cumulative sum with multiplier
<p>I would like to calculate the rolling cumulative sum after multiplying a column by a constant within a Pandas DataFrame. For example, given the series:</p> <pre><code>0 0 1 0 0 0 </code></pre> <p>I would like to apply a constant multiple, for example 1.5, to cumulatively to compute the following series:</p> <pre><code>0 0 1 1.5 2.25 3.375 </code></pre> <p>The series will need to be calculated over a group, for example:</p> <pre><code>pd.DataFrame({'Group': ['a', 'a', 'a', 'a', 'b', 'b'], 'Value': [0, 0, 1, 0, 0, 0]}) </code></pre> <p>Should compute for <code>a</code> and <code>b</code> respectively.</p> <p>The series will only ever contain either <code>0</code> or <code>1</code> as values, and <code>1</code> will only occur once in the series. As such the sum of a series <strong>before</strong> any computation is <code>1</code>.</p>
<p>This is a one-liner, but it produces the output in the same <code>Value</code> column. Since everything after the <code>1</code> is <code>0</code>, adding <code>1.5</code> turns each of those zeros into <code>1.5</code>, and the cumulative product then gives <code>1.5, 2.25, 3.375, ...</code></p>

<pre><code>df.iloc[df.Value.idxmax()+1:, df.columns == 'Value'] = (df.iloc[df.Value.idxmax()+1:, df.columns == 'Value']+1.5).cumprod()

df.Value

#   Value
# 0 0.0
# 1 0.0
# 2 1.0
# 3 1.5
# 4 2.25
# 5 3.375
</code></pre>

<p><strong>EDIT</strong></p>

<p>If this needs to be applied only within each group, it is better to create a function:</p>

<pre><code>def fun(df_g):
    df = df_g.copy()
    df.iloc[df.Value.idxmax()+1:, df.columns == 'Value'] = (df.iloc[df.Value.idxmax()+1:, df.columns == 'Value']+1.5).cumprod()
    return df.Value

df_result = df.groupby('Group').apply(fun).\
               transform(pd.Series).reset_index(level=1, drop=True)

df_result

# Group   0
# a       0.0
# a       0.0
# a       1.0
# a       1.5
# b       0.0
# b       0.0
</code></pre>
python|pandas
2
3,984
61,357,913
How to create matrix from set of lists which contains more than 4 values?
<p>I have set of lists (i.e. list x , list y, list z), e.g.</p> <pre><code>x = ['41.95915452', '41.96333025', '41.98135503', '41.95096716', '41.96504172', '41.96526867', '41.98068483', '41.98117072', '41.98059828', '41.95915452', '41.96333025', '41.98135503', '41.95096716'] y = ['12.60718918', '12.62725589', '12.6201431', '12.60017199', '12.62774075', '12.62800706', '12.62812394', '12.6278259', '12.62810614', '12.60718918', '12.62725589', '12.6201431', '12.60017199'] z = ['9.215398066', '8.249650758', '8.791595671', '8.246394455', '9.27132698', '5.667547722', '7.783268126', '9.471492129', '9.668210684', '9.215398066', '8.249650758', '8.791595671', '8.246394455'] </code></pre> <p>There are such around 800 lists. I have to create a 3*3 matrix from the each of the lists <code>x</code>, <code>y</code> and <code>z</code> such that [x1, y1, z1], one of its row should be like ['41.95915452', '12.60718918', '9.215398066' ] and list must contain at least than 4 entries.</p> <p>My code : </p> <pre><code>for i in np.arange(41.70, 42.10, 0.05): #print(round(i,2), end=', ') for j in np.arange(12.30, 12.80, 0.05): # print(round(j,2), end=', ') for k in np.arange(0,26,5): #print("\n") #print(round(i,2),round(j,2),k, end=', ') xmax = round(i+0.05,2) ymax = round(j+ 0.05,2) zmax = round(k+5,2) #print("Voxel",xmax,ymax,zmax) v = [] x1 = [] y1 = [] z1 = [] count = 0; with open('a.csv') as csvfile: plots = csv.reader(csvfile,delimiter=',') for rows in plots: if(float(rows[0]) &gt;= i and float(rows[0])&lt;= xmax and float(rows[1]) &gt;=j and float(rows[1])&lt;=ymax and float(rows[2])&gt;=k and float(rows[2])&lt;=zmax): #print("points", float(rows[0]),float(rows[1]),float(rows[2])) x1.append(rows[0]) y1.append(rows[1]) z1.append(rows[2]) count= count+1 #f = open("demofile2.txt", "a") #f.write(str(i)+","+str(j)+","+str(k)+","+str(count)+"\n") #f.write(text) #f.close() #print(count) if(count &gt; 3): v1 = [i,j,k] v.append(v1) print(v) print(x1) print(y1) print(z1) print("\n") </code></pre>
<p>Use numpy vstack and transpose.</p>

<p>Try this code:</p>

<pre><code>np.vstack([x, y, z]).T
</code></pre>

<p>If you want the output as a list, then use</p>

<p><code>np.vstack([x, y, z]).T.tolist()</code></p>
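<p>Since the values in the lists are strings, you may also want to convert them to floats in the same step; a small sketch:</p>

<pre><code>import numpy as np

points = np.vstack([x, y, z]).T.astype(float)   # shape (13, 3), one (x, y, z) row per point
</code></pre>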
python|list|numpy|matrix|eigenvalue
1
3,985
61,362,484
Why use the pip in a conda virtual environment makes the global effect?
<p>Previously, I installed TensorFlow 1.13 on my machine. There are some projects depending on different versions of TensorFlow and I do not want to mix up the different versions.</p> <p>So I tried creating an env called tf2.0 and used pip to install tensorflow 2.0.0b1 in that specific virtual environment.</p> <p>However, after I ran <code>pip install tensorflow-gpu==2.0.0b1</code> in that "tf2.0" conda environment, I found that it takes effect globally, which means I have to use tensorflow-gpu 2.0.0b1 even when the virtual env "tf2.0" is deactivated.</p> <p>I wish I could use TensorFlow 1.13 when the virtual env is deactivated.</p>
<p>It's hard to troubleshoot the described conditions without more details (exact commands run, showing <code>PATH</code> before and after and post activation, etc.). Nevertheless, you can try switching to following <a href="https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#pip-in-env" rel="nofollow noreferrer">the most recent recommendations for mixing Conda and Pip</a>. Namely, avoid installing things <em>ad hoc</em>, which is prone to using the wrong <code>pip</code> and clobbering packages, but instead define a YAML file and always create the whole env in one go.</p> <p>As a minimal example:</p> <p><strong>my_env.yaml</strong></p> <pre><code>name: my_env channels: - defaults dependencies: - python - pip - pip: - tensorflow-gpu==2.0.0b1 </code></pre> <p>which can be created with <code>conda env create -f my_env.yaml</code>. Typically, it is best to include everything possible in the "non-pip" section of dependencies.</p>
tensorflow|pip|anaconda|conda|tensorflow2.0
0
3,986
68,506,555
Pandas If Else condition on multiple columns
<p>I have a df as:</p> <pre><code>df: col1 col2 col3 col4 col5 0 1.36 4.31 7.66 2 2 1 2.62 3.30 2.48 2 1 2 5.19 3.58 1.62 0 2 3 2.06 3.16 3.50 1 1 4 2.19 2.98 3.38 1 1 </code></pre> <p>I want</p> <p>col6 to return 1 when (col4 &gt; 1 and col5 &gt; 1) else 0</p> <p>and</p> <p>col7 to return 1 when (col4 &gt; 1 and col5 &gt; 1 and col 4 + col5 &gt; 2) else 0</p> <p>I am trying</p> <pre><code>df.loc[df['col4'] &gt; 0, df['col5'] &gt; 0, 'col6'] = '1' </code></pre> <p>however I am getting the error:</p> <pre><code>File &quot;pandas\_libs\index.pyx&quot;, line 269, in pandas._libs.index.IndexEngine.get_indexer File &quot;pandas\_libs\hashtable_class_helper.pxi&quot;, line 5247, in pandas._libs.hashtable.PyObjectHashTable.lookup TypeError: unhashable type: 'Series' </code></pre> <p>How can I perform this operation?</p>
<p>When you do a bitwise operation on Series objects or arrays, you get an array of booleans, each of whose elements is True or False. Those are basically 0 or 1, and in fact more convenient in most cases:</p> <pre><code>df['col6'] = (df['col4'] &gt; 1) &amp; (df['col5'] &gt; 1) df['col7'] = df['col6'] </code></pre> <p>That last one is not a clever trick. If two numbers are both &gt;1, then of course their sum must be &gt;2. If you absolutely want the integers 0 and 1 instead of booleans, use <code>Series.astype</code>:</p> <pre><code>df['col6'] = ((df['col4'] &gt; 1) &amp; (df['col5'] &gt; 1)).astype(int) </code></pre>
python|pandas|dataframe|if-statement
2
3,987
68,512,460
to group but not using the groupby function of Python/Pandas
<pre><code>INPUT DATA- array([['00:00:00', 20, 15.27], ['00:15:00', 20, 9.07], ['00:30:00', 20, 7.33], ..., ['00:30:00', 407, 34.0], ['00:00:00', 407, 172.0], ['00:10:00', 407, 187.0]], dtype=object) </code></pre> <p>First column - time second column - id third column - price</p> <p>60k+ rows</p> <p>Need to find sum of price <em>per</em> id for each time.</p> <p><strong>I am trying to work without the GROUPBY function</strong></p> <p>How can I achieve this? I've been trying using this.</p> <pre><code>result={} for t,id,price in trial.inputs(): result[t]={} if id not in result[t]: result[t][id]=0 result[t][id]+=price print (result) </code></pre>
<p>Update:</p> <pre><code>from collections import defaultdict d = defaultdict(list) for t,id,price in trial: d[t,id].append(price) print (d) </code></pre> <p>I am able to group the prices based on t and id. How do I find the sum of the prices for each id?</p>
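<p>To get the sums (still without <code>groupby</code>), you can reduce each list in the dictionary; a small sketch based on the code above:</p>

<pre><code>sums = {key: sum(prices) for key, prices in d.items()}   # key is the (time, id) pair
print(sums)
</code></pre>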
python|pandas|dataframe|numpy
0
3,988
36,354,101
Linear regression with tensorflow is very slow
<p>I am trying to implement a simple linear regression in tensorflow (with the goal of eventually extending it to more advanced models). My current code looks as follows:</p> <pre><code>def linear_regression(data, labels): # Setup placeholders and variables num_datapoints = data.shape[0] num_features = data.shape[1] x = tf.placeholder(tf.float32, [None, num_features]) y_ = tf.placeholder(tf.float32, [None]) coeffs = tf.Variable(tf.random_normal(shape=[num_features, 1])) bias = tf.Variable(tf.random_normal(shape=[1])) # Prediction y = tf.matmul(x, coeffs) + bias # Cost function cost = tf.reduce_sum(tf.pow(y-y_, 2))/(2.*num_datapoints) # Optimizer NUM_STEPS = 500 optimizer = tf.train.AdamOptimizer() train_step = optimizer.minimize(lasso_cost) # Fit the model init = tf.initialize_all_variables() cost_history = np.zeros(NUM_STEPS) sess = tf.Session() sess.run(init) for i in range(NUM_STEPS): if i % 100 == 0: print 'Step:', i for xi, yi in zip(data, labels): sess.run(train_step, feed_dict={x: np.expand_dims(xi, axis=0), y_: np.expand_dims(yi, axis=0)}) cost_history[i] = sess.run(lasso_cost, feed_dict={x: data, y_:labels}) return sess.run(coeffs), cost_history </code></pre> <p>The code works, and finds the correct coefficients. However, it is extremely slow. On my MacBook Pro, it takes several minutes just to run a few training epochs for a data set with 1000 data points and 10 features. Since I'm running OSX I don't have GPU acceleration, which could explain some of the slowness, but I would think that it could be faster than this. I have experimented with different optimizers, but the performance is very similar.</p> <p>Is there some obvious way to speed up this code? Otherwise, it feels like tensorflow is pretty much useless for these types of problems.</p>
<p>It is so slow, since you train the network point by point which requires <code>NUM_STEPS * num_datapoints</code> iterations (which leads to <strong>5 hundred thousands</strong> cycles).</p> <p>All you actually need to train your network is</p> <pre><code>for i in range(NUM_STEPS): sess.run(train_step, feed_dict={x: data, y_:labels}) </code></pre> <p>This would take just a couple of seconds.</p>
python|performance|regression|tensorflow
5
3,989
36,462,909
How to set matrix columns iteratively and fast?
<p>I have the following Python code:</p> <pre><code>H = np.zeros(shape=(N-q+1,q),dtype=complex) for i in range(0,N-q+1): H[i,:] = u[i:q+i] </code></pre> <p>where <em>N</em> and <em>q</em> are constants and <em>u</em> is a vector long enough so no out of bounds error would occur when <em>u[i:q+i]</em>.</p> <p>I have tried to optimize the code by using list comprehension,</p> <pre><code>H = np.asarray([u[i:q+i] for i in range(0,N-q+1)]) </code></pre> <p>but <em>np.asarray()</em> makes it slower than previous code.</p> <p>Any idea in order to optimize the assignation of column values?</p>
<p>You could use <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.ndarray.strides.html" rel="nofollow"><code>stride.as_strided</code></a>:</p> <pre><code>import numpy.lib.stride_tricks as stride s = u.strides[0] H2 = stride.as_strided(u, shape=(N-q+1,q), strides=(s, s)).astype(complex) </code></pre> <p>Using <code>strides=(s, s)</code> is the key -- in particular, making the first stride <code>s</code> means that each row of <code>H2</code> advances the index into <code>u</code> by the number of bytes needed to advance one item. Hence the rows repeat, albeit shifted by one.</p> <hr> <p>For example,</p> <pre><code>import numpy as np import numpy.lib.stride_tricks as stride N, q = 10**2, 6 u = np.arange((N-q+1)*(N)) def using_loop(u): H = np.zeros(shape=(N-q+1,q),dtype=complex) for i in range(0,N-q+1): H[i,:] = u[i:q+i] return H def using_stride(u): s = u.strides[0] H2 = stride.as_strided(u, shape=(N-q+1,q), strides=(s, s)).astype(complex) return H2 H = using_loop(u) H2 = using_stride(u) assert np.allclose(H, H2) </code></pre> <hr> <p>Since <code>stride.as_strided</code> avoids the Python <code>for-loop</code>, <code>using_stride</code> is faster than <code>using_loop</code>. The advantage grows as <code>N-q</code> (the number of iterations) increases.</p> <p>With N = 10**2 <code>using_stride</code> is 5x faster:</p> <pre><code>In [119]: %timeit using_loop(u) 10000 loops, best of 3: 61.6 µs per loop In [120]: %timeit using_stride(u) 100000 loops, best of 3: 11.9 µs per loop </code></pre> <p>With N = 10**3 <code>using_stride</code> is 28x faster:</p> <pre><code>In [122]: %timeit using_loop(u) 1000 loops, best of 3: 636 µs per loop In [123]: %timeit using_stride(u) 10000 loops, best of 3: 22.4 µs per loop </code></pre>
python|numpy|optimization
2
3,990
53,331,692
Python Groupby Running Total/Cumsum column based on string in another column
<p>I want to create 2 Running Total columns that ONLY aggregate the <code>Amount</code> values based on whether <code>TYPE</code> is <code>ANNUAL</code> or <code>MONTHLY</code> within each <code>Deal</code>, so it would be <code>DF.groupby(['Deal','Booking Month'])</code> and then somehow apply a sum function when <code>TYPE==ANNUAL</code> for the first column and <code>TYPE==MONTHLY</code> for the second column.</p> <p>This is what my grouped DF looks like, plus the two desired columns.</p> <pre><code>Deal TYPE    Month  Amount    Running Total(ANNUAL) Running Total(Monthly)
A    ANNUAL  April  1000      1000                  0
A    ANNUAL  April  2000      3000                  0
A    MONTHLY June   1500      3000                  1500
B    MONTHLY April  11150     0                     11150
B    ANNUAL  July   700       700                   11150
B    ANNUAL  August 303.63    1003.63               11150
C    ANNUAL  April  25624.59  25624.59              0
D    ANNUAL  June   5000      5000                  0
D    ANNUAL  July   5000      10000                 0
D    ANNUAL  August 5000      15000                 0
E    ANNUAL  April  10        10                    0
E    MONTHLY May    1000      10                    1000
E    ANNUAL  May    500       510                   1000
E    MONTHLY June   500.00    510                   1500
E    ANNUAL  June   600       1110                  1500
E    MONTHLY July   300       1110                  1800
E    MONTHLY July   8200      1110                  10000
</code></pre>
<p>Use <code>filters</code> and <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> + <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.transform.html" rel="nofollow noreferrer"><code>transform</code></a>:</p> <pre><code>mask = df.TYPE.eq('ANNUAL') cols = ['Running Total(ANNUAL)','Running Total(MONTHLY)'] df.loc[mask,'Running Total(ANNUAL)'] = df.loc[mask,'Amount'] df.loc[~mask,'Running Total(MONTHLY)'] = df.loc[~mask,'Amount'] df[cols] = df[cols].fillna(0) df[cols] = df.groupby(['Deal'])['Running Total(ANNUAL)','Running Total(MONTHLY)'].transform('cumsum') print(df) Deal TYPE Month Amount Running Total(ANNUAL) \ 0 A ANNUAL April 1000.00 1000.00 1 A ANNUAL April 2000.00 3000.00 2 A MONTHLY June 1500.00 3000.00 3 B MONTHLY April 11150.00 0.00 4 B ANNUAL July 700.00 700.00 5 B ANNUAL August 303.63 1003.63 6 C ANNUAL April 25624.59 25624.59 7 D ANNUAL June 5000.00 5000.00 8 D ANNUAL July 5000.00 10000.00 9 D ANNUAL August 5000.00 15000.00 10 E ANNUAL April 10.00 10.00 11 E MONTHLY May 1000.00 10.00 12 E ANNUAL May 500.00 510.00 13 E MONTHLY June 500.00 510.00 14 E ANNUAL June 600.00 1110.00 15 E MONTHLY July 300.00 1110.00 16 E MONTHLY July 8200.00 1110.00 Running Total(MONTHLY) 0 0.0 1 0.0 2 1500.0 3 11150.0 4 11150.0 5 11150.0 6 0.0 7 0.0 8 0.0 9 0.0 10 0.0 11 1000.0 12 1000.0 13 1500.0 14 1500.0 15 1800.0 16 10000.0 </code></pre>
python|pandas|cumulative-sum
3
3,991
52,973,306
Finding average of the smallest 4 values in an array which generates 200 random numbers
<p>I have the following code run in the spyder IDE:</p> <pre><code>idnum = 201034628 seed(idnum); w = np.random.rand(200) print(w) </code></pre> <p>This generates the following result:</p> <pre><code>[0.00176212 0.79092217 0.1759531 0.00239256 0.78842458 0.30404404 0.25633004 0.88271124 0.72031936 0.17356416 0.5674158 0.83897948 0.4133943 0.22471237 0.66562002 0.70207085 0.55722598 0.86308392 0.14584968 0.66224337 0.79900625 0.2687224 0.45508786 0.99014178 0.176943 0.42335567 0.41034833 0.75497287 0.41301282 0.11294302 0.58715198 0.01524138 0.58633177 0.9784454 0.14610789 0.68654175 0.94733177 0.93776749 0.17294272 0.7491281 0.94087871 0.60510781 0.43708462 0.77303273 0.13250525 0.50794632 0.36706808 0.46873059 0.99757662 0.144249 0.69427544 0.78359245 0.64836852 0.16574067 0.98633778 0.05613428 0.51713291 0.27246708 0.26216551 0.44605373 0.99963659 0.90569603 0.31139955 0.25559081 0.8295379 0.84638476 0.48194161 0.505123 0.57456517 0.62727722 0.11940848 0.49435157 0.07438197 0.11481526 0.74184931 0.94697125 0.93788422 0.3586455 0.852594 0.35167897 0.57139446 0.77923007 0.09070311 0.07821641 0.38140649 0.80945136 0.81820638 0.8140444 0.94458644 0.42983398 0.06609377 0.25737315 0.27873234 0.87183073 0.14317078 0.8964766 0.00731705 0.16095917 0.70980283 0.49757526 0.06990482 0.15304861 0.02710815 0.21319381 0.82069776 0.19839614 0.64250566 0.6383788 0.12539173 0.74583486 0.11041236 0.827742 0.20340574 0.03643315 0.62638826 0.12454928 0.64567226 0.04782684 0.88455847 0.62114705 0.82253557 0.12590787 0.99624612 0.0780055 0.38312778 0.56969024 0.21771078 0.18022973 0.06825607 0.05189065 0.19410785 0.93458232 0.84006441 0.8796388 0.00574523 0.92213916 0.60108549 0.48774697 0.79918579 0.05700109 0.42167703 0.26358089 0.37023659 0.05556867 0.1788227 0.63840475 0.79772203 0.20969062 0.55459356 0.81425831 0.06324903 0.274849 0.15092814 0.65504038 0.57138257 0.37113864 0.84318386 0.58306703 0.95677286 0.28962055 0.31085227 0.92607168 0.61132872 0.42862182 0.67385059 0.58591843 0.98309858 0.12926512 0.89650825 0.47853266 0.16842571 0.77785123 0.16004964 0.24379739 0.76415568 0.14338659 0.73812864 0.52921474 0.8678008 0.82205399 0.1219327 0.83831355 0.5219863 0.67680272 0.05486754 0.89255115 0.91609614 0.74104108 0.98763434 0.07343619 0.0879543 0.55360531 0.01048341 0.01083459 0.13080064 0.51212431 0.24552376 0.77620793 0.16560353 0.42042389] </code></pre> <p>I need to find the average of the smallest 4 values from the numbers in the w array. How would I do this ?</p>
<p>You can use <a href="https://docs.python.org/3.6/library/heapq.html#heapq.nsmallest" rel="nofollow noreferrer"><code>heapq.nsmallest</code></a> which should be slightly faster than sorting:</p>

<pre><code>import heapq
import statistics

print(statistics.mean(heapq.nsmallest(4, w)))
</code></pre>
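<p>If you would rather stay in numpy (the array <code>w</code> already is one), a sketch using <code>np.partition</code>, which moves the 4 smallest values (in arbitrary order) to the front of the array:</p>

<pre><code>import numpy as np

print(np.partition(w, 4)[:4].mean())
</code></pre>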
python|numpy
2
3,992
65,715,347
Firestore with Bigquery or Tensorflow for training and predictions?
<p>I am using Google Firestore as the data store for my mobile application. I want to publish changes via Pub/Sub <code>onChange</code>, or export the data every day, to train a custom AI model. The model would make predictions that I can use to nudge the user in the app.</p> <p>What is the best Google Cloud Platform architecture for something like this? I thought BigQuery or TensorFlow might work, but I have not been able to figure it out.</p>
<p>It all depends on what you are trying to achieve. Try to keep it as simple as possible; otherwise, you will create a problematic model that is hard to debug if it goes wrong. I would take a look at the Firebase Machine Learning mobile SDK. I have added some of the documentation below for you to see if it meets your requirements.</p> <p><a href="https://firebase.google.com/docs/ml" rel="nofollow noreferrer">Documentation</a></p> <blockquote> <p>Firebase Machine Learning is a mobile SDK that brings Google's machine learning expertise to Android and iOS apps in a powerful yet easy-to-use package. Whether you're new or experienced in machine learning, you can implement the functionality you need in just a few lines of code. There's no need to have in-depth knowledge of neural networks or model optimization to get started. On the other hand, if you are an experienced ML developer, Firebase ML provides convenient APIs that help you use your custom TensorFlow Lite models in your mobile apps.</p> </blockquote> <p><a href="https://developers.google.com/ml-kit" rel="nofollow noreferrer">Documentation</a></p> <blockquote> <p>ML Kit brings Google's machine learning expertise to mobile developers in a powerful and easy-to-use package. Make your iOS and Android apps more engaging, personalized, and helpful with optimised solutions to run on the device.</p> </blockquote>
tensorflow|google-cloud-platform|google-cloud-firestore|google-bigquery
0
3,993
65,785,748
How can I call a MySQL function using SQLAlchemy in Python?
<p>I'm trying to call a MySQL function from Python. Here is my code, using SQLAlchemy:</p> <pre><code>CREATE DEFINER=`root`@`localhost` FUNCTION `my_function`() RETURNS int
    READS SQL DATA
    DETERMINISTIC
BEGIN
    insert into employeedata (select * from employee where joiningdate &lt;'20160101');
    RETURN 1;
END
</code></pre> <pre><code>engine = sqlalchemy.create_engine('mysql+pymysql://root:root@localhost:3306/sasft')
pd.read_sql_query(&quot;select my_function()&quot;, engine)
</code></pre> <p>I tried using MySQL Connector as well.</p> <p>The function is called fine from Workbench, but not from Python: it executes without giving me any error, yet the result is unchanged.</p>
<p>If you just want to run a function you don't need to use the Pandas SQL interface. You can use the following:</p>

<pre><code>from sqlalchemy import text

with engine.connect() as connection:
    result = connection.execute(text(&quot;SELECT my_function()&quot;))
    print(result)
</code></pre>

<p>Reference: <a href="https://docs.sqlalchemy.org/en/14/core/connections.html#understanding-autocommit" rel="nofollow noreferrer">Understanding Autocommit</a></p>

<p><code>text()</code> will then &quot;force&quot; this auto commit.</p>

<p>An alternative would be using a context manager with explicit <code>autocommit</code> execution options:</p>

<pre><code>with engine.connect().execution_options(autocommit=True) as connection:
    connection.execute(&quot;SELECT my_function()&quot;)
</code></pre>

<p>To test a MWE example using the SQLAlchemy connection rather than the Pandas API, say that you wanted to read 10 items from a table <code>table_name</code>:</p>

<pre><code>with engine.connect() as connection:
    rs = connection.execute(&quot;SELECT * FROM table_name LIMIT 10&quot;)
    for row in rs:
        print(row)
</code></pre>
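<p>Another option, since <code>my_function</code> performs an <code>INSERT</code>, is <code>engine.begin()</code>, which opens a transaction and commits it automatically when the block exits without an error:</p>

<pre><code>from sqlalchemy import text

with engine.begin() as connection:
    connection.execute(text(&quot;SELECT my_function()&quot;))
</code></pre>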
python|mysql|pandas|database|sqlalchemy
0
3,994
65,874,080
ValueError: zero-dimensional arrays cannot be concatenated,
<p>I have two arrays with axis=0 (they are the results of the mean and the std of a df):</p> <pre><code>df_cats =
0      58.609619
1     105.926514
2      76.706543
3      75.405762
4      68.937744
         ...
75    113.124268
76    125.557373
77    130.514893
78    141.373779
79    109.185791
Length: 80, dtype: float64
0     63.540835
1     55.053429
2     96.221076
3     42.963771
4     57.447924
        ...
75    42.080755
76    55.309517
77    38.997856
78    57.364695
79    40.197461
Length: 80, dtype: float64

df_dogs =
0      86.870361
1     153.085205
2      89.576416
3     139.721924
4     107.218750
         ...
75    129.498291
76    108.676025
77    113.125732
78    145.829346
79    100.272461
Length: 80, dtype: float64
0     57.699218
1     71.814790
2     40.130439
3     44.966932
4     48.964512
        ...
75    50.994298
76    58.257198
77    89.240987
78    58.945353
79    68.841721
Length: 80, dtype: float64
</code></pre> <p>And I'm trying to concatenate the two arrays with axis=1, using this code:</p> <pre><code>dogs_and_cats = np.concatenate((df_dogs, df_cats), axis=1)
</code></pre> <p>but I always get this error:</p> <pre><code>**ValueError:** zero-dimensional arrays cannot be concatenated
</code></pre> <p>How can I concatenate them?</p>
<p>One-dimensional arrays don't have a second dimension, which is where your problem comes from. You can make the concatenation work by reshaping the arrays into 2D arrays with a single column:</p>

<pre><code>df_dogs.shape = (df_dogs.shape[0], 1)
df_cats.shape = (df_cats.shape[0], 1)
</code></pre>

<p>And now you can concatenate your arrays.</p>
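<p>Equivalently, a small sketch that avoids mutating <code>.shape</code> in place (assuming <code>df_dogs</code> and <code>df_cats</code> are the one-dimensional mean/std results shown in the question):</p>

<pre><code>import numpy as np

dogs = np.asarray(df_dogs).reshape(-1, 1)   # shape (80,) becomes (80, 1)
cats = np.asarray(df_cats).reshape(-1, 1)
dogs_and_cats = np.concatenate((dogs, cats), axis=1)

# or, in a single call that accepts 1-D inputs directly:
dogs_and_cats = np.column_stack((df_dogs, df_cats))
</code></pre>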
python|arrays|numpy|concatenation|pca
0
3,995
21,285,380
Find column whose name contains a specific string
<p>I have a dataframe with column names, and I want to find the one that contains a certain string, but does not exactly match it. I'm searching for <code>'spike'</code> in column names like <code>'spike-2'</code>, <code>'hey spike'</code>, <code>'spiked-in'</code> (the <code>'spike'</code> part is always continuous). </p> <p>I want the column name to be returned as a string or a variable, so I access the column later with <code>df['name']</code> or <code>df[name]</code> as normal. I've tried to find ways to do this, to no avail. Any tips?</p>
<p>Just iterate over <code>DataFrame.columns</code>, now this is an example in which you will end up with a list of column names that match:</p> <pre><code>import pandas as pd data = {'spike-2': [1,2,3], 'hey spke': [4,5,6], 'spiked-in': [7,8,9], 'no': [10,11,12]} df = pd.DataFrame(data) spike_cols = [col for col in df.columns if 'spike' in col] print(list(df.columns)) print(spike_cols) </code></pre> <p>Output:</p> <pre><code>['hey spke', 'no', 'spike-2', 'spiked-in'] ['spike-2', 'spiked-in'] </code></pre> <p>Explanation:</p> <ol> <li><code>df.columns</code> returns a list of column names</li> <li><code>[col for col in df.columns if 'spike' in col]</code> iterates over the list <code>df.columns</code> with the variable <code>col</code> and adds it to the resulting list if <code>col</code> contains <code>'spike'</code>. This syntax is <a href="http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions" rel="noreferrer">list comprehension</a>. </li> </ol> <p>If you only want the resulting data set with the columns that match you can do this:</p> <pre><code>df2 = df.filter(regex='spike') print(df2) </code></pre> <p>Output:</p> <pre><code> spike-2 spiked-in 0 1 7 1 2 8 2 3 9 </code></pre>
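<p>A vectorized variant of the same idea uses the string methods available on the column index, which also accept a regular expression:</p>

<pre><code>spike_cols = df.columns[df.columns.str.contains('spike')].tolist()
</code></pre>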
python|python-3.x|string|pandas|dataframe
363
3,996
63,415,585
Pandas multiple merge creates multidimensional duplicate columns
<p>My goal is to merge 4 excel worksheets into 1 based on similar Hostnames, Serial Number, Category... I'm using the pandas merge function below.</p> <pre><code>InventoryDf = pd.read_excel(&quot;Inventory.xlsx&quot;, sheet_name='Inventory') SoftwareDf = pd.read_excel(&quot;Inventory.xlsx&quot;, sheet_name='Software') HardwarewareDf = pd.read_excel(&quot;Inventory.xlsx&quot;, sheet_name='Hardware') CoverageDf = pd.read_excel(&quot;Inventory.xlsx&quot;, sheet_name='Coverage') data_frames = [InventoryDf, SoftwareDf, HardwarewareDf, CoverageDf] merge = partial(pd.merge, on=['Priority','Category','Product Family','Host Name','Serial Number'], how='outer') merge = reduce(merge, data_frames) </code></pre> <p>The issue is that each worksheet has a column &quot;IP Address&quot; with mostly similar IPs. For some reason, the merge Dataframe containes 4 columns, with 2 duplicate names: &quot;IP Address_x&quot;,&quot;IP Address_x&quot;,&quot;IP Address_y&quot;,&quot;IP Address_y&quot;</p> <p>I want to merge those 4 columns into 1 but I can't because they have similar names. I don't have the rename them manually because there are ~30 dataframe columns and it's tedious.</p> <p>Is there a say to merge them so that:</p> <ol> <li>If the ip is the same, merge it</li> <li>If the IP is different, use the first &quot;IP Address_x&quot; column on the left</li> <li>If one column is missing, just the first &quot;IP Address_x&quot; if IP is not empty</li> </ol> <p>This is a sample of the worksheet, I have more columns, like: Name, Url, Site Name, City...</p> <p><strong>InventoryDf</strong></p> <pre><code>+-----------+---------------+------------+----------+----------+ | Host Name | Serial Number | IP Address | Priority | Category | +-----------+---------------+------------+----------+----------+ | SwitchA | 1230 | 1.1.1.1 | 1 | Switch | +-----------+---------------+------------+----------+----------+ | SwitchA | 1231 | 1.1.1.1 | 1 | Switch | +-----------+---------------+------------+----------+----------+ | SwitchB | 1240 | 1.1.1.2 | 2 | Switch | +-----------+---------------+------------+----------+----------+ </code></pre> <p><strong>HardwareDf</strong></p> <pre><code>+-----------+---------------+------------+----------+----------+ | Host Name | Serial Number | IP Address | Priority | Category | +-----------+---------------+------------+----------+----------+ | SwitchA | 1230 | 1.1.0.1 | 1 | Switch | +-----------+---------------+------------+----------+----------+ | SwitchD | 1250 | 1.2.2.2 | 1 | Switch | +-----------+---------------+------------+----------+----------+ | SwitchE | 1260 | 1.3.3.3 | 2 | Switch | +-----------+---------------+------------+----------+----------+ </code></pre> <p><strong>SoftwareDf</strong></p> <pre><code>+-----------+---------------+------------+----------+----------+---------+ | Host Name | Serial Number | IP Address | Priority | Category | Version | +-----------+---------------+------------+----------+----------+---------+ | SwitchA | 1230 | 1.1.1.1 | 1 | Switch | X | +-----------+---------------+------------+----------+----------+---------+ | SwitchA | 1231 | 1.1.1.1 | 1 | Switch | X | +-----------+---------------+------------+----------+----------+---------+ | SwitchB | 1240 | 1.1.1.2 | 2 | Switch | Y | +-----------+---------------+------------+----------+----------+---------+ </code></pre> <p><strong>CoverageDf</strong></p> <pre><code>+-----------+---------------+------------+----------+----------+-------------+-------+ | Host Name | Serial Number | IP Address | Priority | 
Category | Coverage | Price | +-----------+---------------+------------+----------+----------+-------------+-------+ | SwitchA | 1230 | 1.1.1.1 | 1 | Switch | Not Covered | 100 | +-----------+---------------+------------+----------+----------+-------------+-------+ | SwitchA | 1231 | 1.1.1.1 | 1 | Switch | Covered | 300 | +-----------+---------------+------------+----------+----------+-------------+-------+ | SwitchB | 1240 | 1.1.1.2 | 2 | Switch | Not Covered | 200 | +-----------+---------------+------------+----------+----------+-------------+-------+ </code></pre> <p><strong>Expected result</strong> (IP Address are merged even though some are different for SwitchA)</p> <pre><code>+-----------+---------------+------------+----------+----------+---------+-------------+-------+ | Host Name | Serial Number | IP Address | Priority | Category | Version | Coverage | Price | +-----------+---------------+------------+----------+----------+---------+-------------+-------+ | SwitchA | 1230 | 1.1.1.1 | 1 | Switch | X | Not Covered | 100 | +-----------+---------------+------------+----------+----------+---------+-------------+-------+ | SwitchA | 1231 | 1.1.1.1 | 1 | Switch | X | Covered | 300 | +-----------+---------------+------------+----------+----------+---------+-------------+-------+ | SwitchB | 1240 | 1.1.1.2 | 2 | Switch | Y | Not Covered | 200 | +-----------+---------------+------------+----------+----------+---------+-------------+-------+ | SwitchD | 1250 | 1.2.2.2 | 1 | Switch | | | | +-----------+---------------+------------+----------+----------+---------+-------------+-------+ | SwitchE | 1260 | 1.3.3.3 | 2 | Switch | | | | +-----------+---------------+------------+----------+----------+---------+-------------+-------+ </code></pre> <p><strong>RAW extract of the result</strong>. 
Notice losts of redundant columns, IP Address_x</p> <pre><code> Source.Name_x Priority Item Type_x Category Product Family Product ID_x Software Type_x OS Version_x Suggested Version 1_x Host Name IP Address_x Serial Number Source.Name_y Product ID_y Software Type_y OS Version_y Current Milestone_x Suggested Version 1_y Suggested Version 2 Suggested Version 3 IP Address_y SW End of Life SW End of Sale URL_x Source.Name_x IP Address_x Item Type_y Product ID_x Current Milestone_y Hardware Lifecycle Status Replacement PID Replacement PID Info Replacement PID Price Replacement PID Price Discount Replacement PID Service Level Replacement PID Service Price Current PID Service Price Replacement PID Service Price Discount HW End of Life HW End of Sale URL_y Source.Name_y Item Type Product ID_y IP Address_y Coverage Status Contract Status Contract Number Coverage Start Date Coverage End Date SLA type inventory_30_Jun_15_19_35.xlsx 3 Chassis Switches Cisco Catalyst 2960-X Series Switches WS-C2960X-24PS-L IOS 15.2(4)E6 15.2(7)E2 SWITCH-IDF5-A1 10.1.1.8 XXXXX software_02_Jul_07_54_15.xlsx WS-C2960X-24PS-L IOS 15.2(4)E6 End of Vulnerability Support 15.2(7)E2 &lt;NA&gt; &lt;NA&gt; 10.1.1.8 2023-04-30 2018-05-01 https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-4500-series-switches/eos-eol-notice-c51-739919.html &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; None NaN &lt;NA&gt; &lt;NA&gt; NaN NaN &lt;NA&gt; NaT NaT &lt;NA&gt; coverage_24_Jul_10_06_49.xlsx Chassis WS-C2960X-24PS-L 10.1.1.8 Covered - Non-IBM Covered - Non-IBM Covered - Non-IBM NaT NaT None inventory_30_Jun_15_19_35.xlsx 3 Chassis Switches Cisco Catalyst 2960-X Series Switches WS-C2960X-48LPS-L IOS 15.2(4)E6 15.2(7)E2 SWITCH-IDF6-A1 10.1.1.9 YYYYY software_02_Jul_07_54_15.xlsx WS-C2960X-48LPS-L IOS 15.2(4)E6 End of Vulnerability Support 15.2(7)E2 &lt;NA&gt; &lt;NA&gt; 10.1.1.9 2023-04-30 2018-05-01 https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-4500-series-switches/eos-eol-notice-c51-739919.html &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; None NaN &lt;NA&gt; &lt;NA&gt; NaN NaN &lt;NA&gt; NaT NaT &lt;NA&gt; coverage_24_Jul_10_06_49.xlsx Chassis WS-C2960X-48LPS-L 10.1.1.9 Covered - Non-IBM Covered - Non-IBM Covered - Non-IBM NaT NaT None inventory_30_Jun_15_19_35.xlsx 3 Chassis Switches Cisco Catalyst 2960-X Series Switches WS-C2960X-24PS-L IOS 15.2(4)E6 15.2(7)E2 SWITCH-IDF7-A1 10.1.1.11 ZZZZZZ software_02_Jul_07_54_15.xlsx WS-C2960X-24PS-L IOS 15.2(4)E6 End of Vulnerability Support 15.2(7)E2 &lt;NA&gt; &lt;NA&gt; 10.1.1.11 2023-04-30 2018-05-01 https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-4500-series-switches/eos-eol-notice-c51-739919.html &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; None NaN &lt;NA&gt; &lt;NA&gt; NaN NaN &lt;NA&gt; NaT NaT &lt;NA&gt; coverage_24_Jul_10_06_49.xlsx Chassis WS-C2960X-24PS-L 10.1.1.11 Covered - Non-IBM Covered - Non-IBM Covered - Non-IBM NaT NaT None inventory_30_Jun_15_19_35.xlsx 3 Chassis Switches Cisco Catalyst 2960-X Series Switches WS-C2960X-24PS-L IOS 15.2(4)E6 15.2(7)E2 SWITCH-IDF8-A1 10.1.1.12 QQQQQ software_02_Jul_07_54_15.xlsx WS-C2960X-24PS-L IOS 15.2(4)E6 End of Vulnerability Support 15.2(7)E2 &lt;NA&gt; &lt;NA&gt; 10.1.1.12 2023-04-30 2018-05-01 https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-4500-series-switches/eos-eol-notice-c51-739919.html &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; None 
NaN &lt;NA&gt; &lt;NA&gt; NaN NaN &lt;NA&gt; NaT NaT &lt;NA&gt; coverage_24_Jul_10_06_49.xlsx Chassis WS-C2960X-24PS-L 10.1.1.12 Covered - Non-IBM Covered - Non-IBM Covered - Non-IBM NaT NaT None inventory_30_Jun_15_19_35.xlsx 3 Chassis Switches Cisco Catalyst 2960-X Series Switches WS-C2960X-48LPS-L IOS 15.2(4)E6 15.2(7)E2 SWITCH-IDF9-A1 10.1.1.13 WWWWW software_02_Jul_07_54_15.xlsx WS-C2960X-48LPS-L IOS 15.2(4)E6 End of Vulnerability Support 15.2(7)E2 &lt;NA&gt; &lt;NA&gt; 10.1.1.13 2023-04-30 2018-05-01 https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-4500-series-switches/eos-eol-notice-c51-739919.html &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; None NaN &lt;NA&gt; &lt;NA&gt; NaN NaN &lt;NA&gt; NaT NaT &lt;NA&gt; coverage_24_Jul_10_06_49.xlsx Chassis WS-C2960X-48LPS-L 10.1.1.13 Covered - Non-IBM Covered - Non-IBM Covered - Non-IBM NaT NaT None inventory_30_Jun_15_30_08.xlsx 3 Chassis Switches Cisco Catalyst 2960-C Series Switches WS-C2960C-12PC-L IOS 15.2(4)E6 15.2(7)E2 SWITCH-MGK-A1 10.1.1.39 EEEEEE software_02_Jul_08_14_40.xlsx WS-C2960C-12PC-L IOS 15.2(4)E6 End of Vulnerability Support 15.2(7)E2 &lt;NA&gt; &lt;NA&gt; 10.1.1.39 2023-04-30 2018-05-01 https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-4500-series-switches/eos-eol-notice-c51-739919.html hardware_02_Jul_07_25_04.xlsx 10.1.1.39 Chassis WS-C2960C-12PC-L EoL Date Announced EOL in more than 24 months WS-C2960L-16PS-LL None 920.7 50 PSUT 215.16 122.76 0 2025-10-31 2020-10-30 https://www.cisco.com/c/en/us/products/switches/catalyst-2960-c-series-switches/eos-eol-notice-c51-743071.html coverage_24_Jul_10_37_26.xlsx Chassis WS-C2960C-12PC-L 10.1.1.39 Uncovered with ELLW No Contract No Contract NaT NaT None inventory_30_Jun_15_19_35.xlsx 3 Chassis Switches Cisco Catalyst 2960-X Series Switches WS-C2960X-48LPS-L IOS 15.2(4)E6 15.2(7)E2 SWITCH-SRVROOM-A1 10.1.1.2 RRRRRR software_02_Jul_07_54_15.xlsx WS-C2960X-48LPS-L IOS 15.2(4)E6 End of Vulnerability Support 15.2(7)E2 &lt;NA&gt; &lt;NA&gt; 10.1.1.2 2023-04-30 2018-05-01 https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-4500-series-switches/eos-eol-notice-c51-739919.html &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; None NaN &lt;NA&gt; &lt;NA&gt; NaN NaN &lt;NA&gt; NaT NaT &lt;NA&gt; coverage_24_Jul_10_06_49.xlsx Chassis WS-C2960X-48LPS-L 10.1.1.2 Covered - Non-IBM Covered - Non-IBM Covered - Non-IBM NaT NaT None inventory_30_Jun_15_20_39.xlsx 3 Chassis Switches Cisco Catalyst 3850 Series Switches WS-C3850-24P-S IOS-XE 16.3.7 16.9.5 SWITCH-SRVROOM-C1 10.2.1.254 TTTTTT software_02_Jul_07_54_33.xlsx WS-C3850-24P-S IOS-XE 16.3.7 End of Engineering 16.9.5 &lt;NA&gt; &lt;NA&gt; 10.2.1.3 2023-07-31 2018-08-01 https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-3850-series-switches/eos-eol-notice-c51-740255.html software_02_Jul_07_02_48.xlsx 10.2.1.254 &lt;NA&gt; &lt;NA&gt; End of Engineering &lt;NA&gt; &lt;NA&gt; None NaN &lt;NA&gt; &lt;NA&gt; NaN NaN &lt;NA&gt; 2023-07-31 2018-08-01 https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-3850-series-switches/eos-eol-notice-c51-740255.html coverage_24_Jul_10_07_28.xlsx Chassis WS-C3850-24P-S 10.2.1.254 Covered - Non-IBM Covered - Non-IBM Covered - Non-IBM NaT NaT None software_30_Jun_15_21_13.xlsx 1 &lt;NA&gt; Security Cisco ASA 5500-X with FirePOWER Services &lt;NA&gt; ASA 9.7(1)4 9.12.3 Interim SRVROOM-FW2.umbrellacorp.com 10.60.127.19 YYYYYY software_02_Jul_07_55_54.xlsx ASA5506-K9 ASA 
9.7(1)4 End of Engineering 9.12.3 Interim 9.8.4 Interim &lt;NA&gt; 10.1.122.9 2022-08-31 2017-08-25 http://www.cisco.com/c/en/us/products/collateral/security/asa-firepower-services/eos-eol-notice-c51-738646.html &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; None NaN &lt;NA&gt; &lt;NA&gt; NaN NaN &lt;NA&gt; NaT NaT &lt;NA&gt; coverage_24_Jul_10_07_48.xlsx Chassis ASA5506-K9 10.60.127.19 Covered - Non-IBM Covered - Non-IBM Covered - Non-IBM NaT NaT None software_30_Jun_15_21_13.xlsx 1 &lt;NA&gt; Security Cisco ASA 5500-X with FirePOWER Services &lt;NA&gt; ASA 9.7(1)4 9.12.3 Interim FW2.umbrellacorp.com 10.60.127.18 GGGGGGG software_02_Jul_07_55_54.xlsx ASA5506-K9 ASA 9.7(1)4 End of Engineering 9.12.3 Interim 9.8.4 Interim &lt;NA&gt; 10.1.122.8 2022-08-31 2017-08-25 http://www.cisco.com/c/en/us/products/collateral/security/asa-firepower-services/eos-eol-notice-c51-738646.html &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; None NaN &lt;NA&gt; &lt;NA&gt; NaN NaN &lt;NA&gt; NaT NaT &lt;NA&gt; coverage_24_Jul_10_07_48.xlsx Chassis ASA5506-K9 10.60.127.18 Covered - Non-IBM Covered - Non-IBM Covered - Non-IBM NaT NaT None </code></pre>
<p>Started with advanced techniques using <code>functools</code>. Add <code>inspect</code> to the mix <a href="https://stackoverflow.com/questions/18425225/getting-the-name-of-a-variable-as-a-string">get variable name</a></p> <ol> <li>iterate over your list of dataframes. Capture the name and rename the <em>IP address</em> column</li> <li>once have merged dataframe rename left most <em>IP address</em> column</li> <li><code>fillna()</code> from other <em>IP address</em> columns and drop them</li> </ol> <pre><code>import inspect import functools def retrieve_name(var): callers_local_vars = inspect.currentframe().f_back.f_locals.items() return [var_name for var_name, var_val in callers_local_vars if var_val is var] data_frames = [InventoryDf, SoftwareDf, HardwareDf, CoverageDf] names = [] for df in data_frames: n = retrieve_name(df)[1].replace(&quot;Df&quot;, &quot;&quot;) names.append(n) df.columns = [f&quot;{n} {c}&quot; if c==&quot;IP Address&quot; else c for c in df.columns] # merge = functools.partial(pd.merge, on=['Priority','Category','Product Family','Host Name','Serial Number'], how='outer') merge = functools.partial(pd.merge, on=['Priority','Category','Host Name','Serial Number'], how='outer') merge = functools.reduce(merge, data_frames) # take column LHS IP Address and rename it to &quot;IP Address&quot;, fillna() from all subsequent columns # then drop them merge.rename(columns={f&quot;{names[0]} IP Address&quot;:&quot;IP Address&quot;}, inplace=True) for n in names[1:]: merge.loc[:,&quot;IP Address&quot;].fillna(merge.loc[:,f&quot;{n} IP Address&quot;], inplace=True) merge.drop(columns=f&quot;{n} IP Address&quot;, inplace=True) </code></pre>
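<p>As a design note: recovering the frame names through <code>inspect</code> is fragile, since it depends on the caller's local variables. A variant of the same renaming step, sketched with names assumed to match the question, is to pair each frame with an explicit label up front and then run the same merge and <code>fillna()</code> logic:</p>

<pre><code>named_frames = [(&quot;Inventory&quot;, InventoryDf), (&quot;Software&quot;, SoftwareDf),
                (&quot;Hardware&quot;, HardwarewareDf), (&quot;Coverage&quot;, CoverageDf)]
for name, frame in named_frames:
    frame.rename(columns={&quot;IP Address&quot;: f&quot;{name} IP Address&quot;}, inplace=True)
</code></pre>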
python|pandas|dataframe|merge|duplicates
1
3,997
63,488,042
Python fast insert multiple characters into all possible places of string
<p>I want to insert multiple characters into all possible places of a string. My current implementation uses <code>itertools.combinations_with_replacement</code> (<a href="https://docs.python.org/2/library/itertools.html#itertools.combinations_with_replacement" rel="nofollow noreferrer">doc</a>) to list all possible insertion positions, then converts the string to a <code>numpy</code> array, calls <code>numpy.insert</code> (<a href="https://numpy.org/doc/stable/reference/generated/numpy.insert.html" rel="nofollow noreferrer">doc</a>) to insert the characters into the array, and finally uses <code>join</code> to convert the inserted string array back to a string. Taking inserting 2 characters as an example:</p> <pre><code>import numpy as np
import itertools

string = &quot;stack&quot;
str_array = np.array(list(string), dtype=str)
characters = np.array([&quot;x&quot;, &quot;y&quot;], dtype=str)
new_strings = [&quot;&quot;.join(np.insert(str_array, ix, characters))
               for ix in itertools.combinations_with_replacement(range(len(string)+1), len(characters))]
</code></pre> <p>Outputs:</p> <p><code>['xystack', 'xsytack', 'xstyack', 'xstayck', 'xstacyk', 'xstacky', 'sxytack', 'sxtyack', 'sxtayck', 'sxtacyk', 'sxtacky', 'stxyack', 'stxayck', 'stxacyk', 'stxacky', 'staxyck', 'staxcyk', 'staxcky', 'stacxyk', 'stacxky', 'stackxy']</code></p> <p>It seems complicated, but I can't find a better way to achieve this if I want to insert any number (e.g., 3) of characters into a string. Did I miss a better and faster way to do this?</p>
<p>A recursive solution:</p> <pre><code>def mix(s, t, p=''): return s and t and mix(s[1:], t, p+s[0]) + mix(s, t[1:], p+t[0]) or [p + s + t] </code></pre> <p>My <code>p</code> is the prefix built so far. In each recursive step, I extend it with the first character from <code>s</code> or the first character from <code>t</code>. Unless one of them doesn't have a character left, in which case I just return the prefix plus whatever <em>is</em> left.</p> <p>Demo:</p> <pre><code>&gt;&gt;&gt; mix('xy', 'stack') ['xystack', 'xsytack', 'xstyack', 'xstayck', 'xstacyk', 'xstacky', 'sxytack', 'sxtyack', 'sxtayck', 'sxtacyk', 'sxtacky', 'stxyack', 'stxayck', 'stxacyk', 'stxacky', 'staxyck', 'staxcyk', 'staxcky', 'stacxyk', 'stacxky', 'stackxy'] </code></pre> <p>It's about 20 times faster than yours on your example case.</p>
python|numpy
1
3,998
21,467,628
Python method to compare 1 value_id against another columns' value_ids in separate dataframes?
<p>I have 2 CSV files. Each contains a data set with multiple columns and an ASSET_ID column. I used pandas to read each CSV file in as df1 and df2. My problem has been trying to define a function that iterates over the ASSET_ID values in df1 and compares each value against all the ASSET_ID values in df2. From there I want to return all the rows from df1 whose ASSET_ID matched one in df2. Any help would be appreciated; I've been working on this for hours with little to show for it. The dtypes are float or int.</p> <p>My configuration: Windows XP, Python 2.7 (Anaconda distribution).</p>
<p>Create a boolean mask of the values; this will index the rows where the 2 df's match, with no need to iterate, and it is much faster. Example:</p>

<pre><code># define a list of values
a = list('abcdef')
b = range(6)
df = pd.DataFrame({'X':pd.Series(a),'Y': pd.Series(b)})
# c has x values for 'a' and 'd' so these should not match
c = list('xbcxef')
df1 = pd.DataFrame({'X':pd.Series(c),'Y': pd.Series(b)})

print(df)
print(df1)

   X  Y
0  a  0
1  b  1
2  c  2
3  d  3
4  e  4
5  f  5

[6 rows x 2 columns]

   X  Y
0  x  0
1  b  1
2  c  2
3  x  3
4  e  4
5  f  5

[6 rows x 2 columns]

In [4]:
# now index your df using boolean condition on the values
df[df.X == df1.X]

Out[4]:
   X  Y
1  b  1
2  c  2
4  e  4
5  f  5

[4 rows x 2 columns]
</code></pre>

<p>EDIT:</p>

<p>If you have Series of different lengths then that won't work, in which case you can use <a href="http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.isin.html" rel="nofollow"><code>isin</code></a>:</p>

<p>First create 2 dataframes of different lengths:</p>

<pre><code>a = list('abcdef')
b = range(6)
d = range(10)
df = pd.DataFrame({'X':pd.Series(a),'Y': pd.Series(b)})
c = list('xbcxefxghi')
df1 = pd.DataFrame({'X':pd.Series(c),'Y': pd.Series(d)})

print(df)
print(df1)

   X  Y
0  a  0
1  b  1
2  c  2
3  d  3
4  e  4
5  f  5

[6 rows x 2 columns]

   X  Y
0  x  0
1  b  1
2  c  2
3  x  3
4  e  4
5  f  5
6  x  6
7  g  7
8  h  8
9  i  9

[10 rows x 2 columns]
</code></pre>

<p>Now use <code>isin</code> to select rows from df1 where the ids exist in df:</p>

<pre><code>In [7]:
df1[df1.X.isin(df.X)]

Out[7]:
   X  Y
1  b  1
2  c  2
4  e  4
5  f  5

[4 rows x 2 columns]
</code></pre>
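<p>The same matching can also be written as an inner merge, assuming the shared key column is named <code>ASSET_ID</code> in both files as described in the question:</p>

<pre><code>matched = df1.merge(df2[['ASSET_ID']].drop_duplicates(), on='ASSET_ID', how='inner')
</code></pre>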
python|csv|pandas|comparison|dataframe
2
3,999
53,744,941
Creating IsHoliday feature in pandas dataframe
<p>I'm trying to create an IsHoliday feature in my pandas DataFrame, which has a datetime index, based on a CSV file that includes the holidays in one year. Having little experience with pandas, I can only think of an iterative approach that compares the values of the two DataFrames. To be more specific:</p> <pre><code>for i in range(0, len(Holidays)-1):
    for j in range(0, len(df)-1):
        if (Holidays.loc[i, 'month'] == df.loc[j, 'month'] and
                Holidays.loc[i, 'day'] == df.loc[j, 'day']):
            df.loc[j, 'Isholiday'] = 1
        else:
            df.loc[j, 'Isholiday'] = 0
</code></pre> <p>My question is: how can this be implemented the pandas way, avoiding all the processing time?</p>
<p>You might want to use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where</code></a> for that:</p> <pre><code>df['IsHoliday'] = np.where(df.index.isin(Holidays),True,False) </code></pre>
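<p>If the holiday file only lists month/day pairs, as the loop in the question suggests, a sketch of the same idea without iterating row by row (assuming both frames expose <code>month</code> and <code>day</code> columns):</p>

<pre><code>holiday_keys = set(zip(Holidays['month'], Holidays['day']))
df['Isholiday'] = [int((m, d) in holiday_keys)
                   for m, d in zip(df['month'], df['day'])]
</code></pre>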
python|pandas
0