column      dtype          min     max
Unnamed: 0  int64          0       378k
id          int64          49.9k   73.8M
title       string length  15      150
question    string length  37      64.2k
answer      string length  37      44.1k
tags        string length  5       106
score       int64          -10     5.87k
5,200
39,036,808
Graphviz: give color to lines with rainbow effect
<p>I have dataframe and I build graph using <code>graphviz</code></p> <pre><code>for id_key, group in df.groupby('ID'): f = Digraph('finite_state_machine', filename='fsm.gv', encoding='utf-8') f.body.extend(['rankdir=LR', 'size="5,5"']) f.attr('node', shape='box') for i in range(len(group)-1): f.edge(str(group['category'].iloc[i]), str(group['category'].iloc[i+1]), label=str(group['search_term'].iloc[i+1])) f.render(filename=str(id_key)) </code></pre> <p>and get this result <a href="https://i.stack.imgur.com/2s2WN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2s2WN.png" alt="image"></a>. How can I change lines color: first arrow - red, second - orange, third - yellow, etc?</p>
<p>You can use one of the <a href="http://graphviz.org/doc/info/colors.html#brewer" rel="noreferrer">Brewer color schemes</a>. For example:</p> <pre><code>g = graphviz.Digraph(format='png') g.body.extend(["rankdir=LR"]) for i in range(9): g.edge(str(i),str(i+1),color="/spectral9/"+str(i+1)) g.render(filename="example") </code></pre> <p>produces:</p> <p><a href="https://i.stack.imgur.com/WE3gK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/WE3gK.png" alt="example"></a></p> <p>If you wish to generate the colors yourself you can use the <a href="https://en.wikipedia.org/wiki/HSL_and_HSV" rel="noreferrer">HSV format</a> with constant <em>saturation</em> &amp; <em>value</em> and increasing <em>hue</em>:</p> <pre><code>n = 20 g = graphviz.Digraph(format='png') g.body.extend(["layout=circo"]) for i in range(n): g.edge(str(i),str(i+1),color="{h:} 1 1".format(h=i/n)) g.edge(str(n),str(0),color="1 1 1") g.render(filename="example") </code></pre> <p>produces:</p> <p><a href="https://i.stack.imgur.com/rQcPy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/rQcPy.png" alt="example2"></a></p>
python|pandas|graphviz
6
5,201
39,308,065
How to display Chinese characters inside a pandas dataframe?
<p>I can read a csv file in which there is a column containing Chinese characters (other columns are English and numbers). However, Chinese characters don't display correctly. see photo below</p> <p><a href="https://i.stack.imgur.com/nG6oN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nG6oN.png" alt="enter image description here" /></a></p> <p>I loaded the csv file with <code>pd.read_csv()</code>.</p> <p>Either <code>display(data06_16)</code> or <code>data06_16.head()</code> won't display Chinese characters correctly.</p> <p>I tried to add the following lines into my <code>.bash_profile</code>:</p> <pre><code>export LC_ALL=zh_CN.UTF-8 export LANG=zh_CN.UTF-8 export LC_ALL=en_US.UTF-8 export LANG=en_US.UTF-8 </code></pre> <p>but it doesn't help.</p> <p>Also I have tried to add <code>encoding</code> arg to <code>pd.read_csv()</code>:</p> <pre><code>pd.read_csv('data.csv', encoding='utf_8') pd.read_csv('data.csv', encoding='utf_16') pd.read_csv('data.csv', encoding='utf_32') </code></pre> <p>These won't work at all.</p> <p>How can I display the Chinese characters properly?</p>
<p>I just remembered that the source dataset was created using <code>encoding='GBK'</code>, so I tried again using </p> <pre><code>data06_16 = pd.read_csv("../data/stocks1542monthly.csv", encoding="GBK") </code></pre> <p>Now, I can see all the Chinese characters. </p> <p>Thanks guys!</p>
python|csv|pandas|encoding|chinese-locale
7
5,202
39,382,801
Two subplots of gridspec, one line plot and the other bar plot, with a time-series on x, are not displaying
<p>I can't figure out what I'm doing wrong in my plotting. Plotting some data vs datetime on two subplots, one is a line plot, the other one should be a bar plot. I have tried several different ways of plotting it but without much success. </p> <p><strong>UPDATE</strong> Minor progress - the first plot is displayed, the second one not.</p> <p>Data looks for example like this:</p> <pre><code> A B C 0 2014-10-23 15:44:00 1 1.5 1 2014-10-30 19:10:00 2 2.0 2 2014-11-05 21:30:00 3 3.3 3 2014-11-07 05:50:00 4 4.0 4 2014-11-12 12:20:00 5 5.8 </code></pre> <p>The latest version of the code:</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt from matplotlib import gridspec import datetime Test = pd.DataFrame({'A':[datetime.datetime(2014,10,23,15,44),datetime.datetime(2014,10,30,19,10), datetime.datetime(2014,11,5,21,30),datetime.datetime(2014,11,7,5,50), datetime.datetime(2014,11,12,12,20)],'B':[1,2,3,4,5],'C':[1.5,2,3.3,4,5.8]}) fig = plt.figure(figsize=(8,8)) gs=gridspec.GridSpec(2,1, hspace=0.1, wspace=0.3, height_ratios=[1,1]) axes1 = plt.subplot(gs[0,0]) ax1 = plt.plot(Test["A"],Test["B"], 'dimgrey', linewidth = 2.5, label = "test") #DepTime = DepTime.set_index("DATETIME") #for l in log_times: # ax1 = plt.bar(l,DepTime.ix[l,"Depth"], width = 1.5, color='b') ax1 = plt.xlim(Test.iloc[0,0],Test.iloc[-1,0]) xticks2 = pd.date_range(start=Test.iloc[0,0],end=Test.iloc[-1,0], freq='D', normalize=True) ax1 = plt.xticks(xticks2,xticks2) ax1 = plt.setp(axes1.get_xticklabels(), visible=False) ax1 = plt.ylim(1,5) ax1 = plt.yticks(np.arange(0,5,1),rotation=0) ax1 = plt.ylabel("B",fontsize = 12) axes2 = plt.subplot(gs[1,0]) ax2 = Test.plot("A","C",kind='bar', ax=axes2) # #ax2 = plt.bar(Test["A"],Test["C"],color='k') #ax2 = plt.xlim(Test.iloc[0,0],Test.iloc[-1,0]) ax2 = plt.xlim(Test.iloc[0,0],Test.iloc[-1,0]) ax2 = plt.xlabel("A",fontsize=12) #ax2 = plt.xaxis_date() #ax2.xaxis.set_major_formatter(mdates.DateFormatter('%d)) ax2 = plt.xticks(xticks2,xticks2, rotation=45) #axes2.xticklabels([x.strftime('%d/%m/%Y %H:%M') for x in xticks2]) #ax2.autofmt_xdate() #ax2 = plt.yticks([0,1]) ax2 = plt.ylim(1,5) ax2 = plt.setp(axes2.get_yticklabels(), visible=False) fig = plt.show() </code></pre> <p>And the resulting figure: <a href="https://i.stack.imgur.com/iPokE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iPokE.png" alt="The incomplete plot"></a></p>
<p>I found out what the problem was!!! I don't understand why it worked fine with the line plot and not with the bar plot, but all that needed to be changed in the code is the actual one line for the bar plot. It should go like this:</p> <pre><code>ax2 = plt.bar(Test["A"].values,Test["C"],width=1) </code></pre> <p>So bar plot can't handle datetime? (This <a href="https://stackoverflow.com/a/25234067/5553319">answer</a> helped)</p>
python|python-2.7|pandas|matplotlib|bar-chart
0
5,203
19,803,407
Extracting every 3rd data in an array
<p>I have thousand of data for x and y, for this case i will just use 12 data. The array are used to plot a graph</p> <pre><code>x = np.array([1000,2000,3000,4000,5000,6000,7000,8000,9000,10000,11000,12000]) y = np.array([1,2,3,4,5,6,7,8,9,10,11,12]) py.plot(x,y) </code></pre> <p>How can i extract every [3] multiplication for me to plot? For Example</p> <pre><code>x = np.array([3000,6000,9000,12000]) y = np.array([3,6,9,12]) py.plot(x,y) </code></pre> <p>How can i extract every [5] multiplication for me to plot ? For Example</p> <pre><code>x = np.array([5000,10000]) y = np.array([5,10]) py.plot(x,y) </code></pre>
<p>Extract every third item starting with the third item (one dimensional array) </p> <pre><code>x[2::3], y[2::3] </code></pre> <p>Extract every fifth item starting with the fifth item (one dimensional array) </p> <pre><code>x[4::5], y[4::5] </code></pre>
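<p>A minimal runnable sketch of how those slices plug into the plotting call from the question (assuming <code>py</code> is the matplotlib/pylab alias used there):</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt  # stand-in for the "py" alias in the question

x = np.array([1000,2000,3000,4000,5000,6000,7000,8000,9000,10000,11000,12000])
y = np.array([1,2,3,4,5,6,7,8,9,10,11,12])

plt.plot(x[2::3], y[2::3])  # every 3rd point: (3000,3), (6000,6), (9000,9), (12000,12)
plt.plot(x[4::5], y[4::5])  # every 5th point: (5000,5), (10000,10)
plt.show()
</code></pre>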
python|graph|numpy
3
5,204
33,679,382
How do I get the current value of a Variable?
<p>Suppose we have a variable:</p> <pre><code>x = tf.Variable(...) </code></pre> <p>This variable can be updated during the training process using the <code>assign()</code> method.</p> <p><em>What is the best way to get the current value of a variable?</em></p> <p>I know we could use this:</p> <pre><code>session.run(x) </code></pre> <p>But I'm afraid this would trigger a whole chain of operations.</p> <p>In Theano, you could just do</p> <pre><code>y = theano.shared(...) y_vals = y.get_value() </code></pre> <p>I'm looking for the equivalent thing in TensorFlow.</p>
<p>The only way to get the value of the variable is by running it in a <code>session</code>. In the <a href="http://tensorflow.org/resources/faq.md#contents" rel="noreferrer">FAQ it is written</a> that:</p> <blockquote> <p>A Tensor object is a symbolic handle to the result of an operation, but does not actually hold the values of the operation's output.</p> </blockquote> <p>So the TF equivalent would be:</p> <pre><code>import tensorflow as tf x = tf.Variable([1.0, 2.0]) init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) v = sess.run(x) print(v) # will show you your variable. </code></pre> <p>The part with <code>init = tf.global_variables_initializer()</code> is important and is needed in order to initialize the variables.</p> <p>Also, take a look at <a href="https://www.tensorflow.org/api_docs/python/tf/InteractiveSession" rel="noreferrer">InteractiveSession</a> if you work in IPython.</p>
python|tensorflow
39
5,205
22,666,026
Debug Pandas Dataframe Apply
<p>New to Pandas and I have the following question: I want to apply my_func (a custom created function) to each row of a dataframe.</p> <pre><code>res = df.apply(lambda x: my_func(x, par1, par2) </code></pre> <p>When I debug and I put a breakpoint on the first row of my function defined as:</p> <pre><code>def my_func(myinput, par1): (...) </code></pre> <p>if I evaluate my input variable myinput I will get the entire dataframe (df). I was expecting only the first row of df instrad.. Am I missing something?</p> <p>Many thanks</p> <p>Regards</p>
<p>You need to set <code>axis=1</code> in <code>apply</code>:</p> <pre><code>res = df.apply(lambda x: my_func(x, par1, par2), axis=1) </code></pre> <p>The <a href="http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.apply.html" rel="nofollow">online docs</a> state that <code>axis=0</code> is column-wise whilst <code>axis=1</code> is row-wise.</p> <p>You could just pass the row in:</p> <pre><code>res = df.apply(lambda row: my_func(row), axis=1) </code></pre> <p>and then redefine your function:</p> <pre><code>def my_func(row): # do something with col1 row['col1'] = row['col1'] * 2 row['col2'] = row['col2'] + 2 # .... etc return row # return the modified row so apply can collect the results </code></pre>
python|debugging|pandas|dataframe|apply
3
5,206
22,639,840
Graph histogram and normal density with pandas
<p>I look for graphing both normal density and histogram density from a serie df like that but don't achieve. like link below <a href="http://hypertextbook.com/facts/2007/resistors/histogram.gif" rel="nofollow">http://hypertextbook.com/facts/2007/resistors/histogram.gif</a></p> <p>How to do that with DataFrame objects ?</p> <pre><code>&gt;&gt;&gt; fd Scenario s0000 -2.378963 ... s0999 1.368384 Name: values, Length: 1000, dtype: float64 &gt;&gt;&gt; fd.hist(bins=100) </code></pre>
<p>I think you can first plot the histogram with <code>fd.hist()</code>.</p> <p>Then fit the normal density and plot it with matplotlib; please refer to:</p> <p><a href="https://stackoverflow.com/questions/7805552/fitting-a-histogram-with-python">Fitting a histogram with python</a></p>
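<p>A minimal sketch of that fit-and-overlay approach, assuming <code>fd</code> is the Series from the question and that a normal fit is appropriate (uses <code>scipy.stats.norm</code>; on older matplotlib replace <code>density=True</code> with <code>normed=True</code>):</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

mu, sigma = norm.fit(fd)                # fit a normal density to the data
fd.hist(bins=100, density=True)         # density=True so the histogram integrates to 1
grid = np.linspace(fd.min(), fd.max(), 200)
plt.plot(grid, norm.pdf(grid, mu, sigma), 'r')  # overlay the fitted density
plt.show()
</code></pre>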
python|pandas
1
5,207
15,288,158
Data frame creation from an input file
<p>I have a file which contain some floating values like</p> <pre><code>5.234234 6.434344 5.45435 7.243224 4.0999884 </code></pre> <p>I want to create a dataframe from this file.It should look like</p> <pre><code>Activity 5.234234 6.434344 5.45435 7.243224 4.0999884 </code></pre> <p>What I've tried is(its not working)</p> <pre><code>import pandas as pd from StringIO import StringIO data= open('Activity.txt').read() pd.read_csv(StringIO(data),names='Activity',header=None) </code></pre> <p>Any help would be appreciated.</p>
<p>You must use:</p> <pre><code>pd.read_csv('Activity.txt', names=('Activity',)) </code></pre> <p>Note:<br> 1) You don't need to <em>open</em> your file.<br> 2) <code>names</code> is 'array-like', for example, a tuple or list.<br> 3) <code>header</code> is <code>None</code> by default when the <code>names</code> parameter is specified (otherwise 0).</p>
python|input|pandas
2
5,208
62,349,703
Jupyter Notebook kernel dies while compiling Neural Network
<p>While creating and compiling a keras Dense Neural Network my jupyter notebook kernel always dies. The terminal gives me a message that it couldn't allocate space and that CUDA is out of memory. My GPU (2060 Super) has already run this model many times, just not on jupyter. I have already done a lot of searching whithout an answer that actually works. Some of the things I tried were changing my kernel, using numba device.reset() and reinstalling both conda and jupyter but nothing seems to work.</p> <p>Here is the code block that always gets the error:</p> <pre><code>inputs = keras.Input(shape=(69,)) fc1 = keras.layers.Dense(100, activation='relu')(inputs) d1 = keras.layers.Dropout(0.1)(fc1, training=True) bn1 = keras.layers.BatchNormalization()(d1) fc2 = keras.layers.Dense(150, activation='relu')(bn1) d2 = keras.layers.Dropout(0.1)(fc2, training=True) bn2 = keras.layers.BatchNormalization()(d2) fc3 = keras.layers.Dense(200, activation='relu')(bn2) d3 = keras.layers.Dropout(0.1)(fc3, training=True) bn3 = keras.layers.BatchNormalization()(d3) fc4 = keras.layers.Dense(120, activation='relu')(bn3) d4 = keras.layers.Dropout(0.1)(fc4, training=True) bn4 = keras.layers.BatchNormalization()(d4) fc5 = keras.layers.Dense(60, activation='relu')(bn4) bn5 = keras.layers.BatchNormalization()(fc5) outputs = keras.layers.Dense(2, activation='softmax')(bn5) model = keras.Model(inputs, outputs) model.summary() model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"]) </code></pre>
<p>Try adding this function to your notebook and calling it before you build the model. I am not certain but it may help.</p> <pre><code>import tensorflow as tf def setup_gpus(): gpus = tf.config.list_physical_devices('GPU') if gpus: try: tf.config.experimental.set_visible_devices(gpus[0],'GPU') tf.config.experimental.set_virtual_device_configuration(gpus[0],[tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1500)]) except RuntimeError as e: print(e) setup_gpus() </code></pre>
python|tensorflow|keras|deep-learning|jupyter-notebook
1
5,209
62,197,944
How to change number of "colums" on diameter of a radial FacetGrid
<p>How can I change the number of months that are shown on belows circle? I would need to have december overlap with january The code produces the figure shown below but it also throws an error (which doesn't bother me as the figure is there), maybe this is a clue.</p> <p>Thanks for helping, best answer will be marked.</p> <p>I have following MinRepExample:</p> <pre><code>import pandas as pd import seaborn as sns data = {'month': ['jan','feb','mar','apr','may','jun','jul','aug','sep','oct','nov','dec'], 'value':['4.9','4.8','4.7','4.6','4.4','4.4','4.3','4.4','4.5','4.6','4.7','4.8']} df = pd.DataFrame(data, columns = ['month','value']) g = sns.FacetGrid(df, subplot_kws=dict(projection='polar'), despine=False, height=6) g.map(sns.lineplot(x="month", y="value", data=df ,sort=False, estimator=None)) </code></pre> <p>I also get following error when running with Jupyter:</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-29-9e2f3dc0d427&gt; in &lt;module&gt; 6 df = pd.DataFrame(data, columns = ['month','value']) 7 g = sns.FacetGrid(df, subplot_kws=dict(projection='polar'), despine=False, height=6) ----&gt; 8 g.map(sns.lineplot(x="month", y="value", data=df ,sort=False, estimator=None)) 9 10 c:\python38\lib\site-packages\seaborn\axisgrid.py in map(self, func, *args, **kwargs) 760 761 # Draw the plot --&gt; 762 self._facet_plot(func, ax, plot_args, kwargs) 763 764 # Finalize the annotations and layout c:\python38\lib\site-packages\seaborn\axisgrid.py in _facet_plot(self, func, ax, plot_args, plot_kwargs) 844 845 # Draw the plot --&gt; 846 func(*plot_args, **plot_kwargs) 847 848 # Sort out the supporting information TypeError: 'PolarAxesSubplot' object is not callable </code></pre> <p>I get following output:</p> <p><a href="https://i.stack.imgur.com/Gxe4a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gxe4a.png" alt="radial FacetGrid"></a></p>
<pre><code>from matplotlib import pyplot as plt import pandas as pd import numpy as np data = {'month': ['jan','feb','mar','apr','may','jun','jul','aug','sep','oct','nov','dez'], 'value':['4.9','4.8','4.7','4.6','4.4','4.4','4.3','4.4','4.5','4.6','4.7','4.8']} df = pd.DataFrame(data, columns = ['month','value']) ax = plt.subplot(111, projection='polar') ax.plot(df.index*np.pi/6, df.value.astype(float)) ax.set_thetagrids(range(0, 360, 30), df.month) ax.set_rlim(4, 5) </code></pre> <p><a href="https://i.stack.imgur.com/2cf3W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2cf3W.png" alt="enter image description here"></a></p> <p>If Dec should overlap with Jan, use:</p> <pre><code>ax.plot(df.index*2*np.pi/11, df.value.astype(float)) ax.set_thetagrids(np.arange(0, 360, 360/11), df.month) </code></pre> <p><a href="https://i.stack.imgur.com/pCXdU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pCXdU.png" alt="enter image description here"></a></p>
python|python-3.x|pandas|matplotlib|seaborn
0
5,210
62,453,764
How to do inference in parallel with tensorflow saved model predictors?
<p>Tensorflow version: 1.14</p> <p>Our current setup is using tensorflow estimators to do live NER i.e. perform inference one document at a time. We have 30 different fields to extract, and we run one model per field, so got total of 30 models. </p> <p>Our current setup uses python multiprocessing to do the inferences in parallel. (The inference is done on CPUs.) This approach reloads the model weights each time a prediction is made.</p> <p>Using the approach mentioned <a href="https://guillaumegenthial.github.io/serving-tensorflow-estimator.html" rel="nofollow noreferrer">here</a>, we exported the estimator models as <code>tf.saved_model</code>. This works as expected in that it does not reload the weights for each request. It also works fine for a single field inference in one process, but doesn't work with multiprocessing. All the processes hang when we make the predict function (<code>predict_fn</code> in the linked post) call.</p> <p><a href="https://stackoverflow.com/questions/36610290/tensorflow-and-multiprocessing-passing-sessions">This post</a> is related, but not sure how to adapt it for saved model.</p> <p>Importing tensorflow individually for each of the predictors did not work either:</p> <pre><code>class SavedModelPredictor(): def __init__(self, model_path): import tensorflow as tf self.predictor_fn = tf.contrib.predictor.from_saved_model(model_path) def predictor_fn(self, input_dict): return self.predictor_fn(input_dict) </code></pre> <p>How to make <code>tf.saved_model</code> work with multiprocessing?</p>
<p>Ray Serve, Ray's model serving solution, also supports offline batching. You can wrap your model in a Ray Serve backend and scale it to the number of replicas you want.</p> <pre><code>from ray import serve client = serve.start() class MyTFModel: def __init__(self, model_path): self.model = ... # load model @serve.accept_batch def __call__(self, input_batch): assert isinstance(input_batch, list) # forward pass self.model([item.data for item in input_batch]) # return a list of responses return [...] client.create_backend(&quot;tf&quot;, MyTFModel, # configure resources ray_actor_options={&quot;num_cpus&quot;: 2, &quot;num_gpus&quot;: 1}, # configure replicas config={ &quot;num_replicas&quot;: 2, &quot;max_batch_size&quot;: 24, &quot;batch_wait_timeout&quot;: 0.5 } ) client.create_endpoint(&quot;tf&quot;, backend=&quot;tf&quot;) handle = serve.get_handle(&quot;tf&quot;) # perform inference on a list of inputs futures = [handle.remote(data) for data in fields] result = ray.get(futures) </code></pre> <p>Try it out with the nightly wheel; here's the tutorial: <a href="https://docs.ray.io/en/master/serve/tutorials/batch.html" rel="nofollow noreferrer">https://docs.ray.io/en/master/serve/tutorials/batch.html</a></p> <p>Edit: updated the code sample for Ray 1.0</p>
python|tensorflow|multiprocessing|python-multiprocessing|ray
3
5,211
62,111,642
Difference between DCGAN & WGAN
<p>In my understanding, DCGAN use convolution layer in both Generator and Discriminator, and WGAN adjust the loss function, optimizer, clipping and last sigmoid function. The part they control is not overlapping. So are there any conflict if i implement both changes of DCGAN &amp; WGAN in one model?</p>
<p>In my experience, DCGAN proposed a well-tuned and simple model (or more specifically, a simple network structure with a well-tuned optimizer) to generate images. WGAN proposed a new way of measuring the distance between the data distribution and the model distribution, and theoretically addressed GAN problems such as unstable training and mode collapse. </p> <p>So you can use the network structure and parameters proposed in DCGAN together with the way of updating the discriminator and generator parameters proposed in WGAN. I've done that before; they do not conflict. </p> <p>But in practice, you might not get a very good result when you implement plain WGAN. It's more advisable to implement WGAN-GP. Here is an image generated by WGAN-GP:</p> <p><a href="https://i.stack.imgur.com/LJUtb.jpg" rel="nofollow noreferrer">images generated by WGAN-GP</a></p> <p>Hope my answer is helpful.</p>
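<p>A compact PyTorch sketch of the combination described above &mdash; DCGAN-style convolutional networks trained with the WGAN update (no sigmoid on the critic, weight clipping, RMSprop). <code>netG</code>, <code>netD</code>, <code>loader</code> and <code>nz</code> are placeholders for your own generator, critic, data loader and latent size:</p> <pre><code>import torch

opt_d = torch.optim.RMSprop(netD.parameters(), lr=5e-5)
opt_g = torch.optim.RMSprop(netG.parameters(), lr=5e-5)
clip, n_critic = 0.01, 5

for step, (real, _) in enumerate(loader):
    # critic update: maximize D(real) - D(fake), i.e. minimize the negative
    noise = torch.randn(real.size(0), nz, 1, 1)
    fake = netG(noise).detach()
    loss_d = -(netD(real).mean() - netD(fake).mean())
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    for p in netD.parameters():        # WGAN weight clipping
        p.data.clamp_(-clip, clip)

    if step % n_critic == 0:           # generator update every n_critic critic steps
        noise = torch.randn(real.size(0), nz, 1, 1)
        loss_g = -netD(netG(noise)).mean()
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
</code></pre>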
pytorch|generative-adversarial-network
0
5,212
62,251,084
How to get ssl information formatted in dataframe for multiple domains?
<p>How to get these stats in a dataframe for multiple domains. ex : google.com , nyu.edu</p> <p>expected_output :</p> <pre><code>name expiry date status google.com Wednesday, 12 August 2020 at 17:30:48 active nyu.edu expired/No Cert expired/No Cert </code></pre> <p>tried this :</p> <pre><code>import ssl, socket import pandas as pd hostname =['google.com','nyu.edu'] datas=[] for i in hostname: ctx = ssl.create_default_context() with ctx.wrap_socket(socket.socket(), server_hostname=i) as s: s.connect((i, 443)) cert = s.getpeercert() issued_to = subject['commonName'] notAfter=cert['notAfter'] data=[{"name":issued_to},{"expiry":notAfter}] datas.append(data) df = pd.DataFrame(datas) </code></pre> <p>Tried this got this can't figure out which certs are expired and which are not.</p>
<p>If you use <code>getpeercert()</code> you won't be able to retrieve the certificate in the output if the verification failed (you can test with <a href="https://expired.badssl.com/" rel="nofollow noreferrer">https://expired.badssl.com/</a> for example).</p> <p>From <a href="https://stackoverflow.com/a/45480084/2614364">this answer</a>, an alternative is to use <code>getpeercert(True)</code> to get the binary output of the certificate and parse it using OpenSSL.</p> <p>In the following example, I've added an <code>expired</code> column to check whether the certificate has expired by comparing with the current date:</p> <pre><code>import ssl, socket import pandas as pd from datetime import datetime import OpenSSL.crypto as crypto hostname = ['google.com','nyu.edu','expired.badssl.com'] datas = [] now = datetime.now() for i in hostname: ctx = ssl.create_default_context() ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE with ctx.wrap_socket(socket.socket(), server_hostname = i) as s: s.connect((i, 443)) cert = s.getpeercert(True) x509 = crypto.load_certificate(crypto.FILETYPE_ASN1,cert) commonName = x509.get_subject().CN notAfter = datetime.strptime(x509.get_notAfter().decode('ascii'), '%Y%m%d%H%M%SZ') notBefore = datetime.strptime(x509.get_notBefore().decode('ascii'), '%Y%m%d%H%M%SZ') datas.append({ "name": commonName, "notAfter": notAfter, "notBefore": notBefore, "expired": (notAfter &lt; now) or (notBefore &gt; now) }) df = pd.DataFrame(datas) print(df) </code></pre> <p>Output:</p> <pre><code> name notAfter notBefore expired 0 *.google.com 2020-08-12 12:00:48 2020-05-20 12:00:48 False 1 nyu.edu 2021-01-06 23:59:59 2019-01-07 00:00:00 False 2 *.badssl.com 2015-04-12 23:59:59 2015-04-09 00:00:00 True </code></pre> <p>Note that I've used <a href="https://stackoverflow.com/a/41121732/2614364">this post</a> to convert the timestamp-formatted date to datetime.</p> <p>Also, if you want to test invalid certificates, you can check <a href="https://badssl.com/" rel="nofollow noreferrer">https://badssl.com/</a> for a list of test domains which can raise specific verification errors.</p>
python|pandas|dataframe|ssl|openssl
1
5,213
51,319,692
Store array as a value within Pandas column
<p>I have a data set with two columns of categorical label data (NBA Team names). What I want to do is use one hot encoding to generate a binary, 1D vector as an array representing each team. Here is my code:</p> <pre><code>from sklearn.preprocessing import MultiLabelBinarizer one_hot_encoder = MultiLabelBinarizer() table["Teams"] = one_hot_encoder.fit_transform(table["Teams"]) </code></pre> <p>The encoder works appropriately, and it generates the arrays accordingly. In other words,</p> <pre><code>one_hot_encoder.fit_transform(table["Teams"]) </code></pre> <p>generates the following properly:</p> <p><a href="https://i.stack.imgur.com/ZhxkA.png" rel="nofollow noreferrer">Link to encoder result screenshot</a></p> <p>However, when I try to store the array into the column, as follows:</p> <pre><code>table["Teams"] = one_hot_encoder.fit_transform(table["Teams"]) </code></pre> <p>It seems like it's not being saved properly. </p> <p><a href="https://i.stack.imgur.com/7BO9L.png" rel="nofollow noreferrer">Link to data frame result screenshot</a></p> <p>Instead, it looks as if the column is just taking the first value of each array, and not storing the entire array. How should I go about resolving this?</p>
<p>I think you need to convert the <code>2d</code> array to <code>list</code>s:</p> <pre><code>table = pd.DataFrame({"Teams":list('aaasdffds')}) from sklearn.preprocessing import MultiLabelBinarizer one_hot_encoder = MultiLabelBinarizer() table["Teams"] = one_hot_encoder.fit_transform(table["Teams"]).tolist() print (table) Teams 0 [1, 0, 0, 0] 1 [1, 0, 0, 0] 2 [1, 0, 0, 0] 3 [0, 0, 0, 1] 4 [0, 1, 0, 0] 5 [0, 0, 1, 0] 6 [0, 0, 1, 0] 7 [0, 1, 0, 0] 8 [0, 0, 0, 1] </code></pre> <p>But storing arrays or lists in one column is not recommended because it is not possible to use vectorized methods/functions; it is better to create a <code>DataFrame</code>:</p> <pre><code>table = pd.DataFrame(one_hot_encoder.fit_transform(table["Teams"]), columns=one_hot_encoder.classes_) print (table) a d f s 0 1 0 0 0 1 1 0 0 0 2 1 0 0 0 3 0 0 0 1 4 0 1 0 0 5 0 0 1 0 6 0 0 1 0 7 0 1 0 0 8 0 0 0 1 </code></pre>
python|arrays|pandas|numpy|dataframe
1
5,214
51,141,875
Pandas: how to select a susbset of a dataframe with multiple conditions
<p>I have a dataframe like the following:</p> <pre><code> df = pd.DataFrame({'COND1' : [0,4,4,4,0], 'NAME' : ['one', 'one', 'two', 'three', 'three'], 'COND2' : ['a', 'b', 'a', 'a','b'], 'value': [30, 45, 18, 23, 77]}) </code></pre> <p>Where we have two conditions: <code>[0,4]</code> and <code>['a','b']</code></p> <pre><code> df COND1 COND2 NAME value 0 0 a one 30 1 4 a one 45 2 4 b one 25 3 4 a two 18 4 4 a three 23 5 4 b three 77 </code></pre> <p>For each name I want to select the a subset with the condition <code>COND1=0 &amp; COND2=a</code> if I have the information, <code>COND1=4 &amp; COND2=b</code> otherwise.</p> <p>the resulting dataframe will be:</p> <pre><code> df COND1 COND2 NAME value 0 0 a one 30 1 NaN Nan two NaN 2 4 b three 77 </code></pre> <p>I tried to do the following:</p> <pre><code>df[ ((df['COND1'] == 0 ) &amp; (df['COND2'] == 'a') | (df['COND1'] == 4 ) &amp; (df['COND2'] == 'b'))] </code></pre>
<p>Try to modify your result by using <code>drop_duplicates</code> (drop the NAME rows that satisfy both conditions, keeping only one) together with <code>reindex</code> (add back the NAME values that do not satisfy any condition):</p> <pre><code>Newdf=df[ ((df['COND1'] == 0 ) &amp; (df['COND2'] == 'a') | (df['COND1'] == 4 ) &amp; (df['COND2'] == 'b'))] Newdf.sort_values('COND1').drop_duplicates(['NAME']).set_index('NAME').reindex(df.NAME.unique()).reset_index() Out[378]: NAME COND1 COND2 value 0 one 0.0 a 30.0 1 two NaN NaN NaN 2 three 4.0 b 77.0 </code></pre>
python|pandas|dataframe
0
5,215
51,465,813
implementing RNN with numpy
<p>I'm trying to implement the recurrent neural network with numpy.</p> <p>My current input and output designs are as follow:</p> <p><code>x</code> is of shape: (sequence length, batch size, input dimension)</p> <p><code>h</code> : (number of layers, number of directions, batch size, hidden size)</p> <p><code>initial weight</code>: (number of directions, 2 * hidden size, input size + hidden size)</p> <p><code>weight</code>: (number of layers -1, number of directions, hidden size, directions*hidden size + hidden size)</p> <p><code>bias</code>: (number of layers, number of directions, hidden size)</p> <p>I have looked up pytorch API of RNN the as reference (<a href="https://pytorch.org/docs/stable/nn.html?highlight=rnn#torch.nn.RNN" rel="noreferrer">https://pytorch.org/docs/stable/nn.html?highlight=rnn#torch.nn.RNN</a>), but have slightly changed it to include initial weight as input. (output shapes are supposedly the same as in pytorch)</p> <p>While it is running, I cannot determine whether it is behaving right, as I am inputting randomly generated numbers as input.</p> <p>In particular, I am not so certain whether my input shapes are designed correctly.</p> <p>Could any expert give me a guidance?</p> <pre><code>def rnn(xs, h, w0, w=None, b=None, num_layers=2, nonlinearity='tanh', dropout=0.0, bidirectional=False, training=True): num_directions = 2 if bidirectional else 1 batch_size = xs.shape[1] input_size = xs.shape[2] hidden_size = h.shape[3] hn = [] y = [None]*len(xs) for l in range(num_layers): for d in range(num_directions): if l==0 and d==0: wi = w0[d, :hidden_size, :input_size].T wh = w0[d, hidden_size:, input_size:].T wi = np.reshape(wi, (1,)+wi.shape) wh = np.reshape(wh, (1,)+wh.shape) else: wi = w[max(l-1,0), d, :, :hidden_size].T wh = w[max(l-1,0), d, :, hidden_size:].T for i,x in enumerate(xs): if l==0 and d==0: ht = np.tanh(np.dot(x, wi) + np.dot(h[l, d], wh) + b[l, d][np.newaxis]) ht = np.reshape(ht,(batch_size, hidden_size)) #otherwise, shape is (bs,1,hs) else: ht = np.tanh(np.dot(y[i], wi) + np.dot(h[l, d], wh) + b[l, d][np.newaxis]) y[i] = ht hn.append(ht) y = np.asarray(y) y = np.reshape(y, y.shape+(1,)) return np.asarray(y), np.asarray(hn) </code></pre>
<p>Regarding the shape, it probably makes sense if that's how PyTorch does it, but the Tensorflow way is a bit more intuitive - <code>(batch_size, seq_length, input_size)</code> - <code>batch_size</code> sequences of <code>seq_length</code> length where each element has <code>input_size</code> size. Both approaches can work, so I guess it's a matter of preferences.</p> <p>To see whether your rnn is behaving appropriately, I'd just print the hidden state at each time step, run it on some small random data (e.g. 5 vectors, 3 elements each) and compare the results with your manual calculations.</p> <p>Looking at your code, I'm unsure if it does what it's supposed to, but instead of doing this on your own based on an existing API, I'd recommend you read and try to replicate <a href="http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/" rel="nofollow noreferrer">this awesome tutorial from wildml</a> (in part 2 there's a pure numpy implementation). </p>
python|numpy|recurrent-neural-network|rnn
2
5,216
51,448,700
Python List Append with numpy array error
<p>I am trying to use list append function to append a list to a list. </p> <p>But got error shows list indices must be integers or slices, not tuple. Not sure why. </p> <pre><code>pca_components = range(1,51) gmm_components = range(1,5) covariance_types = ['spherical', 'diag', 'tied', 'full'] # Spherical spherical_results = [] for i in pca_components: pca_model = PCA(n_components=i) pca_train = pca_model.fit(train_data).transform(train_data) for j in gmm_components: parameters = (i+i)*j*2 if parameters &gt; 50: pass else: gmm_model = GMM(n_components=j, covariance_type='spherical') gmm_model.fit(pca_train) pca_test = pca_model.transform(test_data) predictions = gmm_model.predict(pca_test) accuracy = np.mean(predictions.ravel() == test_labels.ravel()) accuracy=int(accuracy) spherical_results.append([accuracy, i,j, parameters]) spher_results = np.array(spherical_results) max_accuracy = np.amax(spherical_results[:,0]) print(f"highest accuracy score for spherical is {max_accuracy}") </code></pre>
<p>What's the purpose of this line?</p> <pre><code>spher_results = np.array(spherical_results) </code></pre> <p>It makes an array from a list. But you don't use <code>spher_results</code> in the following code.</p>
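<p>If the intent was to take the column maximum, the 2-D indexing also has to be done on that array rather than on the original list &mdash; a sketch of the likely fix:</p> <pre><code>import numpy as np

spher_results = np.array(spherical_results)    # shape (n_runs, 4): accuracy, i, j, parameters
max_accuracy = np.amax(spher_results[:, 0])    # spherical_results[:, 0] on the plain list raises the tuple-index error
print(f"highest accuracy score for spherical is {max_accuracy}")
</code></pre>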
python|numpy
1
5,217
48,369,613
Python Pandas - what is best way to store pearson correlation values stored in as pandas dataframe
<p>What is best way to store pearson correlation values as a pandas dataframe.</p> <p>Correlation is calculated as below</p> <pre><code>Correlation = df.corr() </code></pre> <p>But I want to store the output as dataframe.</p>
<p>The output <em>is</em> a dataframe.</p> <p>The docstring in version 0.22 says:</p> <p>Compute pairwise correlation of columns, excluding NA/null values</p> <pre><code>Parameters ---------- method : {'pearson', 'kendall', 'spearman'} * pearson : standard correlation coefficient * kendall : Kendall Tau correlation coefficient * spearman : Spearman rank correlation min_periods : int, optional Minimum number of observations required per pair of columns to have a valid result. Currently only available for pearson and spearman correlation Returns ------- y : DataFrame </code></pre>
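<p>So the result can be assigned and used like any other dataframe, for example:</p> <pre><code>corr = df.corr()                 # pearson by default
print(type(corr))                # &lt;class 'pandas.core.frame.DataFrame'&gt;
corr.to_csv('correlation.csv')   # persist it like any other dataframe
</code></pre>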
python|pandas
0
5,218
47,990,806
how to use groupby function for making row to columns based on similar IDS
<p>I have data which is in form of</p> <pre><code>arun 21-09-2017 raja 21-08-2016 arun 21-10-2017 raja 21-01-2017 </code></pre> <p>i want my input to be converted as this follows.</p> <pre><code>arun 21-09-2017 21-01-2017 raja 21-08-2016 21-01-2017 </code></pre>
<p>Assuming that you're using a pandas dataframe, you can then use <em>groupby</em> to get the desired output.</p> <pre><code>data = {'name':['arun','raja','arun','raja'], 'date':['21-09-2017','21-08-2016','21-10-2017','21-01-2017'] } df1 = pd.DataFrame(data) df1.groupby('name')['date'].apply(list).reset_index() Out[1]: name date 0 arun [21-09-2017, 21-10-2017] 1 raja [21-08-2016, 21-01-2017] </code></pre>
python|pandas
0
5,219
48,511,359
Logging scalars from loss function
<p>I'm working on a ML model implemented in Keras. For this model I wrote a custom loss function which where the loss is a sum of performances of 3 other variables <code>(a_cost, b_cost, c_cost)</code>. The loss function works but I would like to tune it a bit and for that I would like to see how these 3 other variables behave. How do I log these scalars so that they could be displayed in TensorBoard?</p> <pre><code>def custom_cost(y_true, y_pred): # compute a_cost, b_cost, c_cost cost = a_cost + b_cost + c_cost return cost # ..build model... model.compile(loss=custom_cost, optimizer=optimizers.Adam()) tensorboard = callbacks.TensorBoard(log_dir="./logs", write_graph=True) tensorboard.set_model(model) model.fit_generator(generator=custom_generator, epochs=100, steps_per_epoch=180, callbacks=[tensorboard], verbose=True) </code></pre>
<p>As <code>a_cost</code>, <code>b_cost</code> and <code>c_cost</code> are computed you can define three separate functions to compute them separately. Let:</p> <pre><code>def a_cost(y_true, y_pred): # compute a_cost ... return a_cost def b_cost(y_true, y_pred): # compute b_cost ... return b_cost def c_cost(y_true, y_pred): # compute c_cost ... return c_cost </code></pre> <p>Now this as simple as adding these three functions as <code>metrics</code>:</p> <pre><code>model.compile(..., metrics=[a_cost, b_cost, c_cost]) </code></pre>
tensorflow|machine-learning|neural-network|deep-learning|keras
1
5,220
48,829,617
Find first row with condition after each row satisfying another condition
<p>in pandas I have the following data frame:</p> <pre><code>a b 0 0 1 1 2 1 0 0 1 0 2 1 </code></pre> <p>Now I want to do the following: Create a new column c, and for each row where a = 0 fill c with 1. Then c should be filled with 1s until the first row after each column fulfilling that, where b = 1 (and here im hanging), so the output should look like this:</p> <pre><code>a b c 0 0 1 1 1 1 2 1 0 0 0 1 1 0 1 2 1 1 </code></pre> <p>Thanks!</p>
<p>It seems you need:</p> <pre><code>df['c'] = df.groupby(df.a.eq(0).cumsum())['b'].cumsum().le(1).astype(int) print (df) a b c 0 0 0 1 1 1 1 1 2 2 1 0 3 0 0 1 4 1 0 1 5 2 1 1 </code></pre> <p><strong>Detail</strong>:</p> <pre><code>print (df.a.eq(0).cumsum()) 0 1 1 1 2 1 3 2 4 2 5 2 Name: a, dtype: int32 </code></pre>
python|pandas
2
5,221
48,488,278
Optimization for faster numpy 'where' with boolean condition
<p>I generate a bunch of 5-elements vectors with</p> <pre><code>def beam(n): # For performance considerations, see # https://software.intel.com/en-us/blogs/2016/06/15/faster-random-number-generation-in-intel-distribution-for-python try: import numpy.random_intel generator = numpy.random_intel.multivariate_normal except ModuleNotFoundError: import numpy.random generator = numpy.random.multivariate_normal return generator( [0.0, 0.0, 0.0, 0.0, 0.0 ], numpy.array([ [1.0, 0.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.2] ]), int(n) ) </code></pre> <p>This vector will be multiplied by 5x5 matrices (element wise) and checked for boundaries. I use this:</p> <pre><code> b = beam(1e5) bound = 1000 s = (b[:, 0]**2 + b[:, 3]**2) &lt; bound**2 #b[np.where(s)] (equivalent performances) b[s] # &lt;= returned value from a function </code></pre> <p>It seems that this operation with 100k elements is quite time consuming (3ms on my machine).</p> <blockquote> <p>Would there be an obvious (or less obvious) way to perform this operation (the <code>where</code> part, the random generation is there to give an example) ?</p> </blockquote>
<p>As your components are uncorrelated one obvious speedup would be to use the univariate normal instead of the multivariate:</p> <pre><code>&gt;&gt;&gt; from timeit import repeat &gt;&gt;&gt; import numpy as np &gt;&gt;&gt; &gt;&gt;&gt; kwds = dict(globals=globals(), number=100) &gt;&gt;&gt; &gt;&gt;&gt; repeat('np.random.multivariate_normal(np.zeros((5,)), np.diag((1,1,1,1,0.2)), (100,))', **kwds) [0.01475344318896532, 0.01471381587907672, 0.013099645031616092] &gt;&gt;&gt; repeat('np.random.normal((0,0,0,0,0), (1,1,1,1,np.sqrt(0.2)), (100, 5))', **kwds) [0.003930734936147928, 0.004097769036889076, 0.004246715921908617] </code></pre> <p>Further, as it stands your condition is extremely unlikely to fail. So, just check <code>s.all()</code> and if <code>True</code> do nothing.</p>
python-3.x|numpy
0
5,222
48,568,022
Truncate column in data frame from float to int
<p>I have a data-frame (<code>df</code>) that has an <code>Age</code> column, and looks like</p> <pre><code> Age 0 59.917864 1 50.387406 2 50.387406 3 55.008898 </code></pre> <p>I am trying to crearte a new column call <code>Age_truncated</code> which would be the <code>Age</code> column as an integer and would look like:</p> <pre><code> Age Age_truncated 0 59.917864 59 1 50.387406 50 2 50.387406 50 3 55.008898 55 </code></pre> <p>I have tried:</p> <pre><code>df[(&quot;Age_truncated&quot;)] = df[&quot;Age&quot;].astype(int) df[&quot;Age_truncated&quot;] = int(df[&quot;Age&quot;]) </code></pre> <p>without success. What further things could I try?</p>
<p>You can use <code>floor</code> from <code>numpy</code>:</p> <pre><code>df['Age1']=np.floor(df.Age).astype(int) df Out[414]: Age Age1 0 59.917864 59 1 50.387406 50 2 50.387406 50 3 55.008898 55 </code></pre>
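<p>Note that <code>floor</code> only matches truncation for non-negative values; if negative values were possible, a truncating sketch would be:</p> <pre><code>import numpy as np

# np.floor(-2.5) gives -3, while truncation toward zero gives -2
df['Age_truncated'] = np.trunc(df['Age']).astype(int)
</code></pre>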
python|pandas
1
5,223
71,070,317
How to add values in rows and pass the result to the next column using pandas?
<p>I have a pandas dataframe with the values as follows:</p> <pre><code>Start end Depart Duration A B 5 2 B C 5 3 C D 5 6 D E 5 4 </code></pre> <p>I want the output as shown below. I need to add rows of 'Depart' and 'Duration' to form a new column 'Arrive' and pass the result value to the next row in Depart column and then add with Duration to form a value in Arrive and repeat the process again.</p> <p>I also need to verify if the column values 'Start' is same as the 'End' value in previous row.</p> <pre><code>Start end Depart Arrive A B 5 7 B C 7 10 C D 10 16 D E 16 20 </code></pre> <p>I tried something like this:</p> <pre><code>if np.where(d.Start == d.End): d['Arrive'] = d['Depart'] + d['Duration'] d['Depart'] = d.Arrive.shift(1) print(d) </code></pre> <p>The output is as follows. It still takes the original Depart value as 5 and adds up to give the result. Where am I going wrong?</p> <pre><code> Start End Depart Duration Arrive 0 A B NaN 2 7 1 B C 7.0 3 8 2 C D 8.0 6 11 3 D E 11.0 4 9 </code></pre>
<p>Try using <code>cumsum</code>:</p> <pre><code>depart = df['Depart'] + df['Duration'].cumsum().shift(1).fillna(0).astype(int) arrival = df['Depart'] + df['Duration'].cumsum() df['Depart'] = depart df['Arrival'] = arrival df = df.drop('Duration', axis=1) </code></pre> <p>Output:</p> <pre><code>&gt;&gt;&gt; df Start end Depart Arrival 0 A B 5 7 1 B C 7 10 2 C D 10 16 3 D E 16 20 </code></pre>
python|pandas|dataframe
0
5,224
70,746,660
How to predict memory requirement for np.linalg.inv?
<p>I'm inverting very large matrices on an HPC. Obviously, this has high RAM requirements. To avoid out-of-memory errors, as a temporary solution I've just been requesting a lot of memory (TBs). How might I predict the required memory from the input matrix size for a matrix inversion using numpy.linalg.inv to more efficiently run HPC jobs?</p>
<p><strong>TL;DR:</strong> up to <code>O(32 n²)</code> bytes for a <code>(n,n)</code> input matrix of type <code>float64</code>.</p> <p><a href="https://numpy.org/doc/stable/reference/generated/numpy.linalg.inv.html" rel="nofollow noreferrer"><code>numpy.linalg.inv</code></a> calls <code>_umath_linalg.inv</code> <a href="https://github.com/numpy/numpy/blob/v1.22.0/numpy/linalg/linalg.py#L476-L546" rel="nofollow noreferrer">internally</a> without performing any copy or creating any additional big temporary arrays. This internal function itself calls LAPACK functions <a href="https://github.com/numpy/numpy/blob/4adc87dff15a247e417d50f10cc4def8e1c17a03/numpy/linalg/umath_linalg.c.src#L1712" rel="nofollow noreferrer">internally</a>. As far as I understand, the wrapping layer of Numpy is responsible for allocating the output Numpy matrix. The C code itself allocates a temporary array (see: <a href="https://github.com/numpy/numpy/blob/4adc87dff15a247e417d50f10cc4def8e1c17a03/numpy/linalg/umath_linalg.c.src#L1598" rel="nofollow noreferrer">here</a>). No other array allocations appear to be performed by Numpy for this operation. There are several LAPACK implementations and so it is not possible to generally know how much memory is requested by LAPACK calls. However, AFAIK, almost all LAPACK implementations do not allocate data behind your back: the caller has to do it (especially for <code>sgesv</code>/<code>dgesv</code> which is used here). Assuming the <code>(n, n)</code> input matrix is of type <code>float64</code> and FORTRAN integers are 4 bytes wide (which should be the case on most platforms, especially on Windows), then the actual memory required (taken by the input matrix, the output matrix and the temporary matrix) is <code>8 n² + 8 n² + (8 n² + 8 n² + 4 n)</code> bytes, which is equal to <code>(32 n + 4) n</code> or simply <code>O(32 n²)</code> bytes. Note that the temporary buffer is the maximum size and may not be fully written, which means that the OS can physically map (i.e. reserve in physical RAM) only a fraction of the allocated space. This is what happens on my (Linux) machine with OpenBLAS: only <code>24 n²</code> bytes appear to be actually physically mapped. For <code>float32</code> matrices, it is half the space.</p>
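<p>A small helper based on that estimate can give a ballpark figure before submitting a job (a sketch; the real peak depends on the LAPACK build):</p> <pre><code>def inv_memory_bytes(n, dtype_bytes=8, int_bytes=4):
    """Rough upper bound for numpy.linalg.inv on an (n, n) matrix:
    input + output + LAPACK work buffers (two n*n blocks plus the pivot array)."""
    return 4 * dtype_bytes * n * n + int_bytes * n

n = 100_000
print(f"~{inv_memory_bytes(n) / 1e12:.2f} TB for a {n} x {n} float64 matrix")
</code></pre>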
python|numpy
1
5,225
51,563,264
Adding report_tensor_allocations_upon_oom to cifar10_estimator example
<p>I'm running a modified version of the TensorFlow example <a href="https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10_estimator" rel="nofollow noreferrer">https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10_estimator</a> and I'm running out of memory.</p> <p>The ResourceExhausted error says: <code>Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.</code></p> <p>I've tried add this in the obvious places in main() but I get the variants of a protobuf error saying that the report_tensor_allocations_upon_oom run option is not found.</p> <pre><code>def main(job_dir, data_dir, num_gpus, variable_strategy, use_distortion_for_training, log_device_placement, num_intra_threads, **hparams): # The env variable is on deprecation path, default is set to off. os.environ['TF_SYNC_ON_FINISH'] = '0' os.environ['TF_ENABLE_WINOGRAD_NONFUSED'] = '1' # Session configuration. sess_config = tf.ConfigProto( allow_soft_placement=True, log_device_placement=log_device_placement, intra_op_parallelism_threads=num_intra_threads, report_tensor_allocations_upon_oom = True, # Nope gpu_options=tf.GPUOptions( force_gpu_compatible=True, report_tensor_allocations_upon_oom = True)) # Nope config = cifar10_utils.RunConfig( session_config=sess_config, model_dir=job_dir, report_tensor_allocations_upon_oom = True) #Nope tf.contrib.learn.learn_runner.run( get_experiment_fn(data_dir, num_gpus, variable_strategy, use_distortion_for_training), run_config=config, hparams=tf.contrib.training.HParams( is_chief=config.is_chief, **hparams)) </code></pre> <p>Where do I add <code>report_tensor_allocations_upon_oom = True</code> in this example?</p>
<p>You would need to register a session run hook to pass extra arguments to the <code>session.run()</code> calls that the estimator makes.</p> <pre><code>import tensorflow as tf class OomReportingHook(tf.train.SessionRunHook): def before_run(self, run_context): return tf.train.SessionRunArgs(fetches=[], # no extra fetches options=tf.RunOptions( report_tensor_allocations_upon_oom=True)) </code></pre> <p>Pass the hook in a list of <code>hooks</code> to the relevant method of the estimator: <a href="https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator</a></p>
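<p>For example, assuming you call the estimator's training/evaluation methods yourself (<code>train_input_fn</code> and <code>eval_input_fn</code> are placeholders for your own input functions):</p> <pre><code>hooks = [OomReportingHook()]
estimator.train(input_fn=train_input_fn, hooks=hooks)
estimator.evaluate(input_fn=eval_input_fn, hooks=hooks)
</code></pre>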
python|tensorflow
3
5,226
64,598,675
Size must be a 1-D int32 Tensor
<p>I have MNIST data and doing some Transformation using Tensorflow and keras in R</p> <pre><code> dim(train_images) &lt;- c(nrow(train_images), 28,28,1) dim(test_images) &lt;- c(nrow(test_images), 28,28,1) train_images &lt;- tf$image$grayscale_to_rgb(tf$convert_to_tensor(train_images)) test_images &lt;- tf$image$grayscale_to_rgb(tf$convert_to_tensor(test_images)) </code></pre> <p>Now data shape is: 60000,28,28,3</p> <p>But I need the data in shape: 60000,32,32,3</p> <pre><code>train_images &lt;- tf$image$resize(train_images, c(32,32)) test_images &lt;- tf$image$resize(test_images, c(32,32)) </code></pre> <p>It thows an error:</p> <pre><code>Error in py_call_impl(callable, dots$args, dots$keywords): ValueError: 'size' must be a 1-D int32 Tensor Detailed traceback: File &quot;/usr/local/share/.virtualenvs/r-reticulate/lib/python3.7/site- packages/tensorflow/python/util/dispatch.py&quot;, line 201, in wrapper return target(*args, **kwargs) File &quot;/usr/local/share/.virtualenvs/r-reticulate/lib/python3.7/site- packages/tensorflow/python/ops/image_ops_impl.py&quot;, line 1546, in resize_images_v2 skip_resize_if_same=False) File &quot;/usr/local/share/.virtualenvs/r-reticulate/lib/python3.7/site-packages/tensorflow/python/ops/image_ops_impl.py&quot;, line 1226, in _resize_images_common raise ValueError('\'size\' must be a 1-D int32 Tensor') Traceback: 1. tf$image$resize(train_images, c(32, 32)) 2. py_call_impl(callable, dots$args, dots$keywords) </code></pre>
<p>Simply write instead:</p> <pre><code>train_images &lt;- tf$image$resize(train_images, c(32L,32L)) test_images &lt;- tf$image$resize(test_images, c(32L,32L)) </code></pre>
r|tensorflow|keras
0
5,227
64,266,380
catplot issue Axes object
<p>Supposing I have a Pandas DataFrame variable called df which has columns col1, col2, col3, col4.</p> <p>Using sns.catplot() everything works fine:</p> <pre><code>fig = sns.catplot(x='col1', y='col2', kind='bar', data=df, col='col3', hue='col4') </code></pre> <p>However, as soon as I write:</p> <pre><code>fig.axes[0].get_xlabel() </code></pre> <p>I get the following error:</p> <pre><code>AttributeError: 'numpy.ndarray' object has no attribute 'get_xlabel' </code></pre> <p>I know I can use sns.barplot() with ax parameter but my goal is to keep using sns.catplot() and get an Axes object from fig.axes[0].</p>
<p>If you check the <a href="https://seaborn.pydata.org/generated/seaborn.catplot.html" rel="nofollow noreferrer">help page</a>, it writes:</p> <blockquote> <p>Figure-level interface for drawing categorical plots onto a FacetGrid</p> </blockquote> <p>So to get the xlabel like you did:</p> <pre><code>import seaborn as sns df = sns.load_dataset(&quot;tips&quot;) g = sns.catplot(x='day', y='tip', kind='bar', data=df, col='smoker', hue='sex') </code></pre> <p><a href="https://i.stack.imgur.com/L2JYz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L2JYz.png" alt="enter image description here" /></a></p> <p>In this example, you have a facet plot that is 1 by 2, so the axes for the plots are stored in an (1,2) array:</p> <pre><code>g.axes.shape (1, 2) </code></pre> <p>And to access for example the one of the left (Smoker =&quot;Yes&quot;), you do:</p> <pre><code>g.axes[0,0].get_xlabel() 'day' </code></pre> <p>To change the label:</p> <pre><code>g.axes[0,0].set_xlabel('day 1') g.fig </code></pre> <p><a href="https://i.stack.imgur.com/wd0dk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wd0dk.png" alt="enter image description here" /></a></p>
pandas|seaborn|catplot
1
5,228
64,324,492
make clickable url within Pandas dataframe in Flask
<p>I checked several questions from here and tried several methods on my own but didn't work out.</p> <p>I have a Pandas dataframe of product catalog which contains several columns, and there is a column called 'url', where it has url address to corresponding product.</p> <p>I'm using Flask to display this dataframe via html, but url column remains as plain text, instead of clickable url.</p> <p>But it ended up showing nothing in html result (even plain text of url were gone)</p> <p>Can someone tell me what should I do to convert my plain text url column as clickable, and display it to flask html?</p> <p>current code:</p> <pre><code>df['url'] = '&lt;a href=&quot;' + df['url'] + '&quot;&gt;' + '&lt;/a&gt;' return render_template('result.html', tables=[df.to_html(classes='data',escape=False)], titles=result.columns.values) </code></pre>
<p>If you add some text inside your anchor tag, the link will be visible and clickable. Something like this:</p> <pre><code>df['url'] = '&lt;a href=&quot;' + df['url'] + '&quot;&gt;' + df['url'] + '&lt;/a&gt;' </code></pre>
python|html|pandas|dataframe|flask
1
5,229
64,526,841
Can I use BERT as a feature extractor without any finetuning on my specific data set?
<p>I'm trying to solve a multilabel classification task of 10 classes with a relatively balanced training set consists of ~25K samples and an evaluation set consists of ~5K samples.</p> <p>I'm using the huggingface:</p> <pre><code>model = transformers.BertForSequenceClassification.from_pretrained(... </code></pre> <p>and obtain quite nice results (ROC AUC = 0.98).</p> <p>However, I'm witnessing some odd behavior which I don't seem to make sense of -</p> <p>I add the following lines of code:</p> <pre><code>for param in model.bert.parameters(): param.requires_grad = False </code></pre> <p>while making sure that the other layers of the model are learned, that is:</p> <pre><code>[param[0] for param in model.named_parameters() if param[1].requires_grad == True] gives ['classifier.weight', 'classifier.bias'] </code></pre> <p>Training the model when configured like so, yields some embarrassingly poor results (ROC AUC = 0.59).</p> <p>I was working under the assumption that an out-of-the-box pre-trained BERT model (without any fine-tuning) should serve as a relatively good feature extractor for the classification layers. So, where do I got it wrong?</p>
<p>From my experience, you are going wrong in your assumption</p> <blockquote> <p>an out-of-the-box pre-trained BERT model (without any fine-tuning) should serve as a relatively good feature extractor for the classification layers.</p> </blockquote> <p>I have noticed similar experiences when trying to use BERT's output layer as a word embedding value with little-to-no fine-tuning, which also gave very poor results; and this also makes sense, since you effectively have <code>768*num_classes</code> connections in the simplest form of output layer. Compared to the millions of parameters of BERT, this gives you an almost negligible amount of control over intense model complexity. However, I also want to cautiously point to overfitted results when training your full model, although I'm sure you are aware of that.</p> <p>The entire idea of BERT is that it <em>is</em> very cheap to fine-tune your model, so to get ideal results, I would advise against freezing any of the layers. The one instance in which it can be helpful to disable at least partial layers would be the embedding component, depending on the model's vocabulary size (~30k for BERT-base).</p>
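<p>For the partial-freezing idea in the last paragraph, a minimal sketch with the huggingface model from the question would be to freeze only the embedding component and fine-tune everything else:</p> <pre><code># freeze only the embedding matrices; transformer layers and classifier stay trainable
for param in model.bert.embeddings.parameters():
    param.requires_grad = False

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(len(trainable))  # encoder, pooler and classifier parameters remain trainable
</code></pre>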
pytorch|bert-language-model|huggingface-transformers
3
5,230
47,608,461
Corresponding scores based on edges of a graph
<pre><code>import numpy as np score = np.array([ [0.9, 0.7, 0.2, 0.6, 0.4], [0.7, 0.9, 0.6, 0.8, 0.3], [0.2, 0.6, 0.9, 0.4, 0.7], [0.6, 0.8, 0.4, 0.9, 0.3], [0.4, 0.3, 0.7, 0.3, 0.9]]) l2= [(3, 5), (1, 4), (2, 3), (3, 4), (2, 5), (4, 5)] </code></pre> <p>I want to get the corresponding scores vector based on the edge list.The score array represents an adjacency matrix where score(1,2) represents the edge (1,2)</p> <pre><code>OUTPUT: [0.7 0.6 0.6 0.4 0.3 0.3] </code></pre>
<p>We could convert the list of tuples holding the indices to array and then use <code>slicing</code> or tuple packed ones.</p> <p>So, convert to array :</p> <pre><code>l2_arr = np.asarray(l2)-1 </code></pre> <p>Then, one way would be -</p> <pre><code>score[l2_arr[:,0], l2_arr[:,1]] </code></pre> <p>Another -</p> <pre><code>score[tuple(l2_arr.T)] </code></pre> <p>For completeness, here's one using a loop-comprehension to extract the row, column indices and thus avoiding any array conversion -</p> <pre><code>score[[i[0]-1 for i in l2], [i[1]-1 for i in l2]] </code></pre>
python|numpy|networkx
3
5,231
47,729,029
Creating a Tensorflow and Pandas Environment
<p>I using the following command to use modules in my machine learning work.</p> <pre><code>conda create -n tensorflow python=3.5 activate tensorflow conda install pandas matplotlib jupyter notebook scipy scikit-learn nltk conda install -c conda-forge tensorflow keras </code></pre> <p>When i using import command in my ipython notebook</p> <pre><code>import numpy as np import pandas as pd </code></pre> <p>Following error is coming</p> <pre><code>Traceback (most recent call last): File "C:\Users\sompatha\Anaconda2\envs\Ml\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "&lt;ipython-input-1-eeff7c4f34af&gt;", line 1, in &lt;module&gt; import numpy as np File "C:\Users\sompatha\Anaconda2\envs\Ml\lib\site-packages\numpy\__init__.py", line 126, in &lt;module&gt; from numpy.__config__ import show as show_config File "C:\Users\sompatha\Anaconda2\envs\Ml\lib\site-packages\numpy\__config__.py", line 5 blas_mkl_info={'library_dirs': ['C:\Users\sompatha\Anaconda2\envs\ML\\Library\\lib'], 'define_macros': [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)], 'libraries': ['mkl_core_dll', 'mkl_intel_lp64_dll', 'mkl_intel_thread_dll'], 'include_dirs': ['C:\Users\sompatha\Anaconda2\envs\ML\\Library\\include']} ^ SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape </code></pre> <p>How to resolve this one ? Please help</p>
<p>You are using old versions of conda and Python. The error tells you exactly what is going wrong: the escaped '\U' tells the interpreter that an 8-digit unicode code is coming next. That is not the case here; the 's' that follows is not valid in such a context.</p> <p>The best thing to do: use up-to-date versions of the software packages.</p> <p>For the manual fix, go to:</p> <pre><code>C:\Users\sompatha\Anaconda2\envs\Ml\Lib\site-packages\numpy\__config__.py </code></pre> <p>And replace all single backslashes in the path references within this file with double backslashes. Repeat the process for any other import running into the UnicodeError; the interpreter will tell you which file is holding you back.</p>
python|pandas|tensorflow|unicode
2
5,232
47,761,063
From pandas to Excel using StyleFrame: how to disable the wrap text & shrink to fit?
<p>I use <a href="https://github.com/DeepSpace2/StyleFrame" rel="nofollow noreferrer">StyleFrame</a> to export from pandas to Excel.</p> <p>Cells are formatted to 'wrap text' and 'shrink to fit' by default. (How) can I change these settings? </p> <p>The <a href="http://styleframe.readthedocs.io/en/latest/api_documentation.html#init-arguments" rel="nofollow noreferrer">API documentation</a> describes that the utils module contains the most widely used values for styling elements and that is possible to directly use a value that is not present in the utils module as long as Excel recognises it.</p> <p>What do I need to specify for Excel in this case? Of how/where can I find out what Excel expects? Many thanks in advance!</p> <p>Examples of what I have tried:</p> <p>This code works perfect:</p> <pre><code>sf.apply_column_style(cols_to_style=['A'], styler_obj=Styler(bg_color=utils.colors.blue)) </code></pre> <p>But my problem is that I do not know what to change to switch off the text wrapping and shrink to fit options:</p> <pre><code>sf.apply_column_style(cols_to_style=['A'], styler_obj=Styler(text_control=wrap_text.none)) NameError: name 'wrap_text' is not defined sf.apply_column_style(cols_to_style=['A'], styler_obj=Styler(text_control=utils.wrap_text.none)) AttributeError: module 'StyleFrame.utils' has no attribute 'wrap_text' sf.apply_column_style(cols_to_style=['A'], styler_obj=Styler(utils.wrap_text.none)) AttributeError: module 'StyleFrame.utils' has no attribute 'wrap_text' sf.apply_column_style(cols_to_style=['A'], styler_obj=Styler(wrap_text=False)) TypeError: __init__() got an unexpected keyword argument 'wrap_text' </code></pre>
<p>As of version <strong>1.3</strong> it is possible to pass <code>wrap_text</code> and <code>shrink_to_fit</code> directly to <code>Styler</code>, for example <code>no_wrap_text_style = Styler(wrap_text=False)</code></p>
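<p>A short sketch of how that could plug into the <code>apply_column_style</code> call from the question (assuming StyleFrame &gt;= 1.3):</p> <pre><code>from StyleFrame import Styler

sf.apply_column_style(cols_to_style=['A'],
                      styler_obj=Styler(wrap_text=False, shrink_to_fit=False))
</code></pre>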
python|excel|pandas|styleframe
3
5,233
58,992,224
Get timestamp from pandas dataframe
<p>I can't figure out how to extract the timestamp from a pandas column.</p> <p>With the following code I am getting the following information.</p> <pre><code>print("Nested ----------------------------") print(type(nested_full['data.tick_timestamp'])) ts2 = nested_full['data.tick_timestamp'] print("type of timestamp") print(ts2) diff_seconds = util.seconds_since_mightnight(ts2) # Fail here because of invalid data?? Nested ---------------------------- &lt;class 'pandas.core.series.Series'&gt; type of timestamp 0 1574417975007 Name: data.tick_timestamp, dtype: int64 </code></pre>
<p>I think you need scalar from one element <code>Series</code>, so use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.iat.html" rel="nofollow noreferrer"><code>Series.iat</code></a> and select first value:</p> <pre><code>out = nested_full['data.tick_timestamp'].iat[0] </code></pre> <p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.iloc.html" rel="nofollow noreferrer"><code>Series.iloc</code></a>:</p> <pre><code>out = nested_full['data.tick_timestamp'].iloc[0] </code></pre> <hr> <p>If want numpy repr of datetimes:</p> <pre><code>out = nested_full['data.tick_timestamp'].values[0] #pandas 0.24+ out = nested_full['data.tick_timestamp'].to_numpy()[0] </code></pre>
python|pandas
0
5,234
58,918,666
Dataframe with localized datetime index: how to drop days not having a given time
<p>I have one dataframe like the following:</p> <pre><code> A B 2014-06-02 09:00:00-04:00 ... ... 2014-06-02 10:00:00-04:00 ... ... 2014-06-02 11:00:00-04:00 ... ... 2014-06-02 12:00:00-04:00 ... ... 2014-06-03 09:00:00-04:00 ... ... 2014-06-03 10:00:00-04:00 ... ... 2014-06-03 11:00:00-04:00 ... ... 2014-06-04 09:00:00-04:00 ... ... 2014-06-04 10:00:00-04:00 ... ... 2014-06-04 11:00:00-04:00 ... ... 2014-06-04 12:00:00-04:00 ... ... </code></pre> <p>I need to drop the days that doesn't have the hour 12:00:00-04:00: In my example it would be the 2014-06-03. So the final dataframe would looks like:</p> <pre><code> A B 2014-06-02 09:00:00-04:00 ... ... 2014-06-02 10:00:00-04:00 ... ... 2014-06-02 11:00:00-04:00 ... ... 2014-06-02 12:00:00-04:00 ... ... 2014-06-04 09:00:00-04:00 ... ... 2014-06-04 10:00:00-04:00 ... ... 2014-06-04 11:00:00-04:00 ... ... 2014-06-04 12:00:00-04:00 ... ... </code></pre> <p>Please note that the index is localized (-04:00)</p> <p>pandas 0.24.2</p>
<p>You can <code>groupby</code> and <code>filter</code></p> <pre><code>df.groupby(df.index.date).filter(lambda s: 12 in s.index.hour) </code></pre> <p></p> <pre><code> A B 2014-06-02 09:00:00-04:00 ... ... 2014-06-02 10:00:00-04:00 ... ... 2014-06-02 11:00:00-04:00 ... ... 2014-06-02 12:00:00-04:00 ... ... 2014-06-04 09:00:00-04:00 ... ... 2014-06-04 10:00:00-04:00 ... ... 2014-06-04 11:00:00-04:00 ... ... 2014-06-04 12:00:00-04:00 ... ... </code></pre>
python|python-3.x|pandas|dataframe
1
5,235
58,777,438
Difference between convolve2d and filter2D. Why is there a difference in output shapes?
<p>I need to perform a 2D convolution. I have a similarity matrix of shape <code>100, 100</code> and a filter of shape <code>5,5</code>.</p> <p>If I do using scipy:</p> <pre><code>scipy.signal.convolve2d(similarity_matrix, np.diag(filter)) </code></pre> <p>I get <code>104,104</code> matrix in response.</p> <p>But if I do using OpenCV's filter2D method:</p> <pre><code>cv2.filter2D(similarity_matrix, -1, np.diag(filter)) </code></pre> <p>I get a <code>100,100</code> matrix in response. </p> <ul> <li>What is happening here? Why is there a difference in shape?</li> <li>With convolve2d why do I get 4 extra rows and 4 extra columns?</li> </ul>
<p>The default output mode of <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.convolve2d.html" rel="nofollow noreferrer">scipy.signal.convolve2d</a> is a full convolution. If you want your output size to be the same as your input size, set the mode parameter to "same". </p> <pre><code>scipy.signal.convolve2d(similarity_matrix, np.diag(filter), mode="same") </code></pre> <p><strong>Update:</strong> When you convolve an image with a kernel, you can't directly calculate convolution results at the edges of the image because some neighbours are missing. There are different ways to solve this issue: </p> <p>1- Ignore the values at the edges, e.g. if your kernel is 3x3, ignore the outermost element at each edge; if your kernel is 5x5, ignore the last 2 elements at each edge, and so on.<br> 2- Apply some kind of padding to the image, meaning that you temporarily enlarge your image in a specific way so that you can apply the convolution kernel to the values at the edge. Again, there are different ways of doing this (and this time more than 2), but the most basic one is zero-padding. For example, if your image size is 100x100 and your kernel is 5x5, at each side of the image (right, up, left, down) we cannot calculate the convolution for the two outermost values due to the lack of neighbours. With the zero-padding method you first enlarge your image to 104x104. This enlarged image consists of your original 100x100 image at the center with zero values added at the edges; the main point is that now you can apply your filter to the full 100x100 region because you created artificial neighbours for the edge values.</p> <p>About the mode names:<br> 1- If you choose the "padding way" and keep your output size the same as your input size, it's called <strong>same</strong> convolution. With the padding method above this is done by dropping the outer values of your enlarged image.<br> 2- If you choose the "ignore edge values way" of doing convolution, your output will be smaller. This is called <strong>valid</strong> convolution. As the name implies, you only perform the convolution on the "valid" region.<br> 3- If you choose the "padding way" and also keep the added values, it's called <strong>full</strong> convolution. Notice that by cropping the output of a full convolution, you can obtain the same and valid convolutions too. </p>
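<p>A quick sketch showing how the three modes relate to the shapes from the question, using stand-in random arrays of the same sizes:</p> <pre><code>import numpy as np
from scipy import signal

a = np.random.rand(100, 100)   # stand-in for the 100x100 similarity matrix
k = np.random.rand(5, 5)       # stand-in for the 5x5 kernel

print(signal.convolve2d(a, k, mode='full').shape)   # (104, 104): grows by kernel size - 1
print(signal.convolve2d(a, k, mode='same').shape)   # (100, 100): matches cv2.filter2D
print(signal.convolve2d(a, k, mode='valid').shape)  # (96, 96): only fully overlapping positions
</code></pre>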
python|numpy|opencv|scipy|convolution
1
5,236
58,829,409
how to generate a tensorflow matrix with the first few columns as 1 and the rest as 0
<p>Given a list <code>len_list</code> of the number of ones for the vectors, e.g.:</p> <pre><code>[1] [3] [2] [1] </code></pre> <p>where the shape of <code>len_list</code> now is <code>(4, 1)</code></p> <p>And given the number of columns of the vector, e.g. <code>vec_dim = 5</code>.</p> <p>I'd like to generate a tensor with the first of few columns as 1 and the rest as 0. For example, a matrix with shape of <code>(4, 5)</code> as:</p> <pre><code>[1 0 0 0 0] [1 1 1 0 0] [1 1 0 0 0] [1 0 0 0 0] </code></pre> <p>How to do so? </p> <p>I understand I could generate this matrix with iteration. </p> <p>But in my case, the batch size is not set, i.e. the shape for <code>len_list</code> is <code>(None, 1)</code>, I have to feed the placeholder with batch size to fulfill this function. Thus, how could I generate a tensor with shape <code>(None, vec_dim)</code>???</p>
<p>You can achieve this using <code>tf.sequence_mask()</code> which returns a mask tensor representing the first N positions of each cell. </p> <p>Below is the code to get the output as you mentioned above. </p> <pre><code>len_list = np.array([1,3,2,1]) mask = tf.sequence_mask(len_list, maxlen=5, dtype=tf.int32) #maxlen set to 5 to generate (4,5) matrix. </code></pre> <p><strong>Output:</strong> </p> <pre><code>&lt;tf.Tensor: shape=(4, 5), dtype=int32, numpy= array([[1, 0, 0, 0, 0], [1, 1, 1, 0, 0], [1, 1, 0, 0, 0], [1, 0, 0, 0, 0]], dtype=int32)&gt; </code></pre> <p>If you don't want to set the <code>max_length</code> explicitly, you can instead take maximun number from the <code>len_list</code> as below. </p> <pre><code>len_list = np.array([1,3,2,1]) maxlen = tf.reduce_max(len_list) #3 in this case mask = tf.sequence_mask(len_list, maxlen=maxlen, dtype=tf.int32) </code></pre> <p><strong>Output:</strong> </p> <pre><code>&lt;tf.Tensor: shape=(4, 3), dtype=int32, numpy= array([[1, 0, 0], [1, 1, 1], [1, 1, 0], [1, 0, 0]], dtype=int32)&gt; </code></pre>
python|tensorflow|matrix
0
5,237
58,979,037
HOWTO tf.estimator with continuous and categorical columns
<p>I have a tf.estimator which works for continuous variables and I want to expand it to use categorical variables.</p> <p>Consider a pandas dataframe which looks like this:</p> <pre><code>label | con_col | cat_col (float 0 or 1) | (float -1 to 1) | (int 0-3) ----------------+-------------------+--------------- 0 | 0.123 | 2 0 | 0.456 | 1 1 | -0.123 | 3 1 | -0.123 | 3 0 | 0.123 | 2 </code></pre> <p>To build the estimator for just the label and the continuous variable column (con_col) I build the following feature_column variable.</p> <pre><code>feature_cols = [ tf.feature_column.numeric_column('con_col') ] </code></pre> <p>Then I pass it to the DNNClassifer like so. </p> <pre><code>tf.estimator.DNNClassifier(feature_columns=feature_cols ...) </code></pre> <p>Later I build a serving_input_fn(). In this function I also specify the columns. This routine is quite small and looks like this:</p> <pre><code>def serving_input_fn(): feat_placeholders['con_col'] = tf.placeholder(tf.float32, [None]) return tf.estimator.export.ServingInputReceiver(feat_placeholders.copy(), feat_placeholders) </code></pre> <p>This works. However, when I try to use the categorical column I have a problem. </p> <p>So using the categorical column, this part seems to work.</p> <pre><code>feature_cols = [ tf.feature_column.sequence_categorical_column_with_identity('cat_col', num_buckets=4)) ] tf.estimator.DNNClassifier(feature_columns=feature_cols ...) </code></pre> <p>For the serving_input_fn() I get suggestions from the stack trace but both suggestions fail.:</p> <pre><code>def serving_input_fn(): # try #2 # this fails feat_placeholders['cat_col'] = tf.SequenceCategoricalColumn(categorical_column=tf.IdentityCategoricalColumn(key='cat_col', number_buckets=4,default_value=None)) # try #1 # this also fails # feat_placeholders['cat_col'] = tf.feature_column.indicator_column(tf.feature_column.sequence_categorical_column_with_identity(column, num_buckets=4)) # try #0 # this fails. Its using the same form for the con_col # the resulting error gave hints for the above code. # Note, i'm using this url as a guide. My cat_col is # is similar to that code samples 'dayofweek' except it # is not a string. # https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/feateng/taxifare_tft/trainer/model.py #feat_placeholders['cat_col'] = tf.placeholder(tf.float32, [None]) return tf.estimator.export.ServingInputReceiver(feat_placeholders.copy(), feat_placeholders) </code></pre> <p>This is the error message if try #0 is used.</p> <pre><code>ValueError: Items of feature_columns must be a &lt;class 'tensorflow.python.feature_column.feature_column_v2.DenseColumn'&gt;. You can wrap a categorical column with an embedding_column or indicator_column. Given: SequenceCategoricalColumn(categorical_column=IdentityCategoricalColumn(key='cat_col', number_buckets=4, default_value=None)) </code></pre> <p><strong>Lak's answer implementation</strong></p> <p>Using Lak's answer as a guide, this works for both both feature columns.</p> <pre><code># This is the list of features we pass as an argument to DNNClassifier feature_cols = [] # Add the continuous column first feature_cols.append(tf.feature_column.numeric_column('con_col')) # Add the categorical column which is wrapped? # This creates new columns from a single column? 
category_feature_cols = [tf.feature_column.categorical_column_with_identity('cat_col', num_buckets=4)] for c in category_feature_cols: feat_cols.append(tf.feature_column.indicator_column(c)) # now pass this list to the DNN tf.estimator.DNNClassifier(feature_columns=feature_cols ...) def serving_input_fn(): feat_placeholders['con_col'] = tf.placeholder(tf.float32, [None]) feat_placeholders['cat_col'] = tf.placeholder(tf.int64, [None]) </code></pre>
<p>You need to wrap categorical columns before sending to DNN:</p> <pre><code>cat_feature_cols = [ tf.feature_column.sequence_categorical_column_with_identity('cat_col', num_buckets=4)) ] feature_cols = [tf.feature_column.indicator_column(c) for c in cat_feature_cols] </code></pre> <p>Use indicator column to one-hot encode, or embedded column to embed.</p>
python|tensorflow
1
5,238
58,628,641
Remove list string startswith in pandas df
<p>i have df rows contains lists and wants to remove the particular string combined with others. </p> <p>df['res']:</p> <pre><code>AL1 A 15, CY1 A 16, CY1 A 20, GL1 A 17, GL1 A 62,HOH A 604, HOH A 605, L21 A 18, MG A 550, PR1 A 36, TH1 A 19, TH1 A 37, TY1 A 34, VA1 A 14, HOH A 603, VA1 A 35 </code></pre> <p>Desired output: [ removed HOH with other number]</p> <pre><code>AL1 A 15, CY1 A 16, CY1 A 20, GL1 A 17, GL1 A 62, L21 A 18, MG A 550, PR1 A 36, TH1 A 19, TH1 A 37, TY1 A 34, VA1 A 14, VA1 A 35 </code></pre> <p>I tried this:</p> <pre><code>data['res'].str.split().apply(lambda x: [k for k in x if k.startswith('HOH')]) </code></pre>
<p>The problem is that if you use <code>.split()</code> without anything else every substring will also get split.</p> <p>So this <code>... ,HOH A 604 ...</code> will split into <code>['...', ',' ,'HOH', 'A', '604', '...']</code>.</p> <p>As far as I understood you want to remove every <code>HOH</code> with the following numbers right? </p> <p>Doing it the <code>.split()</code> way will result in removing <code>HOH</code> only and keeping <code>A</code> &amp; <code>604</code>.</p> <p>If you use <code>.split(',')</code> with the comma as parameter then we will get everything between commas seperated.</p> <p>The problem I see with <code>startswith</code> is that sometimes your strings have an additional space after the comma and sometimes they don´t (e.g. ,<code>HOH A 604 &amp; , HOH A 605</code>)</p> <p>Therefore I would suggest to use <code>not in</code> instead. BUT: aware that this removes all sub strings that contain <code>HOH</code> even if they are at the end.</p> <p>try this:</p> <pre><code>df['res'].str.split(',').apply(lambda x: [k for k in x if 'HOH' not in k]) </code></pre> <p>The cell value is now a list of strings if you need to have a string again try this:</p> <pre><code>df['res'].str.split(',').apply(lambda x: ','.join([k for k in x if 'HOH' not in k])) </code></pre>
python|regex|pandas|dataframe
1
5,239
58,989,162
Group values based on columns and conditions in pandas
<p>I want to group pandas dataframe column based on a condition that if the values are with in a range of +20. Below is the dataframe</p> <pre><code>{'Name': {0: 'A', 1: 'B', 2: 'C', 3: 'D', 4: 'E', 5: 'F'}, 'ID': {0: 100, 1: 23, 2: 19, 3: 42, 4: 11, 5: 78}, 'Left': {0: 70, 1: 70, 2: 70, 3: 70, 4: 66, 5: 66}, 'Top': {0: 10, 1: 26, 2: 26, 3: 35, 4: 60, 5: 71}} </code></pre> <p>Here I want to group columns Left and Top. This is what I did:</p> <pre><code>df.groupby(['Top'],as_index=False).agg(lambda x: list(x)) </code></pre> <p>This is the result I got :</p> <pre><code> {'Top': {0: 10, 1: 26, 2: 35, 3: 60, 4: 71}, 'Name': {0: ['A'], 1: ['B', 'C'], 2: ['D'], 3: ['E'], 4: ['F']}, 'ID': {0: [100], 1: [23, 19], 2: [42], 3: [11], 4: [78]}, 'Left': {0: [70], 1: [70, 70], 2: [43], 3: [66], 4: [66]}} </code></pre> <p><strong>Desired output:</strong></p> <pre><code>{'Top': {0: [10, 26], 2: 35, 3: [60,71]}, 'Name': {0: ['A', 'B', 'C'], 2: ['D'], 3: ['E', 'F']}, 'ID': {0: [100, 23, 19], 2: [42], 3: [11, 78]}, 'Left': {0: [70, 50, 87], 2: [43], 3: [66, 99]}} </code></pre> <p><strong>NOTE:</strong></p> <p>An important thing to consider is that Top values 10 and 26 are in the range of 20, it forms a group. 35 should not be added to the group even though its difference between 26 and 35 are in the range of 20 because 10 and 20 are already in a group and the difference between 10(the least value in the group) and 35 is not in the range of 20.</p> <p>Is there any any alternate way to solve this?</p> <p><strong>EDIT:</strong></p> <p>I have a different use-case for which the top values increase and when it moves to a new page the top value changes and starts increasing again. This goes on for different inputs. And finally I want to group by Input File Name, Page Number and group. 
How can I group these?</p> <pre><code>{'Input File Name': {0: 268441, 1: 268441, 2: 268441, 3: 268441, 4: 268441, 5: 268441, 6: 268441, 7: 268441, 8: 268441, 9: 268441, 10: 268441, 11: 268441, 12: 268441, 13: 268441, 14: 268441, 15: 268441, 16: 268441, 17: 268441, 18: 268441, 19: 268441, 20: 268441, 21: 268441, 22: 268441, 23: 268441, 24: 268441, 25: 268441, 26: 268441, 27: 268441, 28: 268441, 29: 268441, 30: 268441, 31: 268441, 32: 268441, 33: 268441, 34: 268441, 35: 268441, 36: 268441, 37: 268441, 38: 268441, 39: 268441}, 'Page Number': {0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1, 8: 1, 9: 1, 10: 1, 11: 1, 12: 1, 13: 1, 14: 1, 15: 1, 16: 1, 17: 1, 18: 1, 19: 1, 20: 2, 21: 2, 22: 2, 23: 2, 24: 2, 25: 2, 26: 2, 27: 2, 28: 2, 29: 2, 30: 2, 31: 2, 32: 2, 33: 2, 34: 2, 35: 2, 36: 2, 37: 2, 38: 2, 39: 2}, 'Content': {0: '3708 Forestview Road', 1: 'AvailableForLease&amp;Sale', 2: '1,700± SFMedicalOffice', 3: '3708ForestviewRoad', 4: 'Suite107', 5: 'Raleigh,NC27612', 6: 'BuildingDescription', 7: '22,278± SFClassAOfficeBuilding', 8: 'OnlyOneSuiteLeft toLeaseand/orPurchase', 9: '(1)1,700± SFShell', 10: 'FlexibleLeaseTerms', 11: '2Floorsw/Elevator&amp;Stairsto2', 12: 'Level', 13: 'nd', 14: 'ClassAFinishes', 15: 'On-SitePropertyManagement', 16: 'LargeGlass Windows', 17: '5:1Parking', 18: 'Formoreinformation,contact:', 19: 'OtherTenants: PivotPhysicalTherapy,TheLundy', 20: 'LeasingDetails', 21: 'SpaceDescription', 22: 'LeaseRate', 23: 'CompetitiveNNN+$5.50TICAM', 24: 'Tenant', 25: 'Suite107:1,700± SF', 26: 'Janitorial&amp;Electric', 27: 'Responsibilities', 28: 'ShellSpacew/TIAllowance&amp;Architecturals', 29: 'ClassABuilding', 30: 'SalePrice', 31: '$374,000or$220PSF', 32: 'BeautifulDouble-DoorEntry', 33: '1,700', 34: '± SF', 35: 'Size', 36: 'LargeGlassWindows', 37: 'ColdDarkShellw/TIAllowance', 38: '5:1Parking', 39: 'Upfit'}, 'Top': {0: 6, 1: 6, 2: 49, 3: 103, 4: 103, 5: 103, 6: 590, 7: 637, 8: 656, 9: 676, 10: 695, 11: 716, 12: 716, 13: 717, 14: 736, 15: 755, 16: 775, 17: 794, 18: 813, 19: 835, 20: 111, 21: 138, 22: 142, 23: 142, 24: 169, 25: 174, 26: 179, 27: 190, 28: 195, 29: 216, 30: 217, 31: 217, 32: 238, 33: 247, 34: 247, 35: 248, 36: 259, 37: 274, 38: 282, 39: 285}} </code></pre>
<p>You can write a function to group the <code>Top</code> columns first and then use <code>groupby</code> on that column:</p> <pre><code>import pandas as pd df = pd.DataFrame({'Name': {0: 'A', 1: 'B', 2: 'C', 3: 'D', 4: 'E', 5: 'F'}, 'ID': {0: 100, 1: 23, 2: 19, 3: 42, 4: 11, 5: 78}, 'Left': {0: 70, 1: 70, 2: 70, 3: 70, 4: 66, 5: 66}, 'Top': {0: 10, 1: 26, 2: 26, 3: 35, 4: 60, 5: 71}}) def group(l, group_range): groups = [] current_group = [] i = 0 group_count = 1 while i &lt; len(l): a = l[i] if len(current_group) == 0: if i == len(l) - 1: break current_group_start = a if a &lt;= current_group_start + group_range: current_group.append(group_count) if a &lt; current_group_start + group_range: i += 1 else: groups.extend(current_group) current_group = [] group_count += 1 groups.extend(current_group) return groups #group(df['Top'],20) -&gt; [1, 1, 1, 2, 3, 3] df['group'] = group(df['Top'],20) df.groupby(['group'],as_index=False).agg(list) </code></pre> <p>Output:</p> <pre><code> group ID Left Name Top 0 1 [100, 23, 19] [70, 70, 70] [A, B, C] [10, 26, 26] 1 2 [42] [70] [D] [35] 2 3 [11, 78] [66, 66] [E, F] [60, 71] </code></pre>
python|pandas
2
5,240
70,179,730
Numpy array: iterate through column and change value based on the current value and the next value
<p>I have an array like this: This is an extension of a recent question that I asked elsewhere <a href="https://stackoverflow.com/questions/70176293/numpy-array-iterate-through-column-and-change-value-depending-on-the-next-value">here</a>. I have a numpy array like this:</p> <pre><code>data = np.array([ [1,2,3], [1,2,3], [1,2,101], [4,5,111], [4,5,6], [4,5,6], [4,5,101], [4,5,112], [4,5,6], ]) </code></pre> <p>In the third column, I want the value to be replaced with 10001 if the next one along is <code>101</code> AND if the current one is <code>6</code>. which would result in an array like this:</p> <pre><code>data = np.array([ [1,2,3], [1,2,3], [1,2,101], [4,5,111], [4,5,6], [4,5,10001], [4,5,101], [4,5,112], [4,5,6], ]) </code></pre> <p>Any help on this would be greatly appreciated! Thanks!</p>
<p>One way using <code>numpy.roll</code>:</p> <pre><code>s = data[:, 2] data[np.logical_and(s == 6, np.roll(s, -1) == 101), 2] = 10001 </code></pre> <p>Output:</p> <pre><code>array([[ 1, 2, 3], [ 1, 2, 3], [ 1, 2, 101], [ 4, 5, 111], [ 4, 5, 6], [ 4, 5, 10001], [ 4, 5, 101], [ 4, 5, 112], [ 4, 5, 6]]) </code></pre>
python|numpy|iteration
2
5,241
70,053,242
How unfold operation works in pytorch with dilation and stride?
<p>In my case I am applying this unfold operation on a tensor of A as given below:</p> <pre><code>A.shape=torch.Size([16, 309,128]) A = A.unsqueeze(1) # that's I guess for making it 4 dim for unfold operation A_out= F.unfold(A, (7, 128), stride=(1,128),dilation=(3,1)) A_out.shape=torch.Size([16, 896,291]) </code></pre> <p>I am not getting this 291. If the dilation factor is not there, it would be [16,896,303] right? But if dialtion=3 then it's 291 how? Also here stride is not mentioned so deafualt is 1 but what if it is also mentioned like 4. Please guide.</p>
<blockquote> <p>Also here stride is not mentioned so default is 1 but what if it is also mentioned like 4.</p> </blockquote> <p>Your code already has <code>stride=(1,128)</code>. If stride is only set to <code>4</code> it will be used like <code>(4,4)</code> in this case. This can be easily verified with formula below.</p> <blockquote> <p>If the dilation factor is not there, it would be [16,896,303] right?</p> </blockquote> <p>Yes. Example below.</p> <blockquote> <p>But if dialtion=3 then it's 291 how?</p> </blockquote> <p>Following the formula given in pytorch docs it comes to <code>291</code>. After doing <code>A.unsqueeze(1)</code> the shape becomes, <code>[16, 1, 309, 128]</code>. Here, <code>N=16</code>, <code>C=1</code>, <code>H=309</code>, <code>W=128.</code></p> <p><a href="https://i.stack.imgur.com/BCyPD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BCyPD.png" alt="enter image description here" /></a></p> <p>The output dimension is, <code>(N, C * product(kernel_size), L)</code>. With <code>kernel_size=(7,128)</code> So this becomes, <code>(16, 1 * 7 * 128, L)</code> = <code>(16, 896, L)</code>.</p> <p><code>L</code> can be calculated using the formula below with multiplication over each dimension.</p> <pre><code>L = d3 * d4 </code></pre> <p>Over height dimension <code>spatial_size[3] = 309</code>, <code>padding[3] = 0</code> default, <code>dilation[3] = 3</code>, <code>kernel_size[3] = 7</code>, <code>stride[3] = 1</code>.</p> <pre><code>d3 = (309 + 2 * 0 - 3 * (7 - 1) - 1) / 1 + 1 = 291 </code></pre> <p>Over width dimension <code>spatial_size[4] = 128</code>, <code>padding[4] = 0</code> default, <code>dilation[4] = 1</code>, <code>kernel_size[4] = 128</code>, <code>stride[4] = 128</code>.</p> <pre><code>d4 = (128 + 2 * 0 - 1 * (128 - 1) - 1) / 128 + 1 = 1 </code></pre> <p>So, using above formula <code>L</code> becomes <code>291</code>.</p> <h4>Code</h4> <pre><code>import torch from torch.nn import functional as F A = torch.randn([16, 309,128]) print(A.shape) A = A.unsqueeze(1) print(A.shape) A_out= F.unfold(A, kernel_size=(7, 128), stride=(1,128),dilation=(3,1)) print(A_out.shape) </code></pre> <h4>Output</h4> <pre><code>torch.Size([16, 309, 128]) torch.Size([16, 1, 309, 128]) torch.Size([16, 896, 291]) </code></pre> <h4>Links</h4> <ul> <li><a href="https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html" rel="nofollow noreferrer">https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html</a></li> <li><a href="https://pytorch.org/docs/stable/generated/torch.nn.functional.unfold.html" rel="nofollow noreferrer">https://pytorch.org/docs/stable/generated/torch.nn.functional.unfold.html</a></li> </ul>
python|pytorch
0
5,242
70,036,777
Add rows if missing recent data
<p>I have a dataframe which contains some numbers with dates and country information:</p> <pre><code>df = pd.DataFrame(data={&quot;day&quot;: ['2021-01-01', '2021-01-01', '2021-01-02', '2021-01-02', '2021-01-03'], &quot;country&quot;: [&quot;France&quot;, &quot;Brazil&quot;, &quot;Brazil&quot;, &quot;Cuba&quot;, &quot;France&quot;], &quot;n&quot;: [1, 2, 3, 4, 5] }) </code></pre> <p>This looks like:</p> <pre><code> day country n 0 2021-01-01 France 1 1 2021-01-01 Brazil 2 2 2021-01-02 Brazil 3 3 2021-01-02 Cuba 4 4 2021-01-03 France 5 </code></pre> <p>I'd like to compute some statistics to monitor if the data I received on the 2021-01-03 contain some errors, and for this I need to compare the data I got on the 3rd Jan. to the mean (for example) of the previous data.</p> <p>So I'd like to add some rows which would indicate that I got nothing on the 3rd Jan for Brazil and Cuba, this is the output I'd like:</p> <pre><code> country day n 0 France 2021-01-03 5.0 1 France 2021-01-01 1.0 2 Brazil 2021-01-01 2.0 3 Brazil 2021-01-02 3.0 4 Cuba 2021-01-02 4.0 5 Brazil 2021-01-03 NaN 6 Cuba 2021-01-03 NaN </code></pre> <p>Here is the code I tried but I does not feel very &quot;pandas-like&quot;, I believe there is a built-in in Pandas or at least a better way of adding data for recent rows:</p> <pre><code>countries = pd.DataFrame({&quot;country&quot;: df.country.unique()}) recent_date = pd.DataFrame({&quot;day&quot;:[df.day.max()]}) countries.merge(recent_date, how=&quot;cross&quot;).merge(df, how=&quot;outer&quot;) </code></pre> <p>This is the result:</p> <pre><code> country day n 0 France 2021-01-03 5.0 1 Brazil 2021-01-03 NaN 2 Cuba 2021-01-03 NaN 3 France 2021-01-01 1.0 4 Brazil 2021-01-01 2.0 5 Brazil 2021-01-02 3.0 6 Cuba 2021-01-02 4.0 </code></pre> <p>(PS: I'm open to suggestions on the title of this post)</p>
<p>Idea is filter out all unique countries which has no maximal day and add to original with <a href="https://numpy.org/doc/stable/reference/generated/numpy.setdiff1d.html" rel="nofollow noreferrer"><code>numpy.setdiff1d</code></a>:</p> <pre><code>d = df.day.max() c = np.setdiff1d(df.country.unique(), df.loc[df['day'].eq(d), 'country']) df = df.append(pd.DataFrame({'country':c, 'day': d}), ignore_index=True) print (df) day country n 0 2021-01-01 France 1.0 1 2021-01-01 Brazil 2.0 2 2021-01-02 Brazil 3.0 3 2021-01-02 Cuba 4.0 4 2021-01-03 France 5.0 5 2021-01-03 Brazil NaN 6 2021-01-03 Cuba NaN </code></pre> <p>First idea, a bit complicated:</p> <pre><code>df['day'] = pd.to_datetime(df['day'] ) c = df.loc[df['day'].eq(df['day'].max()), 'country'] df = df.append(df[['country']].drop_duplicates() .assign(day = df['day'].max()) .query(&quot;country not in @c&quot;), ignore_index=True) print (df) day country n 0 2021-01-01 France 1.0 1 2021-01-01 Brazil 2.0 2 2021-01-02 Brazil 3.0 3 2021-01-02 Cuba 4.0 4 2021-01-03 France 5.0 5 2021-01-03 Brazil NaN 6 2021-01-03 Cuba NaN </code></pre>
python|pandas
1
5,243
56,129,426
Slicing Dataframe with elements as lists
<p>My dataframe has list as elements and I want to have more efficient way to check for some conditions. <p>My dataframe looks like this</p> <pre><code>col_a col_b 0 100 [1, 2, 3] 1 200 [2, 1] 2 300 [3] </code></pre> <p>I want to get only those rows which have 1 in col_b. <p>I have tried the naive way temp_list=list()</p> <pre><code>for i in range(len(df1.index)): if 1 in df1.iloc[i,1]: temp_list.append(df1.iloc[i,0]) </code></pre> <p>This takes a lot of time for big dataframes like this. How could I make the search more efficient for dataframes like this?</p>
<pre><code>df[df.col_b.apply(lambda x: 1 in x)] </code></pre> <p>Results in:</p> <pre><code>col_a col_b 0 100 [1, 2, 3] 1 200 [2, 1] </code></pre>
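<p>If, as in the loop from the question, you only need the <code>col_a</code> values of the matching rows rather than the full rows, the same mask can be reused (sketch):</p> <pre><code>temp_list = df.loc[df.col_b.apply(lambda x: 1 in x), 'col_a'].tolist()
# [100, 200]
</code></pre>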
python|pandas
1
5,244
56,227,140
Using py_func inside Tensorflow - ValueError: callback pyfunc_0 is not found
<p>I try to build a tensorflow model - where i load a pickle file with another model as part of the tensorflow model. The code has two parts where I create the model (save) and use the model to predict (load). I get ValueError: callback pyfunc_0 is not found</p> <p>The .pb file itself is very small, so it looks like that it does not store the model in the .pickle-file inside the .pb file. I am not sure what to do about it.</p> <p>save-part</p> <pre><code>import tensorflow as tf from keras import backend as K from tensorflow.python.saved_model import builder as saved_model_builder from tensorflow.python.saved_model import tag_constants, signature_constants, signature_def_utils_impl from keras.callbacks import TensorBoard from keras.models import Sequential from keras.layers.core import Dense, Activation from keras.optimizers import SGD import numpy as np import pickle model_version = "465555564" epoch = 100 tensorboard = TensorBoard(log_dir='./logs', histogram_freq = 0, write_graph = True, write_images = False) sess = tf.Session() K.set_session(sess) K.set_learning_phase(0) def my_func(x): with open(PATH_TO_PICKLE, "rb") as f: loadCF = pickle.load(f) return np.float32(loadCF.predict([x])[1]) input = tf.placeholder(tf.float32) y = tf.py_func(my_func, [input], tf.float32) prediction_signature = tf.saved_model.signature_def_utils.predict_signature_def({"inputs": input}, {"prediction": y}) builder = saved_model_builder.SavedModelBuilder('./'+model_version) legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op') builder.add_meta_graph_and_variables( sess, [tag_constants.SERVING], signature_def_map={ signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:prediction_signature, }, legacy_init_op=legacy_init_op) builder.save() </code></pre> <p>load-part</p> <pre><code>sess=tf.Session() signature_key = tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY input_key = 'inputs' output_key = 'prediction' export_path = './465555564/' meta_graph_def = tf.saved_model.loader.load( sess, [tf.saved_model.tag_constants.SERVING], export_path) signature = meta_graph_def.signature_def x_tensor_name = signature[signature_key].inputs[input_key].name y_tensor_name = signature[signature_key].outputs[output_key].name x = sess.graph.get_tensor_by_name(x_tensor_name) y = sess.graph.get_tensor_by_name(y_tensor_name) y_out = sess.run(y, {x: [0.0, 3.0,2.0,1.0,1.0,0.0,1.0,3.0,1.0,0.000,0.000,0.000,0.000,0.000,0.000,1.000,0.000,0.000,0.000,0.000,0.000, 0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000, 1.000,1.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,1.000,0.000,0.000,0.000, 0.000,0.000,1.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000, 0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000, 0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.0281021, 1.1674791,0.0772629,1.00919452640745377359,-0.40733408431212109191,0.27344889607694411460,-0.27692477736208176431, 0.90979100598229301067,0.30854060293899643330,-0.89088669667641318117,0.71015013257662451540,-0.45934534155660206034, -1.5771756172180175781,-0.44342430101500618367,0.99046792752212953204,0.77406677189800476846,0.22008506072840341994, -0.31012541014287209329,-0.30062459437047234223,-0.02684695402988129115,0.17956349253654479980, 
-0.46235901945167118265,0.42958878223887747572,-0.44371617585420608521,-0.84945221741994225706, 0.63907705081833732219,-0.70754766008920144671,0.48411194566223358926,-0.12378847102324168350, 0.15848264263735878377]}) print(y_out) </code></pre>
<p><code>tf.py_func</code> does not support saving in the pb (SavedModel) format: the wrapped Python callable is registered only in the process that created it and is not serialized into the graph, which is why loading in a fresh process fails with "callback pyfunc_0 is not found". Please use the checkpoint format instead, and re-create the <code>py_func</code> op in Python before restoring.</p>
python|tensorflow
1
5,245
55,983,383
Find the sum of previous count occurrences per unique ID in pandas
<p>I have a history of customer IDs and purchase IDs where no customer has ever bought the same product. However, for each purchase ID (which is unique), how can I find out the number of times the customer has made a previous purchase</p> <p>I have tried using groupby() and sort_values()</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'id_cust': [1,2,1,3,2,4,1], 'id_purchase': ['20A','143C','99B','78R','309D','90J','78J']}) df.sort_values(by='id_cust') df.groupby('id_cust')['id_purchase'].cumcount() </code></pre> <p>This is what I expect:</p> <pre><code> id_cust id_purchase value 1 20A 1 2 143C 1 1 99B 2 3 78R 1 2 3097D 2 4 900J 1 1 78J 3 </code></pre>
<p>You can just use the <code>cumcount()</code> on the <code>id_cust</code> column since the <code>id_purchase</code> is unique:</p> <pre><code>df['value']=df.groupby('id_cust')['id_cust'].cumcount()+1 print(df) </code></pre> <hr> <pre><code> id_cust id_purchase value 0 1 20A 1 1 2 143C 1 2 1 99B 2 3 3 78R 1 4 2 309D 2 5 4 90J 1 6 1 78J 3 </code></pre>
pandas
1
5,246
55,906,106
How to add multiple models at runtime in tf_serving without unloading the previous models?
<p>I'm trying to add a new model in <code>tensorflow_model_server</code> using the following code:</p> <pre><code>from tensorflow_serving.apis import model_service_pb2_grpc from tensorflow_serving.apis import model_management_pb2 from tensorflow_serving.config import model_server_config_pb2 import grpc def add_model_config(host, name, base_path, model_platform): channel = grpc.insecure_channel(host) stub = model_service_pb2_grpc.ModelServiceStub(channel) request = model_management_pb2.ReloadConfigRequest() model_server_config = model_server_config_pb2.ModelServerConfig() #Create a config to add to the list of served models config_list = model_server_config_pb2.ModelConfigList() one_config = config_list.config.add() one_config.name= name one_config.base_path=base_path one_config.model_platform=model_platform model_server_config.model_config_list.CopyFrom(config_list) request.config.CopyFrom(model_server_config) print(request.IsInitialized()) print(request.ListFields()) response = stub.HandleReloadConfigRequest(request,10) if response.status.error_code == 0: print("Reload sucessfully") return True else: print("Reload failed!") print(response.status.error_code) print(response.status.error_message) return False </code></pre> <p>But whenever I try to execute it the server unloads all of the previous models. Is it possible to keep serving all the models and then add a new model ?</p>
<p>There is no easy way to just add a new model while keeping the previous ones already loaded.</p> <p>What I have been doing is to always keep the last model config list on disk, and when I need to refresh the models for any reason (either add or remove or update), I read that config file from disk, and do the proper modification, and call the HandleReloadConfigRequest() with the full config list, and then save it to disk again. </p> <p>The file on disk (say <code>/models/models.config</code>) becomes the authoritative record of what models have been loaded into tf.serve at any given time. This way you can recover from a tf.serve reboot and have the comfort knowing that it will load the correct models. The option for specifying the config file during server start is <code>--model_config_file /models/models.config</code>.</p>
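<p>A rough sketch of that pattern; the config path and the text-format handling below are illustrative assumptions, not a fixed API of tf.serving itself:</p> <pre><code>from google.protobuf import text_format
from tensorflow_serving.config import model_server_config_pb2

CONFIG_PATH = '/models/models.config'  # the authoritative copy kept on disk

def add_model_and_persist(name, base_path, model_platform):
    # read back the last known full config
    config = model_server_config_pb2.ModelServerConfig()
    with open(CONFIG_PATH, 'r') as f:
        text_format.Parse(f.read(), config)

    # append the new model to the existing list instead of replacing it
    new_model = config.model_config_list.config.add()
    new_model.name = name
    new_model.base_path = base_path
    new_model.model_platform = model_platform

    # persist so a tf.serving restart (with --model_config_file) loads the same set
    with open(CONFIG_PATH, 'w') as f:
        f.write(text_format.MessageToString(config))

    # send this full config in the ReloadConfigRequest, as in the question's code
    return config
</code></pre>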
python|tensorflow|computer-vision|tensorflow-serving
0
5,247
64,873,311
How do I integrate groupby with query on?
<p>I am relatively new to programming in Python, so please don't be too harsh on me.</p> <p>I have a large covid19 dataframe with data for each country on each day.</p> <p><a href="https://i.stack.imgur.com/J2pbv.png" rel="nofollow noreferrer">What it looks like</a></p> <p>I queried the date to keep just 11.11.2020 to work with and dropped row 56215 (cumulative data for the world).</p> <p><a href="https://i.stack.imgur.com/UkK4h.png" rel="nofollow noreferrer">After the query</a></p> <p>Now I want to group by continent on that specific day to compare e.g. deaths per million with a plot. How do I do that?</p> <p>Thank you for any help!</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html</a></p> <p>You should be able to do</p> <pre><code>covid19.groupby(['continent']) </code></pre>
python|pandas|csv
0
5,248
64,938,800
For step, (batch_x, batch_y) in enumerate(train_data.take(training_steps), 1) error syntax
<p>I am learning logistic regression from this website: <a href="https://builtin.com/data-science/guide-logistic-regression-tensorflow-20" rel="nofollow noreferrer">click here</a></p> <p>Step 9 does not work; the error is <a href="https://i.stack.imgur.com/eLUCX.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eLUCX.jpg" alt="enter image description here" /></a></p> <p>What is the solution?</p>
<p>You can reproduce the same error with:</p> <pre><code>For i in range(0,1): pass </code></pre> <p>Try changing the &quot;For&quot; to &quot;for&quot;</p> <p>It looks like they just made a syntax error and you copied it.</p>
python|keras|tensorflow2.0
0
5,249
64,725,982
Slicing on entire panda dataframe instead of series results in change of data type and values assignment of first field to NaN, what is happening?
<p>I was trying to do some cleaning on a dataset where, instead of providing a condition on a pandas Series<br> <code>head_only[head_only.BasePay &gt; 70000]</code><br> I applied the condition to the whole data frame<br> <code>head_only[head_only &gt; 70000]</code><br> I attached images of my observation; could anyone help me understand what is happening? <a href="https://i.stack.imgur.com/2hxNu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2hxNu.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/rUHjX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rUHjX.png" alt="enter image description here" /></a></p>
<p>Your second solution raise error if numeric with strings columns:</p> <pre><code>df = pd.DataFrame({ 'A':list('abcdef'), 'B':[4,5,4,5,5,4], 'C':[7,8,9,4,2.0,3], 'D':[1,3,5,7,1,0], 'E':[5,3,6,9,2,4], 'F':list('aaabbb') }) print (df[df &gt; 5]) </code></pre> <blockquote> <p>TypeError: '&gt;' not supported between instances of 'str' and 'int'</p> </blockquote> <p>If compare only numeric columns it get values higher like <code>4</code> and all another numbers convert to misisng values:</p> <pre><code>df1 = df.select_dtypes(np.number) print (df1[df1 &gt; 4]) B C D E 0 NaN 7.0 NaN 5.0 1 5.0 8.0 NaN NaN 2 NaN 9.0 5.0 6.0 3 5.0 NaN 7.0 9.0 4 5.0 NaN NaN NaN 5 NaN NaN NaN NaN </code></pre> <p>Here are replaced at least one value in each column, so integers columns are converted to floats (because <code>NaN</code> is <code>float</code>):</p> <pre><code>print (df1[df1 &gt; 4].dtypes) B float64 C float64 D float64 E float64 dtype: object </code></pre> <p>If need compare all numeric columns if at least one of them match condition use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.any.html" rel="nofollow noreferrer"><code>DataFrame.any</code></a> for test if at least one value is <code>True</code>:</p> <pre><code>#returned boolean DataFrame print ((df1 &gt; 7)) B C D E 0 False False False False 1 False True False False 2 False True False False 3 False False False True 4 False False False False 5 False False False False print ((df1 &gt; 7).any(axis=1)) 0 False 1 True 2 True 3 True 4 False 5 False dtype: bool print (df1[(df1 &gt; 7).any(axis=1)]) B C D E 1 5 8.0 3 3 2 4 9.0 5 6 3 5 4.0 7 9 </code></pre> <p>Or if need filter original all columns is possible filter only numeric columns by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.select_dtypes.html" rel="nofollow noreferrer"><code>DataFrame.select_dtypes</code></a>:</p> <pre><code>print (df[(df.select_dtypes(np.number) &gt; 7).any(axis=1)]) A B C D E F 1 b 5 8.0 3 3 a 2 c 4 9.0 5 6 a 3 d 5 4.0 7 9 b </code></pre>
pandas|dataframe|nan
1
5,250
64,747,609
csv to complex nested json
<p>So, I have a huge CSV file that looks like:</p> <pre><code>PN,PCA Code,MPN Code,DATE_CODE,Supplier Code,CM Code,Fiscal YEAR,Fiscal MONTH,Usage,Defects 13-1668-01,73-2590,MPN148,1639,S125,CM1,2017,5,65388,0 20-0127-02,73-2171,MPN170,1707,S125,CM1,2017,9,11895,0 19-2472-01,73-2302,MPN24,1711,S119,CM1,2017,10,4479,0 20-0127-02,73-2169,MPN170,1706,S125,CM1,2017,9,7322,0 20-0127-02,73-2296,MPN170,1822,S125,CM1,2018,12,180193,0 15-14399-01,73-2590,MPN195,1739,S133,CM6,2018,11,1290,0 </code></pre> <p>What I want to do is group up all the data by PCA Code. So, a PCA Code will have certain number for parts, those parts would be manufactured by certain MPN Code and the final nested JSON structure that I want looks like:</p> <pre class="lang-json prettyprint-override"><code>[ { PCA: { &quot;code&quot;: &quot;73-2590&quot;, &quot;CM&quot;: [&quot;CM1&quot;, &quot;CM6&quot;], &quot;parts&quot;: [ { &quot;number&quot;: &quot;13-1668-01&quot;, &quot;manufacturer&quot;: [ { &quot;id&quot;: &quot;MPN148&quot; &quot;info&quot;: [ { &quot;date_code&quot;: 1639, &quot;supplier&quot;: { &quot;id&quot;: &quot;S125&quot;, &quot;FYFM&quot;: &quot;2020-9&quot;, &quot;usage&quot;: 65388, &quot;defects&quot;: 0, } } ] }, ] } ] } } ] </code></pre> <p>So, I want this structure for multiple part numbers (PNs) having different MPNs with different Date Codes and so on.</p> <p>I am currently using Pandas to do this but I'm stuck on how to proceed with the nesting.</p> <p>My code so far:</p> <pre class="lang-py prettyprint-override"><code>import json import pandas as pd dataframe = pd.read_csv('files/dppm_wc.csv') data = {'PCAs': []} for key, group in dataframe.groupby('PCA Code'): for index, row in group.itterrows(): temp_dict = {'PCA Code': key, 'CM Code': row['CM Code'], 'parts': []} with open('output.txt', 'w') as file: file.write(json.dumps(data, indent=4)) </code></pre> <p>How do I proceed to achieve the nested JSON format that I want? Is there a better way to do this than what I am doing?</p>
<p>I don't really understand what you wish to do with that structure, but I guess it could be achieved with something like this</p> <pre class="lang-py prettyprint-override"><code>data = {'PCAs': []} for key, group in df.groupby('PCA Code'): temp_dict = {'PCA Code': key, 'CM Code': [], 'parts': []} for index, row in group.iterrows(): temp_dict['CM Code'].append(row['CM Code']) temp_dict['parts'].append( {'number': row['PN'], 'manufacturer': [ { 'id': row['MPN Code'], 'info': [ { 'date_code': row['DATE_CODE'], 'supplier': {'id': row['Supplier Code'], 'FYFM': '%s-%s' % (row['Fiscal YEAR'], row['Fiscal MONTH']), 'usage': row['Usage'], 'defects': row['Defects']} } ] }] } ) data['PCAs'].append(temp_dict) </code></pre>
python|json|pandas|csv
0
5,251
44,297,719
Pandas / Numpy way of finding difference between column headers and index headers
<p>I have a pandas dataFrame that look like this:</p> <pre><code>import pandas as pd cols = [1,2,5,15] rows = [1,0,4] data = pd.DataFrame(np.zeros((len(rows),len(cols)))) data.columns = cols data.index = rows 1 2 5 15 1 0.0 0.0 0.0 0.0 0 0.0 0.0 0.0 0.0 4 0.0 0.0 0.0 0.0 </code></pre> <p>I want to find the difference between the column's headers and indexes/row headers, such that the absolute differences populate the table as such:</p> <pre><code> 1 2 5 15 1 0.0 1.0 4.0 14.0 0 1.0 2.0 5.0 15.0 4 3.0 2.0 1.0 11.0 </code></pre> <p>Is their a Pandas or Numpy way of doing this? Here I'm using a small data set, in reality I have nearly 1000,000 rows and 100 columns. I'm looking for a quick and efficient way of making this computation. Thanks</p>
<p>One approach using <a href="https://docs.scipy.org/doc/numpy-1.10.0/user/basics.broadcasting.html" rel="nofollow noreferrer"><code>NumPy broadcasting</code></a> -</p> <pre><code># Extract index and column as int arrays indx = df.index.values.astype(int) cols = df.columns.values.astype(int) # Perform elementwise subtracttion between all elems of indx against all cols a = np.abs(indx[:,None] - cols) df_out = pd.DataFrame(a, df.index, df.columns) </code></pre> <p>Sample input, output -</p> <pre><code>In [43]: df Out[43]: 1 2 5 15 1 0.0 0.0 0.0 0.0 0 0.0 0.0 0.0 0.0 4 0.0 0.0 0.0 0.0 In [44]: df_out Out[44]: 1 2 5 15 1 0 1 4 14 0 1 2 5 15 4 3 2 1 11 </code></pre> <p>Alternatively, for in-situ edit in <code>df</code>, assign back with <code>df[:]</code> -</p> <pre><code>In [58]: df[:] = a In [59]: df Out[59]: 1 2 5 15 1 0 1 4 14 0 1 2 5 15 4 3 2 1 11 </code></pre> <p>Also, if we do have access to the index and columns information, we can get <code>a</code> directly from them, like so -</p> <pre><code>a = np.abs(np.asarray(rows)[:,None] - cols) </code></pre> <p><strong>Further performance boost</strong></p> <p>We can boost it further with <a href="https://github.com/pydata/numexpr/wiki/Numexpr-Users-Guide" rel="nofollow noreferrer"><code>numexpr</code> module</a> to perform those <code>absolute</code> computations for large datasets to get <code>a</code>, like so -</p> <pre><code>import numexpr as ne def elementwise_abs_diff(rows, cols): # rows would be indx I = np.asarray(rows)[:,None] return ne.evaluate('abs(I - cols)') </code></pre> <p>This gives us <code>a</code>, which could be fed to create <code>df_out</code> shown earlier or assigned back to <code>df</code>.</p> <p>Timings -</p> <pre><code>In [93]: rows = np.random.randint(0,9,(5000)).tolist() In [94]: cols = np.random.randint(0,9,(5000)).tolist() In [95]: %timeit np.abs(np.asarray(rows)[:,None] - cols) 10 loops, best of 3: 65.3 ms per loop In [96]: %timeit elementwise_abs_diff(rows, cols) 10 loops, best of 3: 32 ms per loop </code></pre>
python|pandas|numpy
3
5,252
69,369,652
Download all postgresql tables to pandas
<p>Is there a simple way to download all tables from a postgresql database into pandas? For example, can pandas just load from the .sql file? All the solutions I found online suggest connecting to the database and using select-from commands, which seems far more complicated.</p>
<p>You have to connect to the database anyway. You can get the table names from an ODBC cursor and then read each table with pandas (for example with <code>pandas.read_sql</code>). With pypyodbc, finding the names looks like this:</p> <pre><code>allnames = cursor.tables(schema='your_schema').fetchall()

# keep only real tables (drop views and indexes)
tabnames = [el[2] for el in allnames if el[3] == 'TABLE']
</code></pre>
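<p>Putting it together, a hedged sketch using SQLAlchemy (the connection string and schema name are placeholders) that loads every table into a dict of DataFrames:</p> <pre><code>import pandas as pd
from sqlalchemy import create_engine, inspect

engine = create_engine('postgresql://user:password@localhost:5432/mydb')
table_names = inspect(engine).get_table_names(schema='public')

# one DataFrame per table, keyed by table name
frames = {name: pd.read_sql_table(name, engine, schema='public')
          for name in table_names}
</code></pre> <p>Note that pandas cannot read a raw <code>.sql</code> dump file directly; the dump has to be restored into a running PostgreSQL server first (e.g. with <code>psql</code>), after which the loop above works.</p>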
python|pandas|postgresql
0
5,253
65,998,061
TypeError: unsupported operand type(s) for +: 'datetime.datetime' and 'datetime.time'
<p>I have a pandas DataFrame:</p> <pre><code>In [33]: df Out[33]: userId 2021-01-29 2021-01-30 2021-01-01 0 Nl3AG93Ss7L5aj 09:00:00 NaN NaN 1 NaN NaN NaN NaN 2 AbVpBHdfrI5aj1 12:10:00 NaN NaN 3 NaN NaN NaN NaN 4 NaN NaN NaN NaN 5 sad9283ds7L5aj1 NaN 15:35:00 22:22:00 6 NaN NaN NaN NaN 7 NaN NaN NaN NaN 8 NaN NaN NaN NaN </code></pre> <p>I need to get the date and time the script started working, but I get the error:</p> <pre><code>TypeError Traceback (most recent call last) &lt;ipython-input-40-7ed51ff0c115&gt; in &lt;module&gt; 1 for column in df.columns.tolist(): ----&gt; 2 for i in df.loc[df[column].apply(lambda x: isinstance(x, datetime.time))][column]: print(column + i) 3 TypeError: unsupported operand type(s) for +: 'datetime.datetime' and 'datetime.time' </code></pre> <p>Also I can output the schedule without merging:</p> <pre><code>In [22]: for column in df.columns.tolist(): ...: for time in df.loc[df[column].apply(lambda x: isinstance(x, datetime.time))][column]: print(column, time) ...: 2021-01-29 00:00:00 09:00:00 2021-01-29 00:00:00 12:10:00 2021-01-30 00:00:00 15:35:00 2021-01-01 00:00:00 22:22:00 </code></pre>
<p>Try converting the things into strings and just using the <code>+</code> as string concat...</p> <pre><code>for column in df.columns.tolist():
    for i in df.loc[df[column].apply(lambda x: isinstance(x, datetime.time))][column]:
        print(column.strftime(&quot;%Y-%m-%d&quot;) + &quot; &quot; + i.strftime(&quot;%H:%M:%S&quot;))
</code></pre>
python|pandas|dataframe|datetime
0
5,254
66,275,053
Read the keys and values from the columns of a dataframe in Python
<p>I have a csv file that has two columns: one for the timeslot and one for the energy. I put this file into a pandas dataframe and I have attached a screenshot of it.<a href="https://i.stack.imgur.com/V1Rhh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V1Rhh.png" alt="Screenshot of dataframe" /></a></p> <p>Now I would like to have a dictionary that has the entries from one column as the keys and the entries from the other column as the values. I tried all the options mentioned here <a href="https://stackoverflow.com/questions/26716616/convert-a-pandas-dataframe-to-a-dictionary">Convert a Pandas DataFrame to a dictionary</a> but it was not successful. Here you can see my code and my attempts; I indicated the desired dictionary:</p> <pre><code>import pyomo.environ as pyo import pandas as pd #Define the model model = pyo.ConcreteModel() #Define the sets model.set_timeslots = pyo.RangeSet(0,95) # Read the data for the parameters from a csv file dataframe = pd.read_csv(&quot;C:/Users/energy.csv&quot;, sep =&quot;;&quot;) dictionary_dict = dataframe.to_dict('dict') dictionary_list = dataframe.to_dict('list') dictionary_series = dataframe.to_dict('series') dictionary_split = dataframe.to_dict('split') desiredDictionary = {'t0':7696850, 't1':7765100 , 't2': 7833350} </code></pre> <p>Can you tell me how I can automatically create this desired dictionary? I'd appreciate every comment.</p>
<p>Use:</p> <pre><code>d = dataframe.head(3).set_index('Timeslot')['Energy'].to_dict() </code></pre> <p>If you need all values:</p> <pre><code>d1 = dataframe.set_index('Timeslot')['Energy'].to_dict() </code></pre> <p>EDIT:</p> <p>If you need a 2-dimensional key (a tuple), use:</p> <pre><code>d = dataframe.head(3).set_index(['Timeslot', 'household'])['Energy'].to_dict() </code></pre> <hr /> <pre><code>d1 = dataframe.set_index(['Timeslot', 'household'])['Energy'].to_dict() </code></pre>
python|pandas|dataframe|dictionary
1
5,255
52,759,806
filtering pandas over a list and sending email
<p>I am having a pandas data frame like below:-</p> <pre><code> Tweets 0 RT @cizzorz: THE CHILLER TRAP *TEMPLE RUN* OBS... 1 Disco Domination receives a change in order to... 2 It's time for the Week 3 #FallSkirmish Trials!... 3 Dance your way to victory in the new Disco Dom... 4 Patch v6.02 is available now with a return fro... 5 Downtime for patch v6.02 has begun. Find out a... 6 ⛏️... soon 7 Launch into patch v6.02 Wednesday, October 10!... 8 Righteous Fury.\n\nThe Wukong and Dark Vanguar... 9 RT @wbgames: WB Games is happy to bring @Fortn... </code></pre> <p>I also have a list suppose like below :-</p> <pre><code>my_list = ['Launch', 'Dance', 'Issue'] </code></pre> <p>Now I want to filter the rows if there is a matching word from the my_list and get the whole row and send it as an email or to slack.</p> <p>Like I should get output as row no is because its having Dance word in it.</p> <pre><code>3 Dance your way to victory in the new Disco Dom.. </code></pre> <p>I tried below code to filter but every time its giving me an empty values</p> <pre><code>data[data['Tweets'].str.contains('my_list')] </code></pre> <p>Also I only wants to send the email the same row as a body if I am having matching words from list else I dont want.</p>
<p>This will get it done:</p> <pre><code>import pandas as pd import numpy as np from io import StringIO s = ''' "RT @cizzorz: THE CHILLER TRAP *TEMPLE RUN* OBS..." "Disco Domination receives a change in order to..." "It's time for the Week 3 #FallSkirmish Trials!..." "Dance your way to victory in the new Disco Dom..." "Patch v6.02 is available now with a return fro..." "Downtime for patch v6.02 has begun. Find out a..." "⛏️... soon" "Launch into patch v6.02 Wednesday, October 10!..." "Righteous Fury.\n\nThe Wukong and Dark Vanguar..." "RT @wbgames: WB Games is happy to bring @Fortn... plane 5 [20 , 12, 30]" ''' ss = StringIO(s) df = pd.read_csv(ss, sep=r'\s+', names=['Data']) my_list = ['Launch', 'Dance', 'Issue'] cond = df.Data.str.contains(my_list[0]) for x in my_list[1:]: cond = cond | df.Data.str.contains(x) df[cond] </code></pre>
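<p>For reference, the same filter can also be written as a one-liner on the question's original frame by joining the list into a regex alternation (<code>na=False</code> just guards against missing tweets):</p> <pre><code>pattern = '|'.join(my_list)            # 'Launch|Dance|Issue'
matched_rows = data[data['Tweets'].str.contains(pattern, na=False)]
</code></pre> <p>Each row of <code>matched_rows</code> can then be used as the body of the email or Slack message, and if the frame is empty there is simply nothing to send.</p>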
python|python-3.x|pandas|dataframe|slack-api
0
5,256
52,704,840
merge two dataframes with some common columns where the combining of the common needs to be a custom function
<p>my question is very similar to <a href="https://stackoverflow.com/questions/26307932/merge-pandas-dataframe-with-column-operation">Merge pandas dataframe, with column operation</a> but it doesn't answer my needs.</p> <p>Let's say I have two dataframes such as (note that the dataframe content could be float numbers instead of booleans):</p> <pre><code>left = pd.DataFrame({0: [True, True, False], 0.5: [False, True, True]}, index=[12.5, 14, 15.5]) right = pd.DataFrame({0.7: [True, False, False], 0.5: [True, False, True]}, index=[12.5, 14, 15.5]) </code></pre> <h3>right</h3> <pre><code> 0.5 0.7 12.5 True True 14.0 False False 15.5 True False </code></pre> <h3>left</h3> <pre><code> 0.0 0.5 12.5 True False 14.0 True True 15.5 False True </code></pre> <p>As you can see they have the same indexes and one of the column is common. In real life there might be more common columns such as one more at 1.0 or other numbers not yet defined, and more unique columns on each side. I need to combine the two dataframes such that all unique columns are kept and the common columns are combined using a specific function e.g. a boolean OR for this example, while the indexes are always identical for both dataframes.</p> <p>So the result should be:</p> <h3>result</h3> <pre><code> 0.0 0.5 0.7 12.5 True True True 14.0 True True False 15.5 False True False </code></pre> <p>In real life there will be more than two dataframes that need to be combined, but they can be combined sequentially one after the other to an empty first dataframe.</p> <p>I feel pandas.combine might do the trick but I can't figure it out from the documentation. Anybody would have a suggestion on how to do it in one or more steps.</p>
<p>You can concatenate the dataframes, and then groupby the column names to apply an operation on the similarly named columns: In this case you can get away with taking the sum and then typecasting back to bool to get the <code>or</code> operation.</p> <pre><code>import pandas as pd df = pd.concat([left, right], 1) df.groupby(df.columns, 1).sum().astype(bool) </code></pre> <h3>Output:</h3> <pre><code> 0.0 0.5 0.7 12.5 True True True 14.0 True True False 15.5 False True False </code></pre> <hr> <p>If you need to see how to do this in a less case-specific manner, then again just group by the columns and apply something to the grouped object over <code>axis=1</code></p> <pre><code>df = pd.concat([left, right], 1) df.groupby(df.columns, 1).apply(lambda x: x.any(1)) # 0.0 0.5 0.7 #12.5 True True True #14.0 True True False #15.5 False True False </code></pre> <hr> <p>Further, you can define a custom combining function. Here's one which adds twice the left Frame to 4 times the right Frame. If there is only one column, it returns 2x the left frame.</p> <h3>Sample Data</h3> <p>left:</p> <pre><code> 0.0 0.5 12.5 1 11 14.0 2 17 15.5 3 17 </code></pre> <p>right:</p> <pre><code> 0.7 0.5 12.5 4 2 14.0 4 -1 15.5 5 5 </code></pre> <h3>Code</h3> <pre><code>def my_func(x): try: res = x.iloc[:, 0]*2 + x.iloc[:, 1]*4 except IndexError: res = x.iloc[:, 0]*2 return res df = pd.concat([left, right], 1) df.groupby(df.columns, 1).apply(lambda x: my_func(x)) </code></pre> <h3>Output:</h3> <pre><code> 0.0 0.5 0.7 12.5 2 30 8 14.0 4 30 8 15.5 6 54 10 </code></pre> <hr> <p>Finally, if you wanted to do this in a consecutive manner, then you should make use of <code>reduce</code>. Here I'll combine 5 <code>DataFrames</code> with the above function. (I'll just repeat the right Frame 4x for the example)</p> <pre><code>from functools import reduce def my_comb(df_l, df_r, func): """ Concatenate df_l and df_r along axis=1. Apply the specified function. """ df = pd.concat([df_l, df_r], 1) return df.groupby(df.columns, 1).apply(lambda x: func(x)) reduce(lambda dfl, dfr: my_comb(dfl, dfr, func=my_func), [left, right, right, right, right]) # 0.0 0.5 0.7 #12.5 16 296 176 #14.0 32 212 176 #15.5 48 572 220 </code></pre>
python|pandas|merge|concat
4
5,257
52,497,451
Compute correlation between features and target variable
<p>What is the best solution to compute the correlation between my features and the target variable? My dataframe has 1000 rows and 40 000 columns... </p> <p>Example: </p> <pre><code>df = pd.DataFrame([[1, 2, 4 ,6], [1, 3, 4, 7], [4, 6, 8, 12], [5, 3, 2 ,10]], columns=['Feature1', 'Feature2','Feature3','Target']) </code></pre> <p>This code works fine but it is too slow on my dataframe ... I need only the last column of the correlation matrix: correlation with the target (not pairwise feature correlation).</p> <pre><code>corr_matrix=df.corr() corr_matrix["Target"].sort_values(ascending=False) </code></pre> <p>The <strong>np.corrcoef()</strong> function works with arrays, but can we exclude the pairwise feature correlation?</p>
<p>You could use pandas <code>corr</code> on each column:</p> <pre><code>df.drop("Target", axis=1).apply(lambda x: x.corr(df.Target)) </code></pre>
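<p>If the per-column <code>apply</code> is still too slow on 40 000 columns, the same Pearson correlation can be computed in a single vectorized pass with plain numpy (a sketch, assuming all feature columns are numeric and there are no missing values):</p> <pre><code>import numpy as np
import pandas as pd

X = df.drop("Target", axis=1)
y = df["Target"].values

# center features and target, then apply the Pearson formula column-wise
Xc = X.values - X.values.mean(axis=0)
yc = y - y.mean()
corr = (Xc * yc[:, None]).sum(axis=0) / (np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))

result = pd.Series(corr, index=X.columns).sort_values(ascending=False)
</code></pre>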
python|numpy|dataframe|correlation
17
5,258
46,349,840
Dropping rows on a condition
<p>I'm working with an order process data set, which contains two columns, Order_ID and Transaction_Phase. In the order process, there can be a few steps before an order is first booked and after it is booked. </p> <p>In my current problem, I want to keep all the rows until it hits approved. Any other rows after the approval should be dropped. I am only interested in what happened until the approval so I don't need any information following the approval. </p> <pre><code> Order_ID Tranaction_Phase 529334333 Quote 529334333 Deal approved 529334333 Rejected deal 470660845 Quote 470660845 Deal approved 470660845 Reject Deal </code></pre> <p>I want my output to look like the following: </p> <pre><code> Order_ID Tranaction_Phase 529334333 Quote 529334333 Deal approved 470660845 Quote 470660845 Deal approved </code></pre> <p>Can anyone help steer me in the right direction: packages, logic, documentation, etc.? I am using python technologies to accomplish this.</p>
<pre><code>df[df.index&lt;=df.groupby('Order_ID')['Tranaction_Phase'].transform(lambda x:x.index[x=='Dealapproved'])] Out[649]: Order_ID Tranaction_Phase 0 529334333 Quote 1 529334333 Dealapproved 3 470660845 Quote 4 470660845 Dealapproved </code></pre>
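<p>A sketch of an equivalent approach that does not depend on the default integer index (it needs pandas 0.24+ for <code>fill_value</code> in <code>shift</code>, and assumes the phase string is spelled exactly <code>'Deal approved'</code> as in the question; adjust to whatever spelling the data actually uses):</p> <pre><code>approved = df['Tranaction_Phase'].eq('Deal approved')
# True until (and including) the first approval within each Order_ID
keep = ~approved.groupby(df['Order_ID']).transform(lambda s: s.shift(fill_value=False).cummax())
out = df[keep]
</code></pre>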
python|pandas|analysis
2
5,259
46,536,971
How to use groups parameter in PyTorch conv2d function
<p>I am trying to compute a per-channel gradient image in PyTorch. To do this, I want to perform a standard 2D convolution with a Sobel filter on each channel of an image. I am using the <a href="http://pytorch.org/docs/0.1.12/nn.html#torch.nn.functional.conv2d" rel="noreferrer"><code>torch.nn.functional.conv2d</code></a> function for this</p> <p>In my minimum working example code below, I get an error:</p> <pre><code>import torch import torch.nn.functional as F filters = torch.autograd.Variable(torch.randn(1,1,3,3)) inputs = torch.autograd.Variable(torch.randn(1,3,10,10)) out = F.conv2d(inputs, filters, padding=1) </code></pre> <blockquote> <p>RuntimeError: Given groups=1, weight[1, 1, 3, 3], so expected input[1, 3, 10, 10] to have 1 channels, but got 3 channels instead</p> </blockquote> <p>This suggests that <code>groups</code> need to be 3. However, when I make <code>groups=3</code>, I get a different error:</p> <pre><code>import torch import torch.nn.functional as F filters = torch.autograd.Variable(torch.randn(1,1,3,3)) inputs = torch.autograd.Variable(torch.randn(1,3,10,10)) out = F.conv2d(inputs, filters, padding=1, groups=3) </code></pre> <blockquote> <p>RuntimeError: invalid argument 4: out of range at /usr/local/src/pytorch/torch/lib/TH/generic/THTensor.c:440</p> </blockquote> <p>When I check that code snippet in the THTensor class, it refers to a bunch of dimension checks, but I don't know where I'm going wrong.</p> <p>What does this error mean? How can I perform my intended convolution with this <code>conv2d</code> function? I believe I am misunderstanding the <code>groups</code> parameter.</p>
<p>If you want to apply a per-channel convolution then your <code>out-channel</code> should be the same as your <code>in-channel</code>. This is expected, considering each of your input channels creates a separate output channel that it corresponds to. </p> <p>In short, this will work</p> <pre><code>import torch import torch.nn.functional as F filters = torch.autograd.Variable(torch.randn(3,1,3,3)) inputs = torch.autograd.Variable(torch.randn(1,3,10,10)) out = F.conv2d(inputs, filters, padding=1, groups=3) </code></pre> <p>whereas filters of size <code>(2, 1, 3, 3)</code> or <code>(1, 1, 3, 3)</code> will not work. </p> <p>Additionally, you can also make your <code>out-channel</code> a multiple of <code>in-channel</code>. This works for instances where you want to have multiple convolution filters for each input channel. </p> <p>However, this only makes sense if it is a multiple. If not, then pytorch falls back to its closest multiple, a number less than what you specified. This is once again expected behavior. For example, a filter of size <code>(4, 1, 3, 3)</code> or <code>(5, 1, 3, 3)</code> will result in an <code>out-channel</code> of size 3. </p>
convolution|pytorch
14
5,260
46,566,731
Error while installing tensorflow on Rstudio
<p>I'm trying to install Tensorflow on RStudio on Windows.</p> <p>I have installed Anaconda 3, all my R libraries are updated, and I load the Keras library in R.</p> <p>When I try to install using:</p> <pre><code>install_keras() </code></pre> <p>The installation was not completed and an error message appeared: Error: Error 2 occurred installing packages into conda environment r-tensorflow In addition: Warning message:</p> <p>"running command '"C:/PROGRA~3/ANACON~1/Scripts/activate" r-tensorflow &amp;&amp; pip install --upgrade --ignore-installed <a href="https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.3.0-cp36-cp36m-win_amd64.whl" rel="nofollow noreferrer">https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.3.0-cp36-cp36m-win_amd64.whl</a> had status 2"</p> <p>Can someone help me with this issue? Thanks!!!</p>
<p>I got the same error. I think it is because tensorflow is already installed through the install of keras. If I try to install keras again, I get the error that keras and tensorflow can only be installed in a fresh R session, where they had not been installed before. Otherwise there might be a DLL error. So it appears to me that you do not need to (and should not) install tensorflow if you have already installed keras. When I stopped RStudio and started it again, I could install keras and re-install tensorflow successfully.</p>
r|tensorflow|keras|rstudio
0
5,261
46,226,037
How do I multiply a tensor by a matrix
<p>So I have array <code>A</code> with shape <code>[32,60,60]</code> and array <code>B</code> with shape <code>[32,60]</code>. The first dimension is the batch size, so the first dimension is independent. What I want to do is a simple matrix by vector multiplication. So for each sample in <code>A</code> I want to multiply matrix of shape <code>[60,60]</code> with vector of shape <code>[60]</code>. Multiplying across the batch <code>A</code>*<code>B</code> should give me an array of shape <code>[32,60]</code>.</p> <p>This should be simple but I'm doing something wrong:</p> <pre><code>&gt;&gt;&gt; v = np.matmul(A,B) ValueError: shapes (32,60,60) and (32,60) not aligned: 60 (dim 2) != 32 (dim 0) </code></pre> <p>This is for tensorflow, but a numpy answer may suffice if I can convert the notation.</p>
<p>It seems you are trying to <code>sum-reduce</code> the last axes from the two input arrays with that <code>matrix-multiplication</code>. So, with <code>np.einsum</code>, it would be -</p> <pre><code>np.einsum('ijk,ik-&gt;ij',A,B) </code></pre> <p>For <code>tensorflow</code>, we can use <a href="https://www.tensorflow.org/api_docs/python/tf/einsum" rel="nofollow noreferrer"><code>tf.einsum</code></a>.</p> <hr> <p>With <a href="https://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.matmul.html" rel="nofollow noreferrer"><code>np.matmul</code></a>, we need to extend <code>B</code> to <code>3D</code> by introducing a new axis at the last one. Thus, using <code>np.matmul</code> would get the second axis of <code>B's</code> extended version <code>sum-reduced</code> against the third one of <code>A</code>. The result would be <code>3D</code>. So, get the last singleton axis squeezed out with slicing or <code>np.squeeze</code>. Hence, the implementation would be -</p> <pre><code>np.matmul(A,B[...,None])[...,0] </code></pre> <p>We already have an existing <a href="https://www.tensorflow.org/api_docs/python/tf/matmul" rel="nofollow noreferrer"><code>tf.matmul</code></a> function there.</p>
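<p>A short sketch of the two TensorFlow counterparts mentioned above (assuming <code>A</code> and <code>B</code> are tensors of shape <code>[32,60,60]</code> and <code>[32,60]</code>):</p> <pre><code>import tensorflow as tf

v1 = tf.einsum('ijk,ik-&gt;ij', A, B)                        # shape [32, 60]
v2 = tf.squeeze(tf.matmul(A, tf.expand_dims(B, -1)), -1)  # shape [32, 60]
</code></pre>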
python|numpy|tensorflow
2
5,262
58,468,182
Tensorflow: Error when calling optimizer to minimize loss
<p>I am trying to get tensorflow to minimize a loss function with respect to a variable, however, I keep getting an error and I am unsure of the cause. I have created a minimal example:</p> <pre><code>import tensorflow as tf def loss_func(x, target): return tf.pow(x - target, 2) x = tf.Variable(initial_value=1., name='x', dtype=tf.float32) target = tf.constant(value=10., dtype=tf.float32) adam = tf.train.AdamOptimizer() loss = lambda: loss_func(x, target) adam_op = adam.minimize(loss, var_list=tf.trainable_variables) </code></pre> <p>When running this piece of code I get the error </p> <pre><code>AttributeError: 'function' object has no attribute '_id' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "test.py", line 19, in &lt;module&gt; adam_op = adam.minimize(loss, var_list=tf.trainable_variables) File "C:...\optimizer.py", line 403, in minimize grad_loss=grad_loss) File "C:...\optimizer.py", line 461, in compute_gradients tape.watch(var_list) File "C:...\backprop.py", line 808, in watch tape.watch(self._tape, t) File "C:...\tape.py", line 59, in watch pywrap_tensorflow.TFE_Py_TapeWatch(tape._tape, tensor) # pylint: disable=protected-access SystemError: &lt;built-in function TFE_Py_TapeWatch&gt; returned a result with an error set </code></pre> <p>Does anyone know what it is I am doing wrong? Thank you. I am using tensorflow version 1.13.1</p>
<p>I don't quite understand what you are doing. It appears you are trying to optimize x towards 10. If this is the case, you can just assign it to 10. Alternatively, if you will have more than one target, you can take some sort of average of those targets and assign x to that. I believe the problem in the code is that x, the input, is also a trainable variable.</p>
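<p>If a direct assignment is really all that is needed, a minimal graph-mode sketch (reusing the variable and constant from the question) could be:</p> <pre><code>assign_op = x.assign(target)          # overwrite x with the target value
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(assign_op)
</code></pre>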
python|tensorflow
0
5,263
58,194,951
OSError: SavedModel file does not exist at: /content\model\2016/{saved_model.pbtxt|saved_model.pb}
<p>I ran the code </p> <pre><code>export_path=os.getcwd()+'\\model\\'+'2016' with tf.Session(graph=tf.Graph()) as sess: tf.saved_model.loader.load(sess, ["myTag"], export_path) graph = tf.get_default_graph() # print(graph.get_operations()) input = graph.get_tensor_by_name('input:0') output = graph.get_tensor_by_name('output:0') # print(sess.run(output, # feed_dict={input: [test_data[1]]})) tf.train.write_graph(freeze_session(sess), export_path, "my_model.pb", as_text=False) </code></pre> <p>and incurred the following error </p> <pre><code>OSError Traceback (most recent call last) &lt;ipython-input-44-b154e11ca364&gt; in &lt;module&gt;() 3 4 with tf.Session(graph=tf.Graph()) as sess: ----&gt; 5 tf.saved_model.loader.load(sess, ["myTag"], export_path) 6 graph = tf.get_default_graph() 7 # print(graph.get_operations()) 3 frames /usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/loader_impl.py in parse_saved_model(export_dir) 81 (export_dir, 82 constants.SAVED_MODEL_FILENAME_PBTXT, ---&gt; 83 constants.SAVED_MODEL_FILENAME_PB)) 84 85 OSError: SavedModel file does not exist at: /content\model\2016/{saved_model.pbtxt|saved_model.pb} </code></pre> <p>It was previously run on a Windows system and now on iOS. I am not sure if it was because of this. Any help is appreciated. Thank you.</p>
<p>You should avoid using a fixed character as directory separator (like <code>\</code> in your example), as that character differs between operating systems (see <a href="https://docs.python.org/3/library/os.html?highlight=os%20sep#os.sep" rel="noreferrer">os.sep</a>). You might want to use <a href="https://docs.python.org/3.7/library/os.path.html#os.path.join" rel="noreferrer">os.path.join</a> or <a href="https://docs.python.org/3/library/pathlib.html#pathlib.Path" rel="noreferrer">Path</a> to build your path, thereby ensuring suitable paths for the executing operating system are created, e.g.:</p> <pre><code>export_path = os.path.join(os.getcwd(), 'model', '2016') </code></pre>
python|tensorflow|google-colaboratory
5
5,264
69,001,440
Matplotlib pcolormesh with time on x-axis and boolean true/false as the colouring
<p>I want a plot with the xticklabels as datetime-like objects and the colors to be <code>True</code>/<code>False</code> (<code>1</code>,<code>0</code>) depending on whether that time has been sampled.</p> <p>The data I am working with looks like this:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt all_days = pd.date_range(&quot;2001-01-01&quot;, &quot;2020-12-31&quot;, freq=&quot;D&quot;) times = all_days[::10] arr = np.isin(all_days, times).reshape(1, -1) </code></pre> <p>The plotting code I have is here:</p> <p>Producing a plot without the colouring (but the correct labels)</p> <pre class="lang-py prettyprint-override"><code>plt.pcolormesh(all_days, np.ones(1), arr) </code></pre> <p><a href="https://i.stack.imgur.com/C5Wvg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C5Wvg.png" alt="Pcolormesh without the correct colouring" /></a></p> <p>Or without the labels but with the colouring</p> <pre class="lang-py prettyprint-override"><code>plt.pcolormesh(arr) </code></pre> <p><a href="https://i.stack.imgur.com/v0gN3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v0gN3.png" alt="Pcolormesh without the correct x-tick labels" /></a></p> <p>Ultimately, I want a combination of these two plots, the xticklabels of the former and the colouring of the latter.</p>
<p><a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.pcolormesh.html" rel="nofollow noreferrer"><strong><code>matplotlib.pyplot.pcolormesh</code></strong></a> optionally requires <code>X</code> and <code>Y</code> arrays which specify coordinates of the corners of quadrilaterals of the pcolormesh. So you need to pass a proper <code>Y</code> array:</p> <ul> <li>size 1</li> <li>from 0 to 1 as y axis limits</li> </ul> <p>So you just need to use:</p> <pre><code>plt.pcolormesh(all_days, [0, 1], arr) </code></pre> <p><a href="https://i.stack.imgur.com/nJh13.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nJh13.png" alt="enter image description here" /></a></p>
python|pandas|numpy|datetime|matplotlib
2
5,265
68,934,531
How to replace column values with results from for loop?
<p>My dataframe looks something like this:</p> <pre><code>teams x_in_mins y_in_mins z_in_mins team_a 50 120 24 team_b 80 66 30 team_c 30 90 70 </code></pre> <p>I want to convert integer columns (which represent total minutes) to time format (hours &amp; minutes).</p> <p>For this step I've created a for loop:</p> <pre><code>for column in df[[&quot;x_in_mins&quot;,&quot;y_in_mins&quot;,&quot;z_in_mins&quot;]]: print(pd.to_timedelta(df[column], unit='min')) </code></pre> <p>This iterates over the specified columns, converting integers to timedelta.</p> <p>How can I then make the for loop results into a new dataframe?</p> <p>The final dataframe should look like:</p> <pre><code>teams x_in_hrs y_in_hrs z_in_hrs team_a 00:50:00 02:00:00 00:24:00 team_b 01:20:00 01:06:00 00:30:00 team_c 00:30:00 01:30:00 01:10:00 </code></pre>
<p>You can use <code>transform</code>:</p> <pre><code>def foo(col): return pd.to_timedelta(col, unit='min').astype(str).str.rsplit().str[-1] df[[&quot;x_in_mins&quot;,&quot;y_in_mins&quot;,&quot;z_in_mins&quot;]].transform(foo) </code></pre>
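<p>To also rename the columns to <code>*_in_hrs</code> and keep the <code>teams</code> column, a small sketch reusing <code>foo</code> (assuming every value stays below 24 hours, since the day part of the timedelta string is dropped):</p> <pre><code>res = df[['x_in_mins', 'y_in_mins', 'z_in_mins']].transform(foo)
res.columns = [c.replace('_in_mins', '_in_hrs') for c in res.columns]
res.insert(0, 'teams', df['teams'])
</code></pre>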
python|pandas|dataframe|for-loop
3
5,266
44,389,341
how to see summary image of former steps in tensorboard
<p>I'm using tensorboard to visualize the image(CelebA) generated by dcgan Specifically, I created a writer and image summary with:</p> <pre><code>tf.summary.image('generated', image_output) summary_op = tf.summary.merge_all() writer = tf.summary.FileWriter(logdir, graph) summary = sess.run(summary_op) </code></pre> <p>after each 100 step I would add a image summary with:</p> <pre><code>writer.add_summary(image, step) </code></pre> <p>I think the event file of tensorboard save all the images generated at each step since the event file keeps growing larger. But when I run tensorboard I can only see the latest image. Is there any way to see the former images? Or they are not saved in the event file and I can't see them.</p>
<p>The issue that you have raised was common enough to warrant a <a href="https://github.com/tensorflow/tensorflow/issues/4721" rel="nofollow noreferrer">feature request</a> several months ago that was rolled into TensorFlow 1.1.0.</p> <p>A small sliding bar appears below each image summary in Tensorboard with which you can scroll through the summary steps if you upgrade to TensorFlow 1.1.0+.</p>
tensorflow|tensorboard
1
5,267
44,820,115
Remove rows if any of a set of values are null
<p>I have a DataFrame with lots of columns, and I want to remove rows where the values for <em>some</em> columns are null. I know how to do this with one column: </p> <pre><code>df = df[df['Column'] != ''] </code></pre> <p>I want to do this with a set of columns, like so:</p> <pre><code>df = df['' not in [df['Column1'], df['Column2'], df['Column3']]' </code></pre> <p>However, this gives the error:</p> <blockquote> <p>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p> </blockquote> <p>How do I do this?</p>
<p>If values are empty strings, create a subset and for all <code>True</code>s per row add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.all.html" rel="nofollow noreferrer"><code>all</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.any.html" rel="nofollow noreferrer"><code>any</code></a>:</p> <pre><code>df = df[(df[['Column1', 'Column2', 'Column3']] != '').all(axis=1)] df = df[~(df[['Column1', 'Column2', 'Column3']] == '').any(axis=1)] </code></pre> <p>And if values are <code>NaN</code>s, <code>None</code>s use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html" rel="nofollow noreferrer"><code>dropna</code></a> with parameter <code>subset</code>:</p> <pre><code>df = df.dropna(subset=['Column1', 'Column2', 'Column3']) </code></pre> <p>Sample:</p> <pre><code>df = pd.DataFrame({'A':[np.nan,'','p','hh','f'], 'B':['',np.nan,'','','o'], 'C':['a','s','d','f','g'], 'D':['f','g','h','j','k'], 'E':['l','i',np.nan,'u','o'], 'F':['','','o','i',np.nan]}) print (df) A B C D E F 0 NaN a f l 1 NaN s g i 2 p d h NaN o 3 hh f j u i 4 f o g k o NaN df1 = df.dropna(subset=['A', 'B', 'F']) print (df1) A B C D E F 2 p d h NaN o 3 hh f j u i df2 = df[(df[['A', 'B', 'F']] != '').all(axis=1)] print (df2) A B C D E F 4 f o g k o NaN df2 = df[~(df[['A', 'B', 'F']] == '').any(axis=1)] print (df2) A B C D E F 4 f o g k o NaN </code></pre> <p>EDIT:</p> <p>When comparing strings and some column is numeric, you get:</p> <blockquote> <p>TypeError: Could not compare [''] with block values</p> </blockquote> <p>There are 2 solutions for it - compare numpy array created by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.values.html" rel="nofollow noreferrer"><code>values</code></a> or convert values to <code>string</code>s by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.astype.html" rel="nofollow noreferrer"><code>astype</code></a>:</p> <pre><code>df = pd.DataFrame({'A':[np.nan,7,8,8,8], 'B':['',np.nan,'','','o'], 'C':['a','s','d','f','g'], 'D':['f','g','h','j','k'], 'E':['l','i',np.nan,'u','o'], 'F':['','','o','i',np.nan]}) print (df) A B C D E F 0 NaN a f l 1 7.0 NaN s g i 2 8.0 d h NaN o 3 8.0 f j u i 4 8.0 o g k o NaN df2 = df[(df[['A', 'B', 'F']].values != '').all(axis=1)] print (df2) A B C D E F 4 8.0 o g k o NaN df2 = df[(df[['A', 'B', 'F']].astype(str) != '').all(axis=1)] print (df2) A B C D E F 4 8.0 o g k o NaN </code></pre>
python|pandas
3
5,268
61,172,226
Memory leak with H20 in Python Web Application
<p>My decision engine is built on the python-flask framework with uWSGI and Nginx. As part of evaluating a user through an HTTP request, I run scorecards with h2o==3.20.0.7 to generate a score to make a decision on the user. Given below some clarity on how I'm using h2o in my app</p> <pre><code>h2o.init() # initialize predictions = h2o.mojo_predict_pandas(features_df, MODEL_MOJO_ZIP_FILE_PATH, MODEL_GENMODEL_JAR_PATH) # generate score # features_df -&gt; pandas DF </code></pre> <p>H2o details from app start </p> <pre><code>-------------------------- ---------------------------------------- H2O cluster uptime: 01 secs H2O cluster timezone: Etc/UTC H2O data parsing timezone: UTC H2O cluster version: 3.20.0.7 H2O cluster version age: 1 year, 7 months and 10 days !!! H2O cluster name: H2O_from_python_unknownUser_t8cqu9 H2O cluster total nodes: 1 H2O cluster free memory: 1.656 Gb H2O cluster total cores: 4 H2O cluster allowed cores: 4 H2O cluster status: accepting new members, healthy H2O connection url: http://localhost:54321 H2O connection proxy: H2O internal security: False H2O API Extensions: XGBoost, Algos, AutoML, Core V3, Core V4 -------------------------- ---------------------------------------- </code></pre> <p>Both H2o (running as a separate service) and the flask app are running in the same server (3 - 8 servers under a load balancer). </p> <p>Sometimes the memory usage is steadily increasing and throwing <code>Cannot allocate memory</code></p> <p>while computing the scorecards. Then it is settling automatically by itself after sometimes. The scorecard runs along with the other rules (sequential run) under an HTTP request but the error is reporting only while computing scorecards. Assuming, it requires more memory as it involves with h2o. The traffic looks to be the same across this cycle. So I hope this is not due to the high traffic.</p> <p>As per my investigation, some memory is hanging somewhere and it is not releasing.</p> <p><strong>I did the following workarounds</strong> to release the hanged memory and reduce the impact </p> <p><strong>1</strong> GC in h2o from python </p> <p><a href="https://aichamp.wordpress.com/2016/11/10/calling-h2o-garbage-collect-from-python" rel="nofollow noreferrer">https://aichamp.wordpress.com/2016/11/10/calling-h2o-garbage-collect-from-python</a></p> <p><a href="https://stackoverflow.com/questions/45435739/python-h2o-memory-management">Python H2O Memory Management</a></p> <ul> <li>Not experienced a positive impact. </li> </ul> <p><strong>2</strong> Scheduled service restart - Gracefully replacing the old servers with new servers.</p> <ul> <li>Experienced a positive impact. 60-70% of the errors are gone.</li> </ul> <p>I would like to understand what is happening internally and introduce a proper fix rather than a workaround. Help would be highly appreciated. </p> <p>For your information,</p> <p><strong>I haven't tried</strong></p> <p><strong>1</strong> updating H2o cluster to a new version as the current version is too old (1 year, 7 months and 11 days) - Agree that it is better to use the latest version but no guarantee that the same will not happen again and the effort required is also more in terms of validating the score, result, etc </p> <p><strong>2</strong> I haven't limited the memory usage of H2o by using <code>min_mem_size</code> as I don't want the scorecard evaluation to fail. 
</p> <p>and </p> <p><strong>I'm planning to</strong> </p> <p><strong>1</strong> add a memory profiler to easily understand the memory utilization of each piece/process related to my app</p> <p><strong>edit</strong></p> <p><strong>2</strong> separate out h2o from the flask app and host it on different servers so that scaling is easy. - still, the same issue is possible. </p> <ul> <li>I went through some memory profilers but still couldn't decide which one is best for my current situation. I would like to get a suggestion on this as well. </li> </ul> <p>Thanks </p>
<p>The approach you describe is different from what I would recommend.</p> <p>For simplicity's sake (ignoring multiple servers and load balancing) I am going to draw your setup's architecture diagram like this:</p> <pre><code>[Client HTTP program] -&gt; [python flask app] -&gt; [java scoring backend] </code></pre> <p>This high-level architecture is fine, but you've gone about implementing the java scoring layer part in what I will say is the most difficult way possible instead of the intended way.</p> <p>The intended way is to use only the MOJO and the lightweight MOJO runtime. One straightforward way to do that is to wrap the MOJO in a very simple minimal web service.</p> <p>Here are links to the javadoc for the MOJO:</p> <ul> <li><a href="http://docs.h2o.ai/h2o/latest-stable/h2o-genmodel/javadoc/overview-summary.html#whatisamojo" rel="nofollow noreferrer">http://docs.h2o.ai/h2o/latest-stable/h2o-genmodel/javadoc/overview-summary.html#whatisamojo</a></li> </ul> <p>and a github repo demonstrating how to use a MOJO in a simple Java servlet container:</p> <ul> <li><a href="https://github.com/h2oai/app-mojo-servlet" rel="nofollow noreferrer">https://github.com/h2oai/app-mojo-servlet</a></li> </ul> <p>Also, here is an older github repo you might find useful that uses the POJO instead of the MOJO. The MOJO is better. Use the MOJO and not the POJO, but you may find reading the documentation in this repo helpful:</p> <ul> <li><a href="https://github.com/h2oai/app-consumer-loan" rel="nofollow noreferrer">https://github.com/h2oai/app-consumer-loan</a></li> </ul> <p>Note if you do things this way you can still scale/load balance the [python flask app] and [java scoring backend] services separately if you want, although my expectation would be the java will be substantially faster than the python, so it might be easier just to scale the python and java together in groups of two, and have the python make requests to a local java.</p> <hr> <p>OK, now that I've talked about the best practice way, let me point out some issues I can spot in what you are doing now (the difficult way).</p> <ol> <li><p>You didn't mention whether you are scoring one row at a time or doing batch scoring. Using the full H2O-3 server itself for scoring is much better suited for batch scoring and horrendously inefficient for scoring one row at a time. The parsing process is heavyweight and the scoring process is heavyweight for one row at a time. This will impact latency.</p></li> <li><p>While you can read in the MOJO object itself to a full H2O-3 server process and use it for batch scoring, doing this in a real-time HTTP workflow was never the intent. (Interestingly, support for doing this wasn't even possible for about the first 5 years of H2O-3's existence.)</p></li> <li><p>There are definitely memory leaks if you don't clean up after yourself.</p> <p>Running the H2O-3 server process as a long-running service for scoring is not recommended. But if you really want to do it, take these steps:</p> <ul> <li><p>The in-memory objects need to be cleaned out. You can find them with the h2o.ls() and remove them with the h2o.rm() calls in the R/python client APIs. Both the datasets and the scores would need to be cleaned up. You probably don't want to remove the model itself, though.</p></li> <li><p>I don't expect you need to manually trigger garbage collections in the Java process, but you can if you want to. 
Personally, I only do that when I have turned on Java flags like -XX:+PrintGCDetails -XX:+PrintGCTimeStamps so I can see the effect of the compaction on how much free heap memory remains after a Full GC. I do this so I can see whether objects are really being retained, so I can confirm they are getting cleared out. I like to give those logs to <a href="http://gceasy.io" rel="nofollow noreferrer">http://gceasy.io</a> and visualize them.</p></li> <li><p>Do monitor the logs to see the free heap remaining after a Full GC.</p></li> <li><p>Even if you are doing the right stuff in terms of cleaning up memory, give the H2O-3 server process lots of memory. I wouldn't even run it on my laptop with a smaller -Xmx than 5G. As such, I would characterize the original poster's Java heap as severely under-provisioned (<code>H2O cluster free memory: 1.656 Gb</code>).</p></li> <li><p>If you see the free heap remaining after Full GC creeping up, restart the Java process, since this is not the standard use case and not something that gets thoroughly tested. The H2O-3 clusters are thought of by the development team as more as short-to-medium lifetime services (hours/days) than long-running services (months+, e.g. nginx/apache).</p></li> </ul></li> </ol> <hr> <p>Hope that helps!</p>
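<p>As a rough sketch of the cleanup described in point 3 above, assuming the request features and predictions actually live as frames on the H2O cluster (rather than being scored purely through the MOJO runtime), it could look like this:</p> <pre><code>import h2o

frame = h2o.H2OFrame(features_df)   # hypothetical upload of the per-request features
preds = model.predict(frame)        # 'model' is assumed to be a loaded H2O model
result = preds.as_data_frame()

# free the per-request objects so the Java heap does not keep growing
h2o.remove(frame)
h2o.remove(preds)
# h2o.ls() can be used to verify what is still held in cluster memory
</code></pre>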
python|pandas|h2o
1
5,269
60,841,235
How to prepare my dataset(Not Images) to implement FedAVG on Tensorflow Federated?
<p>I want to train a federated model with the FedAvg Algorithm on TFF (Tensorflow Federated) using a 3-channel (X, Y, Z) accelerometer dataset with a time frame length of 128.</p> <p>My goal is to train a federated model using </p> <pre><code>tff.learning.from_keras_model </code></pre> <p>The guides on the TensorFlow Federated website mostly deal with datasets which already comes in the desired format for the model</p> <pre><code>tensorflow_federated.python.simulation.hdf5_client_data.HDF5ClientData </code></pre> <p>I'm quite lost on how to convert my raw dataset to the desired format for TFF.</p> <p>The dataset I am using has the following shape: </p> <pre><code>X: (-1, 128, 3) and Y: (-1) </code></pre> <p>X: are floats Y: are the integer labels of my dataset ranging from 0-6</p> <p>Can anybody give me some pointers/examples on how I can tackle this?</p>
<p>First, for federated learning the dataset will need to be partitioned by user/participant. Does the dataset have a partitioning of the accelerometer readings and labels by user? If not, this is probably a task suited from standard centralized learning rather than federated learning.</p> <p>If there is a user partitioning, the following questions explain how to setup a <code>tff.simulation.ClientData</code> to model this distributed dataset. The fact that the data is images or not shouldn't matter, the techniques are applicable to any supervised learning of X, Y datasets:</p> <ul> <li><a href="https://stackoverflow.com/questions/60265798/tff-how-define-tff-simulation-clientdata-from-clients-and-fn-function">How define tff.simulation.ClientData.from_clients_and_fn Function?</a></li> <li><a href="https://stackoverflow.com/questions/59741397/federated-learning-convert-my-own-image-dataset-into-tff-simulation-clientdata">convert my own image dataset into tff simulation Clientdata</a></li> </ul>
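<p>A very rough sketch of the per-user partitioning step, assuming the raw arrays can first be split into a dict keyed by participant id (<code>data_per_user</code> below is hypothetical, and the exact TFF constructor name differs between versions):</p> <pre><code>import collections
import tensorflow as tf

def make_client_dataset(x_client, y_client):
    # x_client: float array of shape [n, 128, 3]; y_client: int labels of shape [n]
    ds = tf.data.Dataset.from_tensor_slices(
        collections.OrderedDict(x=x_client, y=y_client))
    return ds.shuffle(buffer_size=len(y_client)).batch(20)

client_datasets = {cid: make_client_dataset(x, y)
                   for cid, (x, y) in data_per_user.items()}
</code></pre>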
python|deep-learning|tensorflow-federated
0
5,270
71,739,162
Basic question on transforming object into float
<p>I am new at programming and I'm having this trouble:</p> <p><code>y = pd.to_numeric(final_list[1])</code></p> <blockquote> <p>Traceback (most recent call last):</p> <p>File pandas_libs\lib.pyx:2315 in pandas._libs.lib.maybe_convert_numeric</p> <p>ValueError: Unable to parse string &quot;- &quot;</p> <p>During handling of the above exception, another exception occurred:</p> <p>Traceback (most recent call last):</p> <p>Input In [66] in &lt;cell line: 1&gt;</p> </blockquote> <pre><code>y = pd.to_numeric(final_list[1]) </code></pre> <blockquote> <p>File ~\anaconda3\lib\site-packages\pandas\core\tools\numeric.py:184 in to_numeric values, _ = lib.maybe_convert_numeric(</p> <p>File pandas_libs\lib.pyx:2357 in pandas._libs.lib.maybe_convert_numeric</p> <p>ValueError: Unable to parse string &quot;- &quot; at position 265066&quot;</p> </blockquote> <p>I tried</p> <pre><code>y = pd.to_numeric(final_list[1]) </code></pre> <p>and also</p> <pre><code>y = final_list[1].astype(float, errors = 'raise') </code></pre> <p>both say it is not possible because of &quot;- &quot;</p> <p>What do I need to do? Transform &quot;- &quot; into NaN? How do I do that?</p>
<p>One of the solutions is to convert such non-convertible cases to NaN:</p> <pre><code>y = pd.to_numeric(final_list[1], errors='coerce') </code></pre>
python|pandas|dataset|transformation
0
5,271
71,679,375
Pandas dataFrame : find if the current value is greater than values of last 10 rows
<p>Consider below dataFrame. I want to calculate if the current value of price column is greater than last 10 values. I was thinking to use shift, but not sure how to use for last 10 rows.</p> <pre><code> price 220 3.337 221 3.320 222 3.290 223 3.291 224 3.312 225 3.255 226 3.216 227 3.245 228 3.275 229 3.282 230 3.370 231 3.396 232 3.375 233 3.369 234 3.335 235 3.344 236 3.365 237 3.373 238 3.414 239 3.378 </code></pre> <p>Output dataframe:</p> <pre><code> price isGreater 220 3.337 NaN 221 3.320 NaN 222 3.290 NaN 223 3.291 NaN 224 3.312 NaN 225 3.255 NaN 226 3.216 NaN 227 3.245 NaN 228 3.275 NaN 229 3.282 NaN 230 3.370 1.0 231 3.396 1.0 232 3.375 NaN 233 3.369 NaN 234 3.335 NaN 235 3.344 NaN 236 3.365 NaN 237 3.373 NaN 238 3.414 1.0 239 3.378 NaN </code></pre>
<p>You can use <code>rolling</code>+<code>max</code> to get the max of the last 10 rows, if greater than it, then it's greater or equal than all (including self, thus the +1):</p> <pre><code>df['isGreater'] = df['price'].ge(df['price'].rolling(10+1).max()) </code></pre> <p>NB. technically if you really want to compare only to the previous rows and not self (for example to use strict comparison), you would need to shift:</p> <pre><code>df['isGreater'] = df['price'].gt(df['price'].shift().rolling(10).max()) </code></pre> <p>output:</p> <pre><code> price isGreater 220 3.337 False 221 3.320 False 222 3.290 False 223 3.291 False 224 3.312 False 225 3.255 False 226 3.216 False 227 3.245 False 228 3.275 False 229 3.282 False 230 3.370 True 231 3.396 True 232 3.375 False 233 3.369 False 234 3.335 False 235 3.344 False 236 3.365 False 237 3.373 False 238 3.414 True 239 3.378 False </code></pre>
pandas|dataframe
1
5,272
71,588,301
how to manipulate column header strings in a dataframe
<p>How to remove the part of the string &quot;test_&quot; in column headers? Imagine the dataframe has many columns, so df.rename(columns={&quot;test_Stock B&quot;:&quot;Stock B&quot;}) is not the solution I am looking for!</p> <pre><code> import pandas as pd data = {'Stock A':[1, 1, 1, 1], 'test_Stock B':[3, 3, 4, 4], 'Stock C':[4, 4, 3, 2], 'test_Stock D':[2, 2, 2, 3], } df = pd.DataFrame(data) # expect data = {'Stock A':[1, 1, 1, 1], 'Stock B':[3, 3, 4, 4], 'Stock C':[4, 4, 3, 2], 'Stock D':[2, 2, 2, 3], } df_expacte = pd.DataFrame(data) </code></pre> <p>I expect all column headers to be labeled only as &quot;Stock x&quot; instead of &quot;test_Stock x&quot;. Thank you for the ideas!</p>
<p>You can redefine the columns via list comprehension with:</p> <pre><code>df.columns = [x.replace(&quot;test_&quot;,&quot;&quot;) for x in df] </code></pre> <p>This outputs:</p> <pre><code> Stock A Stock B Stock C Stock D 0 1 3 4 2 1 1 3 4 2 2 1 4 3 2 3 1 4 2 3 </code></pre>
python|pandas|dataframe|header
3
5,273
42,260,699
pandas.DataFrame: how to applymap() with external arguments
<p>SEE UPDATE AT THE END FOR A MUCH CLEARER DESCRIPTION.</p> <p>According to <a href="http://pandas.pydata.org/pandas-docs/version/0.18.1/generated/pandas.DataFrame.apply.html" rel="nofollow noreferrer">http://pandas.pydata.org/pandas-docs/version/0.18.1/generated/pandas.DataFrame.apply.html</a> you can pass external arguments to an apply function, but the same is not true of applymap: <a href="http://pandas.pydata.org/pandas-docs/version/0.18.1/generated/pandas.DataFrame.applymap.html#pandas.DataFrame.applymap" rel="nofollow noreferrer">http://pandas.pydata.org/pandas-docs/version/0.18.1/generated/pandas.DataFrame.applymap.html#pandas.DataFrame.applymap</a></p> <p>I want to apply an elementwise function <code>f(a, i)</code>, where <code>a</code> is the element, and <code>i</code> is a manually entered argument. The reason I need that is because I will do <code>df.applymap(f)</code> in a loop <code>for i in some_list</code>. </p> <p>To give an example of what I want, say I have a DataFrame <code>df</code>, where each element is a <code>numpy.ndarray</code>. I want to extract the <code>i</code>-th element of each <code>ndarray</code> and form a new DataFrame from them. So I define my <code>f</code>:</p> <pre><code>def f(a, i): return a[i] </code></pre> <p>So that I could make a loop which would return the i-th element of each of the <code>np.ndarray</code> contained in <code>df</code>:</p> <pre><code>for i in some_series: b[i] = df.applymap(f, i=i) </code></pre> <p>so that in each iteration, it would pass my value of <code>i</code> into the function <code>f</code>. </p> <p>I realise it would all have been easier if I had used MultiIndexing for <code>df</code> but for now, this is what I'm working with. Is there a way to do what I want within pandas? I would ideally like to avoid for-looping through all the columns in <code>df</code>, and I don't see why <code>applymap</code> doesn't take keyword arguments, while <code>apply</code> does.</p> <p>Also, the way I currently understand it (I may be wrong), when I use <code>df.apply</code> it would give me the <code>i</code>-th element of each row/column, instead of the <code>i</code>-th element of each <code>ndarray</code> contained in <code>df</code>.</p> <hr> <p>UPDATE:</p> <p>So I just realised I could split <code>df</code> into Series and then use the <code>pd.Series.apply</code> which could do what I want. Let me just generate some data to show what I mean:</p> <pre><code>def f(a,i): return a[i] b = pd.Series(index=range(10), dtype=object) for i in b.index: b[i] = np.random.rand(5) b.apply(f,args=(1,)) </code></pre> <p>Does exactly what I expect, and want it to do. However, trying with a DataFrame:</p> <pre><code>b = pd.DataFrame(index=range(4), columns=range(4), dtype=object) for i in b.index: for col in b.columns: b.loc[i,col] = np.random.rand(10) b.apply(f,args=(1,)) </code></pre> <p>Gives me <code>ValueError: Shape of passed values is (4, 10), indices imply (4, 4)</code>.</p>
<p>You can do it with a helper function and <code>map</code>:</p> <pre><code>def matchValue(value, dictionary): return dictionary[value] a = {'first': 1, 'second': 2} b = {'first': 10, 'second': 20} df['column'] = df['column'].map(lambda x: matchValue(x, a)) </code></pre>
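<p>If the goal is specifically to pass an extra argument through <code>applymap</code>, a small sketch that keeps the element-wise <code>f(a, i)</code> signature is to wrap it with <code>functools.partial</code> or a lambda:</p> <pre><code>from functools import partial

for i in some_series:
    b[i] = df.applymap(partial(f, i=i))
    # equivalently: b[i] = df.applymap(lambda a: f(a, i=i))
</code></pre>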
python|pandas|dataframe
3
5,274
42,578,133
Is there an enaml widget to display a table?
<p>I would like to display a (numeric and/or textual) table inside a Python GUI built with enaml, but surprisingly there seem to be no enaml widget for that.</p> <p>Some years ago, <a href="https://groups.google.com/forum/#!topic/enaml/soUKRvXT_BU" rel="nofollow noreferrer">here</a>, they said there would have been some developments in that direction, but nothing followed.</p> <p>More recently, <a href="https://github.com/enthought/traits-enaml/blob/update-data-frame-table/traits_enaml/widgets/data_frame_table.py" rel="nofollow noreferrer">here</a>, they provided an enaml widget for displaying a pandas dataframe (that would also be OK for the task) but it does not seem to work with the latest version of traits_enaml (I get an error message from traits_enaml.utils: 'cannot import name get_unicode_string').</p> <p>So the question is: Does anybody know of a table widget for enaml? For this (very useful, I'd say) widget are you forced to abandon enaml and resort to traditional imperative GUI frameworks?</p> <p>I am a newbie, so I apologize beforehand for errors or a missed straightforward solution.</p>
<p>You can either use RawWidget to use your own Qt TableView or have a look at an implementation here: <a href="https://github.com/frmdstryr/enamlx" rel="nofollow noreferrer">https://github.com/frmdstryr/enamlx</a></p>
python-2.7|user-interface|pandas|dataframe|enaml
1
5,275
69,916,993
define range in pandas column based on define input from list
<p>I have one data frame, wherein I need to apply a range in one column based on the list provided. I am able to achieve results using fixed values, but the input values will be dynamic in a list format and the range will be based on the input.</p> <p>My data frame looks like below:</p> <pre><code>import pandas as pd rangelist=[90,70,50] data = {'Result': [75,85,95,45,76,8,10,44,22,65,35,67]} sampledf=pd.DataFrame(data) </code></pre> <p>rangelist is my list; from that I need to create ranges like 100-90, 90-70 &amp; 70-50. These ranges may differ from time to time; until now I have been achieving results using the below function.</p> <pre><code>def cat(value): cat='' if (value&gt;90): cat='90-100' if (value&lt;90 and value&gt;70 ): cat='90-70' else: cat='&lt; 50' return cat sampledf['category']=sampledf['Result'].apply(cat) </code></pre> <p>How can I pass a dynamic value into the function &quot;cat&quot; based on the range list? I will be grateful if someone can help me to achieve the below result.</p> <pre><code>Result category 0 75 90-70 1 85 90-70 2 95 &lt; 50 3 45 &lt; 50 4 76 90-70 5 8 &lt; 50 6 10 &lt; 50 7 44 &lt; 50 8 22 &lt; 50 9 65 &lt; 50 10 35 &lt; 50 11 67 &lt; 50 </code></pre>
<p>I would recommend <code>pd.cut</code> for this:</p> <pre><code>sampledf['Category'] = pd.cut(sampledf['Result'], [-np.inf] + sorted(rangelist) + [np.inf]) </code></pre> <p>Output:</p> <pre><code> Result Category 0 75 (70.0, 90.0] 1 85 (70.0, 90.0] 2 95 (90.0, inf] 3 45 (-inf, 50.0] 4 76 (70.0, 90.0] 5 8 (-inf, 50.0] 6 10 (-inf, 50.0] 7 44 (-inf, 50.0] 8 22 (-inf, 50.0] 9 65 (50.0, 70.0] 10 35 (-inf, 50.0] 11 67 (50.0, 70.0] </code></pre>
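<p>If readable labels are needed, <code>pd.cut</code> also accepts a <code>labels</code> argument; a sketch (the label strings here are only an illustration and must match the number of bins):</p> <pre><code>import numpy as np

edges = [-np.inf] + sorted(rangelist) + [np.inf]
labels = ['&lt; 50', '50-70', '70-90', '90-100']   # one label per bin, chosen for illustration
sampledf['category'] = pd.cut(sampledf['Result'], bins=edges, labels=labels)
</code></pre>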
python-3.x|pandas|list|function
0
5,276
69,807,435
Export dataframe into different sheets in an Excel without deleting the existing ones
<p>I have a dataframe df1 and I want to export it to an Excel file which already has some sheets.</p> <p>I tried using:</p> <pre><code>writer = pd.ExcelWriter(file_path, engine = 'xlsxwriter') df1.to_excel(writer, sheet_name = 'test1', index = False) writer.save() </code></pre> <p>This code deletes the existing sheets, and exports df1 into a sheet named test1.</p>
<p>Check out the documentation <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_excel.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_excel.html</a> there is an append mode for ExcelWriter</p>
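<p>A minimal sketch of that append mode (it needs the <code>openpyxl</code> engine, since <code>xlsxwriter</code> cannot open existing files):</p> <pre><code>with pd.ExcelWriter(file_path, engine='openpyxl', mode='a') as writer:
    df1.to_excel(writer, sheet_name='test1', index=False)
</code></pre>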
python|pandas
1
5,277
72,268,602
Can't pivot or transfer from long to wide format?
<p>I have the following csv <a href="https://justpaste.it/8spqq" rel="nofollow noreferrer">https://justpaste.it/8spqq</a></p> <p>I am trying to pivot it and transfer from long to wide format.</p> <pre><code>import pandas as pd pd.DataFrame(pd.pivot_table(df, values='data_point_value', index='requestId', columns='data_point_key', aggfunc=','.join).to_records()) </code></pre> <p>But for some reason I am getting:</p> <p><a href="https://i.stack.imgur.com/NHxxw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NHxxw.png" alt="enter image description here" /></a></p> <p>Please advise what I am missing here?</p> <p>When trying to do it with</p> <pre><code>df.pivot(values='data_point_value', index='requestId', columns='data_point_key') </code></pre> <p>I am getting an error:</p> <pre><code>ValueError: Index contains duplicate entries, cannot reshape </code></pre>
<p>So, with data you posted:</p> <pre class="lang-py prettyprint-override"><code>df = pd.read_table(&quot;8spqq.txt&quot;, sep=&quot;,&quot;).drop(columns=&quot;Unnamed: 0&quot;) print(df) # [598 rows x 3 columns] </code></pre> <p>The problem comes from the fact that the following values in <code>data_point_key</code> are repeated several times for each <code>requestId</code>:</p> <pre class="lang-py prettyprint-override"><code>print( df.loc[ df.duplicated(subset=[&quot;requestId&quot;, &quot;data_point_key&quot;], keep=False), &quot;data_point_key&quot;, ].unique() ) # duplicated values ['requestColumnGrants.columnInfoDetails.tags.name' 'requestColumnGrants.columnInfoDetails.name' 'requestColumnGrants.columnInfoDetails.trinoName' 'requestColumnGrants.operationType' 'requestColumnGrants.operationPayload'] </code></pre> <p>So, you can either drop them and pivot:</p> <pre class="lang-py prettyprint-override"><code>new_df = ( df .drop_duplicates(subset=[&quot;requestId&quot;, &quot;data_point_key&quot;], keep=False) .pivot(values=&quot;data_point_value&quot;, index=&quot;requestId&quot;, columns=&quot;data_point_key&quot;) ) print(new_df) # Output data_point_key requestColumnGrants.columnInfoDetails.tags.riskFactor ... userIds.0 requestId ... 17 HIGH ... 12 19 HIGH ... 12 20 HIGH ... 12 21 HIGH ... 12 22 HIGH ... 12 23 HIGH ... 12 24 HIGH ... 12 25 HIGH ... 12 26 HIGH ... 12 27 HIGH ... 12 [10 rows x 33 columns] </code></pre> <p>Or, if you need to keep them, then I suggest dealing with them first after extracting them, then merge them back and pivot:</p> <pre class="lang-py prettyprint-override"><code>temp_df = df.loc[df.duplicated(subset=[&quot;requestId&quot;, &quot;data_point_key&quot;], keep=False), :] df = df.drop(temp_df.index) temp_df = ( temp_df.groupby(by=[&quot;requestId&quot;, &quot;data_point_key&quot;]) .agg(&quot;,&quot;.join) .reset_index(drop=False) ) new_df = ( pd.concat([df, temp_df], axis=0) .drop_duplicates(subset=[&quot;requestId&quot;, &quot;data_point_key&quot;], keep=False) .pivot(values=&quot;data_point_value&quot;, index=&quot;requestId&quot;, columns=&quot;data_point_key&quot;) ) print(new_df) # Ouput data_point_key requestColumnGrants.columnInfoDetails.name ... userIds.0 requestId ... 17 credit_score,age,region,gender,year_income,job... ... 12 19 region,year_income,job,firstname,join_date,cre... ... 12 20 join_date,year_income,id,credit_score,age,last... ... 12 21 firstname,age,lastname,id,job,year_income,regi... ... 12 22 job,firstname,gender,lastname,region,age,id,cr... ... 12 23 age,gender,lastname,credit_score,id,firstname,... ... 12 24 id,gender,join_date,age,region,firstname,year_... ... 12 25 year_income,id,lastname,credit_score,age,job,f... ... 12 26 job,year_income,region,id,join_date,credit_sco... ... 12 27 age,lastname,gender,region,firstname,year_inco... ... 12 [10 rows x 36 columns] </code></pre>
python-3.x|pandas|dataframe|pivot|pivot-table
1
5,278
72,446,227
Pandas Timeseries Plotting from Multi Index Dataframe
<p>I have a multi index data frame which looks like:</p> <pre><code> col1 col2 col3 timestamp type class 1653385980729 audio dc 3 10 1 ec 5 7 2 video dc 10 18 5 ec 7 14 4 1653385981729 audio dc 4 8 4 ec 5 8 2 video dc 9 15 3 ec 9 17 6 1653385982729 audio dc 3 10 3 ec 6 8 2 video dc 11 16 6 ec 9 18 6 . . . </code></pre> <p>Now I am trying to create some graphs out of columns of this dataframe. Each graph for different columns will look like this (don't mind the random values):</p> <p><a href="https://i.stack.imgur.com/sWzVt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sWzVt.jpg" alt="example graph" /></a></p> <p>When I try to draw it by separating columns and using <code>plot()</code> it becomes an absolute mess. How should I manipulate the dataframe to achieve what I want?</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>Series.unstack</code></a> by second and third level with flatten <code>MultiIndex in columns</code>:</p> <pre><code>df1 = df['col1'].unstack([1,2]) df1.columns = [f'{a}/{b}' for a, b in df1.columns] print (df1) audio/dc audio/ec video/dc video/ec timestamp 1653385980729 3 5 10 7 1653385981729 4 5 9 9 1653385982729 3 6 11 9 df1.plot() </code></pre>
python|pandas|dataframe|matplotlib
2
5,279
50,324,416
Insert new rows in pandas dataframe, altering some timestamps while keeping other data
<pre><code>1 2016-10-01 01:00:00 1014.7 23.6 2 2016-10-01 02:00:00 1014.3 23.6 3 2016-10-01 03:00:00 1014.3 23.8 4 2016-10-01 04:00:00 1014.3 23.8 5 2016-10-01 05:00:00 1014.4 24.3 6 2016-10-01 06:00:00 1014.9 24.6 7 2016-10-01 07:00:00 1015.6 25.7 8 2016-10-01 08:00:00 1015.8 26 9 2016-10-01 09:00:00 1016.3 27.3 10 2016-10-01 10:00:00 1016.5 25.8 11 2016-10-01 11:00:00 1016.6 26 12 2016-10-01 12:00:00 1016.6 27.3 </code></pre> <p>I have a dataframe as shown above with a timestamp column and some pressure columns. The problem is that the timestamps are hourly and I need a 10-minute interval time series. I would therefore like to insert 5 new rows after each row, where I add 10 minutes to the previous timestamp and just keep the pressure data. Can anyone help me with this? It would be much appreciated. </p>
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html" rel="nofollow noreferrer"><code>resample</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.resample.Resampler.ffill.html" rel="nofollow noreferrer"><code>ffill</code></a>:</p> <pre><code>print (df) date a b 1 2016-10-01 01:00:00 1014.7 23.6 2 2016-10-01 02:00:00 1014.3 23.6 3 2016-10-01 03:00:00 1014.3 23.8 4 2016-10-01 04:00:00 1014.3 23.8 5 2016-10-01 05:00:00 1014.4 24.3 6 2016-10-01 06:00:00 1014.9 24.6 7 2016-10-01 07:00:00 1015.6 25.7 8 2016-10-01 08:00:00 1015.8 26.0 9 2016-10-01 09:00:00 1016.3 27.3 10 2016-10-01 10:00:00 1016.5 25.8 11 2016-10-01 11:00:00 1016.6 26.0 12 2016-10-01 12:00:00 1016.6 27.3 </code></pre> <hr> <pre><code>df['date'] = pd.to_datetime(df['date']) df = df.set_index('date').resample('10T').ffill() print (df.head(10)) a b date 2016-10-01 01:00:00 1014.7 23.6 2016-10-01 01:10:00 1014.7 23.6 2016-10-01 01:20:00 1014.7 23.6 2016-10-01 01:30:00 1014.7 23.6 2016-10-01 01:40:00 1014.7 23.6 2016-10-01 01:50:00 1014.7 23.6 2016-10-01 02:00:00 1014.3 23.6 2016-10-01 02:10:00 1014.3 23.6 2016-10-01 02:20:00 1014.3 23.6 2016-10-01 02:30:00 1014.3 23.6 </code></pre>
python|pandas
3
5,280
50,518,591
CNN: why it is making no difference whether i measure accuracy by logits or softmax layer?
<p>While measuring the accuracy of a CNN, I understand that I should compare the output of the softmax layer (predicted label) to the target label. But even if I compare logits (which are the output of the last fully connected layer, as per my understanding) with target labels, I get almost the same accuracy. Here is the relevant part of my code:</p> <pre><code>matches = tf.equal(tf.argmax(y_pred,1),tf.argmax(y,1)) acc = tf.reduce_mean(tf.cast(matches,tf.float32)) </code></pre> <p>where y_pred is the output of the final normal fully connected layer without any activation function (only matrix multiplication and bias addition w*x+b)</p> <pre><code>y_pred = normal_full_layer(second_hidden_layer,6) </code></pre> <p><strong>6</strong> because I have 6 classes.</p> <p>Here is the accuracy graph using <code>y_pred</code>: <a href="https://i.stack.imgur.com/X07qA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X07qA.png" alt="Using only Logits"></a></p> <p><strong>Accuracy is around 96%</strong></p> <p>Now if I do the same (calculate accuracy) by applying a softmax activation on <code>y_pred</code>, let's call it <code>pred_softmax</code>, I get almost the same accuracy</p> <pre><code>pred_softmax = tf.nn.softmax(y_pred) </code></pre> <p><strong>Accuracy</strong> Graph using softmax: <a href="https://i.stack.imgur.com/pDaov.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pDaov.png" alt="enter image description here"></a></p>
<p>In fact the accuracy should be exactly equal. Taking the argmax of an array of logits should return the same as taking the argmax of the softmax of that array. This is because the softmax function maps larger logits to be closer to 1 in a strictly increasing way.</p> <p>The softmax function takes a set of outputs (an array) <code>y</code> and maps it to <code>exp(y)/sum(exp(y))</code>, the larger the <code>y[i]</code> the larger the softmax of <code>y[i]</code> and so it must be that <code>argmax(y[i])==argmax(softmax(y[i]))</code></p>
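<p>A quick numeric illustration of that claim:</p> <pre><code>import numpy as np

logits = np.array([2.0, 1.0, 0.1])
probs = np.exp(logits) / np.sum(np.exp(logits))   # softmax

print(np.argmax(logits), np.argmax(probs))        # both print 0
</code></pre>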
python|tensorflow|machine-learning|deep-learning
3
5,281
45,687,731
How to do intersection match between 2 DataFrames in Pandas?
<p>Assume there exist 2 <strong>DataFrames</strong> <code>A</code> and <code>B</code> like the following</p> <p><code>A</code>:</p> <pre><code>a A b B c C </code></pre> <p><code>B</code>:</p> <pre><code>1 2 3 4 </code></pre> <p>How to produce a <strong>DataFrame</strong> <code>C</code> like</p> <pre><code>a A 1 2 a A 3 4 b B 1 2 b B 3 4 c C 1 2 c C 3 4 </code></pre> <p>Is there some function in Pandas that can do this operation?</p>
<p>First all values has to be unique in each <code>DataFrame</code>.</p> <p>I think you need <code>product</code>:</p> <pre><code>from itertools import product A = pd.DataFrame({'a':list('abc')}) B = pd.DataFrame({'a':[1,2]}) C = pd.DataFrame(list(product(A['a'], B['a']))) print (C) 0 1 0 a 1 1 a 2 2 b 1 3 b 2 4 c 1 5 c 2 </code></pre> <p>Pandas pure solutions with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.MultiIndex.from_product.html" rel="nofollow noreferrer"><code>MultiIndex.from_product</code></a>:</p> <pre><code>mux = pd.MultiIndex.from_product([A['a'], B['a']]) C = pd.DataFrame(mux.values.tolist()) print (C) 0 1 0 a 1 1 a 2 2 b 1 3 b 2 4 c 1 5 c 2 </code></pre> <pre><code>C = mux.to_frame().reset_index(drop=True) print (C) 0 1 0 a 1 1 a 2 2 b 1 3 b 2 4 c 1 5 c 2 </code></pre> <p>Solution with cross join with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow noreferrer"><code>merge</code></a> and column filled by same scalars by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>assign</code></a>:</p> <pre><code>df = pd.merge(A.assign(tmp=1), B.assign(tmp=1), on='tmp').drop('tmp', 1) df.columns = ['a','b'] print (df) a b 0 a 1 1 a 2 2 b 1 3 b 2 4 c 1 5 c 2 </code></pre> <p>EDIT:</p> <pre><code>A = pd.DataFrame({'a':list('abc'), 'b':list('ABC')}) B = pd.DataFrame({'a':[1,3], 'c':[2,4]}) print (A) a b 0 a A 1 b B 2 c C print (B) a c 0 1 2 1 3 4 C = pd.merge(A.assign(tmp=1), B.assign(tmp=1), on='tmp').drop('tmp', 1) C.columns = list('abcd') print (C) a b c d 0 a A 1 2 1 a A 3 4 2 b B 1 2 3 b B 3 4 4 c C 1 2 5 c C 3 4 </code></pre>
pandas|dataframe
2
5,282
62,623,399
Grouping/identifying cumulative sum (Looping in pandas dataframe)
<p>I have a data frame like below :</p> <pre><code>df = pd.DataFrame({'Letter': ['C','B','A','D','E','H','G'], 'Number': [5,5,5,7,7,10,10], 'Value of Letter': [10,15,15,25,20,30,25], 'Amount': [100,'','',30,'',54,''], 'Realisation Index': [1,3,5,2,3,4,5] }) </code></pre> <p>In it, I want to write a loop with the following conditions.</p> <ol> <li>For each number in the number, column pandas should deduct the &quot;Value of Letter&quot; from the &quot;Amount&quot; column. Conditions are:</li> <li>When deducting the value of the letter, first pandas should prioritize based on Realization index (SAY if the realization index is 1 then the relevant amount in &quot;Value of letter&quot; column should be deducted first and so on until all security values are deducted)</li> <li>The highest value in &quot;Value of Letter&quot; should be deducted first from the amount.</li> </ol> <p>I am trying to write a loop using the above conditions in python/pandas and trying to compute the &quot;Amount 2&quot; column.</p> <p>The expected output is as follows.</p> <pre><code>df = pd.DataFrame({'Letter': ['C','B','A','D','E','H','G'], 'Number': [5,5,5,7,7,10,10], 'Value of Letter': [10,15,15,25,20,30,25], 'Amount': [100,'','',30,'',54,''], 'Realisation Index': [1,3,5,2,3,4,5], 'Amount 2':[90,75,60,5,-15,24,-1] }) </code></pre>
<p>Let's do it:</p> <pre><code>df.Amount=pd.to_numeric(df.Amount,errors='coerce') df['New']=df.sort_values('Realisation Index').groupby('Number').apply(lambda x : max(x['Amount'])-x['Value of Letter'].cumsum()).reset_index(level=0,drop=True) df Letter Number Value of Letter Amount Realisation Index New 0 C 5 10 100.0 1 90.0 1 B 5 15 NaN 3 75.0 2 A 5 15 NaN 5 60.0 3 D 7 25 30.0 2 5.0 4 E 7 20 NaN 3 -15.0 5 H 10 30 54.0 4 24.0 6 G 10 25 NaN 5 -1.0 </code></pre>
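<p>An equivalent vectorised sketch of the same idea, avoiding the <code>apply</code> and the builtin <code>max</code> (which does not skip NaN reliably); it assumes, as above, that the single non-empty <code>Amount</code> per <code>Number</code> group is the starting balance:</p> <pre><code>
df['Amount'] = pd.to_numeric(df['Amount'], errors='coerce')

start = df.groupby('Number')['Amount'].transform('max')   # starting amount per group
deducted = (df.sort_values('Realisation Index')            # deduct in realisation order
              .groupby('Number')['Value of Letter']
              .cumsum())

df['Amount 2'] = start - deducted                           # aligns back by the original index
</code></pre>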
python|pandas
0
5,283
54,522,901
Add missing datetime columns to grouped dataframe
<p>Is it possible to add missing date columns from created date_range to grouped dataframe df without for loop and fill zeros as missing values? date_range has 7 date elements. df has 4 date columns. So how to add 3 missing columns to df? </p> <pre><code>import pandas as pd from datetime import datetime start = datetime(2018,6,4, ) end = datetime(2018,6,10,) date_range = pd.date_range(start=start, end=end, freq='D') DatetimeIndex(['2018-06-04', '2018-06-05', '2018-06-06', '2018-06-07', '2018-06-08', '2018-06-09', '2018-06-10'], dtype='datetime64[ns]', freq='D') df = pd.DataFrame({ 'date': ['2018-06-07', '2018-06-10', '2018-06-09','2018-06-09', '2018-06-08','2018-06-09','2018-06-08','2018-06-10', '2018-06-10','2018-06-10',], 'name': ['sogan', 'lyam','alex','alex', 'kovar','kovar','kovar','yamo','yamo','yamo',] }) df['date'] = pd.to_datetime(df['date']) df = (df .groupby(['name', 'date',])['date',] .count() .unstack(fill_value=0) ) df date date date date date 2018-06-07 00:00:00 2018-06-08 00:00:00 2018-06-09 00:00:00 2018-06-10 00:00:00 name alex 0 0 2 0 kovar 0 2 1 0 lyam 0 0 0 1 sogan 1 0 0 0 yamo 0 0 0 3 </code></pre>
<p>Thanks Sina Shabani for clue to making date columns as rows. And in this situation more suitable setting date as index and using .reindex appeared</p> <pre><code>df = (df.groupby(['date', 'name'])['name'] .size() .reset_index(name='count') .pivot(index='date', columns='name', values='count') .fillna(0)) df name alex kovar lyam sogan yamo date 2018-06-07 0.0 0.0 0.0 1.0 0.0 2018-06-08 0.0 2.0 0.0 0.0 0.0 2018-06-09 2.0 1.0 0.0 0.0 0.0 2018-06-10 0.0 0.0 1.0 0.0 3.0 df.index = pd.DatetimeIndex(df.index) df = (df.reindex(pd.date_range(start, freq='D', periods=7), fill_value=0) .sort_index()) df name alex kovar lyam sogan yamo 2018-06-04 0.0 0.0 0.0 0.0 0.0 2018-06-05 0.0 0.0 0.0 0.0 0.0 2018-06-06 0.0 0.0 0.0 0.0 0.0 2018-06-07 0.0 0.0 0.0 1.0 0.0 2018-06-08 0.0 2.0 0.0 0.0 0.0 2018-06-09 2.0 1.0 0.0 0.0 0.0 2018-06-10 0.0 0.0 1.0 0.0 3.0 df.T date 2018-06-07 00:00:00 2018-06-08 00:00:00 2018-06-09 00:00:00 2018-06-10 00:00:00 name alex 0.0 0.0 2.0 0.0 kovar 0.0 2.0 1.0 0.0 lyam 0.0 0.0 0.0 1.0 sogan 1.0 0.0 0.0 0.0 yamo 0.0 0.0 0.0 3.0 </code></pre>
python|pandas
1
5,284
54,602,005
Create column based on another column value, based on assigning value to sets of string values from the input column
<p>My problem seems like it must have a simple solution, but I am unable to solve it. I've tried <code>.loc</code> , <code>np.where</code> and <code>df.apply</code>. </p> <pre><code>#input datetime dty dtx status 2018-09-16 04:38:17 0.0 0.099854 F-On 2018-09-16 04:38:18 0.0 0.100098 F-On 2018-09-16 04:38:19 0.0 0.000000 S-On 2018-09-16 04:38:20 0.0 0.100098 F-On 2018-09-16 04:38:21 0.0 0.100098 circ 2018-09-16 04:38:22 0.0 0.100098 circInS 2018-09-16 04:38:21 0.0 0.100098 TH 2018-09-16 04:38:21 0.0 0.100098 R 2018-09-16 04:38:21 0.0 0.100098 S </code></pre> <p>'mapping' exists from domain -</p> <pre><code> (F-On,S-On) becomes 'On' (circ,TH,circInS) becomes 'fooON' (R) stays 'R' (S) stays 'S' #expected ouput datetime dty dtx status grouped_status 2018-09-16 04:38:17 0.0 0.099854 F-On On 2018-09-16 04:38:18 0.0 0.100098 F-On On 2018-09-16 04:38:19 0.0 0.000000 S-On On 2018-09-16 04:38:20 0.0 0.100098 F-On On 2018-09-16 04:38:21 0.0 0.100098 circ fooON 2018-09-16 04:38:22 0.0 0.100098 circInS fooON 2018-09-16 04:38:21 0.0 0.100098 TH fooON 2018-09-16 04:38:21 0.0 0.100098 R R 2018-09-16 04:38:21 0.0 0.100098 S S </code></pre> <blockquote> <p><code>The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</code></p> </blockquote> <p><a href="https://stackoverflow.com/questions/36921951/truth-value-of-a-series-is-ambiguous-use-a-empty-a-bool-a-item-a-any-o">I understand the code below is comparing an array to a single value</a>; Which is ambigious, hence it fails. To compare row wise I tried using <code>df.apply</code>, but it is not giving desired output.</p> <p>How do I make all three of the methods below work if possible, and which is the best way for a row-wise operation?</p> <pre><code>#using np.where df['grouped_status'] = np.where(df['status'] in ('circ','TH','circInS'), 'fooON', df['status']) #using df.loc df.loc[df['status'] in ('circ','TH','circInS'),['status']] = 'fooON' df['grouped_status'] = df['status'] #function for df.apply def group_status_fn (row): val = "" if row['grouped_status'] in ('F-On','B-On','S-On'): row['grouped_status'] = 'On' elif row['grouped_status'] in (circ,TH,circInS): row['grouped_status'] = fooON elif row['grouped_status'] == 'R': val = 'R' elif row['grouped_status'] == 'S': val = 'S' return val #using df.apply df["grouped_status2"]=df.apply(group_status_fn, axis = 1) #out - output column half empty datetime dHD status grouped_status grouped_status2 2018-09-16 04:38:35 0.000000 F-On F-On 2018-09-16 04:38:36 0.000000 F-On F-On 2018-09-16 04:38:37 0.000000 S-On S-On 2018-09-16 04:38:38 0.000000 S-On S-On 2018-09-16 04:38:39 0.000000 R R R 2018-09-16 04:38:40 0.099854 R R R 2018-09-16 04:38:41 0.100098 R R R 2018-09-16 04:38:42 0.000000 R R R 2018-09-16 04:38:43 0.000000 R R R </code></pre>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer">map</a>:</p> <pre><code>lookup = {'F-On' : 'On', 'S-On' : 'On', 'circ':'fooON', 'TH':'fooON', 'circInS':'fooON', 'R':'R', 'S':'S'} df['grouped_status'] = df.status.map(lookup) </code></pre> <p><strong>Output</strong></p> <pre><code> datetime dty dtx status grouped_status 2018-09-16 04:38:17 0.0 0.099854 F-On On 2018-09-16 04:38:18 0.0 0.100098 F-On On 2018-09-16 04:38:19 0.0 0.000000 S-On On 2018-09-16 04:38:20 0.0 0.100098 F-On On 2018-09-16 04:38:21 0.0 0.100098 circ fooON 2018-09-16 04:38:22 0.0 0.100098 circInS fooON 2018-09-16 04:38:21 0.0 0.100098 TH fooON 2018-09-16 04:38:21 0.0 0.100098 R R 2018-09-16 04:38:21 0.0 0.100098 S S </code></pre>
python|python-3.x|pandas
1
5,285
54,673,038
update / merge and update a subset of columns pandas
<p>I have <code>df1</code>:</p> <pre><code> ColA ColB ID1 ColC ID2 0 a 1.0 45.0 xyz 23.0 1 b 2.0 56.0 abc 24.0 2 c 3.0 34.0 qwerty 28.0 3 d 4.0 34.0 wer 33.0 4 e NaN NaN NaN NaN </code></pre> <p><code>df2</code>:</p> <pre><code> ColA ColB ID1 ColC ID2 0 i 0 45.0 NaN 23.0 1 j 0 56.0 NaN 24.0 2 NaN 0 NaN fd 25.0 3 NaN 0 NaN NaN 26.0 4 NaN 0 23.0 e 45.0 5 NaN 0 45.0 r NaN 6 NaN 0 56.0 NaN 29.0 </code></pre> <p>I am trying to update df2 only on columns which wil be a choice= <code>['ColA','ColB']</code> where <code>ID1</code> and <code>ID2</code> both matches in the 2 dfs.</p> <p>Expected output:</p> <pre><code> ColA ColB ID1 ColC ID2 0 a 1.0 45.0 NaN 23.0 1 b 2.0 56.0 NaN 24.0 2 NaN 0 NaN fd 25.0 3 NaN 0 NaN NaN 26.0 4 NaN 0 23.0 e 45.0 5 NaN 0 45.0 r NaN 6 NaN 0 56.0 NaN 29.0 </code></pre> <p>So far I have tried:</p> <pre><code>u = df1.set_index(['ID1','ID2']) u = u.loc[u.index.dropna()] v = df2.set_index(['ID1','ID2']) v= v.loc[v.index.dropna()] v.update(u) v.reset_index() </code></pre> <p>Which gives me the correct update(but I loose the Ids which are NaN) also the update takes place on <code>ColC</code> which i dont want:</p> <pre><code> ID1 ID2 ColA ColB ColC 0 45.0 23.0 a 1.0 xyz 1 56.0 24.0 b 2.0 abc 2 23.0 45.0 NaN 0.0 e 3 56.0 29.0 NaN 0.0 NaN </code></pre> <p>I have also tried merge and combine_first. cant figure out what is the best approach to do this based on the choicelist. </p>
<p>Use <code>merge</code> with <code>right</code> join and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.combine_first.html" rel="nofollow noreferrer"><code>combine_first</code></a>:</p> <pre><code>choice= ['ColA','ColB'] joined = ['ID1','ID2'] c = choice + joined df3 = df1[c].merge(df2[c], on=joined, suffixes=('','_'), how='right')[c] print (df3) ColA ColB ID1 ID2 0 a 1.0 45.0 23.0 1 b 2.0 56.0 24.0 2 NaN NaN NaN 25.0 3 NaN NaN NaN 26.0 4 NaN NaN 23.0 45.0 5 NaN NaN 45.0 NaN 6 NaN NaN 56.0 29.0 df2[c] = df3.combine_first(df2[c]) print (df2) ColA ColB ID1 ColC ID2 0 a 1.0 45.0 NaN 23.0 1 b 2.0 56.0 NaN 24.0 2 NaN 0.0 NaN fd 25.0 3 NaN 0.0 NaN NaN 26.0 4 NaN 0.0 23.0 e 45.0 5 NaN 0.0 45.0 r NaN 6 NaN 0.0 56.0 NaN 29.0 </code></pre>
python|python-3.x|pandas
3
5,286
54,299,762
How can I convert a .tflite model to a .pb frozen graph in TensorFlow?
<p>At the beginning, I have the frozen graph .pb file of a model. I converted it to .tflite and post-training quantized this .tflite model. In the end, I would like to convert this quantized .tflite model into a .pb frozen graph. How can I achieve this last step?</p> <p>I have searched a lot but didn't find any solutions or any hints. Please help! </p>
<p>Try running <code>toco</code> with these flags:</p> <pre><code>toco --input_format=TFLITE --output_format=TENSORFLOW_GRAPHDEF ... </code></pre> <p>This may not work with some newer TFLite operations.</p>
tensorflow|tensorflow-lite
2
5,287
54,411,896
finding and replacing strings with numbers only within a pandas dataframe
<p>I am trying to replace the strings that contain numbers with another string (an empty one in this case) within a pandas DataFrame.</p> <p>I tried with the .replace method and a regex expression:</p> <pre><code># creating dummy dataframe data = pd.DataFrame({'A': ['test' for _ in range(5)]}) # the value that should get replaced with '' data.iloc[0] = 'test5' data.replace(regex=r'\d', value='', inplace=True) print(data) A 0 test 1 test 2 test 3 test 4 test </code></pre> <p>As you can see, it only replace the '5' within the string and not the whole string.</p> <p>I also tried using the .where method but it doesn't seem to fit my need as I don't want to replace any of the strings not containing numbers</p> <p>this is what it should look like:</p> <pre><code> A 0 1 test 2 test 3 test 4 test </code></pre>
<p>You can use Boolean indexing via <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>pd.Series.str.contains</code></a> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>loc</code></a>:</p> <pre><code>data.loc[data['A'].str.contains(r'\d'), 'A'] = '' </code></pre> <p>Similarly, with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.mask.html" rel="nofollow noreferrer"><code>mask</code></a> or <a href="https://www.numpy.org/devdocs/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where</code></a>:</p> <pre><code>data['A'] = data['A'].mask(data['A'].str.contains(r'\d'), '') data['A'] = np.where(data['A'].str.contains(r'\d'), '', data['A']) </code></pre>
python|python-3.x|string|pandas|dataframe
2
5,288
54,352,837
Python - Change Column Names, Merge and Re-Order a dataframe
<p>I have two DataFrames - DataFrameA and DataFrameB</p> <p>DataFrameA</p> <pre><code>ID ColA ColB ColC 1 12 23 40 2 21 24 45 3 23 31 50 </code></pre> <p>DataFrameB</p> <pre><code>ID ColA ColB ColC 1 21 23 40 2 20 44 45 3 29 51 70 4 49 51 70 </code></pre> <p>I want an output DataFrame like this,</p> <p>Prefix for DataFrame B Columns declared in a variable = "BBBBB"</p> <p>DataFrameC </p> <pre><code>ID ColA BBBBB.ColA ColB BBBBB.ColB ColC BBBBB.ColC 1 12 21 23 23 40 40 2 21 20 24 44 45 45 3 23 29 31 51 50 70 </code></pre> <p>I am doing an Inner Join between DataFrame A and Data Frame B and sorting the columns in order after that.</p> <p>The DataFrameA and DataFrameB are Pandas dataframe. So prefer a pandas method.</p>
<p>Using <code>add_prefix</code> with <code>merge</code>:</p> <pre><code>yourdf=dfa.merge(dfb.add_prefix('BBBBB.'),left_on='ID',right_on='BBBBB.ID') yourdf Out[219]: ID ColA ColB ... BBBBB.ColA BBBBB.ColB BBBBB.ColC 0 1 12 23 ... 21 23 40 1 2 21 24 ... 20 44 45 2 3 23 31 ... 29 51 70 [3 rows x 8 columns] </code></pre> <p>If you need to reorder the columns:</p> <pre><code>yourdf.reindex(columns=sorted(yourdf.columns,key=lambda x : x.split('.')[-1])) </code></pre>
python|pandas
5
5,289
73,840,787
Finding the third Friday for an expiration date using pandas datetime
<p>I have a simple definition which finds the third friday of the month. I use this function to populate the dataframe for the third fridays and that part works fine.</p> <p>The trouble I'm having is finding the third friday for an expiration_date that doesn't fall on a third friday.</p> <p>This is my code simplified:</p> <pre><code>import pandas as pd def is_third_friday(d): return d.weekday() == 4 and 15 &lt;= d.day &lt;= 21 x = ['09/23/2022','09/26/2022','09/28/2022','09/30/2022','10/3/2022','10/5/2022', '10/7/2022','10/10/2022','10/12/2022','10/14/2022','10/17/2022','10/19/2022','10/21/2022', '10/24/2022','10/26/2022','10/28/2022','11/4/2022','11/18/2022','12/16/2022','12/30/2022', '01/20/2023','03/17/2023','03/31/2023','06/16/2023','06/30/2023','09/15/2023','12/15/2023', '01/19/2024','06/21/2024','12/20/2024','01/17/2025'] df = pd.DataFrame(x) df.rename( columns={0 :'expiration_date'}, inplace=True ) df['expiration_date']= pd.to_datetime(df['expiration_date']) expiration_date = df['expiration_date'] df[&quot;is_third_friday&quot;] = [is_third_friday(x) for x in expiration_date] third_fridays = df.loc[df['is_third_friday'] == True] df[&quot;current_monthly_exp&quot;] = third_fridays['expiration_date'].min() df[&quot;monthly_exp&quot;] = third_fridays[['expiration_date']] df.to_csv(path_or_buf = f'C:/Data/Date Dataframe.csv',index=False) </code></pre> <p>What I'm looking for is any expiration_date that is prior to the monthly expire, I want to populate the dataframe with that monthly expire. If it's past the monthly expire date I want to populate the dataframe with the following monthly expire.</p> <p>I thought I'd be able to use a new dataframe with only the monthly expires as a lookup table and do a timedelta, but when you look at 4/21/2023 and 7/21/2023 these dates don't exist in that dataframe.</p> <p>This is my current output:</p> <p><a href="https://i.stack.imgur.com/YxVtE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YxVtE.png" alt="enter image description here" /></a></p> <p>This is the output I'm seeking:</p> <p><a href="https://i.stack.imgur.com/vCGSF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vCGSF.png" alt="enter image description here" /></a></p> <p>I was thinking I could handle this problem with something like:</p> <pre><code>date_df[&quot;monthly_exp&quot;][0][::-1].expanding().min()[::-1] </code></pre> <p>But, it wouldn't solve for the 4/21/2023 and 7/21/2023 problem. Additionally, Pandas won't let you do this in a datetime dataframe.</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame([1, nan,2,nan,nan,nan,4]) &gt;&gt;&gt; df 0 0 1.0 1 NaN 2 2.0 3 NaN 4 NaN 5 NaN 6 4.0 &gt;&gt;&gt; df[&quot;b&quot;] = df[0][::-1].expanding().min()[::-1] &gt;&gt;&gt; df 0 b 0 1.0 1.0 1 NaN 2.0 2 2.0 2.0 3 NaN 4.0 4 NaN 4.0 5 NaN 4.0 6 4.0 4.0 </code></pre> <p>I've also tried something like the following in many different forms with little luck:</p> <pre><code>if df['is_third_friday'].any() == True: df[&quot;monthly_exp&quot;] = third_fridays[['expiration_date']] else: df[&quot;monthly_exp&quot;] = third_fridays[['expiration_date']].shift(third_fridays) </code></pre> <p>Any suggestions to get me in the right direction would be appreciated. I've been stuck on this problem for sometime.</p>
<p>You could add these additional lines of code (to replace <code>df[&quot;monthly_exp&quot;] = third_fridays[['expiration_date']]</code>:</p> <pre><code># DataFrame of fridays from minimum expiration_date to 30 days after last fri_3s = pd.DataFrame(pd.date_range(df[&quot;expiration_date&quot;].min(), df[&quot;expiration_date&quot;].max()+pd.tseries.offsets.Day(30), freq=&quot;W-FRI&quot;), columns=[&quot;monthly_exp&quot;]) # only keep those that are between 15th and 21st (as your function did) fri_3s = fri_3s[fri_3s.monthly_exp.dt.day.between(15, 21)] # merge_asof to get next third friday df = pd.merge_asof(df, fri_3s, left_on=&quot;expiration_date&quot;, right_on=&quot;monthly_exp&quot;, direction=&quot;forward&quot;) </code></pre> <p>This creates a second DataFrame with the 3rd Fridays, and then by merging with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html" rel="nofollow noreferrer"><code>merge_asof</code></a> returns the next of these from the <code>expiration_date</code>.</p> <p>And to simplify your <code>date_df[&quot;monthly_exp&quot;][0][::-1].expanding().min()[::-1]</code> and use it for datetime, you could instead write <code>df[&quot;monthly_exp&quot;].bfill()</code> (which backward fills). As you mentioned, this will only include Fridays that exist in your DataFrame already, so creating a list of the possible Fridays might be the easiest way.</p>
python|pandas|dataframe
1
5,290
73,562,112
How to find first and last time for every day for each value
<p>I am trying to figure out a way to find the first and last time stamp for each asset within a dataframe for each day. For example, I have this data frame:</p> <pre><code>import pandas as pd data = { 'Date':['2022-01-01','2022-01-01','2022-01-01','2022-01-01','2022-01-01','2022-01-01', '2022-01-01' ,'2022-01-02','2022-01-02','2022-01-02','2022-01-02','2022-01-02','2022-01-02', '2022-01-02','2022-01-02','2022-01-03','2022-01-03','2022-01-03','2022-01-03','2022-01-03', '2022-01-03','2022-01-03','2022-01-03'], 'Time':['12:01','12:05','14:07','11:01','13:06','17:12','15:15', '9:02','8:06','14:06','19:19','10:00','13:01','17:00','10:15', '8:00','9:00','7:15','16:04','15:02','17:10','12:06','15:00'], 'Asset':[111,111,111,222,222,222,222, 111,111,111,111,111,222,222,222, 333,333,111,111,111,111,333,111] } df = pd.DataFrame(data) df </code></pre> <p>Which looks like:</p> <pre><code> Date Time Asset 0 2022-01-01 12:01 111 1 2022-01-01 12:05 111 2 2022-01-01 14:07 111 3 2022-01-01 11:01 222 4 2022-01-01 13:06 222 5 2022-01-01 17:12 222 6 2022-01-01 15:15 222 7 2022-01-02 9:02 111 8 2022-01-02 8:06 111 9 2022-01-02 14:06 111 10 2022-01-02 19:19 111 11 2022-01-02 10:00 111 12 2022-01-02 13:01 222 13 2022-01-02 17:00 222 14 2022-01-02 10:15 222 15 2022-01-03 8:00 333 16 2022-01-03 9:00 333 17 2022-01-03 7:15 111 18 2022-01-03 16:04 111 19 2022-01-03 15:02 111 20 2022-01-03 17:10 111 21 2022-01-03 12:06 333 22 2022-01-03 15:00 111 </code></pre> <p>I would like to group this data by day and remove all duplicates for each asset in each day, only keeping the first and last timestamp for each value within each day. My ideal outcome would look like this:</p> <pre><code>data1 = { 'Date':['2022-01-01','2022-01-01','2022-01-01','2022-01-01', '2022-01-02','2022-01-02','2022-01-02','2022-01-02', '2022-01-03','2022-01-03','2022-01-03','2022-01-03',], 'Time':['12:01','14:07','11:01','17:12', '8:06','19:19','10:15','17:00', '8:00','12:06','7:15','17:10'], 'Asset':[111,111,222,222, 111,111,222,222, 333,333,111,111] } df1 = pd.DataFrame(data1) df1 </code></pre> <p>Looking like:</p> <pre><code>Date Time Asset 0 2022-01-01 12:01 111 1 2022-01-01 14:07 111 2 2022-01-01 11:01 222 3 2022-01-01 17:12 222 4 2022-01-02 8:06 111 5 2022-01-02 19:19 111 6 2022-01-02 10:15 222 7 2022-01-02 17:00 222 8 2022-01-03 8:00 333 9 2022-01-03 12:06 333 10 2022-01-03 7:15 111 11 2022-01-03 17:10 111 </code></pre> <p>Ideally, I would like to solve this problem in Python, however if there is an easier solution in R or SQL I am able to use those. Any help would be appreciated! Thanks in advance!</p>
<pre><code>import pandas as pd data = { 'Date':['2022-01-01','2022-01-01','2022-01-01','2022-01-01','2022-01-01','2022-01-01', '2022-01-01' ,'2022-01-02','2022-01-02','2022-01-02','2022-01-02','2022-01-02','2022-01-02', '2022-01-02','2022-01-02','2022-01-03','2022-01-03','2022-01-03','2022-01-03','2022-01-03', '2022-01-03','2022-01-03','2022-01-03'], 'Time':['12:01','12:05','14:07','11:01','13:06','17:12','15:15', '9:02','8:06','14:06','19:19','10:00','13:01','17:00','10:15', '8:00','9:00','7:15','16:04','15:02','17:10','12:06','15:00'], 'Asset':[111,111,111,222,222,222,222, 111,111,111,111,111,222,222,222, 333,333,111,111,111,111,333,111] } df = pd.DataFrame(data) df_f = df.groupby(by=['Date', 'Asset']).first().reset_index() df_l = df.groupby(by=['Date', 'Asset']).last().reset_index() df_fl = pd.concat([df_f, df_l])[['Date', 'Time', 'Asset']] df_fl = df_fl.sort_values(by=['Date', 'Asset', 'Time']).reset_index().drop(columns=['index']) print(df_fl) </code></pre> <p>prints</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>index</th> <th>Date</th> <th>Time</th> <th>Asset</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>2022-01-01</td> <td>12:01</td> <td>111</td> </tr> <tr> <td>1</td> <td>2022-01-01</td> <td>14:07</td> <td>111</td> </tr> <tr> <td>2</td> <td>2022-01-01</td> <td>11:01</td> <td>222</td> </tr> <tr> <td>3</td> <td>2022-01-01</td> <td>15:15</td> <td>222</td> </tr> <tr> <td>4</td> <td>2022-01-02</td> <td>10:00</td> <td>111</td> </tr> <tr> <td>5</td> <td>2022-01-02</td> <td>9:02</td> <td>111</td> </tr> <tr> <td>6</td> <td>2022-01-02</td> <td>10:15</td> <td>222</td> </tr> <tr> <td>7</td> <td>2022-01-02</td> <td>13:01</td> <td>222</td> </tr> <tr> <td>8</td> <td>2022-01-03</td> <td>15:00</td> <td>111</td> </tr> <tr> <td>9</td> <td>2022-01-03</td> <td>7:15</td> <td>111</td> </tr> <tr> <td>10</td> <td>2022-01-03</td> <td>12:06</td> <td>333</td> </tr> <tr> <td>11</td> <td>2022-01-03</td> <td>8:00</td> <td>333</td> </tr> </tbody> </table> </div> <p>Note: Times and dates will only be sorted properly if they are date / time values and not strings.</p>
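<p>A hedged sketch of that conversion: combining the two string columns into a real timestamp first makes the comparisons follow clock order rather than string order (note this returns a wide min/max layout rather than the long layout above):</p> <pre><code>
# build a real timestamp so '9:02' correctly sorts before '10:00'
df['ts'] = pd.to_datetime(df['Date'] + ' ' + df['Time'])

first_last = (df.groupby(['Date', 'Asset'])['ts']
                .agg(['min', 'max'])
                .reset_index())
print(first_last)
</code></pre>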
python|python-3.x|pandas|time-series|data-science
0
5,291
73,550,681
Extract string values from a DataFrame column
<p>I have the following DataFrame:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Student</th> <th>food</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>R0100000</td> </tr> <tr> <td>2</td> <td>R0200000</td> </tr> <tr> <td>3</td> <td>R0300000</td> </tr> <tr> <td>4</td> <td>R0400000</td> </tr> </tbody> </table> </div> <p>I need to extract as a string the values of the &quot;food&quot; column of the df DataFrame when I filter the data.</p> <p>For example, when I filter by the Student=1, I need the return value of &quot;R0100000&quot; as a string value, without any other characters or spaces.</p> <p>This is the code to create the same DataFrame as mine:</p> <pre><code> data={'Student':[1,2,3,4],'food':['R0100000', 'R0200000', 'R0300000', 'R0400000']} df=pd.DataFrame(data) </code></pre> <p>I tried to select the Dataframe Column and apply str(), but it does not return me the desired results:</p> <pre><code> df_new=df.loc[df['Student'] == 1] df_new=df_new.food df_str=str(df_new) del df_new </code></pre>
<p>This works for me:</p> <p><code>s = df[df.Student==1]['food'][0]</code></p> <p><code>s.strip()</code></p>
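<p>A slightly more defensive variant (a sketch, assuming the filter can match any number of rows) is to select with <code>.loc</code> and take the first match positionally, so it does not depend on the index label being <code>0</code>:</p> <pre><code>
value = df.loc[df['Student'] == 1, 'food'].iloc[0]
print(value)        # 'R0100000'
print(type(value))  # &lt;class 'str'&gt;
</code></pre>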
python|pandas|numpy
2
5,292
71,370,318
How to preserve order in python while comparing images from 2 different directories?
<p>I was comparing images from 2 different directories. I write code but the sequence of comparing is this</p> <p>{ 0,1,10,11,12,...,19,2,20,21,..} not like {0,1,2,3,...,9,10,11,12,...}</p> <pre><code>L1 = os.listdir(&quot;D:\\image_dir_1\\&quot;) image_list_1 = list() image_list_2 = list() final_i = list() final_j = list() dirs = -1 for i in L1: dirs = dirs + 1 img1 = cv2.imread(&quot;D:\\image_dir_1\\&quot; +i ) L2 = os.listdir(&quot;D:\\image_dir_2\\&quot;+str(dirs) +'\\') for j in L2: img2 = cv2.imread(&quot;D:\\image_dir_2\\&quot; + str(dirs)+ '\\' +j ) image_list_1.append(FUN_1(img1 , img2)) image_list_2.append(FUN_2(img1 , img2)) final_i.append(i) final_j.append(j) filei = pd.DataFrame(final_i,columns=['Col_1']) filej = pd.DataFrame(final_j,columns=['Col_2']) frame_1 = pd.DataFrame(image_list_1,columns=['c1']) frame_2 = pd.DataFrame(Image_list_2,columns=['c2']) final_value = pd.concat([filei, filej, frame_1,frame_2],axis=1) final_value.to_csv('spreadsheet.csv',index=None) </code></pre>
<p>You have to sort the <strong>L1</strong> list:</p> <pre><code>L1 = sorted(os.listdir(&quot;D:\\image_dir_1\\&quot;), key=lambda str: int(str[:-4].strip())) </code></pre> <p>The issue is that without sorting by the actual int value, the file names are sorted lexicographically.</p>
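<p>The same lexicographic issue applies to the inner <code>L2</code> listing. A sketch of the equivalent fix inside the loop, assuming those file names are also numeric with a 4-character extension such as <code>.png</code> (the parameter name <code>name</code> is used to avoid shadowing the builtin <code>str</code>):</p> <pre><code>
L2 = sorted(os.listdir('D:\\image_dir_2\\' + str(dirs) + '\\'),
            key=lambda name: int(name[:-4].strip()))
</code></pre>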
python|python-3.x|pandas|list|glob
1
5,293
71,094,366
How to apply a function to specific columns of a pandas dataframe?
<p>I would like to apply a function to specific columns of a pandas data frame. Here is an illustration:</p> <pre><code># import modules from pandas_datareader import data as pdr # import parameters start = &quot;2020-01-01&quot; end = &quot;2021-01-01&quot; symbols = [&quot;AAPL&quot;] # get the data data = pdr.get_data_yahoo(symbols, start, end) def mult(row): return row['Close']*2, row['Open']/3 data[['Close', 'Open']].apply(mult, axis = 1) print(data.head()) </code></pre> <p>The result:</p> <pre><code>Attributes Adj Close Close High Low Open Volume Symbols AAPL AAPL AAPL AAPL AAPL AAPL Date 2020-01-02 73.894333 75.087502 75.150002 73.797501 74.059998 135480400.0 2020-01-03 73.175926 74.357498 75.144997 74.125000 74.287498 146322800.0 2020-01-06 73.759003 74.949997 74.989998 73.187500 73.447502 118387200.0 2020-01-07 73.412109 74.597504 75.224998 74.370003 74.959999 108872000.0 2020-01-08 74.593048 75.797501 76.110001 74.290001 74.290001 132079200.0 </code></pre> <p>Any thoughts as to why that doesn't work?</p>
<p>I think the problem is that you are not assigning the return of the <code>mult</code> functions to any variable.</p> <p>One way to achieve what you want is:</p> <pre><code># import modules from pandas_datareader import data as pdr # import parameters start = &quot;2020-01-01&quot; end = &quot;2021-01-01&quot; symbols = [&quot;AAPL&quot;] # get the data data = pdr.get_data_yahoo(symbols, start, end) def mult(df): df['Close'] = 2 * df['Close'] df['Open'] = df['Open'] / 3 return df mult(data) print(data.head()) Attributes Adj Close Close High Low Open \ Symbols AAPL AAPL AAPL AAPL AAPL Date 2020-01-02 73.894325 150.175003 75.150002 73.797501 24.686666 2020-01-03 73.175926 148.714996 75.144997 74.125000 24.762499 2020-01-06 73.759010 149.899994 74.989998 73.187500 24.482501 2020-01-07 73.412117 149.195007 75.224998 74.370003 24.986666 2020-01-08 74.593048 151.595001 76.110001 74.290001 24.763334 </code></pre>
python|pandas|dataframe
2
5,294
71,186,025
Outputting pandas timestamp to tuple with just month and day
<p>I have a pandas dataframe with a timestamp field which I have successfully converted to datetime format, and now I want to output just the month and day as a tuple for the first date value in the data frame. It is for a test and the output must not have leading zeros. I have tried a number of things but I cannot find an answer without converting the timestamp to a string, which does not work. This is the format: 2021-05-04 14:20:00.426577</p> <p><code>df_cleaned['trans_timestamp']=pd.to_datetime(df_cleaned['trans_timestamp'])</code> is as far as I have got with the code.</p> <p>I have been working on this for days and cannot get output that the checker will accept.</p>
<p><strong>Update</strong></p> <p>If you want to extract month and day from the first record (<em>solution proposed by @FObersteiner</em>)</p> <pre><code>&gt;&gt;&gt; df['trans_timestamp'].iloc[0].timetuple()[1:3] (5, 4) </code></pre> <p>If you want extract all month and day from your dataframe, use:</p> <pre><code># Setup df = pd.DataFrame({'trans_timestamp': ['2021-05-04 14:20:00.426577']}) df['trans_timestamp'] = pd.to_datetime(df['trans_timestamp']) # Extract tuple df['month_day'] = df['trans_timestamp'].apply(lambda x: (x.month, x.day)) print(df) # Output trans_timestamp month_day 0 2021-05-04 14:20:00.426577 (5, 4) </code></pre>
pandas|dataframe|datetime
1
5,295
52,408,867
Angles Comparison Loss Function for TensorFlow in Python
<p>I have a CNN, takes in an image, outs a single value - an angle. The data set is made of (x = image, y = angle) couples.</p> <p>I want the network for each image, to predict an angle.</p> <p>I have found this suggestion: <a href="https://stats.stackexchange.com/a/218547">https://stats.stackexchange.com/a/218547</a> But I can't seem to understand how to translate it into a working Tensorflow in Python code.</p> <pre><code>x_CNN = tf.placeholder(tf.float32, (None, 14, 14, 3)) y_CNN = tf.placeholder(tf.int32, (None)) keep_prob_CNN = tf.placeholder(tf.float32) one_hot_y_CNN = tf.one_hot(y_CNN, 1) def MyCNN(x): # Network's architecture: In: Image, Out: Angle. logits_CNN = MyCNN(x) # Loss Function attempt &lt;------------------------------ outs = tf.tanh(logits_CNN) outc = tf.tanh(logits_CNN) loss_operation_CNN = tf.reduce_mean(0.5 * (tf.square(tf.sin(one_hot_y_CNN) - outs) + tf.square(tf.cos(one_hot_y_CNN) - outc))) learning_rate_placeholder_CNN = tf.placeholder(tf.float32, shape=[]) optimizer_CNN = tf.train.AdamOptimizer(learning_rate = learning_rate_placeholder_CNN) training_operation_CNN = optimizer_CNN.minimize(loss_operation_CNN) correct_prediction_CNN = tf.equal(tf.argmax(logits_CNN, 1), tf.argmax(one_hot_y_CNN, 1)) accuracy_operation_CNN = tf.reduce_mean(tf.cast(correct_prediction_CNN, tf.float32)) # And a working Training and testing code... </code></pre>
<p>That is going in the right direction, but the idea is that, instead of having <code>MyCNN</code> produce a single angle value for each example, produce two values. So if the return value of <code>MyCNN</code> is currently something with shape like <code>(None,)</code> or <code>(None, 1)</code>, you should change it to <code>(None, 2)</code> - that is, the last layer should have one more output. If you have doubts about how to do this please provide more details about the body of <code>MyCNN</code>.</p> <p>Then you would just have:</p> <pre><code>outs = tf.tanh(logits_CNN[:, 0]) outc = tf.tanh(logits_CNN[:, 1]) out_radians = tf.atan2(outs, outc) # This is the final angle output in radians </code></pre> <p>About the loss, I am not sure I understand your Y input. If you are trying to predict an angle, shouldn't it be a float value, and not an integer? In that case you would have:</p> <pre><code># Example angle in radians y_CNN = tf.placeholder(tf.float32, (None,)) # ... loss_operation_CNN = tf.reduce_mean(0.5 * (tf.square(tf.sin(y_CNN) - outs) + tf.square(tf.cos(y_CNN) - outc))) </code></pre>
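<p>For the last-layer change described above, a hedged TF1-style sketch (graph mode, as in the question's code; <code>penultimate</code> stands in for whatever tensor currently feeds the final layer inside <code>MyCNN</code>):</p> <pre><code>
# two unbounded outputs per example: the first for sin, the second for cos
logits_CNN = tf.layers.dense(penultimate, units=2, activation=None)
</code></pre>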
python|tensorflow|conv-neural-network|angle|loss-function
1
5,296
52,103,441
Summing multiple lists stored in dataframe
<p>I have a dataframe with multiple lists stored as:</p> <p>I have two dataframes as:</p> <pre><code>df1.ix[1:3] DateTime Col1 Col2 2018-01-02 [1, 2] [11, 21] 2018-01-03 [3, 4] [31, 41] </code></pre> <p>I want to sum the lists in the df1 to get:</p> <pre><code>DateTime sumCol 2018-01-02 [12, 23] 2018-01-03 [34, 45] </code></pre> <p>I tried <code>numpy.sum(df1, axis=1)</code> but that causes list concatenation instead of sum.</p> <p>Edit: My original dataframe has more than 2 columns. </p>
<p>Using a list comprehension and <code>np.array</code>:</p> <pre><code>df.assign(sumCol=[np.array(x) + np.array(y) for x, y in zip(df.Col1, df.Col2)]) </code></pre> <p></p> <pre><code> DateTime Col1 Col2 sumCol 0 2018-01-02 [1, 2] [11, 21] [12, 23] 1 2018-01-03 [3, 4] [31, 41] [34, 45] </code></pre> <p>If the arrays are always the same length:</p> <pre><code>df.assign(sumCol=[np.stack([x,y]).sum(0) for x, y in zip(df.Col1, df.Col2)]) </code></pre> <p>To apply this to many columns, you can use <code>iloc</code></p> <pre><code>zip(*df.iloc[:, 1:].values.T) </code></pre> <hr> <p>Here is an example on a wider DataFrame:</p> <pre><code> A B C D 0 1 [1, 2] [1, 2] [1, 2] 1 2 [3, 4] [3, 4] [3, 4] 2 3 [5, 6] [5, 6] [5, 6] </code></pre> <p></p> <p>Using <code>zip</code> with <code>df.values</code></p> <pre><code>df.assign(sumCol=[np.stack(a).sum(0) for a in zip(*df.iloc[:, 1:].values.T)]) </code></pre> <p></p> <pre><code> A B C D sumCol 0 1 [1, 2] [1, 2] [1, 2] [3, 6] 1 2 [3, 4] [3, 4] [3, 4] [9, 12] 2 3 [5, 6] [5, 6] [5, 6] [15, 18] </code></pre>
python|python-3.x|pandas|dataframe
3
5,297
52,252,449
cannot import name 'multiarray' in numpy with python3
<p>I usually work with python 2.7 but this time i have to test a script in python3.</p> <p>It is already installed on my computer, however when i start "python3", then go "import numpy", it show me "cannot import name 'multiarray'.</p> <p>I even installed anaconda3 to test, but nothing happens</p> <pre><code>myName:~/anaconda3/bin$ python3 Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56) [GCC 7.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import numpy Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/usr/lib/python2.7/dist-packages/numpy/__init__.py", line 180, in &lt;module&gt; from . import add_newdocs File "/usr/lib/python2.7/dist-packages/numpy/add_newdocs.py", line 13, in &lt;module&gt; from numpy.lib import add_newdoc File "/usr/lib/python2.7/dist-packages/numpy/lib/__init__.py", line 8, in &lt;module&gt; from .type_check import * File "/usr/lib/python2.7/dist-packages/numpy/lib/type_check.py", line 11, in &lt;module&gt; import numpy.core.numeric as _nx File "/usr/lib/python2.7/dist-packages/numpy/core/__init__.py", line 14, in &lt;module&gt; from . import multiarray ImportError: cannot import name 'multiarray' </code></pre> <p>I saw that it is looking in the lib/python2.7, but i cannot find what to do to get him search in the python3 library.</p> <p>I already tried <code>python -m pip install numpy</code>, and tried to create a virtualenv in python3 but i still get the same error.</p> <p>I cannot figure what to do. Can someone help me ?</p> <p>I would like to add, i cannot start command with 'sudo' as i'm working on a client machine.</p> <p><strong>edit:</strong></p> <p>i tried @gehbiszumeis answer and got this:</p> <pre><code>myName:~ $ cd anaconda3/bin/ myName:~/anaconda3/bin $ source activate /home/myName/anaconda3 (base) myName:~/anaconda3/bin $ conda list numpy # packages in environment at /home/myName/anaconda3: # # Name Version Build Channel numpy 1.14.3 py36hcd700cb_1 numpy-base 1.14.3 py36h9be14a7_1 numpydoc 0.8.0 py36_0 (base) myName:~/anaconda3/bin $ python3 Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56) [GCC 7.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import numpy Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/usr/lib/python2.7/dist-packages/numpy/__init__.py", line 180, in &lt;module&gt; from . import add_newdocs File "/usr/lib/python2.7/dist-packages/numpy/add_newdocs.py", line 13, in &lt;module&gt; from numpy.lib import add_newdoc File "/usr/lib/python2.7/dist-packages/numpy/lib/__init__.py", line 8, in &lt;module&gt; from .type_check import * File "/usr/lib/python2.7/dist-packages/numpy/lib/type_check.py", line 11, in &lt;module&gt; import numpy.core.numeric as _nx File "/usr/lib/python2.7/dist-packages/numpy/core/__init__.py", line 14, in &lt;module&gt; from . import multiarray ImportError: cannot import name 'multiarray' </code></pre> <p>I see there is numpy 36 installed when i type conda list numpy, but it seems not to work.. 
Did i miss something ?</p> <p><strong>edit2:</strong> After @Pal Szabo method, i tested command <code>python3 -m pip install --upgrade pip</code> and got this error : </p> <pre><code>(env) (base) myName:~/anaconda3/bin $ python3 -m pip install --upgrade pip Traceback (most recent call last): File "/home/myName/anaconda3/lib/python3.6/runpy.py", line 183, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "/home/myName/anaconda3/lib/python3.6/runpy.py", line 142, in _get_module_details return _get_module_details(pkg_main_name, error) File "/home/myName/anaconda3/lib/python3.6/runpy.py", line 109, in _get_module_details __import__(pkg_name) File "/usr/lib/python2.7/dist-packages/pip/__init__.py", line 4, in &lt;module&gt; import locale File "/home/myName/anaconda3/bin/env/lib/python3.6/locale.py", line 16, in &lt;module&gt; import re File "/home/myName/anaconda3/bin/env/lib/python3.6/re.py", line 142, in &lt;module&gt; class RegexFlag(enum.IntFlag): AttributeError: module 'enum' has no attribute 'IntFlag'` </code></pre> <p>It is a crazy mix between python3, python2.7 then again python3. I'm lost \o/</p> <p><strong>edit3:</strong></p> <p>I finally found my error. It was a probleme with my PYTHONPATH, which was pointing somewhere where a .pth file was defined, with some hard link to python 2.7 libraries. with a simple "unset PYTHONPATH" it works fine. Thanks you all</p>
<p>I had the same problem, took me several hours to figure it out. </p> <p>In my case, the <code>PYTHONPATH</code> was set to <code>/usr/lib/python2.6/dist-packages/</code> changing it to <code>/Users/xxx/miniconda3/lib/python3.7/site-packages/</code> resolved the issue. Good luck. </p>
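<p>Before touching <code>PYTHONPATH</code>, a quick sketch to see which directories the interpreter actually searches (this works even while <code>import numpy</code> is failing):</p> <pre><code>
import sys, os

# every directory Python searches for modules, in resolution order
for p in sys.path:
    print(p)

# what PYTHONPATH is currently injecting into that search path
print(os.environ.get('PYTHONPATH'))
</code></pre> <p>If a <code>python2.7/dist-packages</code> entry shows up ahead of the Python 3 <code>site-packages</code> directory, that is the numpy being picked up, and clearing or unsetting <code>PYTHONPATH</code> (as in the question's final edit) resolves it.</p>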
python|python-3.x|python-2.7|numpy|installation
2
5,298
60,467,807
Dask progress during task
<p>With dask dataframe using<br> <code>df = dask.dataframe.from_pandas(df, npartitions=5) series = df.apply(func) future = client.compute(series) progress(future)</code></p> <p>In a jupyter notebook I can see progress bar for how many apply() calls completed per partition (e.g 2/5).<br> Is there a way for dask to report progress inside each partition?<br> Something like <code>tqdm</code> <code>progress_apply()</code> for pandas. </p>
<p>If you mean, how complete each call of <code>func()</code> is, then no, there is no way for Dask to know that. Dask calls python functions which run in their own python thread (python threads cannot be interrupted by another thread), and Dask only knows whether the call is done or not.</p> <p>You could perhaps conceive of calling a function which has some internal callbacks or other reporting system, but I don't think I've seen anything like that.</p>
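<p>One hedged workaround along those lines is to put the progress reporting inside the function itself, e.g. wrapping the per-row work in <code>tqdm</code> and applying it with <code>map_partitions</code> (a sketch; <code>row_func</code> stands in for whatever <code>func</code> does per row, and the bars print on the workers' stdout rather than in the notebook):</p> <pre><code>
import pandas as pd
from tqdm import tqdm

def apply_with_progress(pdf):
    # pdf is one pandas partition; tqdm reports progress within it
    results = [row_func(row) for _, row in tqdm(pdf.iterrows(), total=len(pdf))]
    return pd.Series(results, index=pdf.index)

series = df.map_partitions(apply_with_progress)
future = client.compute(series)
</code></pre>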
python|pandas|dask|tqdm|dask-dataframe
0
5,299
60,650,617
Error while running profiling report using pandas, giving me an error "TypeError: describe_boolean_1d() got an unexpected keyword argument 'title'"
<p>I am using the code below</p> <pre><code>profile = ProfileReport(df, title='Pandas Profiling Report', html={'style':{'full_width':True}}) </code></pre> <p>to run a profile report but getting this error</p> <pre><code>"TypeError: describe_boolean_1d() got an unexpected keyword argument 'title'" </code></pre> <p>And when I remove the title, I get this error</p> <pre><code>"TypeError: describe_boolean_1d() got an unexpected keyword argument 'html'" </code></pre>
<p>I have conda on my machine, so I first tried installing pandas-profiling with conda, but had the same error as you.</p> <p>I removed it: <code>conda remove pandas-profiling</code> and then reinstalled it with pip: <code>pip install pandas-profiling[notebook,html]</code></p> <p>And it worked fine.</p> <p>Note: the visualization doesn't work in the Edge browser; Chrome is fine.</p> <p>Best, Victor</p>
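<p>Once the pip install succeeds, a minimal usage sketch (pandas-profiling 2.x API; the output file name is arbitrary) that accepts the <code>title</code> argument without the error from the question:</p> <pre><code>
from pandas_profiling import ProfileReport

profile = ProfileReport(df, title='Pandas Profiling Report')
profile.to_file('report.html')   # or profile.to_notebook_iframe() inside Jupyter
</code></pre>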
pandas
3