Dataset schema: QuestionId (int64, 74.8M to 79.8M), UserId (int64, 56 to 29.4M), QuestionTitle (string, 15 to 150 chars), QuestionBody (string, 40 to 40.3k chars), Tags (string, 8 to 101 chars), CreationDate (date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18), AnswerCount (int64, 0 to 44), UserExpertiseLevel (int64, 301 to 888k), UserDisplayName (string, 3 to 30 chars)
76,694,321
4,913,254
How to put value over bars when creating many barplots with a loop
<p>How can I put the value over the bars when creating many plots in a loop?</p> <p>This is the code I am using:</p> <pre><code>years = [twenty, twentyone, twentytwo, twentythree]
for year in years:
    plt.ylim(0, 60)
    ax = sns.barplot(data=year, errorbar=None)
    ax.set_xticklabels(ax.get_xticklabels(), rotation=45, horizontalalignment='right')
    for i in ax.containers:
        ax.bar_label(i,)
    plt.xlabel('Type of error')
    # Set plot title and save image
    if year is twenty:
        plt.title('2020')
        plt.savefig(f'barplot_2020.png', bbox_inches=&quot;tight&quot;)
    elif year is twentyone:
        plt.title('2021')
        plt.savefig(f'barplot_2021.png', bbox_inches=&quot;tight&quot;)
    elif year is twentytwo:
        plt.title('2022')
        plt.savefig(f'barplot_2022.png', bbox_inches=&quot;tight&quot;)
    elif year is twentythree:
        plt.title('2023')
        plt.savefig(f'barplot_2023.png', bbox_inches=&quot;tight&quot;)
        #ax.bar_label(ax.containers[0], fmt=&quot;%.1f&quot;)
</code></pre> <p>I have also tried putting some code in the <code>if</code> branches, as shown in the last <code>elif</code>, but the result is always the same, as shown below.</p> <p><a href="https://i.sstatic.net/R4CbDm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R4CbDm.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/biyxJm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/biyxJm.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/l0qmLm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l0qmLm.png" alt="enter image description here" /></a></p>
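One likely cause, assuming all plots end up looking the same, is that every iteration draws into the same implicit figure, so labels and bars accumulate. A minimal sketch (pure matplotlib, with hypothetical per-year data standing in for the real DataFrames) that creates a fresh figure per iteration:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Hypothetical stand-ins for the real per-year data.
years = {"2020": {"error_a": 10, "error_b": 25},
         "2021": {"error_a": 15, "error_b": 30}}

for year, counts in years.items():
    fig, ax = plt.subplots()            # fresh figure each iteration
    ax.bar(list(counts), list(counts.values()))
    ax.set_ylim(0, 60)
    for container in ax.containers:     # write each bar's height above it
        ax.bar_label(container)
    ax.set_title(year)
    ax.set_xlabel("Type of error")
    fig.savefig(f"barplot_{year}.png", bbox_inches="tight")
    plt.close(fig)                      # drop state before the next plot
```

Keying the loop on a dict also removes the `if`/`elif` chain: the year string doubles as the title and the file name.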
<python><matplotlib><seaborn><bar-chart><plot-annotations>
2023-07-15 14:52:59
1
1,393
Manolo Dominguez Becerra
76,694,314
20,266,647
k8s/MLRun, issue with scale to zero
<p>I would like to save resources (RAM and CPU) for a real-time classification function, so I used scale to zero in MLRun/K8s, but it did not work; see the information from Grafana (I expected zero replicas two minutes after the last call):</p> <p><a href="https://i.sstatic.net/3M1cM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3M1cM.png" alt="enter image description here" /></a></p> <p>I used this code with scale to zero after 2 minutes (idle-time duration):</p> <pre><code>fn.spec.replicas = 1
fn.spec.min_replicas = 0
fn.spec.max_replicas = 2
fn.set_config(key=&quot;spec.scaleToZero.scaleResources&quot;,
              value=[{&quot;metricName&quot;: &quot;nuclio_processor_handled_events_total&quot;,
                      &quot;windowSize&quot;: &quot;2m&quot;,
                      &quot;threshold&quot;: 0}])
</code></pre> <p>Has anyone had the same issue?</p>
<python><kubernetes><mlrun><nuclio>
2023-07-15 14:50:58
1
1,390
JIST
76,694,215
10,240,583
Python type casting when preallocating list
<p>This question might already have an answer, so please guide me to one if you know any. I couldn't find one myself, though this question feels like a common one.</p> <p>So, consider the following pattern:</p> <pre class="lang-py prettyprint-override"><code>arr = [None] * n
for i in range(n):
    # do some computations
    # ...
    # even more computations
    arr[i] = MyClass(some_computed_value)
</code></pre> <p>So far so good, I tend to use this pattern from time to time. Now, let us be thorough in our attempt to provide all the code with type annotations. The problem is that we preallocate our array with <code>None</code>s, so it has the type <code>list[None]</code>. But we want it to be <code>list[MyClass]</code>. How do we proceed?</p> <p>The most straightforward solution is making it optional:</p> <pre class="lang-py prettyprint-override"><code>arr: list[Optional[MyClass]] = [None] * n
</code></pre> <p>This solves the type checker issue, but now it's our issue, since that <code>Optional</code> prohibits us from performing even basic operations on the result:</p> <pre class="lang-py prettyprint-override"><code>arr[0].my_method()  # error: NoneType has no attribute &quot;my_method&quot;
</code></pre> <p>Long story short, I end up with the following pattern:</p> <pre class="lang-py prettyprint-override"><code>arr_: Any = [None] * n
for i in range(n):
    # ...
    arr_[i] = MyClass(some_computed_value)
arr = typing.cast(list[MyClass], arr_)
</code></pre> <p>This is ugly, inconvenient, barely readable boilerplate. What do you do?</p>
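For comparison, one way to keep the element type precise without any cast is to drop the preallocation and append to an empty, explicitly annotated list. A sketch (`MyClass` here is a stand-in for the real class):

```python
class MyClass:
    """Stand-in for the real class being constructed in the loop."""
    def __init__(self, value: int) -> None:
        self.value = value

    def my_method(self) -> int:
        return self.value * 2

n = 3

# Appending to an empty annotated list gives list[MyClass] with no cast:
arr: list[MyClass] = []
for i in range(n):
    # ... computations ...
    arr.append(MyClass(i))

# When the body fits in an expression, a comprehension does the same:
arr2 = [MyClass(i) for i in range(n)]

# Both pass a type checker and support method calls directly:
assert arr[0].my_method() == 0
assert arr2[2].my_method() == 4
```

This trades the O(1) preallocated writes for amortized O(1) appends, which is usually a wash in practice.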
<python><mypy><typing>
2023-07-15 14:27:17
2
448
heinwol
76,694,139
2,006,706
Pandas - Merge records that span two rows in a document
<p>I'm parsing account statements with tabula and getting a pandas <code>DataFrame</code> object that contains the data extracted from the document. Some records span two rows because of their long descriptions. I need to merge them into one for further processing.</p> <p>This is an example of such data:</p> <pre><code>| Description                 | Withdrawals | Deposits |
| --------------------------- | ----------- | -------- |
| e-Transfer - Autodeposit    |             |          |
| AF6hdfUdV                   |             | 17.45    |
| Credit Card Payment         | 46.78       |          |
</code></pre> <p>The first of the two rows has only a description. The next row has a description that needs to be merged with the first row's.</p> <p>I have tried various <code>groupby</code> calls but can't figure out working parameters for my case. Is there a way to do this without iterating over rows?</p>
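One loop-free sketch, assuming (as in the example) that a row whose two amount columns are both empty is the first half of a record: shift that dangling description into the following row, then drop it.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Description": ["e-Transfer - Autodeposit", "AF6hdfUdV", "Credit Card Payment"],
    "Withdrawals": [np.nan, np.nan, 46.78],
    "Deposits":    [np.nan, 17.45, np.nan],
})

# Rows with no amounts at all are the first half of a two-row record.
only_desc = df[["Withdrawals", "Deposits"]].isna().all(axis=1)

# Prepend such a row's description to the row that follows it.
df["Description"] = np.where(
    only_desc.shift(fill_value=False),
    df["Description"].shift() + " " + df["Description"],
    df["Description"],
)

# Drop the now-merged half rows.
merged = df[~only_desc].reset_index(drop=True)
```

This assumes a record never spans more than two rows; longer wraps would need a cumulative-sum group key and a `groupby` with `" ".join` on the description.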
<python><pandas>
2023-07-15 14:06:18
2
782
bronislav
76,694,067
726,730
PyQt5 - QTabWidget - How can I have separate heights for each tab
<p>Example code:</p> <pre class="lang-py prettyprint-override"><code># -*- coding: utf-8 -*- # Form implementation generated from reading ui file 'tab_widget.ui' # # Created by: PyQt5 UI code generator 5.15.7 # # WARNING: Any manual changes made to this file will be lost when pyuic5 is # run again. Do not edit this file unless you know what you are doing. from PyQt5 import QtCore, QtGui, QtWidgets class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName(&quot;MainWindow&quot;) MainWindow.resize(800, 570) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName(&quot;centralwidget&quot;) self.gridLayout = QtWidgets.QGridLayout(self.centralwidget) self.gridLayout.setObjectName(&quot;gridLayout&quot;) self.tabWidget = QtWidgets.QTabWidget(self.centralwidget) self.tabWidget.setObjectName(&quot;tabWidget&quot;) self.tab = QtWidgets.QWidget() self.tab.setObjectName(&quot;tab&quot;) self.gridLayout_2 = QtWidgets.QGridLayout(self.tab) self.gridLayout_2.setObjectName(&quot;gridLayout_2&quot;) self.label = QtWidgets.QLabel(self.tab) self.label.setObjectName(&quot;label&quot;) self.gridLayout_2.addWidget(self.label, 0, 0, 1, 1) self.label_2 = QtWidgets.QLabel(self.tab) self.label_2.setObjectName(&quot;label_2&quot;) self.gridLayout_2.addWidget(self.label_2, 1, 0, 1, 1) self.tabWidget.addTab(self.tab, &quot;&quot;) self.tab_2 = QtWidgets.QWidget() self.tab_2.setObjectName(&quot;tab_2&quot;) self.gridLayout_3 = QtWidgets.QGridLayout(self.tab_2) self.gridLayout_3.setObjectName(&quot;gridLayout_3&quot;) self.label_29 = QtWidgets.QLabel(self.tab_2) self.label_29.setObjectName(&quot;label_29&quot;) self.gridLayout_3.addWidget(self.label_29, 0, 0, 1, 1) self.label_27 = QtWidgets.QLabel(self.tab_2) self.label_27.setObjectName(&quot;label_27&quot;) self.gridLayout_3.addWidget(self.label_27, 1, 0, 1, 1) self.label_28 = QtWidgets.QLabel(self.tab_2) self.label_28.setObjectName(&quot;label_28&quot;) 
self.gridLayout_3.addWidget(self.label_28, 2, 0, 1, 1) self.label_3 = QtWidgets.QLabel(self.tab_2) self.label_3.setObjectName(&quot;label_3&quot;) self.gridLayout_3.addWidget(self.label_3, 3, 0, 1, 1) self.label_4 = QtWidgets.QLabel(self.tab_2) self.label_4.setObjectName(&quot;label_4&quot;) self.gridLayout_3.addWidget(self.label_4, 4, 0, 1, 1) self.label_5 = QtWidgets.QLabel(self.tab_2) self.label_5.setObjectName(&quot;label_5&quot;) self.gridLayout_3.addWidget(self.label_5, 5, 0, 1, 1) self.label_8 = QtWidgets.QLabel(self.tab_2) self.label_8.setObjectName(&quot;label_8&quot;) self.gridLayout_3.addWidget(self.label_8, 6, 0, 1, 1) self.label_6 = QtWidgets.QLabel(self.tab_2) self.label_6.setObjectName(&quot;label_6&quot;) self.gridLayout_3.addWidget(self.label_6, 7, 0, 1, 1) self.label_7 = QtWidgets.QLabel(self.tab_2) self.label_7.setObjectName(&quot;label_7&quot;) self.gridLayout_3.addWidget(self.label_7, 8, 0, 1, 1) self.label_11 = QtWidgets.QLabel(self.tab_2) self.label_11.setObjectName(&quot;label_11&quot;) self.gridLayout_3.addWidget(self.label_11, 9, 0, 1, 1) self.label_9 = QtWidgets.QLabel(self.tab_2) self.label_9.setObjectName(&quot;label_9&quot;) self.gridLayout_3.addWidget(self.label_9, 10, 0, 1, 1) self.label_10 = QtWidgets.QLabel(self.tab_2) self.label_10.setObjectName(&quot;label_10&quot;) self.gridLayout_3.addWidget(self.label_10, 11, 0, 1, 1) self.label_14 = QtWidgets.QLabel(self.tab_2) self.label_14.setObjectName(&quot;label_14&quot;) self.gridLayout_3.addWidget(self.label_14, 12, 0, 1, 1) self.label_12 = QtWidgets.QLabel(self.tab_2) self.label_12.setObjectName(&quot;label_12&quot;) self.gridLayout_3.addWidget(self.label_12, 13, 0, 1, 1) self.label_13 = QtWidgets.QLabel(self.tab_2) self.label_13.setObjectName(&quot;label_13&quot;) self.gridLayout_3.addWidget(self.label_13, 14, 0, 1, 1) self.label_17 = QtWidgets.QLabel(self.tab_2) self.label_17.setObjectName(&quot;label_17&quot;) self.gridLayout_3.addWidget(self.label_17, 15, 0, 1, 1) self.label_15 = 
QtWidgets.QLabel(self.tab_2) self.label_15.setObjectName(&quot;label_15&quot;) self.gridLayout_3.addWidget(self.label_15, 16, 0, 1, 1) self.label_16 = QtWidgets.QLabel(self.tab_2) self.label_16.setObjectName(&quot;label_16&quot;) self.gridLayout_3.addWidget(self.label_16, 17, 0, 1, 1) self.label_20 = QtWidgets.QLabel(self.tab_2) self.label_20.setObjectName(&quot;label_20&quot;) self.gridLayout_3.addWidget(self.label_20, 18, 0, 1, 1) self.label_18 = QtWidgets.QLabel(self.tab_2) self.label_18.setObjectName(&quot;label_18&quot;) self.gridLayout_3.addWidget(self.label_18, 19, 0, 1, 1) self.label_19 = QtWidgets.QLabel(self.tab_2) self.label_19.setObjectName(&quot;label_19&quot;) self.gridLayout_3.addWidget(self.label_19, 20, 0, 1, 1) self.label_23 = QtWidgets.QLabel(self.tab_2) self.label_23.setObjectName(&quot;label_23&quot;) self.gridLayout_3.addWidget(self.label_23, 21, 0, 1, 1) self.label_21 = QtWidgets.QLabel(self.tab_2) self.label_21.setObjectName(&quot;label_21&quot;) self.gridLayout_3.addWidget(self.label_21, 22, 0, 1, 1) self.label_22 = QtWidgets.QLabel(self.tab_2) self.label_22.setObjectName(&quot;label_22&quot;) self.gridLayout_3.addWidget(self.label_22, 23, 0, 1, 1) self.label_26 = QtWidgets.QLabel(self.tab_2) self.label_26.setObjectName(&quot;label_26&quot;) self.gridLayout_3.addWidget(self.label_26, 24, 0, 1, 1) self.label_24 = QtWidgets.QLabel(self.tab_2) self.label_24.setObjectName(&quot;label_24&quot;) self.gridLayout_3.addWidget(self.label_24, 25, 0, 1, 1) self.label_25 = QtWidgets.QLabel(self.tab_2) self.label_25.setObjectName(&quot;label_25&quot;) self.gridLayout_3.addWidget(self.label_25, 26, 0, 1, 1) self.tabWidget.addTab(self.tab_2, &quot;&quot;) self.gridLayout.addWidget(self.tabWidget, 0, 0, 1, 1) MainWindow.setCentralWidget(self.centralwidget) self.retranslateUi(MainWindow) self.tabWidget.setCurrentIndex(0) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate 
MainWindow.setWindowTitle(_translate(&quot;MainWindow&quot;, &quot;MainWindow&quot;)) self.label.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_2.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab), _translate(&quot;MainWindow&quot;, &quot;Tab 1&quot;)) self.label_29.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_27.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_28.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_3.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_4.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_5.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_8.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_6.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_7.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_11.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_9.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_10.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_14.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_12.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_13.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_17.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_15.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_16.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_20.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_18.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) 
self.label_19.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_23.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_21.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_22.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_26.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_24.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.label_25.setText(_translate(&quot;MainWindow&quot;, &quot;TextLabel&quot;)) self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab_2), _translate(&quot;MainWindow&quot;, &quot;Tab 2&quot;)) if __name__ == &quot;__main__&quot;: import sys app = QtWidgets.QApplication(sys.argv) MainWindow = QtWidgets.QMainWindow() ui = Ui_MainWindow() ui.setupUi(MainWindow) MainWindow.show() sys.exit(app.exec_()) </code></pre> <p>I want each tab's height to be as small as possible (something like a QSizePolicy of Minimum or Fixed for each tab).</p> <p>Edit: I don't want to use a QSpacer in each tab. That's not the point.</p>
<python><pyqt5><qtabwidget><qsizepolicy>
2023-07-15 13:50:32
1
2,427
Chris P
76,694,055
9,415,280
Failing to run a probabilistic TensorFlow model
<p>I built a test TensorFlow model: an LSTM with 2 heads and 2 outputs, one of which is probabilistic. That model works fine. I did the same thing but added more layers, following the same procedure... but this one fails with this error:</p> <pre><code>2023-07-15 09:18:24.407504: W tensorflow/core/common_runtime/bfc_allocator.cc:491] ***********________***********________************______________________________________************
2023-07-15 09:18:24.408219: E tensorflow/stream_executor/dnn.cc:868] OOM when allocating tensor with shape[2866176000] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
2023-07-15 09:18:24.409011: W tensorflow/core/framework/op_kernel.cc:1780] OP_REQUIRES failed at cudnn_rnn_ops.cc:1564 : INTERNAL: Failed to call ThenRnnForward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]: 2, 0, 0 , [num_layers, input_size, num_units, dir_count, max_seq_length, batch_size, cell_num_units]: [1, 240, 240, 1, 120, 19904, 240]
Traceback (most recent call last):
  File &quot;E:\Anaconda3\envs\tf2.7_bigData\lib\site-packages\keras\utils\traceback_utils.py&quot;, line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File &quot;E:\Anaconda3\envs\tf2.7_bigData\lib\site-packages\tensorflow\python\eager\execute.py&quot;, line 54, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InternalError: Exception encountered when calling layer &quot;Extracteur_feature2&quot; &quot; f&quot;(type LSTM).
{{function_node __wrapped__CudnnRNN_device_/job:localhost/replica:0/task:0/device:GPU:0}} Failed to call ThenRnnForward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]: 2, 0, 0 , [num_layers, input_size, num_units, dir_count, max_seq_length, batch_size, cell_num_units]: [1, 240, 240, 1, 120, 19904, 240] [Op:CudnnRNN]

Call arguments received by layer &quot;Extracteur_feature2&quot; &quot; f&quot;(type LSTM):
  • inputs=tf.Tensor(shape=(19904, 120, 240), dtype=float32)
  • mask=None
  • training=False
  • initial_state=None

Process finished with exit code 1
</code></pre> <p>The model is built like this:</p> <pre><code>def build_model(num_timesteps_in, nb_features, nb_attributs, nb_lstm_units, probalistic_model=True):
    &quot;&quot;&quot;
    Build a model with TensorFlow
    :param num_timesteps_in: number of observation days, including the day to forecast, in the input
    :param nb_features: number of features used as input (excluding inflows if not assimilated)
    :param nb_attributs: number of physiographic attributes used as input
    :param nb_lstm_units: number of neurons per layer
    :return: a model and the checkpoint to launch training
    &quot;&quot;&quot;
    # allocate memory on demand instead of claiming all GPU memory up front
    gpu_devices = tf.config.experimental.list_physical_devices(&quot;GPU&quot;)
    for device in gpu_devices:
        tf.config.experimental.set_memory_growth(device, True)

    def negative_loglikelihood(targets, estimated_distribution):
        return -estimated_distribution.log_prob(targets)

    tfd = tfp.distributions
    timeseries_input = tf.keras.Input(shape=(num_timesteps_in, nb_features))
    attrib_input = tf.keras.Input(shape=(nb_attributs,))
    xy = tf.keras.layers.LSTM(nb_lstm_units,  # activation='softsign'
                              kernel_initializer=tf.keras.initializers.glorot_uniform(),
                              return_sequences=True, stateful=False,
                              name='Extracteur_feature1')(timeseries_input)
    xy = tf.keras.layers.Dropout(0.2)(xy)
    xy = tf.keras.layers.LSTM(nb_lstm_units,  # activation='softsign'
                              kernel_initializer=tf.keras.initializers.glorot_uniform(),
                              return_sequences=True, stateful=False,
                              name='Extracteur_feature2')(xy)
    xy = tf.keras.layers.Dropout(0.2)(xy)
    xy = tf.keras.layers.LSTM(nb_lstm_units,  # activation='softsign'
                              kernel_initializer=tf.keras.initializers.glorot_uniform(),
                              return_sequences=False, stateful=False,
                              name='Extracteur_feature3')(xy)
    xy = tf.keras.layers.Dropout(0.2)(xy)
    allin_input = tf.keras.layers.Concatenate(axis=1, name='merged_head')([xy, attrib_input])
    allin_input = tf.keras.layers.Dense(nb_attributs, activation='softsign',
                                        kernel_initializer=tf.keras.initializers.he_uniform(),
                                        name='Dense111')(allin_input)
    allin_input = tf.keras.layers.Dropout(0.2)(allin_input)
    allin_input = tf.keras.layers.Dense(nb_attributs, activation='softsign',
                                        kernel_initializer=tf.keras.initializers.he_uniform(),
                                        name='Dense222')(allin_input)
    outputs = tf.keras.layers.Dropout(0.2)(allin_input)
    if probalistic_model:
        ################### probability block ##########################
        prevision = tf.keras.layers.Dense(1, activation='linear', name='deterministe_1')(outputs)
        probabilist = tf.keras.layers.Dense(2, activation='linear', name='probabilist_2')(outputs)
        probabilist = tfp.layers.DistributionLambda(
            lambda t: tfd.Normal(loc=t[..., :1],
                                 scale=1e-3 + tf.math.softplus(0.05 * t[..., 1:])),
            name='normal_dist')(probabilist)
        # note: 1e-3 avoids numerical problems
        # 0.5 not entirely clear, possibly helps speed up optimization / avoid local minima...
        # https://github.com/tensorflow/probability/issues/703
        ################### end probability block ######################
        model = tf.keras.Model(inputs=[timeseries_input, attrib_input],
                               outputs=[prevision, probabilist])
        model.summary()
        # with adam [.001 to .0005], optimal results AND speed
        optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
        loss = {'deterministe_1': 'mse', 'normal_dist': negative_loglikelihood}
        model.compile(optimizer=optimizer, loss=loss, loss_weights=[1, 1])
    else:
        outputs = tf.keras.layers.Dense(1, activation='linear', name='deterministe')(allin_input)
        model = tf.keras.Model(inputs=[timeseries_input, attrib_input], outputs=outputs)
        model.summary()
        optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)  # with adam [.001 to .0005], optimal results AND speed
        loss = 'mse'
        model.compile(optimizer=optimizer, loss=loss)
    return model, optimizer, loss
</code></pre> <p>I train in many steps, so I reload the best iteration and re-compile it, because the custom loss function causes problems if this is done any other way. The loss function and optimizer are defined earlier as copies of the ones used in <code>build_model</code>.</p> <pre><code>def negative_loglikelihood(targets, estimated_distribution):
    return -estimated_distribution.log_prob(targets)

loss = {'deterministe_1': 'mse', 'normal_dist': negative_loglikelihood}
</code></pre> <pre><code>model = tensorflow.keras.models.load_model('path_to_model/model.h5', compile=False)
model.compile(optimizer=optimizer, loss=loss_fct)
</code></pre> <p>I don't understand this error. This model was tested many times without tf.probability and works fine (the LSTM input shapes are OK...). What is new is adding the second tf.probability output (which works fine in a simpler version) and reloading with <code>compile=False</code> and recompiling (which also works fine with the simpler model).</p> <p>I have been working on this problem for 3 weeks and I'm out of ideas to try...</p> <p>tensorflow 2.10, tensorflow-probability 0.14.0, Windows/Anaconda</p>
<python><tensorflow><tensorflow-probability>
2023-07-15 13:47:45
1
451
Jonathan Roy
76,694,034
4,575,197
Sending a parameter to GDELT API is not returning the expected result
<p>I'm trying to connect to the <a href="https://blog.gdeltproject.org/gdelt-doc-2-0-api-debuts/" rel="nofollow noreferrer">GDELT Doc API</a>, and so far I can send all the query parameters except for one of them.</p> <p>Calling my method sends a request to the GDELT Doc API and returns the list of websites on which a given search word appears. The API has many filters, one of which is Domain.</p> <p>When I enter a domain, for <em>example: cnn.com</em>, it does not return only the websites from that domain, which is the behaviour I expected.</p> <p>Has anyone any experience with this?</p>
<python><gdelt>
2023-07-15 13:42:54
1
10,490
Mostafa Bouzari
76,693,807
7,657,180
Save xlsx file using python win32com
<p>I am trying to save an Excel file that is already open, using these lines:</p> <pre><code>import win32com.client as win32
import pywintypes

xlapp = win32.DispatchEx('Excel.Application')
try:
    wb = xlapp.Workbooks('MyData.xlsx')
except pywintypes.com_error:
    print('The file MyData.xlsx is not open.')
    exit()
wb.Save()
xlapp.Quit()
</code></pre> <p>But I got the message <code>The file MyData.xlsx is not open.</code> printed at the console, although the file is open!</p>
<python>
2023-07-15 12:44:03
1
9,608
YasserKhalil
76,693,593
649,920
Is np.trapz linear in its function variable
<p>I am given an equation <code>y(x') = Integral(f(x, x')*g(x), x in X)</code>. I'm also given a function <code>f</code> and lists of <code>X</code> values and <code>Y</code> values of length <code>n</code>. What I need is to find a list <code>g</code> of the same length such that <code>Z[i] = np.trapz(f(X, X[i])*g, X)</code> satisfies <code>Z[i] = Y[i]</code>.</p> <p>I first did some analytic exploration and found how to express <code>g</code> from the original equation, but once I do this numerically I find that <code>Z[i] - Y[i]</code> is a straight line with a close-to-zero slope, yet it still introduces some consistent error.</p> <p>I hence thought that maybe I need to solve for <code>g</code> directly in <code>np.trapz(f(X, X[i])*g, X) == Y[i]</code>. This should be very simple if <code>np.trapz</code> is linear in its first argument, because then it leads to a system of linear equations for <code>g</code>. So I wonder whether the linearity indeed holds.</p>
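The trapezoidal rule is a fixed weighted sum of the samples, so it is linear in its first argument. A quick numerical check (note `np.trapz` was renamed `np.trapezoid` in NumPy 2.0, so the sketch accepts either):

```python
import numpy as np

# np.trapz was renamed np.trapezoid in NumPy 2.0; accept either name.
trapezoid = getattr(np, "trapezoid", None) or np.trapz

X = np.linspace(0.0, 1.0, 101)
f = np.sin(X)
g = X ** 2
a, b = 2.5, -1.3

# Linearity: trapz(a*f + b*g, X) == a*trapz(f, X) + b*trapz(g, X)
lhs = trapezoid(a * f + b * g, X)
rhs = a * trapezoid(f, X) + b * trapezoid(g, X)
assert np.isclose(lhs, rhs)
```

Consequently, requiring `np.trapz(f(X, X[i]) * g, X) == Y[i]` for all `i` is a linear system `A @ g == Y` with `A[i, j] = w[j] * f(X[j], X[i])`, where `w` holds the trapezoidal weights for the grid `X`.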
<python><numpy>
2023-07-15 11:48:26
1
357
SBF
76,693,541
4,220,282
Why does NumPy return a different type for arrays and scalars?
<p>I have some whole numbers stored in <code>np.float64</code> arrays and scalars, which I want to convert to native Python <code>int</code>.</p> <p>This is my attempt:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

a = np.array([1, 2, 3], dtype=np.float64)
b = np.float64(4)

def float_to_int(x):
    x_object = x.astype(object)
    return np.floor(x_object)

# Array inputs are converted to int
print(type(float_to_int(a)[0]))
# &gt;&gt; &lt;class 'int'&gt;

# Scalar inputs are left as np.float64
print(type(float_to_int(b)))
# &gt;&gt; &lt;class 'numpy.float64'&gt;
</code></pre> <p>There are 3 things I don't understand here:</p> <ol> <li>Why is the type casting different for scalars and arrays?</li> <li>Why did <code>np.floor()</code> do type casting at all (for array inputs)?</li> <li>How can I reliably cast <code>np.float64</code> to <code>int</code> for scalars and arrays?</li> </ol>
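For question 3, one uniform approach (a sketch, not the only option) is to funnel both cases through `np.asarray` and convert back to native Python types with `.tolist()`, which returns a plain `int` for a 0-d array:

```python
import numpy as np

def float_to_int(x):
    # asarray turns a scalar into a 0-d array and passes arrays through;
    # floor + astype(int) stays in NumPy, tolist() yields native Python ints
    # (a bare int for 0-d input, a list of ints otherwise).
    return np.floor(np.asarray(x)).astype(int).tolist()

a = np.array([1, 2, 3], dtype=np.float64)
b = np.float64(4)

assert float_to_int(a) == [1, 2, 3]
assert all(isinstance(v, int) for v in float_to_int(a))
assert float_to_int(b) == 4 and isinstance(float_to_int(b), int)
```

The same `astype(int)` / `.item()` pair works when you want to stay in NumPy for arrays and only unbox scalars.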
<python><arrays><numpy><type-conversion><numpy-ndarray>
2023-07-15 11:31:55
2
946
Harry
76,693,414
16,383,578
How to optimize splitting overlapping ranges?
<p>This is a Python script I wrote to split overlapping ranges into unique ranges (<a href="https://codereview.stackexchange.com/questions/285932/python-script-to-split-overlapping-ranges-version-4">last iteration</a>). It produces correct output and outperforms <a href="https://stackoverflow.com/a/76566821/16383578">the version in the answer</a>. I tested its output against the correct method's output and against the output of a brute-force approach.</p> <p>An infinite number of boxes arranged in a line are numbered. Each can hold only one object: whatever was last put into the box. They are initially empty. For a list of triplets, the first two elements of each are integers (the first no greater than the second), and each triplet represents an instruction to put the third element into the boxes. The triplet <code>(0, 10, 'A')</code> means &quot;put <code>'A'</code> into boxes 0 to 10 (inclusive)&quot;, and after execution of the instruction, boxes 0 to 10 contain an instance of <code>'A'</code>. Empty boxes are to be ignored. Describe the state of the boxes after executing all instructions using the least number of triplets. There are narrow cases and general cases:</p> <p>Narrow case: Given triplets <code>(s1, e1, d1)</code> and <code>(s2, e2, d2)</code>, <code>s1 &lt; s2 &lt; e1 &lt; e2</code> is always <code>False</code> (all pairings in the input conform to this). There are four sub-cases:</p> <ul> <li><p>Case 1: s1 = s2 and e2 &lt; e1:</p> <p>Whichever ends first wins; boxes always hold the value from the winner. Given <code>[(0, 10, 'A'), (0, 5, 'B')]</code>, after execution the state of the boxes is <code>[(0, 5, 'B'), (6, 10, 'A')]</code>.</p> </li> <li><p>Case 2: s1 &lt; s2 and e2 &lt; e1:</p> <p>Whichever ends first wins, for example: <code>[(0, 10, 'A'), (5, 7, 'B')]</code> -&gt; <code>[(0, 4, 'A'), (5, 7, 'B'), (8, 10, 'A')]</code>.</p> </li> <li><p>Case 3: s1 &lt; s2 and e2 = e1:</p> <p>Same rule as above, but here it is a tie. 
In case of ties, whichever starts later wins: <code>[(0, 10, 'A'), (6, 10, 'B')]</code> -&gt; <code>[(0, 5, 'A'), (6, 10, 'B')]</code>.</p> </li> <li><p>Case 4: s1 = s2 and e1 = e2:</p> <p>This is special. In case of true ties, whichever comes later in the input wins. My code doesn't guarantee objects in boxes are from latest instructions in cases like this (but it guarantees boxes have only one object).</p> </li> </ul> <p>Additional rules:</p> <ul> <li><p>If there are no updates, do nothing.</p> <p><code>[(0, 10, 'A'), (5, 6, 'A')]</code> -&gt; <code>[(0, 10, 'A')]</code></p> </li> <li><p>If there are gaps, leave them as is (those boxes are empty):</p> <p><code>[(0, 10, 'A'), (15, 20, 'B')]</code> -&gt; <code>[(0, 10, 'A'), (15, 20, 'B')]</code></p> </li> <li><p>If e1 + 1 = s2 and d1 = d2, join them (least amount of triplets).</p> <p><code>[(0, 10, 'A'), (11, 20, 'A')]</code> -&gt; <code>[(0, 20, 'A')]</code></p> </li> <li><p>The general case is when the primary condition of the narrow case doesn't hold, in case of true intersections. 
Whichever starts later wins.</p> <p><code>[(0, 10, 'A'), (5, 20, 'B')]</code> -&gt; <code>[(0, 4, 'A'), (5, 20, 'B')]</code></p> </li> </ul> <p>Brute force is correct but slow and can handle the general case:</p> <pre class="lang-py prettyprint-override"><code>def brute_force_discretize(ranges):
    numbers = {}
    ranges.sort(key=lambda x: (x[0], -x[1]))
    for start, end, data in ranges:
        numbers |= {n: data for n in range(start, end + 1)}
    numbers = list(numbers.items())
    l = len(numbers)
    i = 0
    output = []
    while i &lt; l:
        di = 0
        curn, curv = numbers[i]
        while i &lt; l and curn + di == numbers[i][0] and curv == numbers[i][1]:
            i += 1
            di += 1
        output.append((curn, numbers[i-1][0], curv))
    return output
</code></pre> <p>Smart implementation is performant but only handles the narrow case:</p> <pre class="lang-py prettyprint-override"><code>from typing import Any, List, Tuple

def get_nodes(ranges: List[Tuple[int, int, Any]]) -&gt; List[Tuple[int, int, Any]]:
    nodes = []
    for ini, fin, data in ranges:
        nodes.extend([(ini, False, data), (fin, True, data)])
    return sorted(nodes)

def merge_ranges(data: List[List[int | Any]], range: List[int | Any]) -&gt; None:
    if not data or range[2] != (last := data[-1])[2] or range[0] &gt; last[1] + 1:
        data.append(range)
    else:
        last[1] = range[1]

def discretize_narrow(ranges):
    nodes = get_nodes(ranges)
    output = []
    stack = []
    actions = []
    for node, end, data in nodes:
        if not end:
            action = False
            if not stack or data != stack[-1]:
                if stack and start &lt; node:
                    merge_ranges(output, [start, node - 1, stack[-1]])
                stack.append(data)
                start = node
                action = True
            actions.append(action)
        elif actions.pop(-1):
            if start &lt;= node:
                merge_ranges(output, [start, node, stack.pop(-1)])
            start = node + 1
        else:
            stack.pop(-1)
    return output
</code></pre> <p><a href="https://raw.githubusercontent.com/Estrangeling/Eleyi/main/discretize.py" rel="nofollow noreferrer">Full script</a> that generates narrow cases. 
Performance:</p> <pre><code>In [518]: sample = make_sample(2048, 65536, 16)

In [519]: %timeit descretize(sample)
4.46 ms ± 32.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [520]: %timeit discretize_narrow(sample)
3.13 ms ± 34.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [521]: list(map(tuple, discretize_narrow(sample))) == descretize(sample)
Out[521]: True
</code></pre> <p>How can I make this faster? Inputs are sorted in ascending order, but my code assumes they aren't. If I split the top-level data into non-overlapping ranges (taking advantage of the input being sorted), accumulate triplets on a stack while there are overlaps, push the stack to discretize and clear it when the overlaps end, and then join the intermediate results, the code could be faster. I can't get this working.</p> <p>And how can I handle the general case? I need to know when ranges end (four states) but can't handle it correctly:</p> <pre class="lang-py prettyprint-override"><code>def get_quadruples(ranges):
    nodes = []
    for ini, fin, data in ranges:
        nodes.extend([(ini, False, -fin, data), (fin, True, ini, data)])
    return sorted(nodes)
</code></pre> <p>I can't post on Code Review because this is a &quot;how&quot; question (I haven't achieved my goals). To demonstrate the Interval Tree's slowness, <a href="https://codereview.stackexchange.com/a/285124/234107">this answer on Code Review</a> uses it:</p> <pre><code>In [531]: %timeit discretize_narrow([(0, 10, 'A'), (0, 1, 'B'), (2, 5, 'C'), (3, 4, 'C'), (6, 7, 'C'), (8, 8, 'D'), (110, 150, 'E'), (250, 300, 'C'), (256, 270, 'D'), (295, 300, 'E'), (500, 600, 'F')])
14.3 µs ± 42.5 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)

In [532]: %timeit merge_rows([(0, 10, 'A'), (0, 1, 'B'), (2, 5, 'C'), (3, 4, 'C'), (6, 7, 'C'), (8, 8, 'D'), (110, 150, 'E'), (250, 300, 'C'), (256, 270, 'D'), (295, 300, 'E'), (500, 600, 'F')])
891 µs ± 12.9 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

In [533]: data = [(0, 10, 'A'), (0, 1, 'B'), (2, 5, 'C'), (3, 4, 'C'), (6, 7, 'C'), (8, 8, 'D'), (110, 150, 'E'), (250, 300, 'C'), (256, 270, 'D'), (295, 300, 'E'), (500, 600, 'F')]

In [534]: merge_rows(data) == discretize_narrow(data)
Out[534]: True

In [535]: sample = make_sample(256, 65536, 16)

In [536]: merge_rows(sample) == discretize_narrow(sample)
Out[536]: True

In [537]: %timeit discretize_narrow(sample)
401 µs ± 3.33 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

In [538]: %time result = merge_rows(sample)
CPU times: total: 78.1 ms
Wall time: 56 ms
</code></pre> <p>It is not efficient, and I don't think I can improve it. Test cases can be generated using the code on the linked GitHub page. The logic is the same as a dictionary update, and the brute-force implementation uses dictionary updates to process the ranges (so it is correct by definition). Correctness of the smart approach was verified against the brute-force output.</p> <p>Manual test cases:</p> <pre><code>In [539]: discretize_narrow([(0, 10, 'A'), (0, 1, 'B'), (2, 5, 'C'), (3, 4, 'C'), (6, 7, 'C'), (8, 8, 'D'), (110, 150, 'E'), (250, 300, 'C'), (256, 270, 'D'), (295, 300, 'E'), (500, 600, 'F')])
Out[539]: [[0, 1, 'B'], [2, 7, 'C'], [8, 8, 'D'], [9, 10, 'A'], [110, 150, 'E'], [250, 255, 'C'], [256, 270, 'D'], [271, 294, 'C'], [295, 300, 'E'], [500, 600, 'F']]

In [540]: discretize_narrow([(0, 100, 'A'), (10, 25, 'B'), (15, 25, 'C'), (20, 25, 'D'), (30, 50, 'E'), (40, 50, 'F'), (60, 80, 'G'), (150, 180, 'H')])
Out[540]: [[0, 9, 'A'], [10, 14, 'B'], [15, 19, 'C'], [20, 25, 'D'], [26, 29, 'A'], [30, 39, 'E'], [40, 50, 'F'], [51, 59, 'A'], [60, 80, 'G'], [81, 100, 'A'], [150, 180, 'H']]
</code></pre> <p>Machine-generated general case (and correct output):</p> <pre><code>In [542]: ranges = []

In [543]: for _ in range(20):
     ...:     start = random.randrange(100)
     ...:     end = random.randrange(100)
     ...:     if start &gt; end:
     ...:         start, end = end, start
     ...:     ranges.append([start, end, random.randrange(5)])

In [544]: ranges.sort()

In [545]: ranges
Out[545]: [[0, 31, 0], [1, 47, 1], [1, 67, 0], [10, 68, 0], [15, 17, 2], [18, 39, 0], [19, 73, 3], [25, 32, 0], [26, 33, 1], [26, 72, 2], [26, 80, 2], [28, 28, 1], [29, 31, 4], [30, 78, 2], [36, 47, 0], [36, 59, 4], [44, 67, 3], [52, 61, 4], [58, 88, 1], [64, 92, 1]]

In [546]: brute_force_discretize(ranges)
Out[546]: [(0, 0, 0), (1, 9, 1), (10, 14, 0), (15, 17, 2), (18, 18, 0), (19, 24, 3), (25, 25, 0), (26, 28, 1), (29, 29, 4), (30, 35, 2), (36, 43, 0), (44, 51, 3), (52, 57, 4), (58, 92, 1)]
</code></pre> <p>Function that makes generic cases:</p> <pre><code>def make_generic_case(num, lim, dat):
    ranges = []
    for _ in range(num):
        start = random.randrange(lim)
        end = random.randrange(lim)
        if start &gt; end:
            start, end = end, start
        ranges.append([start, end, random.randrange(dat)])
    ranges.sort()
    return ranges
</code></pre> <p>Sample inputs on which code from the existing answer disagrees with the correct result:</p> <p>Test case:</p> <pre><code>[[0, 31, 0], [1, 47, 1], [1, 67, 0], [10, 68, 0], [15, 17, 2], [18, 39, 0], [19, 73, 3], [25, 32, 0], [26, 33, 1], [26, 72, 2], [26, 80, 2], [28, 28, 1], [29, 31, 4], [30, 78, 2], [36, 47, 0], [36, 59, 4], [44, 67, 3], [52, 61, 4], [58, 88, 1], [64, 92, 1]]
</code></pre> <p>Output:</p> <pre><code>[(0, 0, 0), (1, 9, 1), (10, 14, 0), (15, 17, 2), (18, 18, 0), (19, 24, 3), (25, 25, 0), (26, 28, 1), (29, 29, 4), (30, 35, 2), (36, 43, 0), (44, 51, 3), (52, 57, 4), (58, 88, 1)]
</code></pre> <p>The output is almost the same as the correct one, except for the last range (it should be <code>(58, 92, 1)</code> instead of <code>(58, 88, 1)</code>).</p> <p>Another case:</p> <pre><code>[[4, 104, 4], [22, 463, 2], [24, 947, 2], [36, 710, 1], [37, 183, 1], [39, 698, 7], [51, 438, 4], [60, 450, 7], [120, 383, 2], [130, 193, 7], [160, 562, 5], [179, 443, 6], [186, 559, 6], [217, 765, 2], [221, 635, 2], [240, 515, 3], [263, 843, 3], [274, 759, 6], [288, 389, 5], [296, 298, 6], [333, 1007, 1], [345, 
386, 5], [356, 885, 3], [377, 435, 5], [407, 942, 7], [423, 436, 1], [484, 926, 5], [496, 829, 0], [559, 870, 5], [610, 628, 1], [651, 787, 4], [735, 927, 1], [765, 1002, 1] ] </code></pre> <p>Output:</p> <pre><code>[(4, 21, 4), (22, 35, 2), (36, 38, 1), (39, 50, 7), (51, 59, 4), (60, 119, 7), (120, 129, 2), (130, 159, 7), (160, 178, 5), (179, 216, 6), (217, 239, 2), (240, 273, 3), (274, 287, 6), (288, 295, 5), (296, 298, 6), (299, 332, 5), (333, 344, 1), (345, 355, 5), (356, 376, 3), (377, 406, 5), (407, 422, 7), (423, 436, 1), (437, 483, 7), (484, 495, 5), (496, 558, 0), (559, 609, 5), (610, 628, 1), (629, 650, 5), (651, 734, 4), (735, 927, 1), (928, 942, 7), (943, 1007, 1)] </code></pre> <p>Again almost correct. Last three ranges are wrong (<code>(735, 927, 1), (928, 942, 7), (943, 1007, 1)</code> should be <code>(735, 1007, 1)</code>). I tested other inputs and proposed solution is correct, but on some edge cases it isn't. <a href="https://softwareengineering.stackexchange.com/a/363096">This algorithm</a> fails the narrow case:</p> <pre><code>def discretize_gen(ranges): nodes = get_nodes(ranges) stack = [] for (n1, e1, d1), (n2, e2, _) in zip(nodes, nodes[1:]): if e1: stack.remove(d1) else: stack.append(d1) start = n1 + e1 end = n2 - (not e2) if start &lt;= end and stack: yield start, end, stack[-1] </code></pre> <p>Use like: <code>list(merge(discretize_gen(ranges)))</code>. <code>merge</code> function can be found in answer below. 
It fails for many inputs:</p> <pre><code>def compare_results(ranges): correct = brute_force_discretize(ranges) correct_set = set(correct) output = list(merge(discretize_gen(ranges))) output_set = set(output) errors = output_set ^ correct_set indices = [(i, correct.index(e)) for i, e in enumerate(output) if e not in errors] indices.append((len(output), None)) comparison = [(a, b) if c - a == 1 else (slice(a, c), slice(b, d)) for (a, b), (c, d) in zip(indices, indices[1:])] result = {} for a, b in comparison: key = output[a] val = correct[b] if isinstance(key, list): key = tuple(key) result[key] = val return result </code></pre> <p>Where it fails:</p> <pre><code>[(73, 104, 3), (75, 98, 0), (78, 79, 3), (83, 85, 3), (88, 90, 2)] </code></pre> <p>Comparison:</p> <pre><code>{(73, 74, 3): (73, 74, 3), ((75, 77, 0), (78, 87, 3)): [(75, 77, 0), (78, 79, 3), (80, 82, 0), (83, 85, 3), (86, 87, 0)], ((88, 90, 2), (91, 104, 3)): [(88, 90, 2), (91, 98, 0), (99, 104, 3)]} </code></pre> <p>The answer has a score of 11 and is accepted, yet it doesn't work. How to fix it?</p>
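For the general case, one sketch that stays faithful to the brute force's semantics (sort by `(start, -end)`; later dictionary updates win): sweep half-open boundary events and, between consecutive boundaries, pick the active range that the brute force would have processed last. The function name is made up:

```python
def sweep_discretize(ranges):
    """Sketch of a general-case discretizer: between consecutive event
    boundaries the winning range is the one the brute-force dict update
    would process last, i.e. the max index after sorting by (start, -end).
    Cost is O(n log n) in the number of ranges, not in the covered span."""
    ranges = sorted(ranges, key=lambda r: (r[0], -r[1]))
    events = []                         # (position, open flag, range index)
    for idx, (s, e, _) in enumerate(ranges):
        events.append((s, 1, idx))      # range becomes active at s
        events.append((e + 1, 0, idx))  # half-open: inactive from e + 1
    events.sort(key=lambda t: t[0])
    active, out = set(), []
    i, n = 0, len(events)
    while i < n:
        pos = events[i][0]
        while i < n and events[i][0] == pos:   # apply every event at pos
            _, kind, idx = events[i]
            (active.add if kind else active.discard)(idx)
            i += 1
        if active and i < n:
            nxt = events[i][0]
            data = ranges[max(active)][2]      # latest (start, -end) wins
            if out and out[-1][2] == data and out[-1][1] == pos - 1:
                out[-1][1] = nxt - 1           # extend the previous run
            else:
                out.append([pos, nxt - 1, data])
    return [tuple(seg) for seg in out]
```

On the manual narrow test case above this reproduces the `discretize_narrow` output, and on the machine-generated general case it reproduces the `brute_force_discretize` output, including the `(58, 92, 1)` tail that the existing answer gets wrong.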
<python><python-3.x><algorithm><performance>
2023-07-15 10:55:21
1
3,930
Ξένη Γήινος
76,693,273
7,301,792
How can I display gridlines on a secondary y-axis in a Matplotlib histogram with two y-axes?
<p>I have a dataset of students' scores and their corresponding populations, and I have created a histogram with two y-axes to visualize the data as shown in the image provided. However, I am facing an issue with the gridlines. The gridlines are visible on the left y-axis, but not on the secondary y-axis to the right.</p> <p><a href="https://i.sstatic.net/TcWPZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TcWPZ.png" alt="enter image description here" /></a></p> <p>I started by constructing a DataFrame and using the default Matplotlib style:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import pandas as pd # Set the style plt.style.use('default') # Create the DataFrame sample = pd.DataFrame({ 'score':[595, 594, 593, 592, 591, 590, 589, 588, 587, 586, 585, 584, 583, 582, 581, 580, 579, 578, 577, 576], 'population':[ 705, 745, 716, 742, 722, 746, 796, 750, 816, 809, 815,821, 820, 865, 876, 886, 947, 949, 1018, 967]}) </code></pre> <p>Then, I created a histogram with a secondary y-axis to display the data on the right y-axis:</p> <pre class="lang-py prettyprint-override"><code># Create the histogram fig, ax1 = plt.subplots() hist, bins, patches = ax1.hist(sample['score'], weights=sample['population'], alpha=0.7, color='steelblue', edgecolor='white') ax1.set_xlabel('Score', fontsize=12) ax1.set_ylabel('Population', fontsize=12) ax1.set_yticks(hist[-1::-2]) # Set the y-axis range to start from the minimum value ax1.set_ylim(bottom=hist.min()-100) ax2 = ax1.secondary_yaxis('right') ax2.set_yticks(hist[-2::-2]) </code></pre> <p>Finally, I attempted to set the gridlines using the grid() method:</p> <pre class="lang-py prettyprint-override"><code>ax1.grid(axis='y', color='red', alpha=0.5) ax2.grid(axis='y', color='red', alpha=0.5) </code></pre> <p>However, the gridlines were not visible on the secondary y-axis. 
I tried using different options for the which parameter of the grid() method, such as which='both', which='major', and which='minor', but none of them worked.</p> <p>Can you suggest a solution to this problem?</p>
<python><pandas><matplotlib>
2023-07-15 10:17:19
2
22,663
Wizard
76,693,251
6,822,178
Variations with repetition of r integers in {0...k} that sum to u
<p>Given a set of integers <code>x = {0...k}</code>, I need to find the most efficient algorithm to generate all variations with repetition of <code>r</code> integers from <code>x</code> that sum to <code>u</code>.</p> <p>My first attempt was:</p> <pre class="lang-py prettyprint-override"><code>from itertools import product import numpy as np import pandas as pd k = 10 r = 5 u = 12 x = np.arange(0, k+1) prod = product(x, repeat=r) df = pd.DataFrame(list(prod)) print(f&quot;VR{x.size, r} = {df.index.size} = (k+1)^r = {(k+1)**r}\n&quot;) df = df[df.sum(axis=1)==u] print(df) </code></pre> <pre><code>VR(11, 5) = 161051 = (k+1)^r = 161051 0 1 2 3 4 32 0 0 0 2 10 42 0 0 0 3 9 52 0 0 0 4 8 62 0 0 0 5 7 72 0 0 0 6 6 ... .. .. .. .. .. 146652 10 0 2 0 0 147742 10 1 0 0 1 147752 10 1 0 1 0 147862 10 1 1 0 0 149072 10 2 0 0 0 [1795 rows x 5 columns] </code></pre> <p>But this is highly inefficient because the total number of variations with repetition is <code>VR(k+1, r) = (k+1)^r</code>, and it generates a giant <code>df</code>.</p> <p>Any suggestions?</p>
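A sketch of a direct enumeration (helper name made up) that only ever visits tuples with the right sum, instead of filtering the full Cartesian product: at each position the chosen value `v` is clamped so the remaining positions can still reach the remainder.

```python
def bounded_compositions(r, u, k):
    """Yield all length-r tuples of integers in 0..k that sum to u,
    without enumerating the full (k+1)**r product."""
    if r == 1:
        if 0 <= u <= k:
            yield (u,)
        return
    # the first value v must leave a remainder reachable by r-1 slots
    lo = max(0, u - (r - 1) * k)
    hi = min(k, u)
    for v in range(lo, hi + 1):
        for rest in bounded_compositions(r - 1, u - v, k):
            yield (v,) + rest
```

For `k=10, r=5, u=12` this yields exactly the 1795 rows the filtered product produces, while touching only valid prefixes.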
<python><algorithm><performance><python-itertools><combinatorics>
2023-07-15 10:11:03
2
2,289
Max Pierini
76,693,188
12,470,444
How to get the value of a merged cell in Excel with openpyxl?
<p>I try to read the value of the cells D44 to D47. The problem is that each one is merged with its neighbor cell in column E (D44:E44, for example).</p> <p>How do I get the value of the merged cells to check whether it is, for example, &quot;Yes&quot;?</p> <pre><code> for datei in dateien: # Check whether the file is an Excel file if datei.endswith(&quot;.xlsx&quot;) or datei.endswith(&quot;.xls&quot;): # Full path to the file datei_pfad = os.path.join(ordner_pfad, datei) # Open the Excel file workbook = load_workbook(filename=datei_pfad) # Select the &quot;Declaration&quot; worksheet sheet = workbook[&quot;Declaration&quot;] for zelle in [&quot;D44&quot;, &quot;D45&quot;, &quot;D46&quot;, &quot;D47&quot;]: if sheet[zelle].value == &quot;Yes&quot; or &quot;Unknown&quot;: # Rename the file with the suffix &quot;_ISSUE&quot; neuer_dateiname = datei.replace(&quot;.xlsx&quot;, &quot;_ISSUE.xlsx&quot;) neuer_dateipfad = os.path.join(ordner_pfad, neuer_dateiname) # Save the file workbook.save(filename=neuer_dateipfad) # Delete the old file os.remove(datei_pfad) # Stop the loop because a &quot;Yes&quot; cell was found break else: # Rename the file with the suffix &quot;_OK&quot; neuer_dateiname = datei.replace(&quot;.xlsx&quot;, &quot;_OK.xlsx&quot;) neuer_dateipfad = os.path.join(ordner_pfad, neuer_dateiname) # Save the file workbook.save(filename=neuer_dateipfad) # Delete the old file os.remove(datei_pfad) # Stop the loop because a &quot;Yes&quot; cell was found break </code></pre>
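Two points worth noting (a sketch, not the asker's final code). In openpyxl, as far as I know, a merged range stores its value only on the top-left anchor cell; the other cells are `MergedCell` objects whose `value` is `None`, so for a D44:E44 merge the value really is on D44. Separately, `sheet[zelle].value == "Yes" or "Unknown"` is always truthy, because the non-empty string `"Unknown"` short-circuits the `or`. A pure-Python sketch of both points, modelling merged ranges as `(min_row, min_col, max_row, max_col)` tuples like the bounds openpyxl exposes:

```python
def merged_anchor(merged_bounds, row, col):
    """Map any (row, col) inside a merged range to its top-left anchor,
    which is the only cell that actually stores the value."""
    for r1, c1, r2, c2 in merged_bounds:
        if r1 <= row <= r2 and c1 <= col <= c2:
            return r1, c1
    return row, col

def is_issue(value):
    # correct membership test; `value == "Yes" or "Unknown"` is always truthy
    return value in ("Yes", "Unknown")
```

In the real script, the bounds tuples would come from iterating `sheet.merged_cells.ranges`, and the condition should read `if sheet[zelle].value in ("Yes", "Unknown"):`.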
<python><excel><merge><openpyxl>
2023-07-15 09:52:58
1
497
exec85
76,693,184
649,920
Finding the maximal convex function below a given one
<p>I have a vector X of arguments and Y of values, both length n vectors of floats. I need to construct a vector such that (X, Yc) is a graph of a maximal convex function satisfying Yc &lt;= Y pointwise. A naive algo that comes to my mind is O(n^2) in time, so I'm looking for something faster than that.</p> <p>In math that's something we call a convex hull problem, so I have tried finding any lib in python that will do this, but they all seem to be focused on 2d or 3d problems. Any suggestions what would be the fastest way for me to find Yc?</p> <p>What I have tried: I can use a <code>scipy.spatial</code> library to compute the 2d convex hull</p> <pre><code>import numpy as np from scipy.spatial import ConvexHull import matplotlib.pyplot as plt X = np.linspace(1, 5, 9) Y = np.array([2, 3, 1, 4, 6, 2, 1, 4, 3]) points = np.column_stack((X, Y)) hull = ConvexHull(points) print(&quot;Indices of points forming the convex hull:&quot;, hull.vertices) plt.plot(X, Y, 'o') for simplex in hull.simplices: plt.plot(points[simplex, 0], points[simplex, 1], 'k-') plt.show() </code></pre> <p>which produces the following plot:</p> <p><a href="https://i.sstatic.net/o8R46.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o8R46.png" alt="enter image description here" /></a></p> <p>however, the vector Yc I need is the lower boundary of this plot, and I am not sure what's the easiest way to obtain it.</p>
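The lower boundary can be taken directly from Andrew's monotone-chain construction: since X is already sorted, a single pass keeps exactly the lower-hull vertices, and linear interpolation between them gives Yc at every original X. A dependency-free sketch (function names made up):

```python
def lower_hull(points):
    """Monotone-chain lower hull of points already sorted by x; O(n)."""
    hull = []
    for p in points:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the middle vertex if it lies on or above the new chord
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def greatest_convex_minorant(X, Y):
    """Evaluate the maximal convex function below Y at every x in X."""
    hull = lower_hull(list(zip(X, Y)))
    Yc, j = [], 0
    for x in X:
        while j + 1 < len(hull) and hull[j + 1][0] <= x:
            j += 1
        if j + 1 < len(hull):
            (x1, y1), (x2, y2) = hull[j], hull[j + 1]
            Yc.append(y1 + (y2 - y1) * (x - x1) / (x2 - x1))
        else:
            Yc.append(hull[j][1])
    return Yc
```

On the sample data the hull vertices are `(1, 2), (2, 1), (4, 1), (5, 3)`, i.e. the lower edge of the `ConvexHull` plot, and interpolating them yields Yc.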
<python><algorithm><convex-hull>
2023-07-15 09:51:28
1
357
SBF
76,693,167
1,488,641
How to get installed package from broken virtual environment python
<p>I had a venv with Django installed on Python 3.8. After upgrading my Ubuntu, my virtual environment doesn't work anymore; it doesn't recognize the pip command and returns:</p> <pre><code>ModuleNotFoundError: No module named 'pip' </code></pre> <p>After searching I found out that the virtual environment breaks after upgrading, and that the best fix is to remove the venv directory, create it again and reinstall the packages. But I never saved my installed package list (with pip freeze &gt; list.txt), so how can I get the installed package list from the broken virtual environment? I can see the list of directories here:</p> <pre><code>$ ls venv/lib/python3.8/site-packages/ </code></pre> <p>but I don't have the frozen list itself.</p>
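One way to recover the list without a working pip: the `*.dist-info` / `*.egg-info` directory names in `site-packages` encode name and version, so they can be parsed with the standard library alone. A sketch that assumes the usual `Name-version.dist-info` folder convention:

```python
import pathlib

def parse_info_name(folder_name):
    """'Django-3.2.dist-info' -> 'Django==3.2'."""
    stem = folder_name.rsplit(".", 1)[0]      # drop the .dist-info suffix
    name, _, version = stem.partition("-")
    return f"{name}=={version}" if version else name

def list_site_packages(site_packages):
    """Rebuild name==version lines from the metadata folder names,
    so a requirements file can be recovered from a broken venv."""
    root = pathlib.Path(site_packages)
    infos = list(root.glob("*.dist-info")) + list(root.glob("*.egg-info"))
    return sorted(parse_info_name(p.name) for p in infos)
```

Pointing `list_site_packages` at `venv/lib/python3.8/site-packages/` should give lines suitable for `pip install -r` in a freshly created venv.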
<python><django><pip><virtualenv>
2023-07-15 09:47:57
0
435
minttux
76,693,005
92,153
Converting datetime to UTC and ISO format ends up with Z instead of +00:00
<p>&quot;+00:00&quot; is a valid ISO 8601 timezone designation for UTC. But this seems to have changed to Z in the latest fastapi==0.100.0. Is there a way to change this back?</p> <p>In fastapi==0.95.0 we created our ISO date like this and returned it as json:</p> <pre><code>expiry_date = (device.expires_at.replace(tzinfo=pytz.UTC)).isoformat() return { &quot;expires_at&quot;: expiry_date } </code></pre> <p>The unit test was expecting a date format like this: <code>2023-07-16T06:26:30.769459+00:00</code></p> <pre><code>assert response.json()[&quot;expires_at&quot;] == date.strftime( &quot;%Y-%m-%dT%H:%M:%S.%f+00:00&quot; ) </code></pre> <p>But now with fastapi==0.100.0 the format has changed, which fails the unit test. <code>2023-07-16T06:26:30.769459Z</code></p> <p>Is there a way to change this back to <code>+00:00</code>?</p>
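If I understand the change correctly, FastAPI 0.100 moved to Pydantic v2, whose JSON encoder renders UTC datetimes with a `Z` suffix; both spellings denote the same instant. Two workarounds, sketched with made-up helper names: keep returning a pre-formatted string from the endpoint (as the 0.95 code already did), or normalize before comparing in tests.

```python
from datetime import datetime, timezone

def isoformat_utc_offset(dt: datetime) -> str:
    """Serialize with an explicit +00:00 offset instead of relying on
    the framework's datetime encoder."""
    return dt.astimezone(timezone.utc).isoformat()

def normalize_utc(stamp: str) -> str:
    """Accept either spelling in tests: map a trailing 'Z' to '+00:00'."""
    return stamp[:-1] + "+00:00" if stamp.endswith("Z") else stamp
```

Since the endpoint already calls `.isoformat()` itself, returning that string directly (rather than a `datetime` that the response model re-serializes) keeps the old format.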
<python><fastapi>
2023-07-15 09:00:34
1
66,610
Houman
76,692,969
713,200
how to match a partial string from a dictionary values in python?
<p>I want to search for a part of a string only in the <code>values</code> (not the <code>keys</code>) of a dictionary in Python. But the code I have is not able to find the part of the string that I got from a device output.</p> <p>Here is the code:</p> <pre><code>n = {'show run | i mpls ldp': '\rMon Jun 26 06:21:29.965 UTC\r\nBuilding configuration...\r\nmpls ldp'} if &quot;mpls ldp&quot; in n: print(&quot;yes&quot;) else: print(&quot;no&quot;) </code></pre> <p>Every time I run it, it prints <code>no</code>.</p> <p>I want to search for <code>mpls ldp</code> in the <code>\nBuilding configuration...\r\nmpls ldp</code> value part.</p>
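The reason it prints `no`: the `in` operator on a dict tests the keys (by equality, not substring), and no key equals `"mpls ldp"`. Searching the values needs an explicit pass, for example:

```python
n = {'show run | i mpls ldp':
     '\rMon Jun 26 06:21:29.965 UTC\r\nBuilding configuration...\r\nmpls ldp'}

def any_value_contains(d, needle):
    """True if the substring occurs in any VALUE of the dict."""
    return any(needle in value for value in d.values())
```

With this, `any_value_contains(n, "mpls ldp")` is `True`, while `"mpls ldp" in n` stays `False` because it only consults the keys.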
<python><string><dictionary><string-operations>
2023-07-15 08:51:30
1
950
mac
76,692,922
163,573
Type checking for MagicMock spies
<p>When wrapping a class in a MagicMock I want to keep type checking and intellisense for both MagicMock and the wrapped class methods.</p> <p>Currently I achieve this by using the below code. Is there a better (less verbose) way?</p> <pre><code>if TYPE_CHECKING: class MockedSomeClass(MagicMock, SomeClass): pass else: MockedSomeClass = TypeVar('MockedSomeClass') ... spy: MockedSomeClass = MagicMock(wraps=SomeClass) </code></pre>
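A slightly shorter variant (a sketch, not necessarily better): hide the `cast` in a small generic helper, so each call site gets the wrapped object's typing without a per-class stub. Note the trade-off: the MagicMock-side hints are lost at the call site unless the intersection-class trick from the question is kept.

```python
from typing import TypeVar, cast
from unittest.mock import MagicMock

T = TypeVar("T")

def spy_on(target: T) -> T:
    """Wrap `target` in a MagicMock but keep `target`'s static type,
    so the wrapped API keeps intellisense; mock assertion helpers
    still work at runtime."""
    return cast(T, MagicMock(wraps=target))
```

Usage: `spy = spy_on(SomeClass())` types `spy` as `SomeClass` for checkers, while `spy.some_method.call_count` and friends remain available at runtime.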
<python><python-typing>
2023-07-15 08:38:13
0
6,869
rickythefox
76,692,916
14,743,705
In nixos, what is the difference between installing from pkgs or python311Packages
<p>I had an issue when I installed <code>Yapf</code> this way:</p> <pre><code>environment.systemPackages = with pkgs; [ (python311.withPackages(ps: with ps; [ toml python-lsp-server pyls-isort flake8 ])) pkgs.yapf ]; </code></pre> <p>This gave me the error:</p> <blockquote> <p>$ yapf autoapp.py yapf: toml package is needed for using pyproject.toml as a configuration file</p> </blockquote> <p>And I solved when I did:</p> <pre><code>environment.systemPackages = with pkgs; [ (python311.withPackages(ps: with ps; [ toml python-lsp-server pyls-isort flake8 yapf ])) ]; </code></pre> <p>Why was the first configuration giving me an installed version of yapf that couldn't import toml?</p>
<python><nix><nixos>
2023-07-15 08:35:57
1
305
Harm
76,692,886
8,938,220
Python application exiting when importing third-party modules (pandas, numpy, etc.)
<p>I installed Python 3.8.5 (32-bit) and added the Python paths properly:</p> <ul> <li>D:\SW\Python38; D:\SW\Python38\Scripts; D:\SW\Python38\Lib; D:\SW\Python38\Lib\site-packages</li> </ul> <p>I tried to import the modules in a command-line window as well as in IDLE.</p> <p><a href="https://i.sstatic.net/WghY4.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WghY4.jpg" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/OAHzW.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OAHzW.jpg" alt="enter image description here" /></a></p> <p>In the program the print function does not run, as <strong>Python is closing</strong> without an error or exception.</p>
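A first diagnostic worth running (a sketch): a silent exit on import is often caused by a local file shadowing the real package (for example a `pandas.py` or `numpy.py` next to the script) or by a broken binary wheel in a 32-bit install. `importlib` can show where the import would resolve from without actually executing it:

```python
import importlib.util

def import_origin(module_name):
    """Return the file a module would be loaded from, or None if it
    cannot be found -- useful for spotting a shadowing local file."""
    spec = importlib.util.find_spec(module_name)
    return getattr(spec, "origin", None)
```

If `import_origin("pandas")` points into the script's own folder rather than `site-packages`, renaming the shadowing file fixes the crash; if it points into `site-packages` and the process still dies, reinstalling the (64-bit) wheel is the next thing to try.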
<python><pandas><numpy>
2023-07-15 08:26:58
1
343
Ravi Kannan
76,692,819
11,488,421
Changing Pixels in a PyTorch Image
<p>I am getting started with PyTorch. I have loaded CIFAR10 and would now like to manually play around with the images a bit (ultimately, I am interested in certain augmentations for contrastive learning). However, I can't even set individual pixels to zero. What is going wrong in below code? It doesn't change the entries of the tensor.</p> <pre><code>transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) indices_train = [i for i in range(len(trainset)) if trainset.targets[i] in [1,2]] trainset = torch.utils.data.Subset(trainset, indices_train) trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2) print(trainset[0][0][:,1,1]) trainset[0][0][:,1,1] = 0 print(trainset[0][0][:,1,1]) </code></pre>
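Nothing is "going wrong" in the assignment itself: `trainset[0]` is not a stored object. Torchvision-style datasets decode and transform the image inside `__getitem__` on every access, so `trainset[0][0][:,1,1] = 0` mutates a temporary tensor that is immediately discarded. A pure-Python sketch of the effect (no torch needed):

```python
class ToyDataset:
    """Mimics torchvision-style datasets: __getitem__ applies the
    transform on every access and returns a brand-new object, so
    in-place edits on the returned value are discarded."""
    def __init__(self, raw):
        self.raw = raw                          # the persistent storage
    def __getitem__(self, i):
        return [v * 1.0 for v in self.raw[i]]   # fresh "transformed" copy

ds = ToyDataset([[1, 2, 3]])
ds[0][1] = 0        # mutates a temporary returned by __getitem__
print(ds[0][1])     # 2.0 -- the edit vanished, like trainset[0][0][:,1,1] = 0

sample = ds[0]      # fetch once and keep the reference...
sample[1] = 0       # ...then edits on *that* object stick
```

For the augmentation use case, either fetch once (`img, label = trainset[0]; img[:, 1, 1] = 0`) and work with your own reference, edit the underlying `trainset.dataset.data` array, or put the pixel edit inside the `transform` pipeline so it is applied on every access.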
<python><pytorch>
2023-07-15 08:06:28
1
569
Winger 14
76,692,675
2,530,181
Multi-player games rating method: how to improve Python code and get more speed
<p>I am making an ELO-like rating system for online game players. ELO is used for one-v-one games, whereas I want to have multi-player games. Here I modify an ordinal range to give more weight to the winner and close finishers by using an exponential function. Events have an id, 'event_id', and a field size, 'fsize'. Players have an id, 'player_id', and a finishing position, 'finish'.</p> <p>The code runs and produces values that look reasonable, but it is too slow when applied to large amounts of data, e.g. 100k events. My Python skills are not up to seeing how I can improve this code; any assistance appreciated!</p> <h1>A toy dataset</h1> <pre><code>dict = {'event_id': [1,1,1,2,2,2,3,3,3,3,4,4,4,4,4], 'player_id': [1,2,3,1,2,3,1,2,3,4,1,2,3,4,5], 'fsize': [3,3,3,3,3,3,4,4,4,4,5,5,5,5,5], 'finish': [1,2,3,1,2,3,1,2,3,4,1,2,3,4,5]} df = pd.DataFrame(dict) base = 1.5 # for an exponential score set this greater than 1 # n is number of players in an event # For example the exponential scores using a base of 1.5 and n of 3 and printing the values np.array([base ** (n - p) - 1 for p in range(1, n + 1)])[0] print(np.array([base ** (5 - p) - 1 for p in range(1, 5 + 1)])[0]) print(np.array([base ** (5 - p) - 1 for p in range(1, 5 + 1)])[1]) print(np.array([base ** (5 - p) - 1 for p in range(1, 5 + 1)])[2]) #pd.set_option('mode.chained_assignment', None) df['score'] = 0 # initialize the score for j in range(0, df.shape[0]): df['score'][j] = np.array([base ** (df['fsize'][j] - p) - 1 for p in range(1, df['fsize'][j] + 1)])[df['finish'][j]-1] # normalize the score so that sum() of 'score' for each 'event_id' is 1 df['score_sum'] = df['score'].groupby(df['event_id']).transform('sum') df['score_norm'] = round(df.score/df['score_sum'],4) print(df) </code></pre>
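The per-row loop can be removed entirely: indexing the freshly built array at `finish - 1` is just `base ** (fsize - finish) - 1`, a closed form that vectorizes. In pandas that would be roughly `df['score'] = base ** (df['fsize'] - df['finish']) - 1` followed by the same groupby-transform normalization. A pure-Python sketch of the idea, with made-up function and argument names:

```python
def normalized_scores(rows, base=1.5):
    """rows: (event_id, fsize, finish) triplets.  The indexed array in the
    question collapses to the closed form base**(fsize - finish) - 1, so no
    per-row array needs to be built; normalization is one pass per event."""
    raw = [(event_id, base ** (fsize - finish) - 1.0)
           for event_id, fsize, finish in rows]
    totals = {}
    for event_id, score in raw:
        totals[event_id] = totals.get(event_id, 0.0) + score
    return [round(score / totals[event_id], 4) for event_id, score in raw]
```

For event 1 of the toy data (field size 3, finishes 1..3) the raw scores are 1.25, 0.5, 0.0, which normalize to 0.7143, 0.2857, 0.0, matching the DataFrame version; the pandas one-liner scales to 100k events without Python-level loops.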
<python><arrays><pandas>
2023-07-15 07:29:23
2
588
cousin_pete
76,692,656
955,091
How to let two Python abstract classes implement each other's abstract methods?
<p>Suppose you have abstract classes <code>A1</code> and <code>A2</code>. Each of them has an abstract method and a concrete method.</p> <pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod class A0(ABC): pass class A1(A0, ABC): def foo(self): return 1 @abstractmethod def bar(self): raise NotImplementedError() class A2(A0, ABC): @abstractmethod def foo(self): raise NotImplementedError() def bar(self): return 10 </code></pre> <p>Now you want to mix them in together so that each implements the other's abstract method:</p> <pre class="lang-py prettyprint-override"><code>class C(A2, A1, A0): def all(self): return (super().foo(), super().bar()) C().all() </code></pre> <p>However, the above code does not work; I get the following error:</p> <pre><code>TypeError: Can't instantiate abstract class C with abstract method foo </code></pre> <p>How can I create <code>C</code> so that it mixes in both <code>A1</code> and <code>A2</code>?</p>
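One way to make this work (a sketch): `C` stays abstract because, in either base order, attribute lookup finds an abstract overload first somewhere in the MRO, so `__abstractmethods__` is never emptied. Pointing each name explicitly at the concrete implementation overrides the abstract one, and cooperative `super()` is then unnecessary:

```python
from abc import ABC, abstractmethod

class A0(ABC):
    pass

class A1(A0, ABC):
    def foo(self):
        return 1
    @abstractmethod
    def bar(self):
        raise NotImplementedError()

class A2(A0, ABC):
    @abstractmethod
    def foo(self):
        raise NotImplementedError()
    def bar(self):
        return 10

class C(A1, A2):
    # alias each name to the concrete version, shadowing the abstract one
    foo = A1.foo
    bar = A2.bar
    def all(self):
        return (self.foo(), self.bar())
```

`C().all()` returns `(1, 10)`: the class-body aliases are plain functions without `__isabstractmethod__`, so `C` is concrete regardless of base order.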
<python><inheritance><multiple-inheritance><method-resolution-order><linearization>
2023-07-15 07:22:45
2
3,773
Yang Bo
76,692,651
7,078,356
How to detect square which has a specific width value in images and check the intersection of 2 lines within the circle?
<p>I want to use the Python <strong>OpenCV</strong> library to perform image processing and detect squares in an image. The squares have a certain width, but the specific width value is unknown. Then I want to draw a circle with the center of the square as the center point and a radius equal to four times the distance from the center to the square's contour. There are two intersecting lines in the image, and their positions are not fixed. The clarity of the lines may vary in different images. You can notice that there are many noise dots on the image, and the brightness levels are also inconsistent with different images. <strong>How can I determine if the intersection point of the two lines falls within the circle?</strong> Red arrow indicates the intersection point of 2 lines in the below image.</p> <p><a href="https://i.sstatic.net/uFeTJ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uFeTJ.jpg" alt="enter image description here" /></a> I used 4 images for testing. Currently, I convert the color image to grayscale, apply Gaussian blur, use the Canny algorithm for edge detection, and then perform contour detection. The problem I'm facing is that due to the width of the square, some images have the square contour drawn outside the square, some have it drawn inside, and some have both inside and outside contours. How can I update the algorithm to only draw the outer contour? I have already tried adjusting the values in <code>cv2.Canny(blur, 50, 100)</code>, but it's difficult to find a set of values that work for all images. 
And with <code>cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)</code>, if I change <code>cv2.RETR_TREE</code> to <code>cv2.RETR_EXTERNAL</code>, it cannot find the center point and circle.</p> <p>Looking forward to receiving some useful suggestions from you.</p> <p>The 4 images for testing are as below.</p> <p>Pic_1: <a href="https://i.sstatic.net/oCti9.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oCti9.jpg" alt="pic_1" /></a> Pic_2: <a href="https://i.sstatic.net/P9sCR.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P9sCR.jpg" alt="pic_2" /></a> Pic_3: <a href="https://i.sstatic.net/IoFCS.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IoFCS.jpg" alt="pic_3" /></a> Pic_4: <a href="https://i.sstatic.net/IzVRZ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IzVRZ.jpg" alt="pic_4" /></a></p> <p>My Python code is as below.</p> <pre><code>import cv2 import glob import os import numpy as np def detect_squares(img_path): file_name = os.path.basename(img_path) # Read the image and create a copy for drawing results img = cv2.imread(img_path) image_with_rectangles = img.copy() # Convert to grayscale gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Apply Gaussian blur blur = cv2.GaussianBlur(gray, (5, 5), 0) # Perform edge detection edges = cv2.Canny(blur, 50, 100) # Find contours contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) # Traverse through the detected contours for contour in contours: # Calculate contour perimeter perimeter = cv2.arcLength(contour, True) # Get the coordinates of contour vertices approx = cv2.approxPolyDP(contour, 0.04 * perimeter, True) # Get the coordinate values, width, and height x, y, w, h = cv2.boundingRect(approx) # Classify the contour if len(approx) == 4 and perimeter &gt;= 350 and 0.9 &lt;= float(w) / h &lt;= 1.1: # Draw red contour lines cv2.drawContours(image_with_rectangles, contour, -1, (0, 0, 255), cv2.FILLED) # Calculate contour moments M = 
cv2.moments(contour) # Calculate the centroid coordinates of the contour if M[&quot;m00&quot;] != 0: center_x = int(M[&quot;m10&quot;] / M[&quot;m00&quot;]) center_y = int(M[&quot;m01&quot;] / M[&quot;m00&quot;]) center = (center_x, center_y) # Draw the center point on the image cv2.circle(image_with_rectangles, center, 5, (0, 255, 255), 3) # Calculate the distance from the center to the square contour distance = int(cv2.pointPolygonTest(approx, center, True)) # Calculate the radius of the circle radius = 4 * distance # Draw the circle cv2.circle(image_with_rectangles, (center_x, center_y), radius, (0, 255, 255), 2) # Show the resulting image # cv2.imshow(f&quot;Canny Detection - {file_name}&quot;, edges) cv2.imshow(f&quot;Shape Detection - {file_name}&quot;, image_with_rectangles) cv2.waitKey(0) cv2.destroyAllWindows() detect_squares(image_path) </code></pre>
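For the last part of the question: once the two lines have been extracted (for example with `cv2.HoughLinesP`, which returns each line as a pair of endpoints), deciding whether their intersection falls inside the circle is plain geometry and needs no OpenCV. A sketch:

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through (p1, p2) and (p3, p4);
    None if the lines are parallel."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def inside_circle(pt, center, radius):
    # squared-distance comparison avoids the square root
    return (pt[0] - center[0]) ** 2 + (pt[1] - center[1]) ** 2 <= radius ** 2
```

In the script, the endpoints would come from the line detector and `center`/`radius` from the square-contour step already in `detect_squares`.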
<python><opencv><image-processing><computer-vision><canny-operator>
2023-07-15 07:21:03
1
1,327
Ringo
76,692,478
13,060,649
How to use Django asynchronous programming with DRF API views?
<p>I am trying to implement asynchronous programming with django rest framework and I am using <code>api_view</code> decorators for my function based API views. So I was trying to convert my synchronous method to asynchronous with <code>asgiref.sync.sync_to_async</code> decorator but seems like using this django is not able to route to my API view. Here is my function based api view.</p> <pre><code>from rest_framework.decorators import permission_classes, api_view, authentication_classes from rest_framework.permissions import AllowAny, IsAuthenticated from asgiref.sync import sync_to_async from ..customauth import CustomAuthBackend from ..utils.auth_utils import AuthUtils @api_view(['POST']) @permission_classes([AllowAny]) @authentication_classes([]) @sync_to_async def login(request): email = request.data.get('email') if email is None or email == '': return Response(data={'success': False, 'message': 'Invalid credentials'}) user = User.get_by_email(email=email) if user is not None and user.is_active and user.check_password(raw_password=request.data.get('password')): serializer = UserSerializer(user) tokens_map = AuthUtils.generate_token(request=request, user=user) return Response({'success': True, 'user': serializer.data, 'tokens': tokens_map}) return Response(data={'success': False, 'message': 'Invalid login credentials'}, status=status.HTTP_403_FORBIDDEN) </code></pre> <p>If I use <code>async def</code> that is also not working with django rest framework. How do I make the use of event loop here efficiently ?</p>
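The routing failure is consistent with how `sync_to_async` works: it turns the function into a coroutine function, and, as far as I know, DRF's `@api_view` dispatch expects a plain sync callable, so it receives an un-awaited coroutine instead of a `Response`. A stdlib sketch of the mismatch, using a stand-in wrapper instead of asgiref (practical options: keep DRF function views sync, or move truly async work into `async def` Django views or a DRF fork with async support):

```python
import asyncio
import inspect

def sync_to_async_like(fn):
    """Minimal stand-in for asgiref.sync.sync_to_async: run the sync
    function in a worker thread and expose it as a coroutine function."""
    async def wrapper(*args, **kwargs):
        return await asyncio.to_thread(fn, *args, **kwargs)
    return wrapper

def login(request):
    # stands in for the DRF view body returning a Response-like object
    return {"success": True}

wrapped = sync_to_async_like(login)
```

Calling `wrapped(request)` without awaiting yields a coroutine object, which is exactly what a sync dispatcher would try (and fail) to treat as a response.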
<python><django><django-rest-framework><python-asyncio><asgi>
2023-07-15 06:19:54
1
928
suvodipMondal
76,692,396
8,920,642
Python 3.11.4 launcher.wixproj cannot detect v143 buildtools
<p>I tried to build Python 3.11.4 with Visual Studio 2022 (v143) and I get following error at the end of compilation. Rest of the project binaries are built using v143 successfully.</p> <p>I used following command to build: <strong>Python\Tools\msi\build.bat&quot; -x64 --pack</strong></p> <p><sub>Project &quot;D:\build\DE-Python\Python\Tools\msi\launcher\launcher.wixproj&quot; (1) is building &quot;D:\build\DE-Python\Python\PCbuild\pyshellext.vcxproj&quot; (2) on node 1 (default targets). C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Microsoft\VC\v170\Microsoft.CppBuild.targets(456,5): error MSB8020: The build tools for v143 (Platform Toolset = 'v143') cannot be found. To build using the v143 bui ld tools, please install v143 build tools. Alternatively, you may upgrade to the current Visual Studio tools by selecting the Project menu or right-click the solution, and then selecting &quot;Retarget solution&quot;. [D:\build\DE-Python\Python<br /> PCbuild\pyshellext.vcxproj]</sub></p> <p><strong>My system details:</strong></p> <pre><code>VCIDEInstallDir=C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\Common7\IDE\VC VCINSTALLDIR=C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC VCToolsInstallDir=C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.36.32532 VCToolsRedistDir=C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Redist\MSVC\14.36.32532 VCToolsVersion=14.36.32532 VisualStudioVersion=17.0 VS170COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\Common7\Tools VS2022INSTALLDIR=C:\Program Files\Microsoft Visual Studio\2022\Professional </code></pre>
<python><wix><visual-studio-2022><cpython>
2023-07-15 05:46:49
1
398
Ashish Shirodkar
76,692,192
454,671
How do I install python packages behind firewall using pip
<p>On my work computer, I can browse to pypi.python.org but on my jupyter notebook, whenever I try to pip install a package, I get an error. I tried:</p> <pre><code> !pip install langchain or %pip install langchain or pip install --proxy http://host:port/ langchain </code></pre> <p>Error (always the same):</p> <pre><code>WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(&lt;pip._vendor.urllib3.connection.HTTPSConnection object at 0x000001C088C20B50&gt;, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/langchain/ </code></pre> <p>What can I try, or do I need to contact IT security?</p> <pre><code>pip install langchain --trusted-host pypi.python.org </code></pre>
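Behind a corporate firewall, pip usually needs two things: the proxy, and trust for the hosts it actually downloads from (`pypi.org` for the index and `files.pythonhosted.org` for the wheels). A sketch of a pip config; the proxy host and port are placeholders for your company's values, and a proxy requiring authentication would use `http://user:pass@host:port`:

```ini
; Windows: %APPDATA%\pip\pip.ini   Linux/macOS: ~/.config/pip/pip.conf
[global]
proxy = http://proxy.example.com:8080
trusted-host = pypi.org
               files.pythonhosted.org
```

The per-command equivalent is `pip install --proxy http://proxy.example.com:8080 --trusted-host pypi.org --trusted-host files.pythonhosted.org langchain`. If it still times out with the correct proxy, the firewall is blocking the hosts themselves and IT needs to allow them.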
<python><pip><proxy><firewall>
2023-07-15 04:09:47
1
17,167
Victor
76,692,070
1,390,192
How do I fill the image with white?
<p>I am trying to create a rotated image which is white, but this does not work. The image always turns out black.</p> <p>How could I achieve this?</p> <pre><code>def rotate_image(image): # Convert the image to a NumPy array image_array = np.array(image) # Set the fill color (RGB format) fill_color = (255, 255, 255) # White color # Define the rotation angle rotation_angle = random.randint(1, 360) # Perform rotation using OpenCV rows, cols = image_array.shape[:2] M = cv2.getRotationMatrix2D((cols / 2, rows / 2), rotation_angle, 1) rotated_array = cv2.warpAffine(image_array, M, (cols, rows), borderValue=fill_color) # Convert the rotated array back to an image rotated_image = Image.fromarray(rotated_array) # Save the rotated image as a PNG file rotated_image.save(&quot;test1.png&quot;) return rotated_image </code></pre> <p><a href="https://i.sstatic.net/CVrlI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CVrlI.png" alt="enter image description here" /></a></p> <p>Here is the code that I am using it in:</p> <pre><code>def get_dataset_batch(batch_size=2): flip = False base_image = Image.open(&quot;star_map_base.png&quot;) train_A = [] train_B = [] for i in range(0, batch_size): if flip: turbulence_size = random.choice([1, 2, 3, 4]) turbulence_image = Image.open(&quot;turbulence.jpg&quot;) x = random.randint(0, 4096 - IMG_SIZE * turbulence_size) y = random.randint(0, 2136 - IMG_SIZE * turbulence_size) crop_actual_rect = (x, y, x + IMG_SIZE * turbulence_size, y + IMG_SIZE * turbulence_size) cropped_actual = turbulence_image.crop(crop_actual_rect) cropped_actual = cropped_actual.resize((IMG_SIZE, IMG_SIZE)) else: helix_size = random.choice([1, 2, 3, 4, 5, 6, 7]) helix_image = Image.open(&quot;helix_bw_base.jpg&quot;) x = random.randint(0, 4096 - IMG_SIZE * helix_size) y = random.randint(0, 4096 - IMG_SIZE * helix_size) crop_actual_rect = (x, y, x + IMG_SIZE * helix_size, y + IMG_SIZE * helix_size) cropped_actual = helix_image.crop(crop_actual_rect) 
cropped_actual = cropped_actual.resize((IMG_SIZE, IMG_SIZE)) flip = not flip cropped_actual = cropped_actual.convert('LA') star_overlayed = cropped_actual star_overlayed = rotate_image(star_overlayed) star_overlayed = star_overlayed.convert('L') star_overlayed = Image.fromarray(transform(image=np.asarray(star_overlayed))[&quot;image&quot;] / 1) star_overlayed = star_overlayed.convert('LA') ca = star_overlayed.copy() ca = ca.convert('L') base_image = base_image.convert('RGBA') star_overlayed = star_overlayed.convert('RGBA') overlaid_image = overlay_images(star_overlayed, base_image) overlaid_image = Image.fromarray(overlaid_image) star_overlayed = overlaid_image.convert('L') a = np.asarray(ca, dtype=&quot;float32&quot;).reshape(1, IMG_SIZE, IMG_SIZE, 1) / 512 b = np.asarray(star_overlayed, dtype=&quot;float32&quot;).reshape(1, IMG_SIZE, IMG_SIZE, 1) / 512 train_A.append(a) train_B.append(b) return train_A, train_B </code></pre>
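Before debugging the full pipeline, it may help to confirm the white fill in isolation. A minimal sketch using Pillow's own `rotate()` rather than OpenCV (an assumption, since the surrounding code already works with PIL images); its `fillcolor` argument fills the corners exposed by rotation, and for a single-channel `"L"` image a scalar such as `255` is the safe fill value:

```python
from PIL import Image

# All-black grayscale square: any non-white corner would be invisible
# against it, so a white fill is easy to verify.
img = Image.new("L", (20, 20), 0)

# rotate() fills the areas exposed by the rotation with fillcolor;
# the value should match the image mode (scalar for "L").
rotated = img.rotate(45, fillcolor=255)

assert rotated.getpixel((0, 0)) == 255   # exposed corner -> white
assert rotated.getpixel((10, 10)) == 0   # centre pixel unchanged
```

If the `cv2.warpAffine` path is kept instead, it is worth checking the array's channel count right before the warp (`image_array.shape`), since the meaning of `borderValue` depends on it.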
<python><opencv><image-processing><python-imaging-library>
2023-07-15 03:06:44
2
11,500
tmighty
76,691,967
22,212,435
Trying to create a function for generating an image of a circle with some thickness. Stuck at the thickness algorithm
<p>So far l have created a function to generate a circle:</p> <pre><code>def add_circle(diameter, image: PhotoImage, outline=&quot;black&quot;, thickness=1, fill='', to=(0, 0)): img = image img.config(width=max(diameter+to[0], int(img.cget(&quot;width&quot;))), height=max(diameter+to[1], int(img.cget(&quot;height&quot;)))) center = (diameter/2, diameter/2) # of the aiming perfect circle, so can be fractional # that is a &quot;position&quot; of pixels left up corner start_point = (0, diameter//2) # Start at the right most part of the pixel circle img.put(outline, (to[0], start_point[1] + to[1])) img.put(outline, (diameter - 1 + to[0], start_point[1] + to[1])) while start_point[0] != diameter//2: # we will move up until reach the top. So a quarter will be drawn # we will move either up(North), right(East) or diagonally(North East) # we check distance to the aiming circle center from the next possible pixel center positions (so + 0.5) # than subtract from the aiming circle radius # in other words distances to the aiming circle circumference. disE = abs(dist(center, (start_point[0]+1 + 0.5, start_point[1] + 0.5))-(diameter/2-0.5)) disN = abs(dist(center, (start_point[0] + 0.5, start_point[1]-1 + 0.5))-(diameter/2-0.5)) disNE = abs(dist(center, (start_point[0] + 1+0.5, start_point[1]-1 + 0.5))-(diameter/2-0.5)) if disE &lt; disN: if disE &lt; disNE: # so go to right is the nearest to circle. Usually happens at the top of circle start_point = (start_point[0]+1, start_point[1]) else: # disNE smallest start_point = (start_point[0]+1, start_point[1]-1) # usually to the last half of the circle quarter else: if disN &lt; disNE: # up is the best. 
Usually happens at the beginning, when the vertical line drawn start_point = (start_point[0], start_point[1]-1) else: # disNE smallest start_point = (start_point[0]+1, start_point[1]-1) # usually after the vertical part is drawn # put right pixel and mirror it horizontally and vertically to form a full circle later img.put(outline, (start_point[0] + to[0], start_point[1] + to[1])) img.put(outline, (diameter-1-start_point[0] + to[0], start_point[1] + to[1])) img.put(outline, (start_point[0] + to[0], diameter-1-start_point[1] + to[1])) img.put(outline, (diameter-1-start_point[0] + to[0], diameter-1-start_point[1] + to[1])) </code></pre> <p>it works fine (may be not the most efficient code, so if someone has a better one, I would glad to see it as well). But then I can't find a way to do thickness (new layers should be generated towards the center only). In order to do that I created the other circle inside:<br /> <a href="https://i.sstatic.net/vNiPS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vNiPS.png" alt="" /></a></p> <p>Then my basic algorithms then were like start between them, fill horizontally (also tried diagonally) up until meet the other circle (using get function to check pixels). And in some cases they worked, but all failed at the thickness of 2. The problem is that I just can't check, if my &quot;pointer&quot; inside, as circles are touching each other. For horizontal algorithm it just started filling inside the second circle. For diagonal method it just escaped the circles and start filling surrounds. I know that l can just make special case for that thickness, but i don't think that it is a good to do. 
A good algorithm should work well at least for basic cases, and mine fail in the most common one.</p> <p>NOTES:<br /> 1.1 I also tried to create many circles inside that one, but this method failed, as some circles can't fit the nearest one exactly:<br /> <a href="https://i.sstatic.net/X1tOh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X1tOh.png" alt="" /></a><br /> 1.2 There is also a problem of the two circles overlapping, so that some parts don't have a thickness of two, as can be seen in the picture. That complicates every algorithm I have tried so far.<br /> 2. I can't use PILLOW or any other external libraries, don't ask me why. Only the tkinter library has been used so far. Other built-in libraries may be used.<br /> 3. I know that there is a way to create a circle by using create_oval or something like that, but I want to create an image to then put on a label. There is also the problem that I can't use any standard canvas objects (can't use canvas.create . . .), again don't ask why, I just can't.</p> <p>If someone can help, I will be really grateful.</p> <p>P.S. Sorry for my English</p>
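One way around the touching-circles problem is to avoid tracing and flood-filling entirely: test every pixel centre against the inner and outer radii and keep the ones that fall inside the ring. A pure-standard-library sketch (so it fits the no-external-libraries constraint; `ring_pixels` is a hypothetical helper, not part of the original code):

```python
import math

def ring_pixels(diameter, thickness):
    """Return the set of (x, y) pixels of a circle outline of the given
    thickness, grown inward from the outer edge.

    Each pixel centre is tested against the inner and outer radii, so
    there is no flood fill and no touching-circle edge case, even for
    a thickness of 2.
    """
    r_out = diameter / 2
    r_in = r_out - thickness
    centre = (diameter / 2, diameter / 2)
    pixels = set()
    for y in range(diameter):
        for x in range(diameter):
            d = math.dist(centre, (x + 0.5, y + 0.5))
            if r_in <= d <= r_out:
                pixels.add((x, y))
    return pixels

ring = ring_pixels(10, 2)
assert (0, 5) in ring        # left-most pixel of the outline
assert (5, 5) not in ring    # the interior stays empty
```

Each returned pixel can then be drawn with `img.put(outline, (x + to[0], y + to[1]))` as in the original code; for large diameters the scan can be restricted to one octant and mirrored, in the same way the original quarter-circle loop mirrors its points.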
<python><tkinter><geometry>
2023-07-15 02:08:36
0
610
Danya K
76,691,911
5,942,100
Take a month by month cumulative by grouped column using Pandas
<p>I would like to show the cumulative value by month using this dataset:</p> <p><strong>Data</strong></p> <pre><code>from io import StringIO import pandas as pd df = pd.read_csv(StringIO(&quot;&quot;&quot; ID val Date AA:1 1 4/1/2022 AA:1 2 4/3/2022 AA:2 3 5/1/2022 AA:3 5 6/5/2022 AA:4 5 1/1/2023&quot;&quot;&quot;), sep='\s+') </code></pre> <p><strong>Desired</strong></p> <pre><code>ID val Date AA 3 4/1/2022 AA 6 5/1/2022 AA 11 6/1/2022 AA 16 1/1/2023 </code></pre> <p><strong>Doing</strong></p> <pre><code># Convert df['Date'] = pd.to_datetime(df['Date']) # Group by ID and month, and calculate the cumulative sum df['Cumulative_Value'] = df.groupby([df['ID'], df['Date'].dt.to_period('M')])['val'].cumsum() </code></pre> <p>However, the cumulative value is not computed as desired.</p>
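Judging from the desired output, two things differ from the attempt: the `AA:1`/`AA:2` suffix is stripped to a base ID (an assumption read off the expected rows), and the monthly totals are summed per month first and only then accumulated across months. A `cumsum` inside a per-month group restarts every month, which is why the attempt never produces a running total. A sketch:

```python
from io import StringIO
import pandas as pd

df = pd.read_csv(StringIO("""\
ID val Date
AA:1 1 4/1/2022
AA:1 2 4/3/2022
AA:2 3 5/1/2022
AA:3 5 6/5/2022
AA:4 5 1/1/2023"""), sep=r"\s+")

df["Date"] = pd.to_datetime(df["Date"])
df["ID"] = df["ID"].str.split(":").str[0]   # AA:1 -> AA (assumed grouping)
month = df["Date"].dt.to_period("M")

out = (
    df.groupby(["ID", month])["val"].sum()  # one total per ID and month
      .groupby(level="ID").cumsum()         # running total within each ID
      .reset_index()
)

assert out["val"].tolist() == [3, 6, 11, 16]
```

`to_period("M")` collapses every date in a month to the same key, so the first `groupby(...).sum()` yields the monthly totals, and the second, on the `ID` level only, accumulates them in chronological order.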
<python><pandas><numpy><group-by>
2023-07-15 01:45:48
3
4,428
Lynn
76,691,838
14,965,919
How to travel to specific file path when Chrome browser asks for a file without using Selenium(third party libraries)
<p>I am trying to find out how to achieve the following:</p> <p>I am using <strong>Selenium</strong> to drive through webpages, but when I want to upload some images on a webpage, the <strong>&quot;Open&quot;</strong> (<strong>Abrir</strong>, because my system is in Spanish) file browser opens up, and I want to know how to browse to the directory I want and select the file(s) that I want to upload. For example: I go to the <a href="https://imgbb.com/" rel="nofollow noreferrer">imgBB URL</a> and click the <strong>START UPLOADING</strong> button with Selenium, but then I want to browse to a specific folder and select some files I want, <strong>without using Selenium</strong>. <a href="https://i.sstatic.net/7GfyR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7GfyR.png" alt="enter image description here" /></a></p> <p>Does someone know what other libraries I may use to select the specific files that I want to upload?</p>
<python><file-browser>
2023-07-15 01:07:34
2
335
RedEye
76,691,727
11,152,224
Celery can't send tasks to queue when running in docker
<p>I tested it on windows and it worked, but now I want to do it using docker. The problem is when I try to execute task to send email to user I get error: <code>[Errno 111] Connection refused</code>, but celery starts successfully and connects to rabbitmq. Why can't celery send tasks to rabbitmq?</p> <pre><code>Traceback (most recent call last): File &quot;/usr/local/lib/python3.11/dist-packages/kombu/utils/functional.py&quot;, line 32, in __call__ return self.__value__ ^^^^^^^^^^^^^^ During handling of the above exception ('ChannelPromise' object has no attribute '__value__'), another exception occurred: File &quot;/usr/local/lib/python3.11/dist-packages/kombu/connection.py&quot;, line 472, in _reraise_as_library_errors yield ^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/kombu/connection.py&quot;, line 459, in _ensure_connection return retry_over_time( File &quot;/usr/local/lib/python3.11/dist-packages/kombu/utils/functional.py&quot;, line 318, in retry_over_time return fun(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/kombu/connection.py&quot;, line 941, in _connection_factory self._connection = self._establish_connection() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/kombu/connection.py&quot;, line 867, in _establish_connection conn = self.transport.establish_connection() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/kombu/transport/pyamqp.py&quot;, line 203, in establish_connection conn.connect() ^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/amqp/connection.py&quot;, line 323, in connect self.transport.connect() ^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/amqp/transport.py&quot;, line 129, in connect self._connect(self.host, self.port, self.connect_timeout) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;/usr/local/lib/python3.11/dist-packages/amqp/transport.py&quot;, line 184, in _connect self.sock.connect(sa) ^^^^^^^^^^^^^^^^^^^^^ The above exception ([Errno 111] Connection refused) was the direct cause of the following exception: File &quot;/usr/local/lib/python3.11/dist-packages/django/core/handlers/exception.py&quot;, line 55, in inner response = get_response(request) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/django/core/handlers/base.py&quot;, line 197, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/website/journal_website/views.py&quot;, line 281, in register_new_user send_email_message_to_user_with_activation_link.delay(new_user.pk, new_user_additional_data.code) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/celery/app/task.py&quot;, line 444, in delay return self.apply_async(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/celery/app/task.py&quot;, line 594, in apply_async return app.send_task( File &quot;/usr/local/lib/python3.11/dist-packages/celery/app/base.py&quot;, line 798, in send_task amqp.send_task_message(P, name, message, **options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/celery/app/amqp.py&quot;, line 517, in send_task_message ret = producer.publish( File &quot;/usr/local/lib/python3.11/dist-packages/kombu/messaging.py&quot;, line 186, in publish return _publish( File &quot;/usr/local/lib/python3.11/dist-packages/kombu/connection.py&quot;, line 563, in _ensured return fun(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/kombu/messaging.py&quot;, line 195, in _publish channel = self.channel ^^^^^^^^^^^^ File 
&quot;/usr/local/lib/python3.11/dist-packages/kombu/messaging.py&quot;, line 218, in _get_channel channel = self._channel = channel() ^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/kombu/utils/functional.py&quot;, line 34, in __call__ value = self.__value__ = self.__contract__() ^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/kombu/messaging.py&quot;, line 234, in &lt;lambda&gt; channel = ChannelPromise(lambda: connection.default_channel) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/kombu/connection.py&quot;, line 960, in default_channel self._ensure_connection(**conn_opts) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/kombu/connection.py&quot;, line 458, in _ensure_connection with ctx(): ^^^^^ File &quot;/usr/lib/python3.11/contextlib.py&quot;, line 155, in __exit__ self.gen.throw(typ, value, traceback) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/dist-packages/kombu/connection.py&quot;, line 476, in _reraise_as_library_errors raise ConnectionError(str(exc)) from exc ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ </code></pre> <p>docker-compose.yml:</p> <pre><code>version: &quot;3.0&quot; services: # WEB django: build: . command: python3.11 manage.py runserver 0.0.0.0:8000 container_name: django-server volumes: - media_volume:/website/journal_website/media - static_volume:/website/journal_website/static - database_volume:/website/journal_website/database ports: - &quot;8000:8000&quot; depends_on: - rabbit # Celery celery: build: . 
command: celery -A website worker -l info container_name: celery depends_on: - rabbit # RabbitMQ rabbit: hostname: rabbit container_name: rabbitmq image: rabbitmq:3.12-rc-management ports: # AMQP protocol port - &quot;5672:5672&quot; # HTTP management UI - &quot;15672:15672&quot; restart: always volumes: media_volume: static_volume: database_volume: </code></pre> <p>celery.py:</p> <pre><code>from __future__ import absolute_import, unicode_literals import os from celery import Celery os.environ.setdefault(&quot;DJANGO_SETTINGS_MODULE&quot;, &quot;website.settings&quot;) celery_application = Celery(&quot;website&quot;) celery_application.config_from_object(&quot;django.conf:settings&quot;, namespace=&quot;CELERY&quot;) celery_application.conf.broker_url = &quot;amqp://rabbit:5672&quot; celery_application.autodiscover_tasks() </code></pre> <p>tasks.py:</p> <pre><code>from __future__ import absolute_import, unicode_literals from celery import shared_task # Some imports... @shared_task def send_email_message_to_user_with_activation_link(target_user_id: int, code: UUID) -&gt; HttpResponse | None: target_user = User.objects.get(pk=target_user_id) content = { &quot;email&quot;: target_user.email, &quot;domain&quot;: &quot;127.0.0.1:8000&quot;, &quot;site_name&quot;: &quot;Website&quot;, &quot;user&quot;: target_user, &quot;protocol&quot;: &quot;http&quot;, &quot;code&quot;: code, } message = render_to_string(&quot;user/account_activation/account_activation_email.txt&quot;, content) try: send_mail(&quot;Account activation&quot;, message, &quot;admin@example.com&quot; , [target_user.email], fail_silently=False) except BadHeaderError: return HttpResponse(&quot;Invalid header found.&quot;) </code></pre>
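`Errno 111` at request time is often just startup ordering: `depends_on` only waits for the rabbit container to be *created*, not for the broker to accept connections. A hedged sketch of a compose-level fix, assuming a Compose version that supports `service_healthy` conditions (the interval values are arbitrary):

```yaml
  rabbit:
    image: rabbitmq:3.12-rc-management
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  django:
    depends_on:
      rabbit:
        condition: service_healthy
```

It is also worth spelling out credentials and vhost in the broker URL (`amqp://guest:guest@rabbit:5672//`), since an incomplete URL can silently fall back to defaults such as localhost depending on how it is parsed.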
<python><django><docker><rabbitmq><celery>
2023-07-15 00:16:52
2
569
WideWood
76,691,557
1,014,217
How to properly create a python package and use it in another project
<p>I have a new project with the following folder architecture.</p> <p><a href="https://i.sstatic.net/QStGr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QStGr.png" alt="enter image description here" /></a></p> <p>PineconeClient is a class with 3 methods.</p> <p>Inside the indexes folder, __init__.py looks like this:</p> <pre><code>&quot;&quot;&quot;Wrappers on top of vector stores.&quot;&quot;&quot; from e61indexingclient.indexes.PineconeClient import (CreateIndex, DeleteIndex, IndexAllDocumentsFromBlobStorage) __all__ = [ &quot;PineconeClient&quot;, &quot;CreateIndex&quot;, &quot;DeleteIndex&quot;, &quot;IndexAllDocumentsFromBlobStorage&quot; ] </code></pre> <p>In the root, the __init__.py looks like this:</p> <pre><code>&quot;&quot;&quot;Main entrypoint into package.&quot;&quot;&quot; from importlib import metadata from typing import Optional try: __version__ = metadata.version(__package__) except metadata.PackageNotFoundError: # Case where package metadata is not available. __version__ = &quot;&quot; del metadata # optional, avoids polluting the results of dir(__package__) verbose: bool = False debug: bool = False __all__ = [ &quot;PineconeClient&quot;, &quot;CreateIndex&quot;, &quot;DeleteIndex&quot;, &quot;IndexAllDocumentsFromBlobStorage&quot; ] </code></pre> <p>With poetry build I can generate a whl file.</p> <p>However, here comes the issue.</p> <p>In the other project, I try to install the package like this:</p> <pre><code>pip install pathtowhlfile </code></pre> <p>The install works fine.</p> <p>What's the issue then?</p> <p>If I do <code>import e61indexingclient</code> it's not recognized. If I do <code>from e61indexingclient.indexes.Pineconeclient import CreateIndex, DeleteIndex</code> it's also not recognized.</p> <p>What am I doing wrong?</p> <p>pyproject.toml file:</p> <pre><code>[tool.poetry] name = &quot;e61indexingclient&quot; version = &quot;0.0.5&quot; description = &quot;&quot; authors = [&quot;x&quot;] readme = &quot;README.md&quot;
[tool.poetry.dependencies] python = &quot;^3.10&quot; langchain = &quot;^0.0.232&quot; openai = &quot;^0.27.8&quot; pinecone-client = &quot;^2.2.2&quot; azure-storage-blob = &quot;^12.17.0&quot; azure-identity = &quot;^1.13.0&quot; azure-ai-textanalytics = &quot;^5.3.0&quot; azure-core = &quot;^1.28.0&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre> <p>My whl files look like this: <a href="https://i.sstatic.net/mTBDW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mTBDW.png" alt="enter image description here" /></a></p> <p>I use poetry build to generate whl files</p>
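One frequent cause is that the built wheel simply does not contain the package directory, which is easy to confirm by listing the wheel (`unzip -l dist/*.whl` should show `e61indexingclient/__init__.py`). If the folder layout does not match what Poetry infers from `name`, the package has to be listed explicitly; a hedged sketch, assuming the code lives in an `e61indexingclient/` directory at the project root:

```toml
[tool.poetry]
name = "e61indexingclient"
version = "0.0.5"
packages = [{ include = "e61indexingclient" }]
```

Two other things worth checking: the failing import uses `Pineconeclient` (lowercase "c") while the module is named `PineconeClient`, and the consuming project's interpreter must be the same environment the wheel was installed into.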
<python><pip><python-poetry>
2023-07-14 23:08:03
1
34,314
Luis Valencia
76,691,505
71,522
How can I use boto to generate pre-signed POST requests for a bucket with KMS-managed encryption keys?
<p>Amazon's blog post describes using the Java client to <a href="https://aws.amazon.com/blogs/developer/generating-amazon-s3-pre-signed-urls-with-sse-kms-part-2/" rel="nofollow noreferrer">generate pre-signed URLs for buckets with SSE-KMS encryption</a>, but it's not clear how to do the same with <code>boto3</code>.</p> <p>How can I use boto to generate pre-signed POST requests for a bucket with KMS-managed encryption keys?</p>
<python><amazon-web-services><amazon-s3><boto3>
2023-07-14 22:56:20
1
155,668
David Wolever
76,691,446
19,123,103
How to use statsmodels get_rdataset?
<p>Python's <code>statsmodels</code> library has <code>get_rdataset()</code> method that can fetch various datasets. Where is the list of datasets that can be fetched? How do I use it to load datasets?</p> <p>The <a href="https://www.statsmodels.org/dev/datasets/statsmodels.datasets.get_rdataset.html" rel="nofollow noreferrer">documentation</a> has no mention of which datasets are available. It merely says that <code>dataname: The name of the dataset you want to download</code> is a required parameter but does not mention which datanames are possible anywhere.</p>
<python><statistics><statsmodels>
2023-07-14 22:33:33
1
25,331
cottontail
76,691,387
17,886,849
How to subclass coroutine?
<p>I want my class <code>MyCoro</code> to behave as same as coroutine. I am looking for something like this as class definition:</p> <pre class="lang-py prettyprint-override"><code>class MyCoro(Coroutine?): def __init__(self, text): async def coro(): await asyncio.sleep(5) print(text) super().__init__(coro()) </code></pre> <p>So that I can use it like this:</p> <pre class="lang-py prettyprint-override"><code>asyncio.create_task(MyCoro(&quot;Hello!&quot;)) # Above is same as: async def coro(text): await asyncio.sleep(5) print(text) asyncio.create_task(coro(&quot;Hello!&quot;)) </code></pre> <p>How can I do this?</p>
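One option is to skip subclassing altogether and make the class awaitable by delegating `__await__` to an inner coroutine. Note that `asyncio.create_task` insists on a genuine coroutine object, so such instances are scheduled with `asyncio.ensure_future` (which accepts any awaitable) or awaited directly. A sketch, with the 5-second sleep shortened for the sake of the example:

```python
import asyncio

class MyCoro:
    """Awaitable object: __await__ delegates to an inner coroutine."""

    def __init__(self, text):
        self.text = text

    async def _run(self):
        await asyncio.sleep(0)          # stand-in for asyncio.sleep(5)
        return self.text

    def __await__(self):
        return self._run().__await__()

async def main():
    direct = await MyCoro("Hello!")              # awaited directly ...
    task = asyncio.ensure_future(MyCoro("again"))  # ... or scheduled
    return direct, await task

result = asyncio.run(main())
assert result == ("Hello!", "again")
```

If a real `Task` via `create_task` is required, wrapping the instance in a tiny `async def` shim (`async def run(c): return await c`) restores a true coroutine object.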
<python><oop><python-asyncio>
2023-07-14 22:15:20
1
603
Max Smirnov
76,691,177
9,677,095
How to get a slice of a dataframe with boolean indexing that can modify the original
<p>Say we have a dataframe as such:</p> <pre><code>dataset = pd.DataFrame({'A':[5,5,5,5,5,5], 'B':[1,2,3,4,5,6]}) dataset A B 0 5 1 1 5 2 2 5 3 3 5 4 4 5 5 5 5 6 </code></pre> <p>Getting a normal slice of 1 to 4 gives us</p> <pre><code>ds = dataset[1:4] ds A B 1 5 2 2 5 3 3 5 4 </code></pre> <p>While getting a slice through boolean indexing gives us</p> <pre><code>sliced = dataset.loc[dataset['B'] % 2 == 0] sliced A B 1 5 2 3 5 4 5 5 6 </code></pre> <p>I'm trying to edit the original dataset with the slice gotten through the boolean indexing method. When doing it with normal slicing it works as expected</p> <pre><code>ds.loc[1,'A'] = 10 ds A B 1 10 2 2 5 3 3 5 4 dataset A B 0 5 1 1 10 2 2 5 3 3 5 4 4 5 5 5 5 6 </code></pre> <p>But when editing the boolean indexed slice it doesn't change the original</p> <pre><code>sliced.loc[1,'A'] = 10 sliced A B 1 10 2 3 5 4 5 5 6 dataset A B 0 5 1 1 5 2 2 5 3 3 5 4 4 5 5 5 5 6 </code></pre> <p>I couldn't find anything in the docs about boolean indexing creating a copy instead of a view (I'm not totally sure if that is even the right terminology). I've tried adding .loc to the slices and that didn't seem to change anything. I even tried ds._is_copy() and sliced._is_copy() but they both return the same thing, a weakref to the original dataframe. ._is_view seems to return True or False almost randomly but according to the source code it doesn't seem too reliable anyway.</p> <blockquote> <p>def is_view(self) -&gt; bool:</p> <p>&quot;&quot;&quot; return a boolean if I am possibly a view &quot;&quot;&quot;</p> </blockquote>
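Boolean indexing always materialises a copy (only certain basic slices return views), so edits to `sliced` can never propagate back. The reliable pattern is to keep the mask itself and write through `.loc` on the original frame. A sketch:

```python
import pandas as pd

dataset = pd.DataFrame({"A": [5, 5, 5, 5, 5, 5], "B": [1, 2, 3, 4, 5, 6]})

mask = dataset["B"] % 2 == 0          # keep the mask, not the sliced copy
dataset.loc[mask, "A"] = 10           # writes into the original frame

assert dataset["A"].tolist() == [5, 10, 5, 10, 5, 10]

# to target a single row, combine the mask with its label
dataset.loc[mask & (dataset.index == 1), "A"] = 99
assert dataset.loc[1, "A"] == 99
```

Worth knowing: under pandas' copy-on-write behaviour (the default from pandas 3.0), even the plain `dataset[1:4]` slice stops acting as a writable view, so mask-plus-`.loc` on the original is the forward-compatible idiom in both cases.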
<python><pandas>
2023-07-14 21:21:00
3
358
nemba
76,691,027
1,540,660
Bokeh tool change icon
<p>I cannot seem to change the icon of any tool on the Bokeh Toolbar. To avoid any confusion: I am running Bokeh v3.2.0 and all references and links below will be in the context of 3.2.0, though the problem appears to exist in earlier versions (I have tested 2.4.*).</p> <p>Here is what the <a href="https://docs.bokeh.org/en/latest/docs/user_guide/interaction/tools.html#customizing-tools-icon" rel="nofollow noreferrer">official documentation</a> states:</p> <blockquote> <p>It’s also possible to change a tool icon using the icon keyword. You can pass:</p> <p>A well known icon name</p> <pre><code>plot.add_tools(BoxSelectTool(icon=&quot;box_zoom&quot;)) </code></pre> <p>A CSS selector</p> <pre><code>plot.add_tools(BoxSelectTool(icon=&quot;.my-icon-class&quot;)) </code></pre> <p>An image path</p> <pre><code>plot.add_tools(BoxSelectTool(icon=&quot;path/to/icon&quot;)) </code></pre> </blockquote> <p>Here is a minimal example of the problem I have (in Jupyter for simplicity):</p> <pre><code>''' Minimal example of the problem ''' from bokeh.plotting import figure, show from bokeh.io import output_notebook from bokeh.models import BoxSelectTool output_notebook() x = list(range(11)) y = x # create a plot p = figure( width=300, height=300, ) p.circle(x, y, size=12, alpha=0.8, color=&quot;#53777a&quot;) p.add_tools(BoxSelectTool(icon=&quot;box_zoom&quot;)) # This fails show(p) </code></pre> <p>This produces an error:</p> <pre><code>FileNotFoundError: [Errno 2] No such file or directory: '/home/username/workspace/box_zoom' </code></pre> <p>Based on the documentation I would expect the above code to pick a built-in icon based on a &quot;well known name&quot;. In fact the line of code is literally taken from the documentation.</p> <p>I also tried:</p> <pre><code>p.add_tools(BoxSelectTool(icon=&quot;.my-icon-class&quot;)) </code></pre> <p>... hoping that I can then just save the icon in my static folder and sort the rest out in CSS. 
That also fails with the same error:</p> <pre><code>FileNotFoundError: [Errno 2] No such file or directory: '/home/username/workspace/.my-icon-class' </code></pre> <p>I searched the interwebz extensively but nobody appears to have be having the same problem. What am I doing wrong?</p>
<python><icons><bokeh><toolbar>
2023-07-14 20:44:42
0
336
Art Gertner
76,690,873
1,540,660
Bokeh toolbar indicators wrong when merging tools in gridplot
<h2>Problem statement</h2> <p>When grouping bokeh plots into a <a href="https://docs.bokeh.org/en/latest/docs/reference/layouts.html#gridplot" rel="nofollow noreferrer">gridplot</a> tool indicators appear to have the wrong state. This is the case for at least <a href="https://docs.bokeh.org/en/latest/docs/reference/models/tools.html#bokeh.models.WheelZoomTool" rel="nofollow noreferrer">WheelZoomTool</a> and <a href="https://docs.bokeh.org/en/latest/docs/reference/models/tools.html#bokeh.models.HoverTool" rel="nofollow noreferrer">HoverTool</a>, though could be the case for some other tools as well. The problem is purely visual. The state of the underlying tools is correct as set in the code. Clicking on the tools a few times seems to bring the indicators back into sync with the actual tool state.</p> <p>Here is a steps to reproduce the behaviour (in Jupyter for simplicity):</p> <pre><code>''' Minimal example of the problem ''' from bokeh.layouts import gridplot from bokeh.plotting import figure, show from bokeh.models import HoverTool from bokeh.io import output_notebook output_notebook() x = list(range(11)) y0 = x y1 = [10 - i for i in x] y2 = [abs(i - 5) for i in x] # create three plots s1 = figure( background_fill_color=&quot;#fafafa&quot;, active_scroll=&quot;wheel_zoom&quot;, active_inspect=None, width=300, height=300, ) s1.circle(x, y0, size=12, alpha=0.8, color=&quot;#53777a&quot;) s1.add_tools(HoverTool()) s2 = figure( background_fill_color=&quot;#fafafa&quot;, active_scroll=&quot;wheel_zoom&quot;, active_inspect=None, width=300, height=300, ) s2.triangle(x, y1, size=12, alpha=0.8, color=&quot;#c02942&quot;) s2.add_tools(HoverTool()) s3 = figure( background_fill_color=&quot;#fafafa&quot;, active_scroll=&quot;wheel_zoom&quot;, active_inspect=None, width=300, height=300, ) s3.square(x, y2, size=12, alpha=0.8, color=&quot;#d95b43&quot;) s3.add_tools(HoverTool()) # make a grid grid = gridplot([[s2, s3]], width=300, height=300, merge_tools=True, 
toolbar_location='right') show(s1) show(grid) </code></pre> <p>The example above produces the following result:</p> <p><a href="https://i.sstatic.net/QFID5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QFID5.png" alt="enter image description here" /></a></p> <h2>Questions</h2> <p>I have two questions about this</p> <ol> <li>Is this a bug or am I doing something wrong?</li> <li>If this is a bug, does anyone have an elegant workaround? I do have one of my own but it is <strong>very</strong> far from elegant.</li> </ol> <h2>What I have tried</h2> <p>At the moment I wrap gridplot into a wrapper function which first creates an instance of <a href="https://docs.bokeh.org/en/latest/docs/reference/models/plots.html#bokeh.models.GridPlot" rel="nofollow noreferrer">GridPlot</a> then decorates it by looping through the tools in the toolbar and manually setting the indicators.</p> <p>Here is a snapshot of code:</p> <pre><code>def customized_gridplot(children:list, **kwargs)-&gt;GridPlot: &quot;&quot;&quot; Calls ``bokeh.layouts.gridplot`` and decorates returned gridplot. Merges the tools on the toolbar Fiddles with active indicators of the tools to correct them Args: children (list): List of items that will be passed to ``bokeh.layouts.gridplot`` **kwargs: Other kwargs that will be passed to ``bokeh.layouts.gridplot`` Returns: GridPlot: Decorated instance of GridPlot &quot;&quot;&quot; kwargs_default = dict( toolbar_options = dict(logo=None), # No Bokeh logo merge_tools = True, # Group identical tools from different plots ) kwargs_default.update(kwargs) gp = gridplot( children=children, **kwargs_default ) # This next part fiddles with toolbar tool indicators. # Below we manipulate the state of these indicators !only!. # This will not impact the state of the tools themselves. 
for tool in gp.toolbar.tools: if tool.__class__.__name__ == 'ToolProxy': if tool.tools[0].__class__.__name__ == 'PanTool': tool.active = True if tool.tools[0].__class__.__name__ == 'BoxZoomTool': tool.active = False if tool.tools[0].__class__.__name__ == 'WheelZoomTool': tool.active = True if tool.tools[0].__class__.__name__ == 'HoverTool': tool.active = True return gp </code></pre> <p>As you can see, the workaround I use is really poor and does not generalize well; the indicator states are literally hardcoded, which is not suitable for my use case.</p> <p>Any advice is appreciated.</p> <p><em>All the code provided above is not my actual production code but rather compiled examples to demonstrate the problem.</em></p> <h2>Also relevant</h2> <p><a href="https://stackoverflow.com/questions/64774658/how-to-active-scroll-with-a-bokeh-gridplot#comment114574460_64774754">How to active scroll with a bokeh gridplot?</a></p> <p><a href="https://stackoverflow.com/a/49297520/1540660">https://stackoverflow.com/a/49297520/1540660</a></p> <p><a href="https://github.com/bokeh/bokeh/issues/10107" rel="nofollow noreferrer">https://github.com/bokeh/bokeh/issues/10107</a></p>
<python><plot><bokeh><toolbar>
2023-07-14 20:15:01
1
336
Art Gertner
76,690,821
4,800,754
Python Bokeh visualization aspect ratio issue: circle is distorted. Why?
<p>I am using the following code in python to visualize a circle with radius = 1 (using Bokeh). However, when I show the circle, it seems like the circle is distorted. As you can see in the plot below, the circle reaches x=+1 and x=-1 on the x-axis as expected, but on the y-axis it gets close to y=-1 and y=+1 but does not reach there, and it's more like 0.95 or something... I cannot understand the source of this issue in the code!</p> <p>I have tried setting the <code>aspect ratio</code> but did not help, also tried <code>match_aspect=True</code> with no luck...</p> <p>Code (running in jupyter notebook):</p> <pre><code>from bokeh.io import output_notebook, output_file, show from bokeh.models import Circle from bokeh.plotting import figure # Create the figure with an aspect ratio of 1 p = figure(width=500, height=500, x_range=(-1.2, 1.2), y_range=(-1.2, 1.2), tools='tap', title='Distorted Circle!', aspect_ratio=1, match_aspect=True) # Plot the circle circle = Circle(x=0, y=0, radius=1, fill_color='lightblue', line_color='black') p.add_glyph(circle) output_notebook() show(p, match_aspect=True) </code></pre> <p>Resulting visualization:</p> <p><a href="https://i.sstatic.net/pSKEm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pSKEm.png" alt="enter image description here" /></a></p>
<python><visualization><bokeh>
2023-07-14 20:06:28
0
563
ameerosein
76,690,812
3,611,472
Question about Context Manager: how to link two objects in an elegant way
<p>I was reading the documentation of TensorFlow and I have found this code:</p> <pre><code>g = tf.Graph() with g.as_default(): c = tf.constant(5.0) assert c.graph is g </code></pre> <p><code>tf.Graph().as_default()</code> returns a context manager, and whatever is defined below the <code>with</code> statement acquires an attribute <code>graph</code> that stores the object <code>g</code>.</p> <p>I don't quite understand how the objects <code>c</code> and <code>g</code> get linked to each other, since in the definition of <code>c</code> we are not passing the object <code>g</code>. Can you explain to me how Python makes the link between the objects? This question is not specific to TensorFlow - I am just using it as an example.</p> <p>More specifically, I would like to create two classes <code>A</code> and <code>B</code> where <code>A</code> implements a Context Manager, something along the lines of the following code:</p> <pre><code>class A: def __init__(self): pass def __enter__(self): # something here def __exit__(self): # something here class B: def __init__(self): self.a = None </code></pre> <p>Then, I would like to make the following code work:</p> <pre><code>obj = A() with obj as a: b = B() assert b.a is obj </code></pre>
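The link is made through shared state, not through the `with` statement itself: `as_default()` pushes `g` onto a (thread-local) stack in `__enter__`, and the `tf.constant` constructor reads the top of that stack. The same pattern in plain Python, sketched minimally (a real implementation would use a stack plus thread-local storage so defaults can nest safely):

```python
class A:
    _current = None                   # class-level "default" slot

    def __enter__(self):
        self._previous = A._current   # remember what to restore
        A._current = self
        return self

    def __exit__(self, exc_type, exc, tb):
        A._current = self._previous   # restore on exit, even on error

class B:
    def __init__(self):
        self.a = A._current           # read whichever A is active, if any

obj = A()
with obj as a:
    b = B()

assert b.a is obj
assert A._current is None             # default cleared after the block
```

So `B` never receives `A` explicitly; it simply consults the class-level slot that the context manager maintains, exactly as TensorFlow ops consult the default-graph stack.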
<python><object>
2023-07-14 20:03:40
0
443
apt45
76,690,618
10,623,444
Apply two conditions per group of rows, but if the first condition is satisfied, skip the second condition and move to the next group [pyspark]
<p>I have the following spark Dataframe</p> <pre class="lang-py prettyprint-override"><code>data = [ (&quot;1&quot;, &quot;SAP&quot;, &quot;A&quot;, 1, 51.55, &quot;Team1&quot;), (&quot;2&quot;, &quot;SAP&quot;, &quot;B&quot;, 1, 51.55, &quot;Team1&quot;), (&quot;3&quot;, &quot;SAP&quot;, &quot;B&quot;, 1, 51.55, &quot;Team1&quot;), (&quot;4&quot;, &quot;SAP&quot;, &quot;A&quot;, 1, 55.35, &quot;Team2&quot;), (&quot;5&quot;, &quot;SAP&quot;, &quot;B&quot;, 1, 55.35, &quot;Team2&quot;), (&quot;6&quot;, &quot;SAP&quot;, &quot;C&quot;, 1, 58.00, &quot;Team3&quot;), (&quot;7&quot;, &quot;SAP&quot;, &quot;D&quot;, 1, 47.00, &quot;Team3&quot;) ] df = spark.createDataFrame(data, [&quot;ID&quot;, &quot;Source&quot;, &quot;Type&quot;, &quot;Active&quot;, &quot;Weight&quot;, &quot;Group&quot;]) </code></pre> <p>I want to apply the following two conditions per &quot;Group&quot;, thus separately for the records of &quot;Team1&quot;, &quot;Team2&quot; and &quot;Team3&quot;</p> <pre class="lang-py prettyprint-override"><code>import pyspark.sql.functions as F from pyspark.sql import Window condition1 = (F.col(&quot;Type&quot;) == &quot;A&quot;) &amp; (F.col(&quot;Active&quot;) == 1) &amp; (F.col(&quot;Source&quot;) == &quot;SAP&quot;) condition2 = (F.col(&quot;Rank&quot;) == 1) mainRecordWindow = Window.partitionBy(&quot;Group&quot;).orderBy(F.col(&quot;Weight&quot;).desc()) df = df.withColumn( &quot;Rank&quot;, F.rank().over(mainRecordWindow) ) df = df.withColumn( &quot;main_record&quot;, F.when(condition1, 1).otherwise(F.when(condition2, 1).otherwise(0)) ) df.show() </code></pre> <p>The result is the following table:</p> <pre><code>+--+------+----+-------------+-----+----+-----------+ |ID|Source|Type|Active|Weight|Group|Rank|main_record| +--+------+----+-------------+-----+----+-----------+ | 1| SAP| A| 1| 51.55|Team1| 1| 1| | 2| SAP| B| 1| 51.55|Team1| 1| 1| | 3| SAP| B| 1| 51.55|Team1| 1| 1| | 4| SAP| A| 1| 55.35|Team2| 1| 1| | 5| SAP| B| 1| 55.35|Team2| 1| 1| | 6| SAP| C| 1| 
58.00|Team3| 1| 1| | 7| SAP| D| 1| 47.00|Team3| 2| 0| +--+------+----+-------------+-----+----+-----------+ </code></pre> <p>For records of &quot;Team1&quot;, even though they satisfy the second condition, since the group already has a row that satisfies the first condition (the row with ID=1), the other records in the same &quot;Team&quot; should be marked as 0 in the main_record column.</p> <p>The expected result should be:</p> <pre><code>+---+------+----+------+------+-----+----+-----------+
| ID|Source|Type|Active|Weight|Group|Rank|main_record|
+---+------+----+------+------+-----+----+-----------+
|  1|   SAP|   A|     1| 51.55|Team1|   1|          1|
|  2|   SAP|   B|     1| 51.55|Team1|   1|          0|
|  3|   SAP|   B|     1| 51.55|Team1|   1|          0|
|  4|   SAP|   A|     1| 55.35|Team2|   1|          1|
|  5|   SAP|   B|     1| 55.35|Team2|   1|          0|
|  6|   SAP|   C|     1| 58.00|Team3|   1|          1|
|  7|   SAP|   D|     1| 47.00|Team3|   2|          0|
+---+------+----+------+------+-----+----+-----------+
</code></pre>
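A sketch of the intended per-group decision in plain Python, for clarity. (In pyspark itself the per-group flag would likely be computed with something like `F.max(F.when(condition1, 1).otherwise(0)).over(Window.partitionBy("Group"))`, then combined in a `F.when` chain — that pyspark form is an untested assumption; the logic below is the part being illustrated.)

```python
from collections import defaultdict

# (ID, Source, Type, Active, Weight, Group) -- the question's data
rows = [
    ("1", "SAP", "A", 1, 51.55, "Team1"),
    ("2", "SAP", "B", 1, 51.55, "Team1"),
    ("3", "SAP", "B", 1, 51.55, "Team1"),
    ("4", "SAP", "A", 1, 55.35, "Team2"),
    ("5", "SAP", "B", 1, 55.35, "Team2"),
    ("6", "SAP", "C", 1, 58.00, "Team3"),
    ("7", "SAP", "D", 1, 47.00, "Team3"),
]

def condition1(r):
    # Type == "A" and Active == 1 and Source == "SAP"
    return r[2] == "A" and r[3] == 1 and r[1] == "SAP"

groups = defaultdict(list)
for r in rows:
    groups[r[5]].append(r)

main_record = {}
for members in groups.values():
    if any(condition1(r) for r in members):
        # a condition1 row exists: only those rows win; the rank is ignored
        for r in members:
            main_record[r[0]] = 1 if condition1(r) else 0
    else:
        # fall back to condition2: highest-weight (rank 1) rows win
        top = max(r[4] for r in members)
        for r in members:
            main_record[r[0]] = 1 if r[4] == top else 0

print(main_record)
```

This reproduces the expected table: IDs 1, 4 and 6 get `main_record = 1`, everything else 0.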
<python><apache-spark><pyspark>
2023-07-14 19:21:25
1
1,589
NikSp
76,690,541
1,082,438
Averaging over sub columns of a DataFrame
<p>I have a dataframe that looks like this</p> <pre><code>1 4
0 3
0 1
0 3
1 2
0 0
0 6
</code></pre> <p>1's in the first column are guaranteed to be &gt; N steps apart. For each occurrence of 1 in the first column, I want to take a slice of length M (M &lt; N) of the second column and average the results.</p> <p>E.g. for M = 2, I have two slices: 4,3 and 2,0, so the result should be 3, 1.5 (say, as a Series or a list).</p> <p>Can't come up with anything but manually iterating over the rows. Would be grateful for any suggestions!</p> <p>EDIT: The example I gave could be interpreted in two ways, so I changed the numbers to make it clear: I don't want a list of group averages, but rather, for each slice position, the average of the corresponding entries over the groups.</p>
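One vectorised way to do this is to find the row positions of the 1's, stack the M-length windows of the second column into a 2-D array, and average along axis 0 (a sketch; it assumes every 1 has at least M rows starting at it, which the "&gt; N steps apart" guarantee implies for interior markers):

```python
import numpy as np

# the question's data as a 2-column array
arr = np.array([[1, 4], [0, 3], [0, 1], [0, 3], [1, 2], [0, 0], [0, 6]])
M = 2

starts = np.flatnonzero(arr[:, 0] == 1)            # positions of the 1's
# one row per slice, one column per position within the slice
windows = np.stack([arr[s:s + M, 1] for s in starts])
result = windows.mean(axis=0)                      # position-wise average
print(result)   # [3.  1.5]
```

The same works from a pandas frame via `df.to_numpy()` or by indexing the second column with `df.iloc`.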
<python><pandas><average>
2023-07-14 19:10:06
3
506
LazyCat
76,690,464
3,434,388
mypy & typing singleton / factory classes
<p>I often use the following construct to generate singletons in my code:</p> <pre><code>class Thing: pass class ThingSingletonFactory: _thing = None def __new__(cls) -&gt; Thing: if cls._thing is None: cls._thing = Thing() return cls._thing def get_thing() -&gt; Thing: return ThingSingletonFactory() thing = get_thing() same_thing = get_thing() assert thing is same_thing </code></pre> <p><code>class ThingSingletonFactory</code> stores the only instance of <code>Thing</code>, and returns it anytime a new <code>ThingSingletonFactory()</code> is requested. Works great for API clients, logging.Logger, etc.</p> <p>I'm adding mypy type checking to an existing project that uses this, and mypy does not like it, at all.</p> <pre><code>line 8: error: Incompatible return type for &quot;__new__&quot; (returns &quot;Thing&quot;, but must return a subtype of &quot;ThingSingletonFactory&quot;) [misc] line 15: error: Incompatible return value type (got &quot;ThingSingletonFactory&quot;, expected &quot;Thing&quot;) [return-value] </code></pre> <p>I feel the type hints in the code are correct: <code>__new__()</code> does return the type Thing, as does the func <code>get_thing()</code>.</p> <p>How can I provide mypy the hints required to make it happy? Or is this construct simply considered &quot;bad&quot; ?</p>
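One way to keep both the singleton behaviour and a clean bill of health from mypy is to sidestep the construct entirely: mypy is arguably right that `__new__` must return a subtype of its own class. A cached factory function gives the same "always the same instance" guarantee with honest types (a sketch; whether it fits the wider codebase is an assumption):

```python
from functools import lru_cache


class Thing:
    pass


@lru_cache(maxsize=None)   # the first call creates the instance; later calls return it
def get_thing() -> Thing:
    return Thing()


thing = get_thing()
same_thing = get_thing()
assert thing is same_thing
```

This type-checks under strict mypy because every annotation matches what actually happens at runtime, and it drops the extra factory class altogether.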
<python><mypy><typing>
2023-07-14 18:56:02
3
3,700
Danielle M.
76,690,412
3,551,365
Measure wait time of threadpool tasks in python
<p>I'm running a threadpool executor to which I submit some tasks.</p> <pre><code>executor = ThreadPoolExecutor(thread_name_prefix='OMS.oms_thread_', max_workers=16) task = executor.submit(method_to_run, args) </code></pre> <p>I know I can get the status of these by calling <code>task.running()</code> to know if it is completed or not. What I can't figure out is how to measure how much time a task spent waiting to be started. One way could be to store the time when a task was created, and pass some task Id to method_to_run and have it store the time when it started running and get the difference between those times.</p> <p>But that's a lot of trouble and would need me to change the method_to_run. Is there a better way?</p>
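One way to avoid touching `method_to_run` is to wrap it at submission time: record the submit timestamp in a closure, and have the wrapper note the elapsed time just before calling through. A sketch (the `submit_timed` helper name is my own, not a stdlib API):

```python
import time
from concurrent.futures import ThreadPoolExecutor


def submit_timed(executor, fn, *args, **kwargs):
    """Submit fn unchanged; info['wait'] is filled in when the task starts running."""
    submitted = time.monotonic()
    info = {}

    def wrapper(*a, **kw):
        info['wait'] = time.monotonic() - submitted   # time spent queued
        return fn(*a, **kw)

    return executor.submit(wrapper, *args, **kwargs), info


executor = ThreadPoolExecutor(max_workers=1)
f1, _ = submit_timed(executor, time.sleep, 0.2)   # occupies the only worker
f2, info2 = submit_timed(executor, lambda: 42)    # has to wait its turn
assert f2.result() == 42
print(f"task 2 waited ~{info2['wait']:.2f}s")
```

The target function is unmodified, and the caller keeps the wait measurement alongside the future.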
<python><future><executor>
2023-07-14 18:47:10
1
1,303
hoodakaushal
76,690,192
8,844,970
PATH from docker env overwritten by Singularity when called by Snakemake
<p>I am trying to use containers to run Snakemake pipelines, but the PATH variable keeps getting overwritten when Snakemake converts docker containers to singularity. Here is my dockerfile.</p> <pre><code># Dockerfile FROM mambaorg/micromamba@sha256:1dee8bc9409368d83d08b0a93370422a4fe0fd0092f56819bbf4c4cb254a8183 RUN mkdir -p /usr/bin WORKDIR /usr/bin RUN export PATH=&quot;$PATH:/opt/conda/bin&quot; ARG ENV_YML COPY --chown=$MAMBA_USER:$MAMBA_USER ${ENV_YML} /usr/bin/${ENV_YML} RUN micromamba install -y -n base -f /usr/bin/${ENV_YML} &amp;&amp; \ micromamba clean --all --yes </code></pre> <p>The <code>$ENV_YML</code> simply has specifications for an environment. I have tried to manually add <code>/opt/conda/bin</code> to <code>PATH</code> even though for docker itself, I don't need it. This is where executables from the programs that micromamba installs goes (e.g. <code>reformat.sh</code>). I can open up docker <code>docker run -it --rm dockername/dockerfile:1.0.0</code> and run <code>reformat.sh</code> from anywhere because <code>/opt/conda/bin</code> is in the <code>PATH</code>. But when I try to run Snakemake using the docker I get <code>reformat.sh: file not found</code> error and thats because within the Singularity image the <code>/opt/conda/bin</code> is not in <code>PATH</code>. I can patch it by manually appending the folder to path like this in Snakefile:</p> <pre><code># Snakefile rule example: output: somefile.txt shell: &quot;&quot;&quot; export PATH=/opt/conda/bin:$PATH &amp;&amp; \ (reformat.sh) 2&gt; {log} &quot;&quot;&quot; </code></pre> <p>But obviously I don't want to keep adding this code to all my rules. So how do I get singularity to not overwrite the <code>PATH</code> variable from docker?</p> <p>Adding PATH explicitly to docker doesn't seem to work but adding it before executing in singularity does. 
So the file exists; it's just that the <code>PATH</code> variable is overridden by the user's <code>PATH</code> variable in Singularity, which is obviously expected behavior on Singularity's end.</p> <p><strong>Edit:</strong> Some progress. If I use <code>--cleanenv</code> when running singularity then it seems to work, e.g. <code>singularity run --cleanenv /path/to/image.simg</code>. Inside that container instance all the path variables are set as they were in docker. Now I tried passing that as an arg via snakemake with:</p> <pre><code>snakemake --cores 1 --snakefile example/Snakefile --configfile config/config_test.yml --use-singularity --singularity-args &quot;--cleanenv &quot; </code></pre> <p>and that fails, because I believe snakemake uses <code>singularity exec</code> under the hood.</p>
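One likely contributor sits in the Dockerfile itself: `RUN export PATH=...` only changes the environment of that single `RUN` layer and is discarded when the layer finishes, so it never reaches the image metadata that Singularity's conversion reads. Persisting it with `ENV` is worth trying (a sketch; whether the converted Singularity image then keeps it may still depend on the `--cleanenv` behaviour noted above):

```dockerfile
# ENV persists into the image metadata, unlike RUN export
ENV PATH="/opt/conda/bin:${PATH}"
```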
<python><docker><snakemake><singularity-container>
2023-07-14 18:07:26
0
369
spo
76,690,027
19,157,137
Line Graph with Labeled Markers for Multiple Columns using Plotly
<p>I'm trying to create a line graph with labeled marker points using Plotly. I have a data frame with columns <code>Date</code>, <code>Percentage Change</code> and <code>Percentage Change From Initial</code>. I want to display all the labeled marker points at once on the graph and ensure that the markers are spaced out enough to be clearly visible.</p> <p>I would like to plot the <code>Percentage Change</code> and <code>Percentage Change From Initial</code> columns as a line graph and display labeled marker points for the <code>Value</code> column. How can I achieve this using Plotly?</p> <pre><code>import plotly.graph_objects as go import pandas as pd # Create a sample dataframe data = { 'Date': ['2023-05-01', '2023-05-02', '2023-05-03', '2023-05-04', '2023-05-05', '2023-05-06'], 'Percentage Change': [0.0, 0.2, -0.083, 0.227, -0.037, 0.154], 'Percentage Change From Initial': [0.0, 0.2, 0.1, 0.35, 0.3, 0.5] } df = pd.DataFrame(data) # Create the Plotly figure fig = go.Figure() # Calculate the interval for data points interval = len(df) // 2 # Create a list of indices for the data points indices = list(range(0, len(df), interval)) # Add Percentage Change trace with markers and labels fig.add_trace(go.Scatter(x=df['Date'], y=df['Percentage Change'], mode='lines+markers', name='Percentage Change')) fig.add_trace(go.Scatter(x=df['Date'][indices], y=df['Percentage Change'][indices], mode='markers', marker=dict(size=10), showlegend=False, text=df['Percentage Change'][indices], hovertemplate=&quot;Percentage Change: %{text} &lt;extra&gt;&lt;/extra&gt;&quot;)) # Add Percentage Change From Initial trace with markers and labels fig.add_trace(go.Scatter(x=df['Date'], y=df['Percentage Change From Initial'], mode='lines+markers', name='Percentage Change From Initial')) fig.add_trace(go.Scatter(x=df['Date'][indices], y=df['Percentage Change From Initial'][indices], mode='markers', marker=dict(size=10), showlegend=False, text=df['Percentage Change From Initial'][indices], 
hovertemplate=&quot;Percentage Change From Initial: %{text} &lt;extra&gt;&lt;/extra&gt;&quot;)) # Set the title and axis labels fig.update_layout(title='Stock Data', xaxis_title='Date', yaxis_title='Percentage') # Display the figure fig.show() </code></pre> <p><strong>The Current Output:</strong></p> <p><a href="https://i.sstatic.net/OBC33.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OBC33.png" alt="enter image description here" /></a></p>
<python><pandas><dataframe><plot><plotly>
2023-07-14 17:32:38
1
363
Bosser445
76,689,983
713,200
How to access items of 'Device' data type in python?
<p>Basically I have a variable named <code>dev_handle</code> and the value of it is</p> <pre><code>print(dev_handle) output: Device RET789-452.234 (alias=RET789), type poplk </code></pre> <p>I do not know the type of the variable, but I know it's not an int, string, list or dict.</p> <p>I want to access the value <code>poplk</code>. I also want to check if <code>poplk</code> exists in the variable <code>dev_handle</code>.</p> <p>When I tried the following code</p> <pre><code> print(dev_handle[type]) </code></pre> <p>I get the following error</p> <pre><code>print(dev_handle[type]) ERROR: TypeError: 'Device' object is not subscriptable </code></pre> <p>How can I access the variable's elements to get to <code>poplk</code>?</p> <p>This is where the variable <code>dev_handle</code> is created:</p> <pre><code>def get_details(self, testscript, device_ip): testbed_device_data = {} for device, device_detail in testscript.parameters['testbed'].devices.items(): if device_detail.alias not in ['qwerPA', 'qwerLA', 'TESTManager']: if device_ip == str(device_detail.connections.cli.ip): device_type = device_detail.os testbed_device_data.update({str(device_detail.connections.cli.ip): device_detail}) logger.info(testbed_device_data) dev_handle = testbed_device_data[device_ip] logger.info(dev_handle) logger.info(device_type) break return testbed_device_data, dev_handle, device_type </code></pre> <p>The above method reads a testbed.yaml file, which has the details of the device in JSON format, picks the matching device, stores it in <code>dev_handle</code> and then returns it.</p>
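Since `get_details` already does `device_type = device_detail.os`, the type string most likely lives in an attribute (probably `dev_handle.os`), and the standard introspection tools reveal it. A sketch with a hypothetical stand-in class, since the real Device class's attribute names are an assumption here:

```python
# hypothetical stand-in for the real testbed Device class, just to
# illustrate the introspection calls; real attribute names may differ
class Device:
    def __init__(self, name, os):
        self.name = name
        self.os = os


dev_handle = Device("RET789-452.234", "poplk")

print(type(dev_handle).__name__)   # the class name, e.g. 'Device'
print(vars(dev_handle))            # instance attributes as a plain dict
print(dir(dev_handle))             # everything reachable, incl. methods

dev_type = getattr(dev_handle, "os", None)        # safe attribute access
has_poplk = "poplk" in vars(dev_handle).values()  # membership check
```

`vars(obj)` (equivalently `obj.__dict__`) and `dir(obj)` are the quickest way to discover which attribute holds the value you are after on an unfamiliar object.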
<python><typeerror>
2023-07-14 17:27:04
0
950
mac
76,689,909
11,922,765
python dataframe drop rows in all columns outside the predefined limits
<p>I am dropping rows in all columns of the df if they fall outside the specified limit.</p> <p>my code:</p> <pre><code>xdf = pd.DataFrame(columns=['A','B'],data=[[-10,20],[2,8],[4,1],[3,-1]]) print(xdf) xdf_params_limits = pd.DataFrame(columns=['A','B'],index=['min','max'],data=[[0,1],[5,30]]) print(xdf_params_limits) xdf_flt = xdf.apply(lambda x: (x[col]&gt;xdf_params_limits[col].loc['min'])&amp;(x[col]&lt;xdf_params_limits[col].loc['max']) for col in xdf.columns, axis=1) xdf = xdf[xdf_flt] A B 0 -10 20 1 2 8 2 4 1 3 3 -1 A B min 0 1 max 5 30 </code></pre> <p>Expected output</p> <p>xdf =</p> <pre><code> A B 1 2 8 </code></pre> <p>Present output:</p> <pre><code> 81 def ast_parse(self, source, filename='&lt;unknown&gt;', symbol='exec'): 82 &quot;&quot;&quot;Parse code to an AST with the current compiler flags active. 83 84 Arguments are exactly the same as ast.parse (in the standard library), 85 and are passed to the built-in compile function.&quot;&quot;&quot; ---&gt; 86 return compile(source, filename, symbol, self.flags | PyCF_ONLY_AST, 1) SyntaxError: Generator expression must be parenthesized (2981748052.py, line 5) </code></pre>
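The `SyntaxError` comes from the unparenthesized generator expression, but the whole `apply` can be replaced by a vectorised comparison against the min/max rows, aligned on column labels (a sketch on the question's data):

```python
import pandas as pd

xdf = pd.DataFrame(columns=['A', 'B'], data=[[-10, 20], [2, 8], [4, 1], [3, -1]])
limits = pd.DataFrame(columns=['A', 'B'], index=['min', 'max'], data=[[0, 1], [5, 30]])

# compare every column against its own limit (open interval, matching the
# original lambda's strict > and <), then keep rows where all columns pass
inside = xdf.gt(limits.loc['min'], axis=1) & xdf.lt(limits.loc['max'], axis=1)
xdf_flt = xdf[inside.all(axis=1)]
print(xdf_flt)   # only row 1: A=2, B=8
```

`DataFrame.gt`/`lt` with a Series and `axis=1` broadcast each limit across its matching column, so no loop over `xdf.columns` is needed.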
<python><pandas><dataframe><numpy>
2023-07-14 17:16:46
3
4,702
Mainland
76,689,751
11,370,582
Filter by Dates that Start After a Specific Time - pandas, python
<p>I have a large dataset of individuals spanning different timeframes, I need to filter for those that exist only within a specified period; <code>01-01-2019 - 01-10-2019</code> for example.</p> <p>Is there a method in pandas that I can use to filter out individuals that either start after or finish before that period?</p> <p>For example, in the below dataset I would only want to filter for individuals that have data from <code>01-01-19 - 01-10-19</code> concurrently, so just <code>jim</code> and <code>sara</code> would be kept.</p> <pre><code>df = pd.DataFrame({'names':['jim','jim','jim','jim','jim','jim','jim','jim','jim','jim', 'bob','bob','bob','bob','bob','bob', 'sara','sara','sara','sara','sara','sara','sara','sara','sara','sara'], 'dates':['01-01-19','01-02-19','01-03-19','01-04-19','01-05-19','01-06-19','01-07-19','01-08-19','01-09-19','01-10-19', '01-05-19','01-06-19','01-07-19','01-08-19','01-09-19','01-10-19', '01-01-19','01-02-19','01-03-19','01-04-19','01-05-19','01-06-19','01-07-19','01-08-19','01-09-19','01-10-19']}) </code></pre> <p>Something along the lines of <code>dflt = df[(df['dates'] &gt;= &quot;2019-01-01&quot;)&amp;(df['Date'] &lt;= &quot;2019-01-10&quot;)]</code> but only <code>if</code> data exists for the full time period.</p>
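One way (a sketch, assuming "exists for the full time period" means the person's earliest observation is on or before the period start and the latest is on or after the period end) is a groupwise `transform` producing a boolean mask:

```python
import pandas as pd

# rebuild the question's data compactly: monthly observations per person
df = pd.DataFrame({
    'names': ['jim'] * 10 + ['bob'] * 6 + ['sara'] * 10,
    'dates': (list(pd.date_range('2019-01-01', periods=10, freq='MS'))
              + list(pd.date_range('2019-05-01', periods=6, freq='MS'))
              + list(pd.date_range('2019-01-01', periods=10, freq='MS'))),
})

start, end = pd.Timestamp('2019-01-01'), pd.Timestamp('2019-10-01')

# keep a person only if their observations span the whole period
spans = df.groupby('names')['dates'].transform(
    lambda s: (s.min() <= start) and (s.max() >= end)
)
dflt = df[spans]
print(sorted(dflt['names'].unique()))   # ['jim', 'sara']
```

`transform` broadcasts the per-group boolean back to every row, so the mask lines up with `df` directly; `bob` drops out because his first observation is after the period start.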
<python><pandas><date><datetime><filter>
2023-07-14 16:52:00
2
904
John Conor
76,689,730
1,391,683
Dynamically change docstring of instance method
<p>Here is a minimum example of a class that holds a function as an attribute and has a method that is calling said function.</p> <pre><code>class MyClass: def __init__(self, f): self.f = f def func(self): &quot;&quot;&quot;This is the docstring to be replaced.&quot;&quot;&quot; return self.f() </code></pre> <p>The function <code>f</code> can change during runtime and I would like to dynamically change the docstring of <code>func</code> to reflect the current function stored in <code>f</code>.</p> <p>I was thinking of achieving this by using properties and setter to change the docstring of <code>func</code> as soon as a new <code>f</code> is assigned.</p> <pre><code>class MyClass: def __init__(self, f): self.f = f @property def f(self): return self._f @f.setter def f(self, value): self._f = value self.func.__doc__ = value.__doc__ def func(self): &quot;&quot;&quot;This is the docstring to be replaced&quot;&quot;&quot; return self.f() </code></pre> <p>The usage would be as follows</p> <pre><code>def func(): &quot;&quot;&quot;This is the docstring to be shown&quot;&quot;&quot; return True obj = MyClass(func) </code></pre> <p>with the goal that calling <code>help(obj.func)</code> would display <code>&quot;This is the docstring to be shown&quot;</code>.</p> <p>However, it seems like docstrings of instance methods are not replaceable.</p> <pre><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[3], line 1 ----&gt; 1 obj = MyClass(func) Cell In[1], line 4, in MyClass.__init__(self, f) 3 def __init__(self, f): ----&gt; 4 self.f = f Cell In[1], line 13, in MyClass.f(self, value) 10 @f.setter 11 def f(self, value): 12 self._f = value ---&gt; 13 self.func.__doc__ = value.__doc__ AttributeError: attribute '__doc__' of 'method' objects is not writable </code></pre> <p>Is there any way of achieving this?</p>
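The method object's `__doc__` is read-only, but an *instance* attribute named `func` shadows any class-level method, and `help()` / `__doc__` lookups then see the function you assigned. Rebinding in the setter with `functools.wraps` (which copies `__doc__` and `__name__` from the wrapped function) gets the desired behaviour — a sketch of that approach:

```python
import functools


class MyClass:
    def __init__(self, f):
        self.f = f               # goes through the setter below

    @property
    def f(self):
        return self._f

    @f.setter
    def f(self, value):
        self._f = value
        # rebind func as an *instance* attribute: functools.wraps copies
        # value's docstring onto it, and instance attributes shadow any
        # class-level method, so help(obj.func) shows the new docstring
        @functools.wraps(value)
        def func():
            return self._f()
        self.func = func


def func():
    """This is the docstring to be shown"""
    return True


obj = MyClass(func)
assert obj.func() is True
assert obj.func.__doc__ == "This is the docstring to be shown"
```

Swapping in a new `f` later (`obj.f = other_func`) re-runs the setter, so the docstring tracks the current function automatically.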
<python><docstring><instance-methods>
2023-07-14 16:48:45
1
805
Sebastian
76,689,715
7,438,365
What is the best way to set one python class attribute from another class?
<p>What is the best way to add the class Defaults into class Primary? The Defaults are to be shared among all Primary class instances. I will iterate through lists to create Primary class objects and will need to change the Defaults between different lists. All of these work but it seems like the first method is less readable and the second method would cause every Primary class to create a separate copy of the default variables. The third way is using inheritance, which I have read is not preferred over composition. I feel like there must be a reason to use one over another.</p> <p>Method 1: Create class attribute</p> <pre><code>class Primary(): def __init__(self, url): self.url = url class Defaults(): def __init__(self, name, path): self.name = name self.path = path inst_default = Defaults(&quot;myfile&quot;, &quot;/tmp/mydir/&quot;) Primary.default = inst_default inst_primary = Primary(&quot;https://www.someurl&quot;) </code></pre> <p>Method 2: Create instance attribute</p> <pre><code>class Primary(): def __init__(self, url, inst_default): self.url = url self.default = inst_default class Defaults(): def __init__(self, name, path): self.name = name self.path = path inst_default = Defaults(&quot;myfile&quot;, &quot;/tmp/mydir/&quot;) inst_primary = Primary(&quot;https://www.someurl&quot;, inst_default) </code></pre> <p>Testing Method 1 &amp; 2:</p> <pre><code>print(inst_primary.url) print(inst_primary.default.path) print(inst_primary.default.name) print(inst_default.name) </code></pre> <p>Method 3: Inheritance of class attribute</p> <pre><code>class Defaults(): def __init__(self, name, path): Defaults.name = name Defaults.path = path class Primary(Defaults): def __init__(self, url): self.url = url self.name = Defaults.name self.path = Defaults.path inst_default = Defaults(&quot;myfile&quot;, &quot;/tmp/mydir/&quot;) inst_primary = Primary(&quot;https://www.someurl&quot;) </code></pre> <p>Testing Method 3:</p> <pre><code>print(inst_primary.url) 
print(inst_primary.path) print(inst_primary.name) print(inst_default.name) </code></pre>
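For the stated requirement — defaults shared by all `Primary` instances, swapped out between lists — method 1's class attribute does exactly that, with one behaviour worth seeing explicitly: reassigning the class attribute also changes what *already-created* instances see, since they all read the same slot. A sketch (class names from the question):

```python
class Defaults:
    def __init__(self, name, path):
        self.name = name
        self.path = path


class Primary:
    default = None   # shared class attribute: one Defaults object for all instances

    def __init__(self, url):
        self.url = url


Primary.default = Defaults("myfile", "/tmp/mydir/")
a = Primary("https://one")
b = Primary("https://two")
assert a.default is b.default            # every instance sees the same object

Primary.default = Defaults("other", "/tmp/otherdir/")
assert a.default.name == "other"         # existing instances see the swap too
```

If that retroactive swap is unwanted, method 2 (passing the `Defaults` instance into `__init__`) pins each object to the defaults that were current when it was created.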
<python><class><dependency-injection><attributes>
2023-07-14 16:47:39
0
330
steveH
76,689,604
2,173,773
Type alias in separate file: PytestCollectionWarning: cannot collect test class 'dict' because it has a __init__ constructor
<p>I am using <a href="https://python-poetry.org/" rel="nofollow noreferrer">Poetry</a>, Python 3.10, and <a href="https://docs.pytest.org/en/stable/" rel="nofollow noreferrer">pytest</a>. I try to separate type aliases in a separate test file (that can be included from different test files) but when running <code>pytest</code> I get a warning about <code>cannot collect test class 'dict'</code>. Here is a minimal example (my real program is more complex):</p> <pre><code>$ poetry new --src myproject $ cd myproject $ poetry add --group=dev pytest $ cat &lt;&lt;'END' &gt; src/myproject/main.py def add(a: int, b: int) -&gt; int: # This is the function that we will test with pytest return a + b END $ cat &lt;&lt;'END' &gt; tests/common.py TestDataDict = dict[str, str] # This is the type alias in a separate file END $ cat &lt;&lt;'END' &gt; tests/test_main.py # This is the test file import pytest import myproject.main from tests.common import TestDataDict @pytest.fixture() def get_dict() -&gt; TestDataDict: dict_ = {'a': 1, 'b': 2} return dict_ def test_add(get_dict: TestDataDict) -&gt; None: dict_ = get_dict assert myproject.main.add(dict_['a'], dict_['b']) == 3 END $ poetry install $ poetry shell $ pytest # Run the test ============================================================================ test session starts ============================================================================ platform linux -- Python 3.10.4, pytest-7.4.0, pluggy-1.2.0 rootdir: /home/hakon/test/python/pytest-warning/myproject collected 1 item tests/test_main.py . 
[100%] ============================================================================= warnings summary ============================================================================== .:0 :0: PytestCollectionWarning: cannot collect test class 'dict' because it has a __init__ constructor (from: tests/test_main.py) -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html ======================================================================= 1 passed, 1 warning in 0.01s ======================================================================== </code></pre> <p>How can I get rid of the warning: <code>PytestCollectionWarning: cannot collect test class 'dict' because it has a __init__ constructor</code> ?</p>
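The warning is triggered by the module-level *name*: pytest's collector picks up any module attribute matching its `python_classes` pattern (by default, names starting with `Test`) that turns out to be a class — here the alias resolves to `dict`, which has an `__init__`. The least invasive fix is a name that doesn't match the pattern (the new alias name below is my own choice):

```python
# instead of `TestDataDict = dict[str, str]` in tests/common.py
DataDict = dict[str, str]

d: DataDict = {"a": "1", "b": "2"}
print(d["a"])
```

Alternatively, keep the name and narrow the collection pattern in `pyproject.toml` / `pytest.ini` (e.g. restrict `python_classes`), but renaming the alias is simpler and keeps default pytest behaviour intact.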
<python><pytest><python-typing><python-poetry>
2023-07-14 16:32:26
0
40,918
Håkon Hægland
76,689,563
119,861
Google Oauth2 flow in frontend - api call in backend
<p>I have a frontend application (currently Retool, in future React) where I want to run the OAuth2 flow.</p> <p>I want to make the API requests in the backend. What's the best practice for this use case?</p> <p>The google client in python <a href="https://google-auth.readthedocs.io/en/stable/reference/google.oauth2.credentials.html" rel="nofollow noreferrer">needs a credential object</a> which can be instantiated e.g. like this:</p> <pre><code>credentials = Credentials( token=token, refresh_token=refresh_token, token_uri=&quot;https://www.googleapis.com/oauth2/v3/token&quot;, client_id=client_id, client_secret=client_secret, ) </code></pre> <p>Is it good practice to send <code>token</code> and <code>refresh_token</code> from the frontend to the backend in order to do the subsequent API calls there?</p>
<python><reactjs><oauth-2.0><google-api><retool>
2023-07-14 16:26:44
1
11,671
hansaplast
76,689,557
5,722,359
Deleting all items of a ttk.Treeview does not delete or unconfigure their tagNames?
<p>I noticed a strange phenomenon in the <code>ttk.Treeview</code>. Below is my test code to demonstrate the phenomenon.</p> <pre><code>import tkinter as tk from tkinter import ttk class App(ttk.Frame): def __init__(self, parent): super().__init__(parent) self.parent = parent self.top_iids = [] self.child_iids = [] self._create_widget() def _create_widget(self): self.tree = ttk.Treeview( self, height=10, selectmode='extended', takefocus=True,) self.tree.column('#0', stretch=True, width=200, anchor='w') self.tree.heading('#0', text=&quot;Col0&quot;, anchor='w') self.bn1 = ttk.Button(self, text=&quot;Populate&quot;, command=self.populate) self.bn2 = ttk.Button(self, text=&quot;TagConfigure&quot;, command=self.tagconfig) self.bn3 = ttk.Button(self, text=&quot;Reset&quot;, command=self.reset) self.tree.grid(row=0, column=0, rowspan=4, padx=5, pady=5) self.bn1.grid(row=0, column=1, padx=5, pady=5) self.bn2.grid(row=1, column=1, padx=5, pady=5) self.bn3.grid(row=2, column=1, padx=5, pady=5) def populate(self): counter = 0 for i in range(4): self.tree.tag_configure(i) tliid = f&quot;G{i}&quot; self.tree.insert(&quot;&quot;, &quot;end&quot;, iid=tliid, open=True, tags=[tliid], text=f&quot;Restaurant {i}&quot;) ciid = f&quot;F{counter}&quot; self.tree.insert(tliid, &quot;end&quot;, iid=ciid, text=f&quot;Cookie&quot;, tags=[ciid], values=(ciid)) self.top_iids.append(tliid) self.child_iids.append(ciid) counter+=1 def tagconfig(self): self.tree.tag_configure('F2', foreground='red') self.tree.tag_configure('F3', foreground='red') def reset(self): print(f&quot;def reset(self):&quot;) top_level_nodes = self.tree.get_children() self.tree.delete(*top_level_nodes) tln_exist = {i:self.tree.exists(i) for i in top_level_nodes} chn_exist = {i:self.tree.exists(i) for i in self.child_iids} print(f&quot;{tln_exist=}&quot;) print(f&quot;{chn_exist=}&quot;) # uncomment to make iids F2 &amp; F3 black at 2nd press of Populate button # self.tree.tag_configure('F2', foreground='') # 
self.tree.tag_configure('F3', foreground='') if __name__ == '__main__': root = tk.Tk() app = App(root) app.pack(fill=&quot;both&quot;, expand=True) root.mainloop() </code></pre> <p>When the buttons are pressed in the following sequence, <kbd>Populate</kbd>--&gt;<kbd>TagConfigure</kbd>--&gt;<kbd>Reset</kbd>--&gt;<kbd>Populate</kbd>, items <code>F2</code> and <code>F3</code> continue to be in red color after they have been destroyed.</p> <p><a href="https://i.sstatic.net/TsWah.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TsWah.gif" alt="Treeview_issue" /></a></p> <p>Also, see printed-out messages after Reset Button is pressed, which confirms all treeview items have been destroyed, i.e. become non-existent:</p> <pre><code>def reset(self): tln_exist={'G0': False, 'G1': False, 'G2': False, 'G3': False} chn_exist={'F0': False, 'F1': False, 'F2': False, 'F3': False} </code></pre> <p>To fix this issue, I have to issue the following commands in the <code>self.reset()</code> method:</p> <pre><code>self.tree.tag_configure('F2', foreground='') self.tree.tag_configure('F3', foreground='') </code></pre> <p>According to documentation, the 1st argument of <code>ttk.Treeview.tag_configure(tagName, option=None, **kw)</code> is a <code>tagName</code>. Consequently, it means that tagName <code>F2</code> &amp; <code>F3</code> still exist despite all items in the Treeview widget had been destroyed.</p> <p>Simply put, tagNames are undestroyed even though all treeview items have been destroyed via <code>self.tree.delete(*self.tree.get_children())</code>. Is this understanding correct? If so, is there an equivalent command to the <code>ttk.Treeview.delete(*items)</code> method where one can undo and/or destroy all old tagNames and their configurations?</p>
<python><tkinter><treeview><tcl>
2023-07-14 16:25:58
1
8,499
Sun Bear
76,689,522
18,002,749
Celery Error: Cannot connect to Redis result backend, despite valid credentials and successful broker connection
<p>I am encountering an issue with my Celery setup where I am unable to connect to the Redis result back-end. Despite providing valid credentials and successfully connecting to the broker, I receive the following error message:</p> <p><code>[2023-07-14 16:55:22,255: ERROR/MainProcess] consumer: Cannot connect to redis://:**@eu1-brave-turtle-39167.upstash.io:39167//: Connection closed by server.. Trying again in 8.00 seconds... (4/100)</code></p> <p>I have verified that all the credentials used for the Redis result backend are correct, and the broker connection is established without any issues. However, the connection to the result backend fails consistently.</p> <p>Here are the details of my setup:</p> <pre><code>Celery version: 5.2.3 Redis Cloud provider: Upstash Redis Horizontal Redis URL for the result backend: redis://:&lt;PASSWORD&gt;@&lt;HOST&gt;:&lt;PORT&gt;/0 Redis URL for the broker: redis://:&lt;PASSWORD&gt;@&lt;HOST&gt;:&lt;PORT&gt; Celery configuration: </code></pre> <p>Code:</p> <pre><code>app = Flask(__name__) CORS(app, resources={r&quot;*&quot;: {&quot;origins&quot;: &quot;*&quot;}}) # Enable debug mode app.debug = True # Set the Upload Folder app.config[&quot;UPLOAD_FOLDER&quot;] = os.path.join(os.path.dirname(__file__), &quot;Uploads&quot;) # Setup the Celery Config in Flask Application app.config[&quot;UPSTASH_REDIS_URL&quot;] = dotenv.get_key(&quot;.env&quot;, &quot;UPSTASH_REDIS_URL&quot;) BrokerURL = app.config[&quot;UPSTASH_REDIS_URL&quot;] ResultBackend = app.config[&quot;UPSTASH_REDIS_URL&quot;] print(&quot;Broker URL: &quot;, BrokerURL) print(&quot;Result Backend: &quot;, ResultBackend) # Initialize Celery celery = Celery( app.name, ) celery.conf.broker_url = BrokerURL celery.conf.result_backend = f&quot;{ResultBackend}/0&quot; </code></pre> <hr /> <p>I have already tried the following troubleshooting steps:</p> <pre><code>1. Double-checked the Redis URL and credentials for the result backend. 
2. Verified network connectivity and ensured that there are no firewall restrictions blocking the connection. 3. Confirmed the Redis Upstash server instance is running and accessible. 4. Tested the connection using a Redis client library, which was successful. 5. Enabled logging in Celery to check for any additional error messages. </code></pre> <p>Despite these efforts, I am still unable to establish a connection to the Redis result backend. What things can I try to resolve this issue?</p> <p>PS: I already tried to use Redis on my local machine with WSL, and all seems good, but now I have to use a production Redis database, so I am using Upstash since it exists in Heroku Add-ons.</p> <p>I have already tried the troubleshooting steps mentioned above, and tried a different Redis provider (&quot;Redis Cloud Enterprise&quot;), but still the worker starts with the command <code>celery -A app.celery worker -l INFO</code>.</p> <p>But a few seconds later I get the error above.</p>
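One common cause with Upstash specifically (an assumption here, since the error is just "Connection closed by server"): Upstash endpoints require TLS, so the URL scheme must be `rediss://` rather than `redis://`, with an `ssl_cert_reqs` query parameter supplied for both broker and result backend. A sketch of the URL construction only — the placeholders are not real credentials, and the exact accepted `ssl_cert_reqs` value should be checked against the Celery version in use:

```python
# hypothetical placeholders -- substitute the real Upstash credentials
UPSTASH_REDIS_URL = "rediss://:<PASSWORD>@<HOST>:<PORT>"   # note: rediss, not redis

broker_url = f"{UPSTASH_REDIS_URL}?ssl_cert_reqs=required"
result_backend = f"{UPSTASH_REDIS_URL}/0?ssl_cert_reqs=required"

print(broker_url)
print(result_backend)
```

These strings would then go into `celery.conf.broker_url` and `celery.conf.result_backend` in place of the plain `redis://` values.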
<python><flask><redis><celery>
2023-07-14 16:20:07
1
418
Yassine Chettouch
76,689,402
323,631
Use literal style for just multiline strings in ruamel.yaml
<p>I would like to have a custom <code>ruamel.yaml</code> dumper that uses Literal style for all multiline strings and the default style otherwise. For example:</p> <pre class="lang-py prettyprint-override"><code>import sys import ruamel.yaml data = {&quot;a&quot;: &quot;hello&quot;, &quot;b&quot;: &quot;hello\nthere\nworld&quot;} print(&quot;Default style&quot;) yaml = ruamel.yaml.YAML() yaml.dump(data, sys.stdout) print() print(&quot;style='|'&quot;) yaml = ruamel.yaml.YAML() yaml.default_style = &quot;|&quot; yaml.dump(data, sys.stdout) </code></pre> <p>This produces:</p> <pre><code>Default style a: hello b: &quot;hello\nthere\nworld&quot; style='|' &quot;a&quot;: |- hello &quot;b&quot;: |- hello there world </code></pre> <p>My desired output is:</p> <pre><code>a: hello b: |- hello there world </code></pre>
<python><ruamel.yaml>
2023-07-14 16:00:43
1
2,581
Tom Aldcroft
76,689,364
22,062,869
How to fix deprecation warning when setting on a slice
<p>I'm trying to add a year to each observation in a pandas dataframe until each observation is within a specified date range.</p> <pre><code> for i in range(0,3): df.loc[df['date'] &lt; &quot;2023-06-01&quot;, 'date'] = df['date'] + pd.DateOffset(years=1) </code></pre> <p>I'm getting this warning.</p> <pre><code>DeprecationWarning: In a future version, `df.iloc[:, i] = newvals` will attempt to set the values inplace instead of always setting a new array. To retain the old behavior, use either `df[df.columns[i]] = newvals` or, if columns are non-unique, `df.isetitem(i, newvals)` </code></pre> <p>How can I fix this? I've tried many things, but I can't seem to get around setting on a slice, and every method I try throws either the <code>DeprecationWarning</code> or <code>SettingWithCopyWarning</code>.</p>
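One way to avoid both warnings (a sketch with invented sample dates, not the asker's real frame): recompute the mask each pass and assign the whole column via `Series.mask`, so pandas never writes into a partial slice.

```python
import pandas as pd

# Hypothetical sample data standing in for the asker's dataframe.
df = pd.DataFrame({"date": pd.to_datetime(["2021-05-01", "2023-07-01", "2020-01-15"])})
cutoff = pd.Timestamp("2023-06-01")

while True:
    mask = df["date"] < cutoff
    if not mask.any():
        break
    # Assign the whole column rather than a partial slice; this avoids the
    # deprecated "set values inplace on a slice" code path entirely.
    df["date"] = df["date"].mask(mask, df["date"] + pd.DateOffset(years=1))

print(df)
```

The `while` loop also replaces the fixed `range(0, 3)`, so it works no matter how far back a date starts.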
<python><python-3.x><pandas><dataframe><pandas-settingwithcopy-warning>
2023-07-14 15:56:30
2
395
rasputin
76,689,352
2,506,034
mypy: class with custom __add__ method gives [operator] error 'Unsupported operand types for + ("MyCustomClass" and "Callable[..., Any]")'
<p>I have a custom class similar to:</p> <pre><code>class DailyAmounts: def __init__(self) -&gt; None: self.daily_map: Dict[date, int] = dict() self.total: int = 0 def __daily_math(self: 'DailyAmounts', other: 'DailyAmounts', function: Callable[[int, int], int]) -&gt; 'DailyAmounts': if not isinstance(other, DailyAmounts): raise ValueError(&quot;Only {} can be added&quot;.format(DailyAmounts.__name__)) res = DailyAmounts() days = list(set(self.daily_map.keys()) | set(other.daily_map.keys())) for day in days: amount = function(self.daily_map.get(day, 0), other.daily_map.get(day, 0)) res[day] = amount return res def __add__(self: 'DailyAmounts', other: 'DailyAmounts') -&gt; 'DailyAmounts': def function(a: int, b: int) -&gt; int: return a + b addition: DailyAmounts = self.__daily_math(other, function) return addition def __sub__(self: 'DailyAmounts', other: 'DailyAmounts') -&gt; 'DailyAmounts': def function(a: int, b: int) -&gt; int: return a - b subtraction: DailyAmounts = self.__daily_math(other, function) return subtraction def __iter__(self: 'DailyAmounts') -&gt; Iterator[date]: return iter(self.daily_map) </code></pre> <p>Another class has a <code>DailyAmounts</code> as an attribute:</p> <pre><code>class Item: def __init__(self) -&gt; None: self.daily_amounts: DailyAmounts = DailyAmounts() # values calculated later elsewhere </code></pre> <p>This error and similar errors happen in multiple areas in the code base, but a simple example of the <code>[operator]</code> error is:</p> <pre><code>def some_function(old_item: Item): new_item = Item() # line below: Unsupported operand types for + (&quot;DailyAmounts&quot; and &quot;Callable[..., Any]&quot;) [operator] new_item.daily_amounts = old_item.daily_amounts + DailyAmounts() # basically make a copy </code></pre> <p>similar issue with mypy error <code>[index]</code></p> <pre><code>def another_function(): ... 
# line below: Unsupported target for indexed assignment (&quot;Callable[..., Any]&quot;) [index] for entry in my_item.daily_amounts.items(): my_other_item.daily_amounts[entry[0]] = entry[1] </code></pre>
<python><mypy>
2023-07-14 15:54:27
0
364
mochatiger
76,689,243
7,492,736
Export PyTorch model to ONNX with fixed batch size
<p>I am trying to export a PyTorch model to ONNX as follows:</p> <pre class="lang-py prettyprint-override"><code>import torch from transformers import BertModel from tvm import relay import sys sys.setrecursionlimit(1000000) bert = BertModel.from_pretrained('bert-base-uncased') embedding_dim = bert.config.to_dict()['max_position_embeddings'] device = &quot;cuda&quot; HIDDEN_DIM = 256 OUTPUT_DIM = 1 N_LAYERS = 2 DROPOUT = 0 bert = model.BERTGRUSentiment(bert, HIDDEN_DIM, OUTPUT_DIM, N_LAYERS, DROPOUT) bert.eval() bert.to(device) print(f&quot;cuda: {next(bert.parameters()).is_cuda}&quot;) print(f&quot;training: {bert.training}&quot;) input_name = &quot;text&quot; input_shape = [embedding_dim] example = model.preprocess(model.tokenizer, &quot;it was ok&quot;).unsqueeze(0) print(example) torch.onnx.export(bert, example, &quot;model.onnx&quot;, export_params=True, opset_version=10, do_constant_folding=True, input_names = ['text'], output_names = ['output']) </code></pre> <p>However, when I try this, I get the following warning:</p> <p><code>UserWarning: Exporting a model to ONNX with a batch_size other than 1, with a variable length with GRU can cause an error when running the ONNX model with a different batch size. Make sure to save the model with a batch size of 1, or define the initial states (h0/c0) as inputs of the model.</code></p> <p>Is there a way to fix the batch size of the model?</p>
<python><pytorch><onnx>
2023-07-14 15:36:25
0
882
Someone
76,689,242
4,865,723
pyflakes say "'_' may be undefined" when using gettext class-based API
<p>I use the <a href="https://docs.python.org/3/library/gettext.html#localizing-your-application" rel="nofollow noreferrer">GNU gettext class-based API</a>. In practice this means that the function <code>_()</code> is not explicitly declared by myself in each module like this:</p> <pre><code>import gettext t = gettext.translation('spam', '/usr/share/locale') _ = t.gettext </code></pre> <p>Instead it is done like this:</p> <pre><code>import gettext gettext.install('myapplication') </code></pre> <p>Behind the scenes, this <code>install()</code> declares <code>_()</code> and places it into the builtins namespace so I can use it in every module.</p> <p>But this is a problem from the point of view of linters like pyflakes:</p> <pre><code>pyflakes: '_' may be undefined, or defined from star imports </code></pre> <p>Of course I use <code>_()</code> everywhere in my code, but it is not declared anywhere explicitly, only implicitly via <code>gettext.install()</code>.</p> <p>Any idea how to solve this? There are a lot of locations where I use <code>_()</code>, so I don't want to put a <code># noqa</code> behind each one.</p>
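pyflakes on its own has no configuration mechanism for extra builtins, but if it runs via flake8 (an assumption about the setup), `_` can be declared a known builtin once instead of sprinkling `# noqa` comments:

```ini
# setup.cfg or .flake8 (hypothetical file name; adjust to your project)
[flake8]
builtins = _
```

With this in place, flake8's pyflakes checks treat `_` as always defined, which matches what `gettext.install()` actually does at runtime.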
<python><gettext><pyflakes>
2023-07-14 15:36:25
0
12,450
buhtz
76,689,146
2,043,980
Pandas read_csv, ignore first cell
<p>I have a .csv file like this:</p> <pre><code>Str; Int; Flt A; 123; 0.1 B; 456; 0.2 C; 789; 0.3 </code></pre> <p>I want to get a DataFrame like this:</p> <pre><code> Int; Flt A; 123; 0.1 B; 456; 0.2 C; 789; 0.3 </code></pre> <p>I read the csv like this:</p> <pre><code>df = pd.read_csv('data.csv', index_col=0, sep=&quot;;&quot;) </code></pre> <p>The problem is that I can't use <code>df.loc[&quot;A&quot;, &quot;Int&quot;]</code> to get a cell value. If I drop <code>Str;</code> from the csv everything works fine.</p> <p>So the idea is to use the first row as column names and the first column as row names. I understand that the first element can't be used both as a column name and a row name; is there any way to drop such an ambiguous value?</p>
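The symptom described here (`df.loc["A", "Int"]` failing only when `Str;` is present) is usually caused by the space after each semicolon: with `sep=";"` the columns come back as `" Int"` and `" Flt"`. A sketch of reading it cleanly, using an inline copy of the sample file:

```python
import io
import pandas as pd

csv_text = "Str; Int; Flt\nA; 123; 0.1\nB; 456; 0.2\nC; 789; 0.3\n"

# sep=";" alone leaves a leading space in " Int" / " Flt";
# skipinitialspace=True strips the space after every delimiter, header included.
df = pd.read_csv(io.StringIO(csv_text), sep=";", skipinitialspace=True, index_col=0)
print(df.loc["A", "Int"])
```

Note the first cell is not ambiguous to pandas: with `index_col=0` it becomes the index's *name* (`df.index.name == "Str"`), not a label, and `df.index.name = None` removes it if unwanted.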
<python><pandas>
2023-07-14 15:26:25
2
1,253
Demaunt
76,688,889
6,496,267
is there any automated way to extract the "pip install"ed dependencies from a PyCharm project?
<p>I installed a bunch of dependencies in the PyCharm terminal using pip install.</p> <p>The project works.</p> <p>Now I want to take all those dependencies and put them in a requirements.txt file. However, I do not remember what I installed; it's been a while. I could manually recreate the list of installed packages by trial and error, but is there any automated way to extract the &quot;<code>pip install</code>&quot;ed dependencies from a PyCharm project?</p>
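The standard way to capture this (run it in the same PyCharm terminal, so the project's interpreter or virtualenv is the one on `PATH`) is `pip freeze`:

```shell
# Writes every installed package with its pinned version to requirements.txt
pip freeze > requirements.txt
```

One caveat: this lists everything installed in the environment, not only what the project actually imports; for an import-based subset, third-party tools such as pipreqs exist.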
<python><pycharm>
2023-07-14 14:58:11
2
717
john
76,688,874
217,844
Python: How to make a pathlib Path function argument optional?
<p>I pass a <code>Path</code> argument to a function:</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path def my_function(my_path: Path): pass </code></pre> <p>and I would like to make the argument optional.</p> <p>My first naive attempt doesn't work because <code>Path()</code> (somewhat surprisingly to me) creates a <code>PosixPath</code> object for the current directory (on macOS here; <code>WindowsPath</code> on Windows):</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path def my_function(my_path: Path = Path()): if my_path: print(f'my_path arg: {my_path}') print(f'my_path type: {type(my_path)}') else: print(f'no my_path argument; falling back to some default (or so)') my_function() </code></pre> <p>output:</p> <pre><code>my_path arg: . my_path type: &lt;class 'pathlib.PosixPath'&gt; </code></pre> <p>The probably obvious approach to use <code>None</code> does work:</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path def my_function(my_path: Path = None): if my_path: print(f'my_path arg: {my_path}') print(f'my_path type: {type(my_path)}') else: print(f'no my_path argument; falling back to some default (or so)') my_function() </code></pre> <p>output:</p> <pre><code>no my_path argument; falling back to some default (or so) </code></pre> <p>but that causes a <code>pyright</code> issue</p> <pre><code>Expression of type &quot;None&quot; cannot be assigned to parameter of type &quot;Path&quot; Type &quot;None&quot; cannot be assigned to type &quot;Path&quot; </code></pre> <p>which forces me to consider <code>Path</code> and <code>None</code> types; I think that's what <a href="https://docs.python.org/3/library/typing.html#typing.Union" rel="nofollow noreferrer"><code>typing.Union</code></a> is for:</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path from typing import Union def my_function(my_path: Union[None, Path] = None): if my_path: print(f'my_path arg: {my_path}') 
print(f'my_path type: {type(my_path)}') else: print(f'no my_path argument; falling back to some default (or so)') my_function() </code></pre> <p>output:</p> <pre><code>no my_path argument; falling back to some default (or so) </code></pre> <p><strong>THE PROBLEM:</strong> I have <em>a lot of</em> functions I pass <code>Path</code> arguments to - and having to keep <code>None</code> <em>and</em> <code>Path</code> types in mind seems rather inelegant to me - and also causes all sorts of complications down the line. Therefore, I would really like something like a <code>NonePath</code> that is of type <code>Path</code> and of which I can create objects that evaluate to <code>False</code>.</p> <p>I found <a href="https://stackoverflow.com/a/63194915">this</a> SO answer (and learned about <a href="https://docs.python.org/3/library/functions.html#type" rel="nofollow noreferrer">creating new types</a> in python on the way 🙂) and figured from <a href="https://docs.python.org/3/library/stdtypes.html#truth-value-testing" rel="nofollow noreferrer">Truth Value Testing</a> that in order to make an object testable for truthiness, I need to give it a <code>__bool__()</code> method.</p> <p>My next (still naive) attempt:</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path NonePath = type('NonePath', (), {'__bool__': lambda: False}) def my_function(my_path: Path = NonePath): if my_path: print(f'my_path arg: {my_path}') print(f'my_path type: {type(my_path)}') else: print(f'no my_path argument; falling back to some default (or so)') my_function() </code></pre> <p>output:</p> <pre><code>my_path arg: &lt;class '__main__.NonePath'&gt; my_path type: &lt;class 'type'&gt; </code></pre> <p>is not correct of course - I need to use a <code>NonePath()</code> object as default, not a <code>NonePath</code> type; next attempt:</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path NonePath = type('NonePath', (), {'__bool__': lambda: False}) def 
my_function(my_path: Path = NonePath()): # &lt;-- adding () if my_path: print(f'my_path arg: {my_path}') print(f'my_path type: {type(my_path)}') else: print(f'no my_path argument; falling back to some default (or so)') my_function() </code></pre> <p>output:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;&lt;stdin&gt;&quot;, line 2, in my_function TypeError: &lt;lambda&gt;() takes 0 positional arguments but 1 was given </code></pre> <p>I don't really understand the error and continuing to screw around cluelessly with this hasn't gotten me anywhere - but simply returning <code>False</code> (or should it be <code>None</code> ?) anyway seems not the right approach: Shouldn't I return <code>True</code> / <code>False</code> based on some test on one of the object's attributes ? Which ones ?</p> <p>Also, shouldn't I use <code>Path</code> as the base for <code>NonePath</code> ? Didn't really get anywhere with that, either:</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path # NOTE: trailing comma required to make second arg a tuple NonePath = type('NonePath', (Path,), {'__bool__': lambda: False}) def my_function(my_path: Path = NonePath()): if my_path: print(f'my_path arg: {my_path}') print(f'my_path type: {type(my_path)}') else: print(f'no my_path argument; falling back to some default (or so)') </code></pre> <p>output:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/usr/local/Cellar/python@3.10/3.10.12_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/pathlib.py&quot;, line 960, in __new__ self = cls._from_parts(args) File &quot;/usr/local/Cellar/python@3.10/3.10.12_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/pathlib.py&quot;, line 594, in _from_parts drv, root, parts = self._parse_args(args) File 
&quot;/usr/local/Cellar/python@3.10/3.10.12_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/pathlib.py&quot;, line 587, in _parse_args return cls._flavour.parse_parts(parts) AttributeError: type object 'NonePath' has no attribute '_flavour' </code></pre> <p>Sounds to me like I need to override more <code>Path</code> methods for <code>NonePath</code>.</p> <p>Well, I clearly seem to be out of my depth with Python internals here. For the moment, I'll have to use what works and live with the inelegant <code>None</code> / <code>Path</code> approach.</p> <p><strong>MY QUESTION:</strong> Is it possible to implement a <code>NonePath</code> as explained? How would I go about it? Am I at least somewhat on the right track?</p>
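For completeness, a sketch of the conventional pattern the question is working around: default of `None` plus a single narrowing check at the top of the function, which satisfies pyright without any custom `Path` subclass (on Python 3.10+ the annotation can be written `Path | None`). Incidentally, the `TypeError` above comes from `lambda: False` taking no arguments: dunder methods receive `self`, so it would need `lambda self: False`.

```python
from pathlib import Path
from typing import Optional

DEFAULT_PATH = Path("some/default")  # hypothetical fallback, not from the question

def my_function(my_path: Optional[Path] = None) -> Path:
    # Narrow once; after this check, a type checker knows my_path is a Path.
    if my_path is None:
        my_path = DEFAULT_PATH
    return my_path

print(my_function())              # falls back to the default
print(my_function(Path("/tmp")))  # uses the given path
```

The narrowing happens in one place per function, so the `None`/`Path` pair never leaks further into the body.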
<python><python-3.x><types><pathlib>
2023-07-14 14:56:15
1
9,959
ssc
76,688,664
18,904,265
How to elegantly store f-strings as templates with default values
<p>I have a bunch of f-strings as templates for printing labels. Placeholders in those f-strings would be filled with entries for the label. A part of such an f-string could look like this:</p> <pre class="lang-py prettyprint-override"><code>template_1 = f&quot;&quot;&quot; ^FH\^FDLA,{identifier}^FS ^FT145,51FD{name}^FS^CI27 &quot;&quot;&quot; </code></pre> <p>For each of those fields (in this case identifier and name) I need defaults if no value is provided. Those defaults are not the same for each label, even if they have the same variable name.</p> <p>At the moment the only (imo ugly) solution I can come up with is to use if/else statements to set the defaults and then use a giant dict for the f-strings. This might look something like this (in some function):</p> <pre class="lang-py prettyprint-override"><code>if template == &quot;template_1&quot;: if name is None: name = &quot;name&quot; if project is None: project = &quot;project&quot; elif template == &quot;template_2&quot;: if name is None: name = &quot;different name&quot; if person is None: person = &quot;some person&quot; templates = { &quot;template_1&quot; : f&quot;&quot;&quot; as{name}{project} &quot;&quot;&quot;, &quot;template_2&quot; : f&quot;&quot;&quot; asflkjasdf{person}{name} &quot;&quot;&quot;, } </code></pre> <p>There must be a better way to dynamically fill some f-string templates with values and provide default values for them. Maybe using a class or something? But I just can't get it figured out. Bonus on top would be if I could store those f-strings and defaults in a TOML file, but one problem at a time :)</p> <p>Thanks a lot in advance!</p>
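One sketch that drops the if/else ladder (template texts and default values below are invented): store each template as a plain `{}`-style string with its own defaults dict, and merge caller-supplied values over the defaults at render time.

```python
# Template texts and default values here are illustrative placeholders.
templates = {
    "template_1": {
        "text": "^FH\\^FDLA,{identifier}^FS\n^FT145,51FD{name}^FS^CI27",
        "defaults": {"identifier": "0000", "name": "name"},
    },
    "template_2": {
        "text": "asflkjasdf{person}{name}",
        "defaults": {"person": "some person", "name": "different name"},
    },
}

def render(template_key, **values):
    entry = templates[template_key]
    merged = {**entry["defaults"], **values}  # caller values win over defaults
    return entry["text"].format(**merged)

print(render("template_1", identifier="ABC123"))
```

Because the templates are plain strings rather than f-strings, both the text and the defaults can move into a TOML file later without any changes to `render`.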
<python>
2023-07-14 14:29:47
1
465
Jan
76,688,653
4,814,873
Deduplicate pandas dataset by index value without using `networkx`
<p><strong>Please note I have already reviewed this link</strong></p> <p><a href="https://stackoverflow.com/questions/67577054/pandas-and-python-deduplication-of-dataset-by-several-fields">Pandas and python: deduplication of dataset by several fields</a></p> <p><em>Update 18-Jul: My perspective is that all of these solutions point to just avoiding indices until after all the de-duplication has been performed. Thank you to all who have replied so far.</em></p> <p>I would like to have only one unique value of field <code>code</code> per value of <code>id</code>.</p> <pre><code>df = pd.DataFrame({'code':['A','A','B','C','D','A']},index=[1,1,1,2,3,3]) df.index.name='id' </code></pre> <p>df:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>id</th> <th>code</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>A</td> </tr> <tr> <td>1</td> <td>A</td> </tr> <tr> <td>1</td> <td>B</td> </tr> <tr> <td>2</td> <td>C</td> </tr> <tr> <td>3</td> <td>D</td> </tr> <tr> <td>3</td> <td>A</td> </tr> </tbody> </table> </div> <p>My desired output is:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>id</th> <th>code</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>A</td> </tr> <tr> <td>1</td> <td>B</td> </tr> <tr> <td>2</td> <td>C</td> </tr> <tr> <td>3</td> <td>D</td> </tr> <tr> <td>3</td> <td>A</td> </tr> </tbody> </table> </div> <p>I managed to accomplish this as follows, <em>but I don't love it</em>.</p> <pre><code>i=df.index.name df.reset_index().drop_duplicates().set_index(i) </code></pre> <p>Here's why:</p> <ul> <li>This will fail if the index has no name</li> <li>I shouldn't need to reset and then set an index</li> <li>This is a fairly common operation, and there is way too much ink here.</li> </ul> <p>What I want to say is:</p> <p><code>df.groupby('id').drop_duplicates()</code></p> <p>Which is, currently, not supported.</p>
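A sketch that avoids touching the index at all and works even when the index has no name: temporarily add the index as a column for the `duplicated()` test only (the `_id` column name is arbitrary), then mask positionally.

```python
import pandas as pd

df = pd.DataFrame({"code": ["A", "A", "B", "C", "D", "A"]}, index=[1, 1, 1, 2, 3, 3])
df.index.name = "id"

# assign() returns a copy, so df itself never gains the helper column;
# .to_numpy() makes the mask positional, which is safe with duplicate labels.
dup = df.assign(_id=df.index).duplicated().to_numpy()
result = df[~dup]
print(result)
```

`duplicated()` here considers all columns of the temporary frame, i.e. the `(id, code)` pair, so only the second `(1, A)` row is dropped and the original index (name included) survives untouched.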
<python><python-3.x><pandas><dataframe>
2023-07-14 14:28:53
6
372
Michael Tuchman
76,688,388
6,392,523
Pytorch - adding output of hooks to loss function
<p>I'm loading a model (not written/trained by me) and I've added hooks to some layers of the model using <code>register_forward_hook</code>.</p> <p>My hooks calculate some transformations of the input of layer (which is the output of the previous layer).</p> <p>The goal is to add the transformations calculated by the hooks to the loss function, so that during fine tune the model will attempt to learn that the output of the transformations should be minimized.</p> <p>For example:</p> <pre class="lang-py prettyprint-override"><code>y1 = None def hook(module, input): y1 = foo(input) model.some_layer.register_forward_hook(hook) loss = MSE(...) + L1(y1.detach()) </code></pre> <p>Does it make sense to implement it that way? Would it work backpropagation-wise?</p>
<python><deep-learning><pytorch><neural-network>
2023-07-14 13:59:02
3
1,054
ChikChak
76,688,336
2,739,700
Parse a malformed JSON response in Python
<p>I am using OpenAI and requesting a JSON response. Sometimes it works fine and sometimes it fails to load.</p> <p>Is there any way I can convert this malformed data to a dict? Below is an example string:</p> <pre><code>{ 'finishReason': 'STOP', 'response': ' { &quot;short_description&quot;: &quot;some issue&quot;, &quot;detail_description&quot;: &quot;&quot;, &quot;action_items&quot;: [ { &quot;action_name&quot;: &quot;Raise ticket&quot;, &quot;assignee&quot;: &quot;NOC&quot; }, { &quot;action_name&quot;: &quot;Involve Java team oncall&quot;, &quot;assignee&quot;: &quot;NOC&quot; }, { &quot;action_name&quot;: &quot;Involve PKI team&quot;, &quot;assignee&quot;: &quot;NOC&quot; } ], &quot;followups&quot;: [ &quot;Check if issue is limited to xyz&quot;, &quot;Check if issue M&quot;, &quot;Check if issue is related to any latest changes in the configs&quot; ], &quot;issue_start_time&quot;: ISO 8601 string, &quot;urgency&quot;: 1, &quot;impact&quot;: 1, &quot;priority&quot;: 1 }' } </code></pre> <p>I am trying the code below:</p> <pre><code> json.loads(data[&quot;response&quot;]) </code></pre> <p>Error:</p> <pre><code> Expecting value: line 1 column 1598 (char 1597) </code></pre>
<python><json>
2023-07-14 13:52:27
2
404
GoneCase123
76,688,261
890,610
Create new variable inside of pylintrc file
<p>I'm coming from a world where we program mostly in C++ and our minimal Python code often uses a mix of Pythonic methods and C++ best practices.</p> <p>As such I'd like to create some custom regexes for variable naming. Example:</p> <pre><code>^([a-z]+)(?:([A-Z]{1})([a-z]+))+$|^[A-Z][a-z]+(?:[A-Z][a-z]+)*$ </code></pre> <p>matches both <code>camelCase</code> and <code>PascalCase</code>. Sometimes our methods, classes, filenames are <code>PascalCase</code> and sometimes they are <code>camelCase</code>. The above regex matches both.</p> <p>Is it possible to create a <code>pascalCase_or_CamelCase</code> variable inside the pylintrc file instead of always repeating the regex above? There are more examples like this, but if one variable can be created based on the above I can follow suit for all the other overrides we need.</p>
<python><pylint><pylintrc>
2023-07-14 13:43:31
1
4,778
drjrm3
76,688,120
1,291,544
List comprehension with conditional expression omitting some cases
<p>I have a list of bonds between points (as pairs of indexes) and the index of a pivot point. I want a list of the points bonded to that pivot point, irrespective of whether it is in the first or the second position (I always want the index of the other point to which the pivot is bonded in each pair).</p> <pre><code>bonds = [(1,2),(3,4),(5,6),(3,1)] ipiv = 1 bonded_to_pivot = [ b[1] for b in bonds if(b[0]==ipiv) ] + [ b[0] for b in bonds if(b[1]==ipiv) ] </code></pre> <p><strong>Can this be done with just one list comprehension in an elegant way?</strong></p> <p>I was looking into this other question about <a href="https://stackoverflow.com/questions/9987483/elif-in-list-comprehension-conditionals">comprehension with conditional expression</a>, but I am missing something (e.g. <code>else pass</code>) to make it work.</p>
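Yes: filter on membership with the `if` clause, and use the conditional expression only to pick the endpoint, so no `else pass` is ever needed.

```python
bonds = [(1, 2), (3, 4), (5, 6), (3, 1)]
ipiv = 1

# For each bond containing the pivot, pick the *other* endpoint.
bonded_to_pivot = [b[1] if b[0] == ipiv else b[0] for b in bonds if ipiv in b]
print(bonded_to_pivot)  # [2, 3]
```

Two small differences from the two-comprehension version: results come back in bond order rather than grouped by which position matched, and a hypothetical self-bond like `(1, 1)` would be reported once instead of twice.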
<python><conditional-statements><list-comprehension>
2023-07-14 13:25:38
1
2,474
Prokop Hapala
76,688,099
12,766,031
Can't find module using fastAPI - Uvicorn
<p>I have a Python file named <code>twitter_spider.py</code> in which other files are <strong>imported</strong> from the <code>core</code> <em>directory</em>, and it works well.</p> <p><a href="https://i.sstatic.net/I1pev.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I1pev.png" alt="files Structure" /></a></p> <p>I am going to read that file in the <em><strong>router file</strong></em> <code>route.py</code>, which has the task of connecting functions to routes in FastAPI, but when I run the project using <em><strong>Uvicorn</strong></em>, it says that the <code>core</code> module was not recognized, and the error originates from <code>twitter_spider.py</code>.</p> <p>I'm really confused: there is no problem when running the file on its own, but once <code>route.py</code> imports the <code>twitter_spider</code> file, the <code>twitter_spider</code> file becomes a problem. I use this command to run the server:</p> <pre><code>uvicorn main:app --reload </code></pre> <p><strong>Error</strong>:</p> <pre><code>File &quot;/--/services/twitter_spider.py&quot;, line 1, in &lt;module&gt; from core import Twitter_Conecction ModuleNotFoundError: No module named 'core' </code></pre> <p>My <code>route.py</code> code is:</p> <pre><code>from fastapi import APIRouter from models.twitter import Twitter from config.database import collection from schema.schemas import list_serialize from services.twitter_spider import ACCOUNT, USERNAME, PASSWORD from bson import ObjectId router = APIRouter() # GET Request Methods @router.get('/') async def get_account_information(): # info = list_serialize(collection.find()) ACCOUNT.login(USERNAME, PASSWORD) info = ACCOUNT.me return info </code></pre> <p>And this is my <code>twitter_spider.py</code> code: it logs in to <em>Twitter</em> and returns user information.</p> <pre><code>from core import Twitter_Conecction from core import config USERNAME = 'jj' EMAIL = 'jj@jj.com' PASSWORD = 'jj' config.PROXY = 
{&quot;http&quot;: &quot;127.0.0.1:2080&quot;, &quot;https&quot;: &quot;127.0.0.1:2080&quot;} ACCOUNT = Twitter_Conecction() def main(): ACCOUNT.login(USERNAME, PASSWORD) print(ACCOUNT.me) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>This works well, but when it is combined with <em><strong>fastapi</strong></em>, it has problems.</p>
<python><import><fastapi>
2023-07-14 13:23:24
1
315
Sir-Sorg
76,687,847
5,586,359
How do I access outer class instance attributes from an inner class?
<p>How do I make a class like this?</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; model = Model(x=3, y=5) &gt;&gt;&gt; data = model.Data(y=7) &gt;&gt;&gt; data.x 3 &gt;&gt;&gt; data.y 7 </code></pre> <p>Essentially, the Data class should be able to access the model's instance attributes while allowing overrides. I assume it's a special kind of inner class?</p> <p>My Attempt:</p> <pre class="lang-py prettyprint-override"><code>class Model: def __init__(self, x, y): self.x = x self.y = y class Data: def __init__(self, x=None, y=None): self.x = x self.y = y </code></pre>
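A sketch of one way to get exactly the shown behavior: expose `Data` through a property so each `Model` instance hands the inner class a reference to itself, and fall back to the model in `__getattr__`, which Python only calls when normal attribute lookup fails. The names mirror the question; the mechanism is the assumption.

```python
class Model:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    @property
    def Data(self):
        model = self  # bind this instance so model.Data(...) sees it

        class Data:
            def __init__(self, **overrides):
                self._model = model
                self._overrides = overrides

            def __getattr__(self, name):
                # Only reached when the attribute isn't on the Data instance.
                if name in self._overrides:
                    return self._overrides[name]
                return getattr(self._model, name)

        return Data

model = Model(x=3, y=5)
data = model.Data(y=7)
print(data.x, data.y)  # 3 7
```

A simpler alternative, if the nested-class syntax isn't essential, is to pass the model explicitly (`Data(model, y=7)`) and keep the same `__getattr__` fallback.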
<python><python-3.x><class>
2023-07-14 12:49:21
1
954
Vivek Joshy
76,687,814
6,943,622
Dynamically provision IoT device with Azure DPS - Unexpected Failure (Python SDK)
<p>I am dynamically provisioning an IoT device using the <code>azure-iot-device</code> Python package. I am using v2 and not 3.0.0b2; I can't even get the latter to compile.</p> <p>Here's my Python code trying to provision a device:</p> <pre><code>import asyncio import os from azure.iot.device.aio import ( ProvisioningDeviceClient, ) from dotenv import load_dotenv load_dotenv(dotenv_path=&quot;.env&quot;) CONNECTION_STRING = os.getenv(&quot;IOTHUB_DEVICE_CONNECTION_STRING&quot;) ID_SCOPE = os.getenv(&quot;PROVISIONING_IDSCOPE&quot;) REGISTRATION_ID = os.getenv(&quot;PROVISIONING_REGISTRATION_ID&quot;) SYMMETRIC_KEY = os.getenv(&quot;PROVISIONING_SYMMETRIC_KEY&quot;) PROVISIONING_HOST = os.getenv(&quot;PROVISIONING_HOST&quot;) # PROVISIONING_SHARED_ACCESS_KEY = os.getenv(&quot;PROVISIONING_SHARED_ACCESS_KEY&quot;) async def main(): print(&quot;Starting multi-feature sample&quot;) provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key( provisioning_host=PROVISIONING_HOST, registration_id=REGISTRATION_ID, id_scope=ID_SCOPE, symmetric_key=SYMMETRIC_KEY, ) provisioning_device_client.provisioning_payload = &quot;&lt;Your Payload&gt;&quot; provisioning_result = None try: provisioning_result = await provisioning_device_client.register() except Exception as e: print(f&quot;an error occurred provisioning the device -- {e}&quot;) finally: print(f&quot;result -- {provisioning_result}&quot;) if __name__ == &quot;__main__&quot;: try: asyncio.run(main()) except KeyboardInterrupt: # Exit application because user indicated they wish to exit. # This will have cancelled `main()` implicitly. print(&quot;User initiated exit. Exiting.&quot;) </code></pre> <p>The symmetric key is derived by using the enrollment group master key to compute an HMAC-SHA256 of the registration ID for the device. 
I simply followed the &quot;Derive a Device Key&quot; section in this guide -- <a href="https://learn.microsoft.com/en-us/azure/iot-dps/how-to-legacy-device-symm-key?tabs=linux&amp;pivots=programming-language-python#derive-a-device-key" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/iot-dps/how-to-legacy-device-symm-key?tabs=linux&amp;pivots=programming-language-python#derive-a-device-key</a></p> <p>I keep getting an 'Unexpected Failure' error. The code is so short that there's almost nothing to debug. I believe I followed the steps closely when setting up my IoT hub and DPS. Please let me know if you have any suggestions.</p>
<python><azure><azure-iot-hub><azure-iot-sdk>
2023-07-14 12:44:51
1
339
Duck Dodgers
76,687,792
4,451,315
Lookup column name in column
<p>Say I have</p> <pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({ 'a': [1, 2, 1], 'b': [2, 1, 2], 'c': [3, 3, 2], 'column': ['a', 'c', 'b'], }) </code></pre> <pre><code>shape: (3, 4) ┌─────┬─────┬─────┬────────┐ │ a ┆ b ┆ c ┆ column │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 ┆ str │ ╞═════╪═════╪═════╪════════╡ │ 1 ┆ 2 ┆ 3 ┆ a │ │ 2 ┆ 1 ┆ 3 ┆ c │ │ 1 ┆ 2 ┆ 2 ┆ b │ └─────┴─────┴─────┴────────┘ </code></pre> <p>I want add a column which, for each row, take the value in the row corresponding to the column in <code>column</code>.</p> <p>Expected output:</p> <pre><code>shape: (3, 5) ┌─────┬─────┬─────┬────────┬────────┐ │ a ┆ b ┆ c ┆ column ┆ lookup │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 ┆ str ┆ i64 │ ╞═════╪═════╪═════╪════════╪════════╡ │ 1 ┆ 2 ┆ 3 ┆ a ┆ 1 │ │ 2 ┆ 1 ┆ 3 ┆ c ┆ 3 │ │ 1 ┆ 2 ┆ 2 ┆ b ┆ 2 │ └─────┴─────┴─────┴────────┴────────┘ </code></pre>
<python><python-polars>
2023-07-14 12:42:10
4
11,062
ignoring_gravity
76,687,610
7,870,777
How do snakemake threads work, and what is the expected behavior when the thread definition is inconsistent?
<p>Can you help me understand how snakemake handles the threading?</p> <p>Imagine a case: You are running snakemake on a computer with <strong>10 cores</strong>.</p> <p>Case 0 (proper use case)</p> <pre class="lang-py prettyprint-override"><code>rule someRule0: benchmark: &quot;benchmark_result0.tsv&quot; threads: 2 shell: &quot;&quot;&quot; ./sometool --threads {threads} &quot;&quot;&quot; </code></pre> <p>Case 1 (<code>threads</code> missing; thus defaults to 1?)</p> <pre class="lang-py prettyprint-override"><code>rule someRule1: benchmark: &quot;benchmark_result1.tsv&quot; shell: &quot;&quot;&quot; ./sometool --threads 2 &quot;&quot;&quot; </code></pre> <p>Case 2 (<code>threads</code> &lt; actual nthreads we give program as argument)</p> <pre class="lang-py prettyprint-override"><code>rule someRule2: benchmark: &quot;benchmark_result2.tsv&quot; threads: 1 shell: &quot;&quot;&quot; ./sometool --threads 2 &quot;&quot;&quot; </code></pre> <p>Case 3 (<code>threads</code> &gt; actual nthreads we give program as argument)</p> <pre class="lang-py prettyprint-override"><code>rule someRule3: benchmark: &quot;benchmark_result3.tsv&quot; threads: 5 shell: &quot;&quot;&quot; ./sometool --threads 2 &quot;&quot;&quot; </code></pre> <p>Case 4 (<code>threads</code> asked in total &gt; ncores available in the system)</p> <pre class="lang-py prettyprint-override"><code>rule someRule3: benchmark: &quot;benchmark_result4.tsv&quot; threads: 10 shell: &quot;&quot;&quot; ./sometool --threads {threads} &quot;&quot;&quot; rule someOtherRule: benchmark: &quot;benchmark_result_other.tsv&quot; threads: 5 shell: &quot;&quot;&quot; ./sometoolv2 --threads {threads} &quot;&quot;&quot; </code></pre> <p>Questions:</p> <ul> <li><p>In case 1 and 2, snakemake cannot change what is defined in the shell directive (<code>--threads 2</code>), but I think it will reserve 1 thread then run the command. Will it use 1 threads or 2? 
How does snakemake actually send the job?</p> </li> <li><p>As snakemake says &quot;Rules claiming more threads will be scaled down&quot;, how can we make sure that this is not affecting the benchmarking results by scaling down under the hood while we assume results are for the number of threads we specified?</p> </li> </ul> <p>Thanks!</p>
<python><multithreading><snakemake>
2023-07-14 12:18:36
1
459
Isin Altinkaya
76,687,509
4,913,254
Split pandas column and create new columns that count the split values and group by months
<p>I have a dataframe that looks like this:</p> <pre><code> Error \nNumber Date Type of error 0 2122 2020-01-09 NHS Spine check - External error 1 2123 2020-01-09 EP3- Run failure 2 2124 2020-02-09 NHS Spine check - External error 3 2125 2020-03-09 NHS Spine check - External error 4 2126 2020-04-09 NHS Spine check - External error .. ... ... ... 837 2949 2023-03-07 DE - Data Entry 838 2950 2023-03-07 EI - Error of interpretation 839 2951 2023-03-07 EX -External error - other 840 2952 2023-04-07 EP8- SOPs not being followed 841 2953 2023-06-07 OT - Other </code></pre> <p>Here is a reproducible dataset converted to a dictionary:</p> <pre><code>data.head().to_dict() {'Error \nNumber': {0: '2122', 1: '2123', 2: '2124', 3: '2125', 4: '2126'}, 'Date': {0: Timestamp('2020-01-09 00:00:00'), 1: Timestamp('2020-01-09 00:00:00'), 2: Timestamp('2020-02-09 00:00:00'), 3: Timestamp('2020-03-09 00:00:00'), 4: Timestamp('2020-04-09 00:00:00')}, 'Type of error ': {0: 'NHS Spine check - External error', 1: 'EP3- Run failure', 2: 'NHS Spine check - External error', 3: 'NHS Spine check - External error', 4: 'NHS Spine check - External error'}} </code></pre> <p>I am trying to count the values in column &quot;Type of error&quot;, group them by month, and split the data into new columns like this (the following example only uses the first 5 rows):</p> <pre><code>Date NHS Spine check - External error EP3- Run failure 2020-09 4 1 </code></pre> <p>I have tried the answers found <a href="https://stackoverflow.com/questions/57065878/split-pandas-column-and-create-new-columns-that-count-the-split-values">here</a> but the differences prevent me from doing what I want.</p>
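A sketch with a small invented sample (note the trailing space in `'Type of error '`, exactly as in the frame above): convert `Date` to a monthly period and let `pd.crosstab` do the counting and the column split in one step.

```python
import pandas as pd

# Invented sample rows; the real data would come from the asker's frame.
df = pd.DataFrame({
    "Date": pd.to_datetime(
        ["2020-09-01", "2020-09-01", "2020-09-02", "2020-09-03", "2020-10-05"]
    ),
    "Type of error ": [
        "NHS Spine check - External error",
        "EP3- Run failure",
        "NHS Spine check - External error",
        "NHS Spine check - External error",
        "EP3- Run failure",
    ],
})

# Rows: one per month; columns: one per error type; cells: counts.
counts = pd.crosstab(df["Date"].dt.to_period("M"), df["Type of error "])
print(counts)
```

An equivalent spelling is `df.groupby(df['Date'].dt.to_period('M'))['Type of error '].value_counts().unstack(fill_value=0)`.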
<python><pandas>
2023-07-14 12:06:18
1
1,393
Manolo Dominguez Becerra
76,687,462
10,413,428
Is there a way to ignore "Mypy: Call to untyped function "setupUi" in typed context [no-untyped-call]" for the UI call when using PySide6
<p>I create my <code>.ui</code> files with QT Designer and convert them to a python file using the <code>pyside6-uic ...</code> command. Each time I load such an 'auto-generated' Python file via:</p> <pre class="lang-py prettyprint-override"><code>class MyCrazyTestDialog(QDialog): def __init__(self) -&gt; None: super().__init__() self.ui = Ui_MyCrazyTestDialog() self.ui.setupUi(self) </code></pre> <p>the line <code>self.ui.setupUi(self)</code> will generate the mypy error:</p> <pre><code>Mypy: Call to untyped function &quot;setupUi&quot; in typed context [no-untyped-call] </code></pre> <p>I have tried several ways to exclude all auto-generated files from the mypy search, but had no success. The obvious solution with <code>self.ui.setupUi(self) # type: ignore</code> does not work either, because it will complain about calling an untyped function in a typed context for every line that uses <code>self.ui</code>...</p> <p>I have already installed the unofficial pyside6 stubs from <a href="https://github.com/python-qt-tools/PySide6-stubs" rel="nofollow noreferrer">here</a>, which helped with mypy errors for the self.translate qt code, but it does not help here.</p> <p>Does anyone know how I can configure my mypy setup so that it will ignore calls to the auto-generated files?</p>
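If the goal is only to silence `[no-untyped-call]` for the generated modules, a per-module mypy override is one option. The module patterns below are assumptions; point them at wherever the generated `Ui_*.py` files (and the code that calls them) actually live.

```ini
; mypy.ini -- module patterns are hypothetical, adjust to your layout.

; Option 1 (mypy >= 1.5): stay strict, but exempt calls *into* the
; generated modules from the no-untyped-call check.
[mypy]
untyped_calls_exclude = myapp.ui

; Option 2: relax the check inside the modules that *make* the calls.
[mypy-myapp.dialogs.*]
disallow_untyped_calls = False
```

Note that `disallow_untyped_calls` applies to the module where the call appears, which is why Option 2 targets the dialog modules rather than the generated files.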
<python><python-3.x><qt><mypy><pyside6>
2023-07-14 11:59:35
0
405
sebwr
76,687,450
14,380,704
Call method based on index
<p>I have 4 methods that I would like to put into a list and call dynamically (through another method) based on another variable's value. Unsure if that's possible, but this is what I'm attempting and the error I'm receiving is: 'str' object is not callable. I know that the methods listed are strings, but is there a way to do this or is this approach just not functional?</p> <pre><code>def CallMethod(self): Method_List = [ 'Method_1', 'Method_2', 'Method_3', 'Method_4' ] self.base_index == 1 self.MethodType = Method_List[self.base_index] result = Table([self.MainBodyType()]) </code></pre>
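The usual fix is to store something callable instead of bare strings: either the bound methods themselves, or the method *names* resolved with `getattr` at call time. A minimal sketch with hypothetical method names:

```python
class Worker:
    def method_1(self):
        return "one"

    def method_2(self):
        return "two"

    def call_method(self, index):
        # Store names, then resolve the bound method with getattr;
        # the result is callable, unlike the bare string.
        method_list = ["method_1", "method_2"]
        return getattr(self, method_list[index])()

w = Worker()
result = w.call_method(1)
print(result)
```

Alternatively, the list can hold the methods directly (`method_list = [self.method_1, self.method_2]`) and be indexed and called the same way.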
<python><pandas><dataframe>
2023-07-14 11:57:28
1
307
2020db9
76,687,377
8,477,066
Can I prevent sphinx numpydoc to split the Returns into two sections
<p>When using Sphinx Numpy doc. I document the Returns section.</p> <pre><code>Returns ------- out: float A sentence about the return type </code></pre> <p>However, rendered this is split into two sections.</p> <pre><code>Returns: out - A sentence about the return type Return type: float </code></pre> <p>How can I prevent this section from being split and only having a Returns section? Where the Returns section is rendered just as the Parameters section.</p> <p>If this matters, I am using the PyData Sphinx theme.</p>
<python><python-sphinx><numpydoc>
2023-07-14 11:47:07
1
2,859
Hielke Walinga
76,687,267
10,232,932
Remove all special characters in all column names / headers in pandas / python
<p>There is a pretty similar question on this page: <a href="https://stackoverflow.com/questions/37952797/pandas-dataframe-column-name-remove-special-character">pandas dataframe column name: remove special character</a></p> <p>but in my case, I have several special characters in the column names / headers, so the example is with a pandas dataframe df:</p> <pre><code>('Untitled 1, Year') ('Untitled 1', 'Name') ('Life', 'Age') 2000 John 24 2001 Kelly 32 </code></pre> <p>How can I remove all the special characters from the column headers? I am searching for a way where I don't have to call the replace method separately for every special character, so that the output looks like this:</p> <pre><code>Untitled 1 Year Untitled 1 Name Life Age 2000 John 24 2001 Kelly 32 </code></pre>
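A single regex pass over `df.columns` can strip every unwanted character at once, so no per-character `replace` chain is needed. A sketch against the question's headers:

```python
import pandas as pd

df = pd.DataFrame(
    [[2000, "John", 24], [2001, "Kelly", 32]],
    columns=["('Untitled 1, Year')", "('Untitled 1', 'Name')", "('Life', 'Age')"],
)

# Drop everything except letters, digits and spaces, in one pass.
df.columns = df.columns.str.replace(r"[^0-9a-zA-Z ]", "", regex=True).str.strip()
print(df.columns.tolist())
```

The character class can be widened (e.g. to keep underscores) if some "special" characters should survive.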
<python><pandas>
2023-07-14 11:31:58
3
6,338
PV8
76,687,260
5,437,090
How to speed up Stanza lemmatizer by excluding redundant words
<p><strong>Given</strong>:</p> <p>I have a small sample document with limited number of words as follows:</p> <pre><code>d =''' I go to school by the school bus everyday with all of my best friends. There are several students who also take the buses to school. Buses are quite cheap in my city. The city which I live in has an enormous number of brilliant schools with smart students. We have a nice math teacher in my school whose name is Jane Doe. She also teaches several other topics in our school, including physics, chemistry and sometimes literature as a substitute teacher. Other classes don't appreciate her efforts as much as my class. She must be nominated as the best school's teacher. My school is located far from my apartment. This is why, I am taking the bus to school everyday. ''' </code></pre> <p><strong>Goal</strong>:</p> <p>Considering my real-world large document with more words (<code>4000 ~ 8000 words</code>), I would like to speed up my Stanza lemmatizer by <em>probably</em> excluding lemmatizing repeated words, <em>e.g.</em>, words which has occurred more than once. 
I do not intend to use <code>set()</code> method to obtain the unique lemmas in my result list, rather I intend to ignore lemmatizing words which have already been lemmatized.</p> <p>For instance, for the given sample raw document <code>d</code>, there are several redundant words which could be ignored in the process:</p> <pre><code>Word Lemma -------------------------------------------------- school school school school &lt;&lt;&lt;&lt;&lt; Redundant bus bus everyday everyday friends friend students student buses bus school school Buses bus &lt;&lt;&lt;&lt;&lt; Redundant cheap cheap city city city city &lt;&lt;&lt;&lt;&lt; Redundant live live enormous enormous number number brilliant brilliant schools school smart smart students student nice nice math math teacher teacher school school &lt;&lt;&lt;&lt;&lt; Redundant Jane jane Doe doe teaches teach topics topic school school &lt;&lt;&lt;&lt;&lt; Redundant including include physics physics chemistry chemistry literature literature substitute substitute teacher teacher &lt;&lt;&lt;&lt;&lt; Redundant classes class appreciate appreciate efforts effort class class nominated nominate school school &lt;&lt;&lt;&lt;&lt; Redundant teacher teacher school school &lt;&lt;&lt;&lt;&lt; Redundant located locate apartment apartment bus bus school school &lt;&lt;&lt;&lt;&lt; Redundant everyday everyday &lt;&lt;&lt;&lt;&lt; Redundant </code></pre> <p>My [<em>inefficient</em>] solution:</p> <pre><code>import stanza import nltk nltk_modules = ['punkt', 'averaged_perceptron_tagger', 'stopwords', 'wordnet', 'omw-1.4', ] nltk.download(nltk_modules, quiet=True, raise_on_error=True,) STOPWORDS = nltk.corpus.stopwords.words(nltk.corpus.stopwords.fileids()) nlp = stanza.Pipeline(lang='en', processors='tokenize,lemma,pos', tokenize_no_ssplit=True,download_method=DownloadMethod.REUSE_RESOURCES) doc = nlp(d) %timeit -n 10000 [ wlm.lower() for _, s in enumerate(doc.sentences) for _, w in enumerate(s.words) if (wlm:=w.lemma) and len(wlm)&gt;2 and 
wlm not in STOPWORDS] 10.5 ms ± 112 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) </code></pre> <p>My [<em>alternative</em>] solution, a little faster but still <strong>NOT</strong> efficient for (<code>4000 ~ 8000 words</code>):</p> <pre><code>def get_lm(): words_list = list() lemmas_list = list() for _, vsnt in enumerate(doc.sentences): for _, vw in enumerate(vsnt.words): wlm = vw.lemma.lower() wtxt = vw.text.lower() if wtxt in words_list and wlm in lemmas_list: lemmas_list.append(wlm) elif ( wtxt not in words_list and wlm and len(wlm) &gt; 2 and wlm not in STOPWORDS ): lemmas_list.append(wlm) words_list.append(wtxt) return lemmas_list %timeit -n 10000 get_lm() 7.85 ms ± 66.6 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) </code></pre> <p>My ideal result for this sample document, from either solution, should look like this, containing even repeated lemmas:</p> <pre><code>lm = [ wlm.lower() for _, s in enumerate(doc.sentences) for _, w in enumerate(s.words) if (wlm:=w.lemma) and len(wlm)&gt;2 and wlm not in STOPWORDS] # solution 1 # lm = get_lm() # solution 2 print(len(lm), lm) 47 ['school', 'school', 'bus', 'everyday', 'friend', 'student', 'bus', 'school', 'bus', 'cheap', 'city', 'city', 'live', 'enormous', 'number', 'brilliant', 'school', 'smart', 'student', 'nice', 'math', 'teacher', 'school', 'jane', 'doe', 'teach', 'topic', 'school', 'include', 'physics', 'chemistry', 'literature', 'substitute', 'teacher', 'class', 'appreciate', 'effort', 'class', 'nominate', 'school', 'teacher', 'school', 'locate', 'apartment', 'bus', 'school', 'everyday'] </code></pre> <p>Is there any better or more efficient approach for this problem when considering large corpus or documents?</p> <p>Cheers,</p>
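One generic way to avoid the redundant work is a cache keyed on the lowercased surface form, so each distinct word is lemmatized once and every repeat is an O(1) dictionary lookup. The `fake_lemma` below is a toy stand-in for a real Stanza call (which normally wants whole sentences for context, so this is a sketch of the caching idea, not a drop-in replacement):

```python
def lemmatize_cached(words, lemmatize):
    # Each distinct surface form is lemmatized exactly once;
    # repeats become dictionary lookups instead of model calls.
    cache = {}
    lemmas = []
    for w in words:
        key = w.lower()
        if key not in cache:
            cache[key] = lemmatize(key)
        lemmas.append(cache[key])
    return lemmas

calls = []

def fake_lemma(word):
    calls.append(word)       # record how often the "expensive" call runs
    return word.rstrip("s")  # toy stand-in for the Stanza lemmatizer

lemmas = lemmatize_cached(["school", "buses", "School", "buses"], fake_lemma)
print(lemmas, len(calls))
```

Here the expensive function runs only twice for four input tokens, while the output still contains one lemma per token, repeats included.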
<python><nlp><stanford-nlp><lemmatization><stanza>
2023-07-14 11:31:20
0
1,621
farid
76,687,199
2,194,036
ModuleNotFoundError while running python script via ansible
<br> I am trying to run a python script from the ansible yml file but it's failing with the error &quot;ModuleNotFoundError&quot;. The python script runs fine when run on the command line. I have tried to provide the complete path of the module as well in the python script but it still fails when it runs via ansible. <br> <p>Error is:</p> <pre><code> File \&quot;/home/user/.ansible/tmp/ansible-tmp-1689331594.7866418-1621045-127842877821154/my_script.py\&quot;, line 8, in &lt;module&gt;&quot;, &quot; from my_module import property&quot;, &quot;ModuleNotFoundError: No module named 'my_module'&quot;], &quot;stdout&quot;: &quot;&quot;, &quot;stdout_lines&quot;: []} </code></pre> <p>Here are the headers for the python script &quot;/home/user/my_script.py&quot; I am using:</p> <pre><code>#!/bin/env python3 import sys sys.path.append('/home/user/my_module') from my_module import property </code></pre> <p>and below is the ansible one:</p> <pre><code>- name: Play of file hosts: localhost strategy : free gather_facts: no tasks: - name: Executing python script. script: /home/user/my_script.py register: script_out - debug: var: script_out.stdout_linesy </code></pre> <p>Just to add, the python script and ansible yml files are in different directories.</p>
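A likely cause: `sys.path.append('/home/user/my_module')` points at the package directory itself, while `from my_module import ...` needs the *parent* directory on `sys.path`. A self-contained reproduction (temporary files stand in for the real `my_module`; the `prop` name is illustrative):

```python
import os
import sys
import tempfile

# Build a throwaway package: <tmp>/my_module/__init__.py
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "my_module")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("prop = 'value'\n")

sys.path.append(tmp)  # the PARENT directory, not the package directory
from my_module import prop

print(prop)
```

Note also that Ansible's `script` module copies only the script itself to the target's temp directory, so any absolute path the script appends must exist on the host where the task runs.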
<python><python-3.x><ansible>
2023-07-14 11:21:25
2
459
Sandy
76,686,920
1,388,353
How to insert two values in a many-to-many relationship from a python SQL query?
<p>I am trying to insert authors in an authors table to match sources through an association table of author_ids and source_ids in a flask website. I am using mariaDB, therefore I am aiming to have it all in one SQL query. So far I haven't been able to make it work with multiple queries.</p> <p>Table creation</p> <pre><code>CREATE TABLE test_associate_author (author_id int,source_id int); CREATE TABLE test_source (source_id int auto_increment primary key, title varchar(30)); CREATE TABLE test_author (author_id int auto_increment primary key, forename varchar(10),surname varchar(10)); </code></pre> <p>The long way round, pulling the variables into Python, is as below.</p> <pre><code>INSERT INTO author (forename,surname) VALUES (&quot;Bob&quot;,&quot;Smith&quot;); INSERT INTO source (title) VALUES (&quot;Short Stories&quot;); var a = SELECT MAX(author_id) FROM author; INSERT INTO author (forename,surname) VALUES (&quot;Jackie&quot;,&quot;Smith&quot;); var b = SELECT MAX(author_id) FROM author; var c = SELECT MAX(source_id) FROM source; INSERT INTO associate_author (author_id,source_id) VALUES (a, c); INSERT INTO associate_author (author_id,source_id) VALUES (b, c); </code></pre>
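A common pattern is to read the auto-increment id right after each insert — `LAST_INSERT_ID()` in MariaDB, or the DB-API cursor's `lastrowid` — rather than `MAX(id)`, which races with concurrent inserts. A sketch using the stdlib `sqlite3` as a stand-in for the MariaDB connector (same `lastrowid` idea; table names shortened from the question's `test_` prefix):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE author (author_id INTEGER PRIMARY KEY AUTOINCREMENT,"
            " forename TEXT, surname TEXT)")
cur.execute("CREATE TABLE source (source_id INTEGER PRIMARY KEY AUTOINCREMENT,"
            " title TEXT)")
cur.execute("CREATE TABLE associate_author (author_id INT, source_id INT)")

cur.execute("INSERT INTO source (title) VALUES (?)", ("Short Stories",))
source_id = cur.lastrowid  # id of the row just inserted on THIS connection

for forename, surname in [("Bob", "Smith"), ("Jackie", "Smith")]:
    cur.execute("INSERT INTO author (forename, surname) VALUES (?, ?)",
                (forename, surname))
    cur.execute("INSERT INTO associate_author (author_id, source_id)"
                " VALUES (?, ?)", (cur.lastrowid, source_id))
conn.commit()

rows = cur.execute("SELECT author_id, source_id FROM associate_author"
                   " ORDER BY author_id").fetchall()
print(rows)
```

Unlike `MAX(id)`, `lastrowid`/`LAST_INSERT_ID()` is scoped to the current connection, so it stays correct even while other sessions insert rows.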
<python><mysql><flask><mariadb>
2023-07-14 10:40:35
2
1,095
OrigamiEye
76,686,888
1,433,901
Using bson.ObjectId in Pydantic v2
<p>I found <a href="https://github.com/mongodb-developer/mongodb-with-fastapi/blob/master/app.py#L13C1-L28C43" rel="noreferrer">some examples</a> on how to use ObjectId within <code>BaseModel</code> classes. Basically, this can be achieved by creating a Pydantic-friendly class as follows:</p> <pre class="lang-py prettyprint-override"><code>class PyObjectId(ObjectId): @classmethod def __get_validators__(cls): yield cls.validate @classmethod def validate(cls, v): if not ObjectId.is_valid(v): raise ValueError(&quot;Invalid objectid&quot;) return ObjectId(v) @classmethod def __modify_schema__(cls, field_schema): field_schema.update(type=&quot;string&quot;) </code></pre> <p>However, this seems to be for Pydantic v1, as this mechanism has been superseeded by the <code>__get_pydantic_core_schema__</code> classmethod. However, I have been unable to achieve an equivalent solution with Pydantic v2. Is it possible? What validators do I need? I tried to refactor things but was unable to get anything usable.</p>
<python><pydantic><bson>
2023-07-14 10:35:39
9
3,704
MariusSiuram
76,686,798
3,591,044
Sliding window with burn-in elements
<p>I have a Python list. Here is an example:</p> <pre><code>seq = [0, 1, 2, 3, 4, 5] </code></pre> <p>Now I would like to iterate over the list with a window. For example, with a window of size 3 I can do it as follows:</p> <pre><code>window_size = 3 for i in range(len(seq) - window_size + 1): print(seq[i: i + window_size]) </code></pre> <p>This would result in the following output:</p> <pre><code>[0, 1, 2] [1, 2, 3] [2, 3, 4] [3, 4, 5] </code></pre> <p>Now, I would like to also include the burn-in phase, i.e. lists with less than window_size elements. The output should be as follows:</p> <pre><code>[0] [0, 1] [0, 1, 2] [1, 2, 3] [2, 3, 4] [3, 4, 5] </code></pre> <p>How can my code be extended to do this most efficiently (also for bigger lists and bigger window_size)?</p>
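Clamping the left edge of the slice at zero produces the burn-in windows for free, without a separate loop or special case:

```python
seq = [0, 1, 2, 3, 4, 5]
window_size = 3

# max(0, ...) clamps the left edge, so the first windows are shorter
# ("burn-in"); once the window is full it slides normally.
windows = [seq[max(0, i - window_size + 1): i + 1] for i in range(len(seq))]
for w in windows:
    print(w)
```

This iterates once per element and each slice copies at most `window_size` items, so it stays cheap for large lists and window sizes.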
<python><python-3.x><list><sliding-window>
2023-07-14 10:23:18
4
891
BlackHawk
76,686,724
6,681,932
streamlit survey annotation
<p>Given a sample file I would like to label certain pairs in streamlit for a survey app, but the code behavior is not what I am expecting:</p> <p>Here I provide sample data of ingredients:</p> <pre><code>import streamlit as st import itertools ingredients = [ {'id': 1, 'Food': 'Pizza', 'Type': 'Veggie'}, {'id': 2, 'Food': 'Meat', 'surname': 'No Veggie'}, {'id': 3, 'Food': 'Pineaple', 'surname': 'Fruit'}, {'id': 4, 'Food': 'Pasta', 'surname': 'Veggie'},] </code></pre> <p>I am gathering all answers from users in a <code>list</code> for each pair of ingredients:</p> <pre><code>l = [] for i, pair in enumerate(itertools.combinations(ingredients, 2)): st.write(pair) annotation_result = st.selectbox(f'Do u like? recipe_{i}', options=['Unsure', 'yes', 'no']) if annotation_result == 'yes': l.append([{&quot;Like&quot;: [pair]}]) elif annotation_result == &quot;no&quot;: l.append([{&quot;Disgusting&quot;: [pair]}]) st.write(l) </code></pre> <p>The expected behavior is that each pair of ingredients should be annotated by a person clicking on the answer. This information is stored in a list.</p> <p>By default, the for loop iterates through all pairs with the same default response without waiting for input; also, every decision is printed but not removed from the prompt, which can be a nightmare with big data.</p> <p>In summary, I expect the process to allow clicking, saving, and proceeding to the next pair in a sequential manner with break time for the user. At the end, the annotations can be saved and graphs generated.</p>
<python><user-interface><streamlit>
2023-07-14 10:11:28
1
478
PeCaDe
76,686,704
2,630,406
Numpy array bool operation slow
<p>I started profiling my application and I tested the following code:</p> <pre><code>a = np.random.random((100000,)) def get_first_index(value, arr): firstIndex = np.argmax(arr &gt; value) if firstIndex &lt;= 0: raise Exception('No index found') return firstIndex for i in range(0, 1000): get_first_index(0.5, a) </code></pre> <p>It just returns the first index of an element bigger than the given value. On my machine it takes around 0.01s for array size 50k and 1k calls. I was wondering what causes the slowdown. My first suspect was <code>np.argmax</code> but I boiled it down to the boolean comparison <code>arr &gt; value</code>. It spends 99% of the time creating the bool comparison. Is there any faster way I am not aware of?</p> <p>Test code for profiling:</p> <pre><code>a = np.random.random((100000,)) def test_function(a, b): return a &lt; b import cProfile, pstats profiler = cProfile.Profile() profiler.enable() for i in range(0, 1000): test_function(0.5, a) profiler.disable() stats = pstats.Stats(profiler).sort_stats('tottime') stats.print_stats() </code></pre>
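One way to avoid materialising the full boolean array is to compare in chunks and stop at the first hit; when a match tends to appear early, most of the array is never touched. A sketch (the chunk size is a tunable assumption, not a magic number):

```python
import numpy as np

def get_first_index(value, arr, chunk=4096):
    # Compare one small slice at a time and bail out at the first match,
    # instead of allocating a len(arr)-sized boolean array up front.
    for start in range(0, len(arr), chunk):
        block = arr[start:start + chunk] > value
        if block.any():
            return start + int(np.argmax(block))
    raise ValueError("No index found")

# Deterministic demo: the only element above 0.5 sits at index 70000.
arr = np.zeros(100_000)
arr[70_000] = 0.9
idx = get_first_index(0.5, arr)
print(idx)
```

For uniform random data and a threshold of 0.5 the first hit is usually within the first few elements, so typically only one small chunk is ever compared.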
<python><arrays><numpy>
2023-07-14 10:09:18
1
933
perotom
76,686,701
3,973,269
Sampled frequency sweep in python, ending with incorrect frequencies in plot
<p>I have a Python code intended to create a chirp (like in LoRa) from one frequency to the other.</p> <p>On purpose, I want to create it by sampling with a 5 MHz sampling rate. I will move the values to an FPGA later on so it can just run through the list of samples. I want to sweep around 100 kHz with a bandwidth of 20 kHz (90.000 Hz to 110 Hz).</p> <p>It looks roughly like this:</p> <p><a href="https://i.sstatic.net/Hosqf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Hosqf.png" alt="total chirp" /></a> When zooming in on the first two peaks, I calculate a time difference of 11.02us -&gt; 90.744 Hz. That is about expected.</p> <p>But when zooming in on the last two peaks, I calculate a time difference of 7.77uS -&gt; 128.7kHz. That is way higher than the 110 kHz I'm aiming for.</p> <p>In my code, what I do is creating a sampling frequency of 5 MHz by dividing 50 MHz by 10. I calculate the expected time of my chirp and then the amount of samples in that whole chirp.</p> <p>Using that, I calculate my delta t and my delta f. Doing so I can iterate over the range of samples and create a sine value for each sample by incrementally increasing the time with delta t and the frequency with delta f.</p> <p>My code below contains some parts as well that intend to write all those values into binary format for my FPGA later on, those outputs will be written to a txt file but are not important right now.</p> <p>I'm asking my code to print out the timestamps and corresponding frequencies at that timestamp. That output is showing the expected frequencies at the expected timestamps. 
So I'm really confused why my plot is giving the result that I do not expect.</p> <p>The code:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import scipy Fclock = 50000000 ClockFactor = 10 binaryRange = 0b1111111111 def chirp(s, SF, Fc, BW, counter, file): Fsample = Fclock / ClockFactor # general calculations # number of chips num_chips = int(2**SF) # chip time Tchip = 1 / BW # symbol time Tsymbol = num_chips * Tchip # time of a sample Tsample = 1 / Fsample # start frequency f_begin = Fc - BW/2 # stop frequency f_end = Fc + BW/2 sineValuesBin = [] sineValues = [] num_samples = int(Tsymbol / Tsample) # Frequency increment (delta) per sample Fdelta = BW / num_samples t = 0 tlist = [] f = f_begin + s / num_chips * num_samples * Fdelta for i in range(num_samples): sineValue = np.sin(float(2 * np.pi) * float(f) * float(t)) sineValuesBin.append(bin(int(sineValue * binaryRange / 2 + binaryRange / 2))[2:].zfill(10)) sineValues.append(sineValue) tlist.append(t) file.write(f&quot;Frequency: {f}\n&quot;) f += Fdelta if f &gt; f_end: f = f_begin t += Tsample print(f&quot;freq: {f}; t: {t}&quot;) plt.plot(tlist, sineValues, '.-') plt.show() i = counter for sineValueBin in sineValuesBin: file.write(f&quot; sig_sineList({i}) &lt;= \&quot;{sineValueBin}\&quot;;&quot;) file.write(&quot;\n&quot;) i += 1 return i def message(symbol_items, SF, Fc, BW): counter = 0 f = open(&quot;output.txt&quot;, &quot;w&quot;) f.close() f = open(&quot;output.txt&quot;, &quot;w+&quot;) for symbol in symbol_items: counter = chirp(symbol, SF, Fc, BW, counter, f) f.close() symbols = [0] message(symbols, 2, 100000, 20000) </code></pre> <p>TLDR: I create a sampled frequency sweep plot from 90.000 Hz to 110.000 Hz with a sampling rate of 5 MHz. The plot shows that the last frequency is not around 110.000 Hz but 128.000 Hz. Why?</p> <p><strong>Small update:</strong> I found that, when I half the frequency delta, it works perfectly. But if anyone could tell my why that is.</p>
<python><plot><frequency><chirp>
2023-07-14 10:09:03
0
569
Mart
76,686,509
13,560,598
why does the pseudo-inverse blow up in numpy?
<p>Consider the following code. Why does the norm of <code>pinv</code> of <code>AA.T</code> blow up? Is there a more numerically stable way of computing it?</p> <pre><code># Why does the norm of pinv(AA.T) blow up? import numpy as np import matplotlib.pyplot as plt # number of equations nn = 225 # norm of pinv of AA and AA.T AA_pinv_norm = [] AAT_pinv_norm = [] for nn in range(1,nn): AA = np.asarray([[0.1,0.1]]*nn) pinv_AA = np.linalg.pinv(AA) AA_pinv_norm.append(np.linalg.norm(pinv_AA)) pinv_AAT = np.linalg.pinv(AA.T) AAT_pinv_norm.append(np.linalg.norm(pinv_AAT)) fig , ax1 = plt.subplots(nrows=1,ncols=1) ax1.plot(AA_pinv_norm,'go-',markerfacecolor='none',label='norm(pinv(AA))') ax1.plot(AAT_pinv_norm,'rx-',label='norm(pinv(AA.T))') ax1.ticklabel_format(useOffset=False,style='plain') # to prevent scientific notation. ax1.set_title('Blow up of norm(pinv(AA.T))') ax1.set_yscale('log') ax1.set_xlabel('size') ax1.set_ylabel('norm of pinv') ax1.legend() </code></pre>
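One reading of what happens: `AA` has rank 1, so its second singular value is exactly zero in theory but a tiny round-off number in practice; when `pinv`'s default cutoff fails to discard it, the pseudo-inverse divides by that tiny value and the norm explodes. Passing an explicit, more generous `rcond` truncates it and stabilises the result:

```python
import numpy as np

nn = 200
AA = np.full((nn, 2), 0.1)  # rank 1: every row is identical

# A generous relative cutoff discards the spurious tiny singular value.
# The remaining nonzero singular value of pinv is 1/sigma_max, with
# sigma_max = 0.1 * sqrt(2 * nn) = 2 here, so norm(pinv) = 0.5.
stable = np.linalg.pinv(AA.T, rcond=1e-10)
print(np.linalg.norm(stable))
```

With the same `rcond`, `pinv(AA)` and `pinv(AA.T)` give the same norm, as expected for transposes.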
<python><numpy><numerical-methods>
2023-07-14 09:40:00
1
593
NNN
76,686,356
10,566,774
Azure Functions "Microsoft.Azure.WebJobs.Script: Error building configuration in an external startup class. System.Net.Http"
<p>I am migrating an Azure Functions application from Runtime v3 to v4. The functions app includes Durable Functions. Migrating the code was straightforward and I had no issues running locally. However, when I published to production, I started seeing this error:</p> <blockquote> <p>Microsoft.Azure.WebJobs.Script: Error building configuration in an external startup class. System.Net.Http: The SSL connection could not be established, see inner exception. System.Net.Sockets: Unable to read data from the transport connection: Connection reset by peer. Connection reset by peer.</p> </blockquote> <p>This error happens immediately during app startup, so it seems not to be an issue with the actual functions but with the initial setup connecting to Azure Storage; or maybe it is something that has nothing to do with the migration from v3 to v4 and I've lost some old configuration during the update. Here is what I see:</p> <p><a href="https://i.sstatic.net/ZpejR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZpejR.png" alt="enter image description here" /></a></p> <p>Here is my host.json:</p> <pre><code>{ &quot;version&quot;: &quot;2.0&quot;, &quot;logging&quot;: { &quot;applicationInsights&quot;: { &quot;samplingSettings&quot;: { &quot;isEnabled&quot;: true, &quot;excludedTypes&quot;: &quot;Request&quot; } } }, &quot;extensionBundle&quot;: { &quot;id&quot;: &quot;Microsoft.Azure.Functions.ExtensionBundle&quot;, &quot;version&quot;: &quot;[1.*, 2.0.0)&quot; } } </code></pre> <p>How can I fix it? Any help will be appreciated.</p>
<python><azure><azure-functions><azure-functions-runtime>
2023-07-14 09:19:23
1
372
Salvatore Nedia
76,686,322
689,242
Automating CLI application which opens another GUI window
<p>I have a program <code>JLinkExe</code> which I can start in Windows/Powershell:</p> <pre class="lang-none prettyprint-override"><code>&amp; &quot;C:\Program Files\SEGGER\JLink\JLink.exe&quot; -nogui 1 </code></pre> <p>Or in Linux/WSL:</p> <pre class="lang-none prettyprint-override"><code>JLinkExe -nogui 1 </code></pre> <p>In both cases program starts, prints some data and presents a <code>J-Link&gt;</code> <em>prompt</em>.</p> <pre class="lang-none prettyprint-override"><code>SEGGER J-Link Commander V7.88k (Compiled Jul 5 2023 15:02:18) DLL version V7.88k, compiled Jul 5 2023 15:00:41 Connecting to J-Link via USB...O.K. Firmware: J-Link STLink V21 compiled Aug 12 2019 10:29:20 Hardware version: V1.00 J-Link uptime (since boot): N/A (Not supported by this model) S/N: 775087052 VTref=3.300V Type &quot;connect&quot; to establish a target connection, '?' for help J-Link&gt; </code></pre> <p>At this point user needs to start inputting commands <em>(each ending with enter/newline)</em> eventually leading to flashing of some firmware to the target embedded system. The commands that I have to send are:</p> <pre class="lang-none prettyprint-override"><code>connect STM32F429ZI SWD 4000 erase loadbin program.elf , 0x0 q </code></pre> <p>The problem here is that after command <code>4000</code> application <code>JLinkExe</code> spawns a GUI window where user needs to use a mouse to click <kbd>Accept</kbd> button.</p> <hr /> <p>I want to write a <code>python3</code> &amp; <code>pytest</code> script that will automate this so I managed to do this so far:</p> <pre class="lang-py prettyprint-override"><code>import pytest import subprocess import pyautogui import sys import shutil import os JLINK_SERIAL = &quot;115081052&quot; TARGET = &quot;STM32F429ZI&quot; @pytest.fixture def canScriptRun() -&gt; None: # Newline for nicer pytest formatting. print(&quot;\n&quot;) printSeparator() # Check whether script is ran by superuser / admin. 
if (os.name == &quot;nt&quot;): print(&quot;Operating system: NT compliant&quot;) if (os.access(&quot;Program Files&quot;, os.W_OK) is False): print(&quot;User: not admin (continue)&quot;) else: print(&quot;User: admin&quot;) elif (os.name == &quot;posix&quot;): print(&quot;Operating system: Posix compliant&quot;) if (os.access(&quot;/etc/&quot;, os.W_OK) is False): print(&quot;User: not superuser (exit)&quot;) sys.exit(0) else: print(&quot;User: superuser&quot;) printSeparator() @pytest.fixture def flash(canScriptRun) -&gt; None: # JLinkExe child process creation. if (os.name == &quot;nt&quot;): p = subprocess.Popen( [r&quot;C:\Program Files\SEGGER\JLink\JLink.exe&quot;, &quot;-nogui&quot;, &quot;1&quot;], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True, text=True, shell=True ) elif (os.name == &quot;posix&quot;): p = subprocess.Popen( [&quot;JLinkExe&quot;, &quot;-nogui&quot;, &quot;1&quot;], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True, text=True ) # Send newline-separated commands to the process &amp; leave the running # process as it is. stdout, stderr = p.communicate( input=&quot;connect\nSTM32F429ZI\nS\n4000\n&quot;, timeout=None ) print(stdout) print(stderr) # Locate &amp; click the GUI accept button based on it's screenshot saved in project folder. buttonCenterLocation = pyautogui.locateCenterOnScreen(&quot;accept.png&quot;) pyautogui.move(buttonCenterLocation, 1) pyautogui.click() def printSeparator() -&gt; None: terminalWidth = shutil.get_terminal_size().columns for i in range(0, terminalWidth): print('-', end='') def test_tests(flash) -&gt; None: # Newline for nicer pytest formatting. print(&quot;\n&quot;) pass </code></pre> <p>Once command <code>4000</code> executes a GUI window appears prompting me to click <kbd>Accept</kbd> button <em>(screenshoot)</em>, but mouse will not move to that position and will not make a click. 
This is because when the GUI window is displayed, <code>JLinkExe</code> is blocked and the script also waits...</p> <p><a href="https://i.sstatic.net/SRb0J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SRb0J.png" alt="enter image description here" /></a></p> <p>Any idea on how to solve this?</p>
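Since `p.communicate()` blocks until the process exits, the click has to come from another thread armed beforehand. A sketch of the pattern with a stand-in for the blocking call (in the real script the timer callback would run the `pyautogui` locate/click code):

```python
import threading
import time

clicked = []

def click_accept():
    # Real version: pyautogui.locateCenterOnScreen("accept.png")
    # followed by pyautogui.click(); here we just record that it ran.
    clicked.append(True)

# Arm the watcher BEFORE the blocking call, so the click can fire while
# JLink sits at its GUI prompt.
watcher = threading.Timer(0.1, click_accept)
watcher.start()

time.sleep(0.3)  # stand-in for: stdout, stderr = p.communicate(...)
watcher.join()
print(clicked)
```

A fixed delay is fragile; a more robust watcher would poll `pyautogui.locateCenterOnScreen` in a loop until the button appears, then click.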
<python><subprocess><pytest><popen><rpa>
2023-07-14 09:14:25
1
1,505
71GA
76,686,297
7,599,215
How to use extractall() result to add partially duplicated rows to original dataset?
<h3>Please consider the next example:</h3> <h4>generate data</h4> <p><strong>UPD:</strong> mask added<br /> <strong>UPD2:</strong> added letters to 'b' column to justify extraction</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.DataFrame([{'name': 'Bob', 'a': '123', 'b': '111'}, {'name': None, 'a': '', 'b': '999'}, {'name': 'Alice', 'a': '234', 'b': '[555aaa, 666bbb]'}, {'name': 'Steve', 'a': '567', 'b': '[777ccc]'}]) print(df.to_markdown()) </code></pre> <pre><code>| | name | a | b | |---:|:-------|:----|:-----------------| | 0 | Bob | 123 | 111 | | 1 | | | 999 | | 2 | Alice | 234 | [555aaa, 666bbb] | | 3 | Steve | 567 | [777ccc] | </code></pre> <pre class="lang-py prettyprint-override"><code>mask = df['name'].isna() m = df[~mask]['b'].str.extractall(r'([0-9]+)') print(m.to_markdown()) </code></pre> <pre><code>| | 0 | |:-------|----:| | (0, 0) | 111 | | (2, 0) | 555 | | (2, 1) | 666 | | (3, 0) | 777 | </code></pre> <h4>result I want</h4> <p>Basically, I want to duplicate the whole row except the extracted value column.</p> <p>I do not know what to do with the indexes</p> <pre><code>| | name | a | b | |---:|:-------|----:|:-----------| | 0 | Bob | 123 | 111 | | 1 | | | 999 | | 2 | Alice | 234 | 555 | | 2 | Alice | 234 | 666 | | 3 | Steve | 567 | 777 | </code></pre> <p>Something like <code>df[~mask]['b'] = df[~mask]['b'].str.extractall(r'([0-9]+)')</code></p>
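One sketch that produces the desired table: `str.findall` yields one list of matches per row, and `DataFrame.explode` then turns each list element into its own row, duplicating the remaining columns (and, as in the desired output, repeating the original index):

```python
import pandas as pd

df = pd.DataFrame([{'name': 'Bob', 'a': '123', 'b': '111'},
                   {'name': None, 'a': '', 'b': '999'},
                   {'name': 'Alice', 'a': '234', 'b': '[555aaa, 666bbb]'},
                   {'name': 'Steve', 'a': '567', 'b': '[777ccc]'}])

mask = df['name'].isna()
# Lists of matches for the unmasked rows; masked rows keep their scalar,
# which explode() passes through untouched.
df.loc[~mask, 'b'] = df.loc[~mask, 'b'].str.findall(r'[0-9]+')
out = df.explode('b')
print(out)
```

If unique row labels are preferred over the repeated `2`, append `.reset_index(drop=True)`.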
<python><pandas>
2023-07-14 09:11:17
1
2,563
banderlog013
76,686,270
14,282,714
ModuleNotFoundError: No module named 'pycaret.nlp'
<p>I'm trying to use <code>pycaret</code> with nlp from this <a href="https://pycaret.gitbook.io/docs/learn-pycaret/official-blog/nlp-text-classification-in-python-using-pycaret" rel="nofollow noreferrer">page</a>. Unfortunately, I'm not able to install it. I tried using <code>pip install pycaret</code> like described <a href="https://pycaret.gitbook.io/docs/get-started/installation" rel="nofollow noreferrer">here</a>. It is also mentioned in my <code>pip list</code>:</p> <pre><code>pip list Package Version ------------------------- -------- pycaret 3.0.4 </code></pre> <p>This is my python version:</p> <pre><code>python --version Python 3.10.12 </code></pre> <p>I also tried to install it from Build from Source like described <a href="https://github.com/pycaret/pycaret" rel="nofollow noreferrer">here</a>:</p> <pre><code>pip install git+https://github.com/pycaret/pycaret.git@master --upgrade </code></pre> <p>The error is still there:</p> <pre><code>ModuleNotFoundError: No module named 'pycaret.nlp' </code></pre> <p>So I was wondering if anyone knows how to fix this error?</p> <hr /> <p>Please note: I'm using Mac</p>
<python><module><nlp><pycaret>
2023-07-14 09:07:35
1
42,724
Quinten
76,686,256
12,409,079
Can we export a model property with a Pydantic model method?
<p>Is there a way to export Pydantic (v1.10) model properties with a <a href="https://docs.pydantic.dev/1.10/usage/exporting_models/#modeldict" rel="nofollow noreferrer">Pydantic model method</a>?</p> <p>This is what I want:</p> <pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel class Foo(BaseModel): a : int @property def square(self): return self.a**2 foo = Foo(a=2) assert &quot;square&quot; in foo.dict() assert foo.dict()[&quot;square&quot;]==foo.dict()[&quot;a&quot;]**2 </code></pre>
<python><pydantic>
2023-07-14 09:06:39
0
479
Ermite
76,686,230
3,859,376
How to properly associate SQLAlchemy session with pyramid sessions in Pyramid?
<p>In pyramid, I'm connecting to the database with credentials supplied by the user: every HTTP session has its own database connection, with different credentials. It works fine if I open and close the database connection in every request, but that is inefficient. I've tried to use SQLAlchemy scoped_session with scopefunc returning beaker.session.id, but randomly the pyramid session receives the wrong sqlalchemy session (created by a different user). Is there any working solution for such a case?</p>
<python><postgresql><session><sqlalchemy><pyramid>
2023-07-14 09:02:24
1
351
Jarek
76,686,123
3,121,975
Pylint fails due to UnicodeError
<p>I'm using Pylint to check my code when I do commits. Recently, I've had a commit fail because of the following error:</p> <blockquote> <p>UnicodeEncodeError: 'charmap' codec can't encode characters in position 1699-1713: character maps to &lt;undefined&gt;</p> </blockquote> <p>Here's the traceback:</p> <pre><code>Traceback (most recent call last): File &quot;\AppData\Local\Programs\Python\Python310\lib\runpy.py&quot;, line 196, in _run_module_as_main return _run_code(code, main_globals, None, File &quot;\AppData\Local\Programs\Python\Python310\lib\runpy.py&quot;, line 86, in _run_code exec(code, run_globals) File &quot;\venv\tso_ingestion\Scripts\pylint.EXE\__main__.py&quot;, line 7, in &lt;module&gt; File &quot;\venv\tso_ingestion\lib\site-packages\pylint\__init__.py&quot;, line 36, in run_pylint PylintRun(argv or sys.argv[1:]) File &quot;\venv\tso_ingestion\lib\site-packages\pylint\lint\run.py&quot;, line 213, in __init__ linter.check(args) File &quot;\venv\tso_ingestion\lib\site-packages\pylint\lint\pylinter.py&quot;, line 701, in check with self._astroid_module_checker() as check_astroid_module: File &quot;\AppData\Local\Programs\Python\Python310\lib\contextlib.py&quot;, line 142, in __exit__ next(self.gen) File &quot;\venv\tso_ingestion\lib\site-packages\pylint\lint\pylinter.py&quot;, line 1010, in _astroid_module_checker checker.close() File &quot;\venv\tso_ingestion\lib\site-packages\pylint\checkers\similar.py&quot;, line 875, in close self.add_message(&quot;R0801&quot;, args=(len(couples), &quot;\n&quot;.join(msg))) File &quot;\venv\tso_ingestion\lib\site-packages\pylint\checkers\base_checker.py&quot;, line 164, in add_message self.linter.add_message( File &quot;\venv\tso_ingestion\lib\site-packages\pylint\lint\pylinter.py&quot;, line 1323, in add_message self._add_one_message( File &quot;\venv\tso_ingestion\lib\site-packages\pylint\lint\pylinter.py&quot;, line 1281, in _add_one_message self.reporter.handle_message( File 
&quot;\venv\tso_ingestion\lib\site-packages\pylint\reporters\text.py&quot;, line 208, in handle_message self.write_message(msg) File &quot;\venv\tso_ingestion\lib\site-packages\pylint\reporters\text.py&quot;, line 201, in write_message self.writeln(self._fixed_template.format(**self_dict)) File &quot;\venv\tso_ingestion\lib\site-packages\pylint\reporters\base_reporter.py&quot;, line 64, in writeln print(string, file=self.out) File &quot;\AppData\Local\Programs\Python\Python310\lib\encodings\cp1252.py&quot;, line 19, in encode return codecs.charmap_encode(input,self.errors,encoding_table)[0] UnicodeEncodeError: 'charmap' codec can't encode characters in position 1699-1713: character maps to &lt;undefined&gt; </code></pre> <p>The only change I could see that could possibly result in an encoding error was refactoring a set of asserts that looked like this:</p> <pre><code> assert e_info.type is PageException assert e_info.value.args[0].url == url assert e_info.value.args[0].body == soup.body assert e_info.value.args[0].element == soup.body.find(&quot;form&quot;, id=&quot;form&quot;).find( &quot;a&quot;, string=&quot;料金通知情報一覧&quot; ) assert ( &quot;Onclick event associated with \\'料金通知情報一覧\\' link was missing or malformed&quot; in str(e_info.value.args[0]) ) </code></pre> <p>to look like this:</p> <pre><code> check_page_exception( e_info, url, soup.body, soup.body.find(&quot;form&quot;, id=&quot;form&quot;).find(&quot;a&quot;, string=&quot;料金通知情報一覧&quot;), &quot;Onclick event associated with \\'料金通知情報一覧\\' link was missing or malformed&quot;, ) </code></pre> <p>There are Unicode characters in here, but they've only moved around, so I don't see how this could be causing the error. Does anyone know how to fix this?</p>
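A minimal sketch reproducing the failure outside pylint: the traceback ends in `cp1252.py`, the default Windows console encoding, which cannot represent the Japanese characters that the R0801 duplicate-code message now quotes. Forcing UTF-8 output (for example by setting the `PYTHONUTF8=1` environment variable before running pylint) is a commonly suggested workaround; this snippet only demonstrates the encoding mismatch, not pylint itself.

```python
# cp1252 cannot represent these Japanese characters, so encoding them
# raises the same UnicodeEncodeError that pylint's text reporter hits
# when it prints the R0801 message on a cp1252 console.
text = "料金通知情報一覧"

try:
    text.encode("cp1252")
except UnicodeEncodeError as exc:
    print(type(exc).__name__)  # UnicodeEncodeError

# The same text round-trips fine through UTF-8, which is why forcing
# UTF-8 output (e.g. PYTHONUTF8=1) sidesteps the crash.
print(text.encode("utf-8").decode("utf-8") == text)  # True
```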
<python><unicode><pylint>
2023-07-14 08:46:55
1
8,192
Woody1193
76,685,996
10,735,143
Can't open lib 'ODBC Driver 17 for SQL Server' : file not found (0) (SQLDriverConnect) while connecting with SQLAlchemy
<p>I have searched a lot for the solution but am still struggling with this problem.</p> <p>I'm trying to connect to a SQL Server instance running on 127.0.0.1:1433. However, I'm getting a sqlalchemy.exc.DBAPIError with the following error message:</p> <pre><code>sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('01000', &quot;[01000] [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 17 for SQL Server' : file not found (0) (SQLDriverConnect)&quot;) </code></pre> <p>I think I need to install the ODBC driver, but I'm not sure if it needs to be installed on the SQL Server Docker image or on my local VM. If the answer is the Docker image, then I think my /etc/odbcinst.ini file is correctly configured as follows:</p> <pre><code>[ODBC Driver 17 for SQL Server] Description=Microsoft ODBC Driver 17 for SQL Server Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.10.so.2.1 UsageCount=1 </code></pre> <p>But if the ODBC driver needs to be installed on my local VM, then my /etc/odbcinst.ini file is empty.</p> <p>Here's the Python code I used to connect to the SQL Server instance:</p> <pre class="lang-python prettyprint-override"><code>from sqlalchemy import create_engine server = &quot;127.0.0.1,1433&quot; user = &quot;sa&quot; password = &quot;Pass@12345&quot; db_name = &quot;test_database&quot; engine = create_engine(f'mssql+pyodbc://{user}:{password}@{server}/{db_name}?driver=ODBC Driver 17 for SQL Server') connection = engine.connect() print(&quot;connected&quot;) </code></pre> <p>Another question: what should I do if there is an <code>@</code> in the password?</p> <ul> <li>sqlserver: <code>sqlserver:2022-latest</code> docker image, runs on <code>127.0.0.1:1433</code></li> <li>os: Ubuntu 22.04</li> <li>python: 3.10.6</li> <li>sqlalchemy: 2.0.16</li> </ul> <p>Any help would be greatly appreciated. Thanks!</p>
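On the `@`-in-password part, a hedged sketch: pyodbc and the msodbcsql library are client-side components, so the driver generally belongs on the machine where the Python code runs (here the Ubuntu VM, not the SQL Server container). Independently of that, any `@` in the password must be URL-encoded, because SQLAlchemy otherwise parses everything after the first `@` as the host. The connection URL below reuses the question's credentials purely for illustration.

```python
# URL-encode credentials before building the SQLAlchemy URL; a raw '@'
# would be parsed as the userinfo/host separator.
from urllib.parse import quote_plus

password = quote_plus("Pass@12345")
print(password)  # Pass%4012345

# Illustrative connection URL using the encoded password; spaces in the
# driver name are written as '+'.
url = (
    f"mssql+pyodbc://sa:{password}@127.0.0.1,1433/test_database"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)
print(url)
```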
<python><sql-server><docker><sqlalchemy><odbc>
2023-07-14 08:29:19
1
634
Mostafa Najmi
76,685,950
18,108,367
Are bytecode files really created only for imported modules?
<p>In my system I have created a Python application that is composed of more than one file. The directory structure of the application is the following:</p> <pre><code>application_directory |- file_a.py |- file_b.py |- file_c.py </code></pre> <p>The script <code>file_c.py</code> contains the following imports:</p> <pre><code>import file_a from file_b import ClassB </code></pre> <p>When I execute the script <code>file_c.py</code> from the command line:</p> <pre><code>python3 file_c.py </code></pre> <p>a directory called <code>__pycache__</code> is created in <code>application_directory</code>, and inside it I find two files containing the Python bytecode, named:</p> <pre><code>__pycache__/file_a.cpython-310.pyc __pycache__/file_b.cpython-310.pyc </code></pre> <p><strong>Note</strong>: I have read <a href="https://nedbatchelder.com/blog/201803/is_python_interpreted_or_compiled_yes.html" rel="nofollow noreferrer">this link</a> and <a href="https://opensource.com/article/18/4/introduction-python-bytecode" rel="nofollow noreferrer">this other link</a> to get information about the concepts of compilation in Python, bytecode, and the Python interpreter as a virtual machine.</p> <p>To understand why these files are created, I have searched a lot and found many links about this topic. The clearest may be <a href="https://stackoverflow.com/questions/6889747/is-python-interpreted-or-compiled-or-both/57390010#57390010">this answer</a>, which seems perfect for understanding what happens in my context.</p> <p>The answer contains the following sentence:</p> <blockquote> <p>Running a script is not considered an import and no .pyc will be created.</p> </blockquote> <p>So I am asking: are bytecode files created only for imported modules? 
Why are imported modules handled differently from scripts that are not imported?</p> <hr /> <p><strong>EDIT</strong>: I have found <a href="https://stackoverflow.com/questions/17713633/how-much-of-a-speedup-does-bytecode-compilation-give-python-code/17714409#17714409">this other answer</a> very useful for better understanding how CPython works, the concept of bytecode, and the <code>.pyc</code> files saved to the hard disk.</p>
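As a side note, CPython can be asked directly where it would cache the bytecode for a given source file, without compiling or importing anything. A small sketch (the exact file name in the output depends on the interpreter version running it):

```python
# importlib exposes the mapping from a source file to its cached
# bytecode path; this only computes the path, it does not compile.
import importlib.util

path = importlib.util.cache_from_source("file_a.py")
print(path)  # e.g. __pycache__/file_a.cpython-310.pyc on Python 3.10
```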
<python><module><bytecode><cpython>
2023-07-14 08:23:38
1
2,658
User051209
76,685,933
3,371,250
How to perform a left join while ignoring a default value?
<p>Table df1 (a Pandas dataframe):</p> <pre><code> id1 type cat 0 1 a 0 1 1 b 0 2 2 a 8 3 2 a 9 </code></pre> <p>Table df2:</p> <pre><code> id2 type cat 0 0 a 8 1 0 a 9 2 1 a 8 3 1 a 9 4 2 a 8 5 3 a 9 6 4 b 7 7 5 b 8 </code></pre> <p>I am performing a left join:</p> <pre><code>merged = pd.merge(df1, df2, how='left') </code></pre> <p>I am again doing this with Pandas, but I could use SQL, and I get:</p> <pre><code> id1 type cat id2 0 1 a 0 NaN 1 1 b 0 NaN 2 2 a 8 0.0 3 2 a 8 1.0 4 2 a 8 2.0 5 2 a 9 0.0 6 2 a 9 1.0 7 2 a 9 3.0 </code></pre> <p>What I want, though, is to treat the <code>0</code> in the <code>cat</code> column of df1 as a wildcard that matches ALL categories. It should not constrain the match and should therefore be excluded from the merge keys.</p> <p>In this example, the result would include all rows from df2, because only the <code>type</code> column would have to match.</p>
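One possible approach, sketched below under the assumption that `0` only ever appears in df1's `cat` column and never as a real category in df2: split df1 into wildcard rows merged on `type` alone and exact rows merged on both keys, then concatenate the two results.

```python
# Sketch: treat cat == 0 in df1 as a wildcard by merging those rows on
# "type" only; all other rows merge on both "type" and "cat".
import pandas as pd

df1 = pd.DataFrame({"id1": [1, 1, 2, 2],
                    "type": ["a", "b", "a", "a"],
                    "cat": [0, 0, 8, 9]})
df2 = pd.DataFrame({"id2": [0, 0, 1, 1, 2, 3, 4, 5],
                    "type": ["a", "a", "a", "a", "a", "a", "b", "b"],
                    "cat": [8, 9, 8, 9, 8, 9, 7, 8]})

# Wildcard rows: drop the placeholder cat so df2's cat survives the merge.
wild = df1[df1["cat"].eq(0)].drop(columns="cat").merge(df2, on="type", how="left")
exact = df1[df1["cat"].ne(0)].merge(df2, on=["type", "cat"], how="left")

merged = pd.concat([wild, exact], ignore_index=True)
print(merged)  # 14 rows: 8 wildcard matches plus 6 exact matches
```

The wildcard half picks up df2's `cat` values, so the concatenated frame keeps a meaningful `cat` for every matched row.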
<python><sql><pandas><merge><left-join>
2023-07-14 08:21:41
1
571
Ipsider