Dataset schema (value ranges shown as min to max):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, length 15 to 150
QuestionBody: string, length 40 to 40.3k
Tags: string, length 8 to 101
CreationDate: string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, length 3 to 30
77,676,556
2,955,827
What's the correct way to use a user-local Python environment under PEP 668?
<p>I tried to install Python packages on Ubuntu 24.04, but found I cannot do it the way I did on 22.04, with <code>--user</code>.</p> <p><a href="https://peps.python.org/pep-0668/" rel="nofollow noreferrer">PEP 668</a> says this is to avoid conflicts between system-wide and user-installed packages.</p> <p>Example:</p> <pre class="lang-none prettyprint-override"><code>$ pip install setuptools --user
error: externally-managed-environment

× This environment is externally managed
╰─&gt; To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.

    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.

    If you wish to install a non-Debian packaged Python application,
    it may be easiest to use pipx install xyz, which will manage a
    virtual environment for you. Make sure you have pipx installed.

    See /usr/share/doc/python3.11/README.venv for more information.

note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
</code></pre> <p>I am really confused by the current rules and cannot install any package into a <strong>user-local env</strong>.</p> <p>How can I manage my user-local environment now? And how can I use the latest <code>pip</code> (not the Linux-distro version) and other packages by default for the current user?</p> <p>My environment (Dockerfile, just to reproduce):</p> <pre class="lang-none prettyprint-override"><code>FROM ubuntu:24.04
# add python
RUN apt update &amp;&amp; apt install -y python3-pip python3-venv python-is-python3 pipx
USER ubuntu
WORKDIR /app
</code></pre> <p>I know I can use environment-management tools (e.g. pyenv) to do that, but is there any built-in method to bring my user-local env back?</p>
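One built-in route, sketched under the assumption that a single user-level venv is acceptable: create it with the stdlib `venv` module and put its `bin` directory first on `PATH`. The `~/.venvs/default` location is illustrative, not mandated by PEP 668; a temporary directory is used here so the sketch is side-effect free.

```python
# A sketch, assuming a layout like ~/.venvs/default in real use; a temporary
# directory stands in for it here so the example has no lasting side effects.
import subprocess
import sys
import tempfile
import venv
from pathlib import Path

env_dir = Path(tempfile.mkdtemp()) / "default"  # real use: Path.home()/".venvs"/"default"
venv.EnvBuilder(with_pip=True).create(env_dir)

# The venv's own pip is not "externally managed", so installs succeed there,
# and upgrading pip inside it yields the latest pip instead of the distro build.
bin_dir = env_dir / ("Scripts" if sys.platform == "win32" else "bin")
subprocess.run([str(bin_dir / "pip"), "--version"], check=True)
```

Adding that `bin` directory to `PATH` in `~/.bashrc` (or sourcing the venv's `activate` there) makes it the default for the user, which is roughly what `pipx` automates per application.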
<python>
2023-12-18 01:23:40
6
3,295
PaleNeutron
77,676,553
10,200,497
Finding the first row that meets conditions of a mask and selecting one row after it that meets a condition
<p>This is an extension to this <a href="https://stackoverflow.com/questions/77651219/finding-the-first-row-that-meets-conditions-of-a-mask-and-selecting-one-row-afte">post</a>.</p> <p>My dataframe is:</p> <pre><code>import pandas as pd df = pd.DataFrame( { 'a': [100, 1123, 123, 100, 1, 0, 1], 'b': [1000, 11123, 1123, 0, 55, 0, 1], 'c': [100, 1123, 123, 999, 11, 50, 1], 'd': [100, 1123, 123, 190, 1, 105, 1], 'e': ['a', 'b', 'c', 'd', 'e', 'f', 'g'], } ) </code></pre> <p>And this is the output that I want. I need to create column <code>x</code>:</p> <pre><code> a b c d e x 0 100 1000 100 100 a NaN 1 1123 11123 1123 1123 b NaN 2 123 1123 123 123 c NaN 3 100 0 999 190 d NaN 4 1 55 11 1 e NaN 5 0 0 50 105 f f 6 1 1 1 1 g NaN </code></pre> <p>My mask is:</p> <pre><code>mask = (df.a &gt; df.b) </code></pre> <p>And these are the steps needed:</p> <p>a) Find the first row that meets conditions of the mask.</p> <p>b) Get the value of column <code>a</code> of the above step.</p> <p>c) Find the first row that the above value is between columns <code>c</code> and <code>d</code>. Being equal to one of them is also OK.</p> <p>d) Get the value in column <code>e</code> and create column <code>x</code>.</p> <p>For example for the above dataframe:</p> <p>a) First row of mask is row <code>3</code>.</p> <p>b) The value of column <code>a</code> is 100.</p> <p>c) From rows that are after the mask (4, 5, ...) the first row that 100 is between columns <code>c</code> and <code>d</code> is row 5. 
</p> <p>d) So 'f' is chosen for column <code>x</code>.</p> <p>This image clarifies the above steps:</p> <p><a href="https://i.sstatic.net/TSpfA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TSpfA.png" alt="enter image description here" /></a></p> <p>This is what I have tried:</p> <pre><code>mask = (df.a &gt; df.b)
val = df.loc[mask.cumsum().eq(1) &amp; mask, 'a']
</code></pre> <p>I would prefer the solution to be as generic as possible, like this <a href="https://stackoverflow.com/a/77651477/10200497">answer</a>, if possible.</p> <p>I have provided some additional dataframes in case you need to test the code under other, subtly different conditions. For instance, what if no row meets the conditions of the mask? In that case column <code>x</code> is all <code>NaN</code>s. Column names are all the same as in the above <code>df</code>.</p> <pre><code>df = pd.DataFrame({'a': [100, 1123, 123, -1, 1, 0, 1], 'b': [1000, 11123, 1123, 0, 55, 0, 1], 'c': [100, 1123, 123, 999, 11, 50, 1], 'd': [100, 1123, 123, 190, 1, 105, 1], 'e': ['a', 'b', 'c', 'd', 'e', 'f', 'g']})
df = pd.DataFrame({'a': [100, 1123, 123, 100, 1, 0, 1], 'b': [1000, 11123, 1123, 0, 55, 0, 1], 'c': [100, 1123, 123, 999, 11, -1, 1], 'd': [100, 1123, 123, 190, 1, 10, 1], 'e': ['a', 'b', 'c', 'd', 'e', 'f', 'g']})
df = pd.DataFrame({'a': [100, 1123, 123, 1, 1, 0, 100], 'b': [1000, 11123, 1123, 0, 55, 0, 1], 'c': [100, 1123, 123, 999, 11, -1, 50], 'd': [100, 1123, 123, 190, 1, 10, 101], 'e': ['a', 'b', 'c', 'd', 'e', 'f', 'g']})
df = pd.DataFrame({'a': [100, 1123, 123, 100, 1, 1000, 1], 'b': [1000, 11123, 1123, 0, 55, 0, 1], 'c': [100, 1123, 123, 999, 11, 50, 500], 'd': [100, 1123, 123, 190, 1, 105, 2000], 'e': ['a', 'b', 'c', 'd', 'e', 'f', 'g']})
</code></pre>
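For reference, one possible generic approach to the steps above (a sketch, not necessarily the accepted answer): find the first masked row with `idxmax`, then restrict the interval check to later rows; taking `min`/`max` across `c` and `d` makes the between-check independent of which column happens to be larger.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'a': [100, 1123, 123, 100, 1, 0, 1],
    'b': [1000, 11123, 1123, 0, 55, 0, 1],
    'c': [100, 1123, 123, 999, 11, 50, 1],
    'd': [100, 1123, 123, 190, 1, 105, 1],
    'e': list('abcdefg'),
})

mask = df.a > df.b
x = pd.Series(np.nan, index=df.index, dtype=object)  # stays all-NaN if nothing matches
if mask.any():
    first = mask.idxmax()             # (a) first row satisfying the mask
    val = df.loc[first, 'a']          # (b) its value in column a
    lo = df[['c', 'd']].min(axis=1)   # order-independent interval bounds
    hi = df[['c', 'd']].max(axis=1)
    hit = (df.index > first) & (lo <= val) & (val <= hi)
    if hit.any():                     # (c) first interval hit after the masked row
        x[hit.idxmax()] = df.loc[hit.idxmax(), 'e']  # (d)
df['x'] = x
print(df)
```

On the sample data this leaves `x` as NaN everywhere except row 5, which gets `'f'`.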
<python><pandas><dataframe>
2023-12-18 01:21:22
2
2,679
AmirX
77,676,414
3,929,525
How to train a model for a dataset of images with 12 bands using TensorFlow's pix2pix notebook?
<p>I'm using TensorFlow's <a href="https://www.tensorflow.org/tutorials/generative/pix2pix" rel="nofollow noreferrer">pix2pix: Image-to-image translation with a conditional GAN notebook</a> to train a model for my dataset, which consists of <strong>multispectral satellite images with 12 bands, like 512 x 512 x 12</strong>.</p> <p>As the original notebook is written for images with 256 x 256 x 3 dimensions, I had to apply some changes to a few of the important functions like <code>load()</code>, <code>Generator()</code>, <code>Discriminator()</code> and <code>generate_images()</code>.</p> <p>My images have more than 3 bands (channels), specifically 12 bands. So, I have stored each group of 3 bands as an RGB (.png) image with a depth of 16 bits. That's why I changed my load function so that it reads all four images that form a 12-band image and creates inputs with the shape 512 x 512 x 12.</p> <p>The <code>Generator()</code> function was also changed like below:</p> <pre><code>def Generator():
    input_shape = (512, 512, 12)
    inputs = tf.keras.layers.Input(shape=input_shape)

    # Downsampling layers
    down_stack = [
        downsample(64, 4, apply_batchnorm=False),  # First layer
        downsample(128, 4),  # Batchnorm applied
        downsample(256, 4),
        downsample(512, 4),
        downsample(512, 4),
        downsample(512, 4),
        downsample(512, 4),
        downsample(512, 4)
    ]

    # Upsampling layers
    up_stack = [
        upsample(512, 4, apply_dropout=True),  # Apply dropout in the first 3 layers
        upsample(512, 4, apply_dropout=True),
        upsample(512, 4, apply_dropout=True),
        upsample(512, 4),
        upsample(256, 4),
        upsample(128, 4),
        upsample(64, 4)
    ]

    initializer = tf.random_normal_initializer(0., 0.02)
    last = tf.keras.layers.Conv2DTranspose(OUTPUT_CHANNELS, 4, strides=2,
                                           padding='same',
                                           kernel_initializer=initializer,
                                           activation='tanh')  # OUTPUT_CHANNELS should match your requirement

    x = inputs

    # Downsampling through the model
    skips = []
    for down in down_stack:
        x = down(x)
        skips.append(x)

    skips = reversed(skips[:-1])

    # Upsampling and establishing the skip connections
    for up, skip in zip(up_stack, skips):
        x = up(x)
        if skip is not None:
            x = tf.keras.layers.Concatenate()([x, skip])

    x = last(x)

    return tf.keras.Model(inputs=inputs, outputs=x)
</code></pre> <p>And I changed the <code>Discriminator()</code> like below:</p> <pre><code>def Discriminator():
    initializer = tf.random_normal_initializer(0., 0.02)

    # Input and target images will have 12 channels each
    inp = tf.keras.layers.Input(shape=[512, 512, 12], name='input_image')
    tar = tf.keras.layers.Input(shape=[512, 512, 12], name='target_image')

    # Concatenate input and target images
    x = tf.keras.layers.concatenate([inp, tar])  # (batch_size, 512, 512, 24)

    down1 = downsample(64, 4, False)(x)  # (batch_size, 256, 256, 64)
    down2 = downsample(128, 4)(down1)    # (batch_size, 128, 128, 128)
    down3 = downsample(256, 4)(down2)    # (batch_size, 64, 64, 256)

    zero_pad1 = tf.keras.layers.ZeroPadding2D()(down3)  # (batch_size, 66, 66, 256)
    conv = tf.keras.layers.Conv2D(512, 4, strides=1,
                                  kernel_initializer=initializer,
                                  use_bias=False)(zero_pad1)  # (batch_size, 63, 63, 512)
    batchnorm1 = tf.keras.layers.BatchNormalization()(conv)
    leaky_relu = tf.keras.layers.LeakyReLU()(batchnorm1)
    zero_pad2 = tf.keras.layers.ZeroPadding2D()(leaky_relu)  # (batch_size, 65, 65, 512)
    last = tf.keras.layers.Conv2D(1, 4, strides=1,
                                  kernel_initializer=initializer)(zero_pad2)  # (batch_size, 62, 62, 1)

    return tf.keras.Model(inputs=[inp, tar], outputs=last)
</code></pre> <p>Finally, I changed the <code>generate_images()</code> function like below:</p> <pre><code>def generate_images(model, test_input, tar, save_dir='Some_Directory'):
    prediction = model(test_input, training=True)
    num_bands = 12
    for i in range(0, num_bands, 3):
        bands = [i, i+1, i+2] if (i+2) &lt; num_bands else [i, i+1, num_bands-1]
        plt.figure(figsize=(15, 5))
        display_list = [test_input[0], tar[0], prediction[0]]
        title = ['Input Image', 'Ground Truth', 'Predicted Image']
        for j in range(3):
            plt.subplot(1, 3, j+1)
            plt.title(title[j])
            # Normalize and select the bands for visualization
            image_display = tf.stack([display_list[j][..., band] for band in bands], axis=-1)
            image_display = (image_display + 1) / 2  # Rescale to [0, 1]
            plt.imshow(image_display)
            plt.axis('off')
        # Ensure the save directory exists
        os.makedirs(save_dir, exist_ok=True)
        plt.savefig(os.path.join(save_dir, f'generated_image_bands_{i}-{i+1}-{i+2}.png'))
        plt.close()

# Usage example
for example_input, example_target in test_dataset.take(1):
    generate_images(generator, example_input, example_target)
</code></pre> <p>After these changes, when I run the <code>fit()</code> function to train the model using the code below:</p> <pre><code>fit(train_dataset, test_dataset, steps=40000)
</code></pre> <p>Usually after some number of steps (anywhere from 1 to a few thousand), I get errors that are usually related to the shape of the tensors.</p> <p><strong>These errors are like below:</strong></p> <pre><code>tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __wrapped__StridedSlice_device_/job:localhost/replica:0/task:0/device:GPU:0}} slice index 1 of dimension 0 out of bounds. [Op:StridedSlice] name: strided_slice/
[[{{node EagerPyFunc}}]] [Op:IteratorGetNext] name:

InvalidArgumentError: {{function_node __wrapped__StridedSlice_device_/job:localhost/replica:0/task:0/device:GPU:0}} Expected begin, end, and strides to be 1D equal size tensors, but got shapes [1], [1], and [3] instead. [Op:StridedSlice] name: strided_slice/

InvalidArgumentError: Exception encountered when calling layer 'conv2d_transpose_8' (type Conv2DTranspose). {{function_node __wrapped__StridedSlice_device_/job:localhost/replica:0/task:0/device:GPU:0}} Expected begin, end, and strides to be 1D equal size tensors, but got shapes [1], [3], and [3] instead.
[Op:StridedSlice] name: model/conv2d_transpose_8/strided_slice/ Call arguments received by layer 'conv2d_transpose_8' (type Conv2DTranspose): • inputs=tf.Tensor(shape=(1, 256, 256, 128), dtype=float32) </code></pre> <p>Also, it is worth mentioning that sometimes when I run the <code>fit()</code> function, I get warnings like below but I don't think that these are causing the issue because I had similar errors in previous functions as well but they didn't affect the result.</p> <pre><code>WARNING:tensorflow:5 out of the last 5 calls to &lt;function _BaseOptimizer._update_step_xla at 0x7927400fc1f0&gt; triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. WARNING:tensorflow:6 out of the last 6 calls to &lt;function _BaseOptimizer._update_step_xla at 0x7927400fc1f0&gt; triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. 
</code></pre> <p>I have tried changing the <code>BATCH_SIZE</code> from 1 to 2, reducing the number of samples significantly and changing some attributes inside the <code>Generator()</code> like changing the <code>apply_batchnorm</code> to True or False. How can I fix the issue and train my model for images with <strong>512 x 512 x 12</strong> dimensions?</p> <p>**Edit: **Here is the result of running <code>generator.summary()</code>:</p> <pre><code> Model: &quot;model&quot; __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 512, 512, 12)] 0 [] sequential_2 (Sequential) (None, 256, 256, 64) 12288 ['input_1[0][0]'] sequential_3 (Sequential) (None, 128, 128, 128) 131584 ['sequential_2[0][0]'] sequential_4 (Sequential) (None, 64, 64, 256) 525312 ['sequential_3[0][0]'] sequential_5 (Sequential) (None, 32, 32, 512) 2099200 ['sequential_4[0][0]'] sequential_6 (Sequential) (None, 16, 16, 512) 4196352 ['sequential_5[0][0]'] sequential_7 (Sequential) (None, 8, 8, 512) 4196352 ['sequential_6[0][0]'] sequential_8 (Sequential) (None, 4, 4, 512) 4196352 ['sequential_7[0][0]'] sequential_9 (Sequential) (None, 2, 2, 512) 4196352 ['sequential_8[0][0]'] sequential_10 (Sequential) (None, 4, 4, 512) 4196352 ['sequential_9[0][0]'] concatenate (Concatenate) (None, 4, 4, 1024) 0 ['sequential_10[0][0]', 'sequential_8[0][0]'] sequential_11 (Sequential) (None, 8, 8, 512) 8390656 ['concatenate[0][0]'] concatenate_1 (Concatenate (None, 8, 8, 1024) 0 ['sequential_11[0][0]', ) 'sequential_7[0][0]'] sequential_12 (Sequential) (None, 16, 16, 512) 8390656 ['concatenate_1[0][0]'] concatenate_2 (Concatenate (None, 16, 16, 1024) 0 ['sequential_12[0][0]', ) 'sequential_6[0][0]'] sequential_13 (Sequential) (None, 32, 32, 512) 8390656 ['concatenate_2[0][0]'] concatenate_3 
(Concatenate (None, 32, 32, 1024) 0 ['sequential_13[0][0]', ) 'sequential_5[0][0]'] sequential_14 (Sequential) (None, 64, 64, 256) 4195328 ['concatenate_3[0][0]'] concatenate_4 (Concatenate (None, 64, 64, 512) 0 ['sequential_14[0][0]', ) 'sequential_4[0][0]'] sequential_15 (Sequential) (None, 128, 128, 128) 1049088 ['concatenate_4[0][0]'] concatenate_5 (Concatenate (None, 128, 128, 256) 0 ['sequential_15[0][0]', ) 'sequential_3[0][0]'] sequential_16 (Sequential) (None, 256, 256, 64) 262400 ['concatenate_5[0][0]'] concatenate_6 (Concatenate (None, 256, 256, 128) 0 ['sequential_16[0][0]', ) 'sequential_2[0][0]'] conv2d_transpose_8 (Conv2D (None, 512, 512, 12) 24588 ['concatenate_6[0][0]'] Transpose) ================================================================================================== Total params: 54453516 (207.72 MB) Trainable params: 54442636 (207.68 MB) Non-trainable params: 10880 (42.50 KB) __________________________________________________________________________________________________ </code></pre>
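Since the failures appear only after some number of steps, they usually point at individual dataset elements whose shapes differ from the declared ones (e.g. one PNG with a different size or band count). A framework-free sketch of a load-time guard (the function name is hypothetical) that fails fast on the offending sample instead of mid-`fit()`; inside a `tf.data` pipeline, the analogous guard is `tf.ensure_shape(x, [512, 512, 12])` after the `tf.py_function` call, which also restores the static shape information that `py_function` erases.

```python
import numpy as np

def stack_bands(parts, size=(512, 512)):
    """Combine the four HxWx3 PNG arrays into one HxWx12 array, validating
    every shape so a malformed file fails at load time, not mid-training."""
    arrs = [np.asarray(p) for p in parts]
    if len(arrs) != 4:
        raise ValueError(f"expected 4 three-band images, got {len(arrs)}")
    for i, a in enumerate(arrs):
        if a.shape != (*size, 3):
            raise ValueError(f"part {i} has shape {a.shape}, expected {(*size, 3)}")
    return np.concatenate(arrs, axis=-1)

# Demo on small random 16-bit "images"
parts = [np.random.randint(0, 65535, (8, 8, 3), dtype=np.uint16) for _ in range(4)]
img = stack_bands(parts, size=(8, 8))
print(img.shape)  # (8, 8, 12)
```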
<python><tensorflow><deep-learning><generative-adversarial-network><cgan>
2023-12-17 23:59:46
1
1,312
Naser.Sadeghi
77,676,117
6,394,617
Why does Python read(3) seek to the end of the file?
<p>I am trying to understand file IO in Python, with the different modes for open, and reading and writing to the same file object (just self-learning).</p> <p>I was surprised by the following code (which was just me exploring):</p> <pre class="lang-py prettyprint-override"><code>with open('blah.txt', 'w') as f:
    # create a file with 13 characters
    f.write(&quot;hello, world!&quot;)

with open('blah.txt', 'r+') as f:
    for i in range(5):
        # write one character
        f.write(str(i))
        # then print current position and next 3 characters
        print(f&quot;{f.tell()}: &quot;, f&quot;{f.read(3)}&quot;)

with open('blah.txt', 'r') as f:
    # look at how the file was modified
    print(f.read())
</code></pre> <p>Which outputs:</p> <pre class="lang-py prettyprint-override"><code>1: ell
14:
15:
16:
17:
0ello, world!1234
</code></pre> <p>As I expected, the first character was overwritten by <code>0</code>, then the next 3 characters read were <code>ell</code>, <em><strong>but</strong></em> I expected the <code>1</code> to be written over the <code>o</code> in <code>hello</code>, then the next 3 characters read to be <code>, w</code>.</p> <p>I'm reading the docs <a href="https://docs.python.org/3/tutorial/inputoutput.html#methods-of-file-objects" rel="nofollow noreferrer">here</a>, but I don't see where it explains the behavior that I observed.</p> <p>It appears that the first read, no matter what the size, seeks to the end of the file.</p> <p>Can anyone provide a link to where this is explained in the docs?</p> <p>I tried searching for a similar question on this site, but while there were many questions related to read, none that I found mentioned this behavior.</p> <p><em><strong>UPDATE</strong></em></p> <p>After more exploration, it is not the first <code>read</code> that seeks to the end of the file, but rather the second write that does. Again, I'm not sure why, which is why I'm hoping to find somewhere in the docs that explains this behavior.</p> <p>Here's my change to the code above that shows that it's not the first read:</p> <pre class="lang-py prettyprint-override"><code>with open('blah.txt', 'w') as f:
    # create a file with 13 characters
    f.write(&quot;hello, world!&quot;)

with open('blah.txt', 'r+') as f:
    for i in range(3):
        # write one character
        f.write(str(i))
        # then print current position and next 3 characters
        print(f&quot;{f.tell()}: &quot;, f&quot;{f.read(3)}&quot;)
        print(f&quot;{f.tell()}: &quot;, f&quot;{f.read(3)}&quot;)

with open('blah.txt', 'r') as f:
    # look at how the file was modified
    print(f.read())
</code></pre> <p>Which outputs:</p> <pre class="lang-py prettyprint-override"><code>1: ell
4: o,
14:
14:
15:
15:
0ello, world!12
</code></pre>
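What the update observes is consistent with buffered read-ahead: `read(3)` pulls a larger chunk into the text layer's buffer, leaving the OS-level file offset at EOF, so the next write lands there. A sketch of the usual remedy: re-synchronize with an explicit `seek` (fed from `tell`) whenever switching from reading back to writing. A temporary directory is used so the example is side-effect free.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'blah.txt')

with open(path, 'w') as f:
    f.write("hello, world!")

with open(path, 'r+') as f:
    f.write("0")                              # overwrites 'h' at position 0
    print(f"{f.tell()}: ", f"{f.read(3)}")    # 1:  ell
    f.seek(f.tell())                          # re-sync the stream before writing again
    f.write("1")                              # now lands on the 'o', not at EOF

with open(path) as f:
    final = f.read()
    print(final)                              # 0ell1, world!
```

With the intervening `seek`, the second write overwrites the `o` as originally expected instead of appending.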
<python><io>
2023-12-17 21:43:59
1
913
Joe
77,675,936
6,272,006
Extracting Discount Factors with reference from Bond Settlement Date instead of Evaluation Date in QuantLib
<p>I have bootstrapped a yield curve and I have managed to extract discount factors from it, but the discount factors reference the Evaluation Date. These discount factors are used to calculate the PV of the bond, but they cannot be used to calculate the Dirty Price of the bond if the Evaluation Date is different from the Bond Settlement Date. How do I extract the discount factors referenced from the Bond Settlement Date instead of the Evaluation Date? Hoping that my question is clear.</p> <p>Find below the part of the code that I have used to try to extract the discount factors:</p> <pre><code>fields = ['accrualStartDate', 'accrualEndDate', 'date', 'nominal', 'rate', 'amount', 'accrualDays', 'accrualPeriod']
BondCashflows = []
for cf in list(map(ql.as_fixed_rate_coupon, bond.cashflows()))[:-1]:
    row = {fld: eval(f&quot;cf.{fld}()&quot;) for fld in fields}
    row['AccrualPeriod'] = round((row['accrualEndDate'] - row['accrualStartDate']) / 365, 4)
    if row['date'] &gt;= today:
        row['ZeroRate (NPV)'] = round(curve.zeroRate(row['date'], day_count, ql.Compounded, ql.Annual).rate(), 9)
        row['ZeroRate (Dirty Price)'] = round(curve.forwardRate(bond.settlementDate(), row['date'], day_count, ql.Compounded, ql.Annual).rate(), 9)
        row['DiscFactor (NPV)'] = round(curve.discount(row['date']), 9)
        row['DiscFactor (Dirty Price)'] = round(curve.discount(bond.settlementDate(), row['date']), 9)
    else:
        row['ZeroRate (NPV)'] = 0
        row['ZeroRate (Dirty Price)'] = 0
        row['DiscFactor (NPV)'] = 0  # or any other appropriate handling for dates before today
        row['DiscFactor (Dirty Price)'] = 0  # or any other appropriate handling for dates before today
    row['NPV'] = round(row['DiscFactor (NPV)'] * row['amount'], 9)
    BondCashflows.append(row)

BondCashflows = pd.DataFrame(BondCashflows)
print(BondCashflows)
</code></pre>
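Whatever the curve object, a settlement-referenced discount factor is the ratio of two evaluation-date factors: P(settle, pay) = P(eval, pay) / P(eval, settle); in QuantLib terms that corresponds to dividing `curve.discount(pay_date)` by `curve.discount(bond.settlementDate())`. A self-contained sketch with a hypothetical flat, continuously compounded curve:

```python
import math

def discount_from_settlement(rate, t_pay, t_settle):
    """P(settle, pay) on a hypothetical flat continuously compounded curve;
    t_pay and t_settle are year fractions measured from the evaluation date."""
    df_pay = math.exp(-rate * t_pay)        # evaluation-date DF to the payment date
    df_settle = math.exp(-rate * t_settle)  # evaluation-date DF to settlement
    return df_pay / df_settle

# Dirty price = sum of cashflows discounted with these settlement-based factors.
print(discount_from_settlement(0.05, 1.0, 0.1))
```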
<python><quantitative-finance><quantlib><quantlib-swig>
2023-12-17 20:39:24
1
303
ccc
77,675,878
20,803,947
Circular import in a Python Flask app (docker-compose)
<p>I'm getting this error in my Flask container when it starts with docker-compose up:</p> <pre><code>web_1  | Traceback (most recent call last):
web_1  |   File &quot;/app/app.py&quot;, line 3, in &lt;module&gt;
web_1  |     from presentation.controllers.notifications import notifications_router
web_1  |   File &quot;/app/presentation/controllers/notifications.py&quot;, line 4, in &lt;module&gt;
web_1  |     from app.presentation.models.notifications_body import NotificationsBody
web_1  |   File &quot;/app/app.py&quot;, line 3, in &lt;module&gt;
web_1  |     from presentation.controllers.notifications import notifications_router
web_1  | ImportError: cannot import name 'notifications_router' from partially initialized module 'presentation.controllers.notifications' (most likely due to a circular import) (/app/presentation/controllers/notifications.py)
</code></pre> <p>My folder structure is: <a href="https://i.sstatic.net/aTXdV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aTXdV.png" alt="enter image description here" /></a></p> <p>If I run this outside docker-compose it works normally, but in docker-compose it gives an error. Does anyone know how to fix it?</p> <p>This is my docker-compose.yml:</p> <pre><code>version: '3.9'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - &quot;5000:5000&quot;
    depends_on:
      - mongodb
      - rabbitmq
      - nginx

  nginx:
    image: nginx:latest
    ports:
      - &quot;80:80&quot;
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro

  mongodb:
    image: mongo:latest
    ports:
      - &quot;27017:27017&quot;
    environment:
      - MONGO_INITDB_DATABASE=test
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin
      - MONGO_URI=&quot;mongodb://admin:admin@mongodb:27017/&quot;
    volumes:
      - mongodb_data:/data/db

  mongo-express:
    image: mongo-express:latest
    ports:
      - &quot;8081:8081&quot;
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=admin
      - ME_CONFIG_MONGODB_SERVER=mongodb
      - ME_CONFIG_MONGODB_ENABLE_ADMIN=true
      - ME_CONFIG_BASICAUTH_USERNAME=admin
      - ME_CONFIG_BASICAUTH_PASSWORD=admin
    depends_on:
      - mongodb

  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    restart: always
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=admin
      - RABBITMQ_DEFAULT_VHOST=/
    ports:
      - 15672:15672
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq

volumes:
  mongodb_data:
  rabbitmq_data:
</code></pre> <p>app.py</p> <pre><code>from app.infrastructure.server import entrypoint

app = entrypoint.start()

if __name__ == '__main__':
    app.run(host='0.0.0.0')
</code></pre> <p>entrypoint.py</p> <pre><code>from flask import Flask
from app.infrastructure.server import url_mapping

def start():
    &quot;&quot;&quot;
    Starts the application

    Returns:
        Flask: Flask application
    &quot;&quot;&quot;
    app = Flask(__name__)
    app.url_map.strict_slashes = False
    url_mapping.map_routes(app)
    app.app_context().push()
    return app
</code></pre> <p>url_mapping.py</p> <pre><code>from flask import Flask
from app.presentation.controllers.notifications import notifications_router
from app.presentation.controllers.ping_ctr import ping_router

def map_routes(application: Flask) -&gt; None:
    &quot;&quot;&quot;
    Map routes

    Args:
        application (Flask): Flask application
    &quot;&quot;&quot;
    application.register_blueprint(ping_router)
    application.register_blueprint(notifications_router)
</code></pre> <p>Can someone help with this? Thanks!</p>
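The traceback shows the cycle: `/app/app.py` imports `presentation.controllers.notifications`, which imports back through the top-level name `app`, and inside the container that name resolves to `/app/app.py` itself. Two conventional fixes are (a) using one consistent absolute prefix everywhere (either always `app.presentation...` or always `presentation...`), or (b) deferring the back-reference to call time. A throwaway two-module sketch of the deferral (module names are hypothetical):

```python
import sys
import tempfile
import textwrap
from pathlib import Path

pkg = Path(tempfile.mkdtemp())

# Stand-in for app.py: eagerly imports the controller module at load time.
(pkg / "appmod_demo.py").write_text(textwrap.dedent("""
    from controllers_demo import router
    started = router()
"""))

# Stand-in for the controller: the back-reference is deferred into the
# function body, so it resolves at call time, after appmod_demo is loading
# and already present in sys.modules; the cycle no longer deadlocks.
(pkg / "controllers_demo.py").write_text(textwrap.dedent("""
    def router():
        import appmod_demo  # deferred import breaks the cycle
        return "ok"
"""))

sys.path.insert(0, str(pkg))
import appmod_demo
print(appmod_demo.started)  # ok
```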
<python><docker><flask><docker-compose><circular-reference>
2023-12-17 20:17:35
1
309
Louis
77,675,830
16,425,029
channel_layer.send() not sending messages to unique channel_names based on username - Django channels
<p>I am trying to implement a way to send user-specific messages to users. I have seen the docs where they advise storing the channel_name in the database and removing it on disconnect, but I think that will be a burden on the database, as the channel_name changes whenever the user connects to the consumer. So I tried the method below during accept, where I assign the user's user_name as a unique channel name.</p> <p><strong>consumer</strong></p> <pre><code>class ChatGenericAsyncConsumer(AsyncWebsocketConsumer):
    &quot;&quot;&quot;this is an async consumer&quot;&quot;&quot;

    async def connect(self):
        self.user = self.scope[&quot;user&quot;]
        if self.user.is_authenticated:
            print(
                f&quot;authentication successful connection accepted for {self.user.user_name}&quot;
            )
            self.username = f&quot;{self.user.user_name}&quot;
            self.channel_name = f&quot;{self.user.user_name}&quot;
            await self.accept()
        else:
            print(&quot;authentication unsuccessful connection closed&quot;)
            await self.close(code=4123)

    async def receive(self, text_data=None, bytes_data=None):
        pass

    async def disconnect(self, code):
        await self.close(code=4123)

    # this is the event handler of 'chat.message'
    async def chat_message(self, event):
        &quot;&quot;&quot;
        this method handles the sending of message to the group.
        this is same as chat.message
        &quot;&quot;&quot;
        # sending message to the group
        print(event[&quot;data&quot;])
        await self.send(text_data=event[&quot;data&quot;])
</code></pre> <p>Then I tried to send a message from outside the consumer, mimicking a situation where, for some reason, the admin wants to send a user-specific message or notification to a specific user. So I coded the API below and used channel_layer.send() to send a message to that specific channel name.</p> <p><strong>api</strong></p> <pre><code>@api_view([&quot;POST&quot;])
@permission_classes([AllowAny])
def send_message_from_admin(request, group_name):
    try:
        message = request.data.get(&quot;message&quot;)
        username = request.data.get(&quot;username&quot;)
        channel_layer = get_channel_layer()
        send_data = {&quot;user&quot;: &quot;Admin&quot;, &quot;message&quot;: message}
        async_to_sync(channel_layer.send)(
            username, {&quot;type&quot;: &quot;chat.message&quot;, &quot;data&quot;: json.dumps(send_data)}
        )
        return Response(
            {&quot;message&quot;: &quot;sent message&quot;}, status=status.HTTP_200_OK
        )
    except Exception as e:
        print(f&quot;An exception occurred {e}&quot;)
        return Response({&quot;error&quot;: str(e)}, status=status.HTTP_500_INTERNAL_SERVER_ERROR)
</code></pre> <p>Here the <strong>username</strong> is the same as the channel_name set during connect. But somehow this does not work: the event handler is not called to send the message to the specific channel name, and the user does not receive the message.</p> <p>Please suggest whether there is something wrong with the code or with my understanding of sending user-specific messages or notifications.</p>
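One thing to note: in Channels, the layer assigns `self.channel_name` and routes by it, so overwriting it in `connect()` can break delivery; the documented pattern for user-specific delivery is a per-user group (`group_add` in connect, `group_discard` in disconnect, `group_send` from the view). A framework-free sketch of why the group indirection works; the class below is an in-memory stand-in, not the Channels API:

```python
from collections import defaultdict

class InMemoryLayer:
    """Toy stand-in for a channel layer: routing goes through per-user groups,
    so the layer-assigned channel_name is never overwritten (and one user with
    several open tabs simply has several channel names in the same group)."""
    def __init__(self):
        self.groups = defaultdict(set)
        self.inbox = defaultdict(list)

    def group_add(self, group, channel_name):      # called in connect()
        self.groups[group].add(channel_name)

    def group_discard(self, group, channel_name):  # called in disconnect()
        self.groups[group].discard(channel_name)

    def group_send(self, group, message):          # called from the view
        for channel in self.groups[group]:
            self.inbox[channel].append(message)

layer = InMemoryLayer()
layer.group_add("user_alice", "specific.abc123")   # channel name stays layer-assigned
layer.group_send("user_alice", {"type": "chat.message", "data": "hi"})
print(layer.inbox["specific.abc123"])
```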
<python><django><django-rest-framework><django-channels>
2023-12-17 20:02:17
1
776
Ritankar Bhattacharjee
77,675,824
16,988,223
Unable to apply Otsu thresholding on GPU; OpenCV throws: threshold.cu:105: error: (-215:Assertion failed) type <= 4 in function 'threshold'
<p>I'm trying to do all my image processing on the GPU instead of the CPU:</p> <pre><code># coding=utf8
import cv2

# Read the image into the GPU
image = cv2.cuda_GpuMat()
image.upload(cv2.imread(&quot;sutil.jpeg&quot;))
original = image

# Convert the image from BGR to HSV on the GPU
image = cv2.cuda.cvtColor(image, cv2.COLOR_BGR2HSV)

# Convert the image to grayscale on the GPU
image = cv2.cuda.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Binarize the grayscale image with Otsu's method on the GPU
ret, otsu = cv2.cuda.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Apply a binary mask to the original lemon image on the GPU
result = cv2.cuda.bitwise_and(original, original, mask=otsu)

# Show the resulting segmented image
cv2.imshow('Resulting Segmented Image', image)
cv2.waitKey(0)
</code></pre> <p><a href="https://i.sstatic.net/XOXpw.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XOXpw.jpg" alt="enter image description here" /></a></p> <p>However, after the HSV conversion, the next step is to apply Otsu thresholding, and it throws this error:</p> <pre><code>Traceback (most recent call last):
  File &quot;gpu.py&quot;, line 18, in &lt;module&gt;
    ret, otsu = cv2.cuda.threshold(image,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
cv2.error: OpenCV(4.5.4) /home/orin/opencv_contrib/modules/cudaarithm/src/cuda/threshold.cu:105: error: (-215:Assertion failed) type &lt;= 4 in function 'threshold'
</code></pre> <p>What does &quot;type &lt;= 4 in function 'threshold'&quot; mean? I don't know why this happens; my CPU code works perfectly:</p> <pre><code>import cv2

image = cv2.imread(&quot;sutil.jpeg&quot;)
original = image
image = cv2.blur(image, (31, 31), 0)

# Convert BGR to HSV to saturate the image.
image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Convert to grayscale (CV_RGB2GRAY)
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Binarize the grayscale image with Otsu's method.
ret, otsu = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Apply a binary mask to the original lemon image.
result = cv2.bitwise_and(original, original, mask=otsu)

cv2.imshow('Resulting Segmented Image', result)
cv2.waitKey(0)
</code></pre> <p>Also, one detail: in my GPU code version I removed <code>cv2.blur(image, (31, 31), 0)</code> because I couldn't get it working on the GPU. If you have any idea how to fix these two problems, I would appreciate it.</p> <p>Thanks so much.</p>
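The assertion is about the threshold type flag: `cv2.THRESH_OTSU` has the value 8, and the CUDA `threshold` only accepts the five fixed-threshold modes (values 0 to 4). So Otsu's threshold has to be computed elsewhere and passed in as a plain number, e.g. `t, _ = cv2.threshold(gray_cpu, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)` on a downloaded grayscale copy, then `cv2.cuda.threshold(gpu_gray, t, 255, cv2.THRESH_BINARY)` on the GPU. For illustration, a NumPy-only sketch of what the Otsu step computes:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold for a uint8 image: maximize the between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                # probability of class 0 up to each level
    mu = np.cumsum(p * np.arange(256))  # cumulative mean of class 0
    mu_t = mu[-1]                       # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)    # omega of 0 or 1 gives no valid split
    return int(np.argmax(sigma_b))

# Bimodal demo "image": a dark half around 10 and a bright half around 200
img = np.concatenate([np.full(500, 10), np.full(500, 200)]).astype(np.uint8)
t = otsu_threshold(img)
print(t)
```

As an aside, both code versions apply `COLOR_BGR2GRAY` to an image that is already HSV, which mixes channels in an unintended way; converting to grayscale directly from the BGR original may be what was intended.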
<python><opencv><cuda><image-thresholding>
2023-12-17 19:59:59
1
429
FreddicMatters
77,675,755
12,285,101
Cannot install quarto with nbdev_install_quarto - ImportError: cannot import name 'uname' from 'os'
<p>I'm trying to install nbdev in my Jupyter notebook <a href="https://nbdev.fast.ai/tutorials/tutorial.html" rel="nofollow noreferrer">as described in this tutorial</a>.<br /> I'm trying to run this command in my CLI (after completing the previous steps described there):</p> <pre><code>nbdev_install_quarto
</code></pre> <p>But I get this error:</p> <pre><code>ImportError: cannot import name 'uname' from 'os' (C:\Users\Reut\AppData\Local\Programs\Python\Python312\Lib\os.py). Did you mean: 'name'?
</code></pre> <p>I am using Python version 3.12 on Windows. I couldn't find the reason for this error. Maybe I should find the <code>os</code> module file and change it, but that seems too &quot;radical&quot; to me; also, I have never had to do that before with nbdev and I'm afraid of ruining the environment. Could it be that I need to downgrade the Python version I am using? What is the right thing to do? How can I tackle this error and make my nbdev environment work?</p>
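`os.uname` exists only on POSIX systems (the `os` docs mark it "Availability: Unix"), so the import error comes from code assuming a Unix-like OS rather than from Python 3.12 itself; downgrading Python will not change that, while installing Quarto with its Windows installer (or running nbdev under WSL) sidesteps the call entirely. A quick portable check:

```python
import os
import platform

# platform.uname() is available everywhere; os.uname() is POSIX-only,
# which is exactly why the import fails on a native Windows Python.
info = platform.uname()
print(info.system)           # e.g. 'Windows' or 'Linux'
print(hasattr(os, "uname"))  # False on native Windows
```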
<python><windows><fast-ai><nbdev>
2023-12-17 19:40:57
1
1,592
Reut
77,675,694
5,822,695
Extract list from a Pandas ExponentialMovingWindow
<p>Is there a way to extract the underlying list/array from a Pandas ExponentialMovingWindow?</p> <pre><code>window = pd.Series.ewm(df, alpha)
window.mean()            # works
window.sum()             # works
window.getattr('array')  # does NOT work
</code></pre>
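An `ExponentialMovingWindow` is a lazy object: it holds no materialized values of its own, so there is no `array` attribute to fetch from it. What can be done (a sketch, assuming a numeric Series) is to materialize one of its aggregations and convert that result:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])
window = s.ewm(alpha=0.5)

means = window.mean()          # a pandas Series
as_list = means.tolist()       # plain Python list
as_array = means.to_numpy()    # NumPy array
```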
<python><pandas>
2023-12-17 19:23:23
1
4,148
Anshul Kai
77,675,693
6,447,123
FID and custom feature extractor
<p>I would like to use a custom feature extractor to calculate FID.</p> <p>According to <a href="https://lightning.ai/docs/torchmetrics/stable/image/frechet_inception_distance.html" rel="nofollow noreferrer">https://lightning.ai/docs/torchmetrics/stable/image/frechet_inception_distance.html</a> I can use an <code>nn.Module</code> for <code>feature</code>.</p> <p>What is wrong with the following code?</p> <pre class="lang-py prettyprint-override"><code>import torch
_ = torch.manual_seed(123)
from torchmetrics.image.fid import FrechetInceptionDistance
from torchvision.models import inception_v3

net = inception_v3()
checkpoint = torch.load('checkpoint.pt')
net.load_state_dict(checkpoint['state_dict'])
net.eval()

fid = FrechetInceptionDistance(feature=net)

# generate two slightly overlapping image intensity distributions
imgs_dist1 = torch.randint(0, 200, (100, 3, 299, 299), dtype=torch.uint8)
imgs_dist2 = torch.randint(100, 255, (100, 3, 299, 299), dtype=torch.uint8)

fid.update(imgs_dist1, real=True)
fid.update(imgs_dist2, real=False)

result = fid.compute()
print(result)
</code></pre> <pre><code>Traceback (most recent call last):
  File &quot;foo.py&quot;, line 12, in &lt;module&gt;
    fid = FrechetInceptionDistance(feature=net)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/Lib/site-packages/torchmetrics/image/fid.py&quot;, line 304, in __init__
    num_features = self.inception(dummy_image).shape[-1]
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/Lib/site-packages/torch/nn/modules/module.py&quot;, line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/Lib/site-packages/torch/nn/modules/module.py&quot;, line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/Lib/site-packages/torchvision/models/inception.py&quot;, line 166, in forward
    x, aux = self._forward(x)
             ^^^^^^^^^^^^^^^^
  File &quot;/Lib/site-packages/torchvision/models/inception.py&quot;, line 105, in _forward
    x = self.Conv2d_1a_3x3(x)
        ^^^^^^^^^^^^^^^^^^^^^
  File &quot;/Lib/site-packages/torch/nn/modules/module.py&quot;, line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/Lib/site-packages/torch/nn/modules/module.py&quot;, line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/Lib/site-packages/torchvision/models/inception.py&quot;, line 405, in forward
    x = self.conv(x)
        ^^^^^^^^^^^^
  File &quot;/Lib/site-packages/torch/nn/modules/module.py&quot;, line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/Lib/site-packages/torch/nn/modules/module.py&quot;, line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/Lib/site-packages/torch/nn/modules/conv.py&quot;, line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/Lib/site-packages/torch/nn/modules/conv.py&quot;, line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: expected scalar type Byte but found Float

Process finished with exit code 1
</code></pre>
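The traceback shows torchmetrics probing the custom `feature` module with a `uint8` dummy image, while `inception_v3`'s conv weights are float, hence the Byte/Float mismatch. One hedged sketch of a fix (the `FloatWrapper` name is mine, not a torchmetrics API) is to wrap the extractor so it casts its input before forwarding:

```python
import torch
import torch.nn as nn

class FloatWrapper(nn.Module):
    """Cast incoming uint8 image batches to float before the wrapped net runs."""
    def __init__(self, net):
        super().__init__()
        self.net = net

    def forward(self, x):
        # torchmetrics feeds raw uint8 tensors; conv layers need float input
        return self.net(x.float() / 255.0)

# Hypothetical usage with the question's extractor:
# fid = FrechetInceptionDistance(feature=FloatWrapper(net))
```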
<python><pytorch><pytorch-lightning><torchmetrics>
2023-12-17 19:23:17
1
4,309
A.A
77,675,560
13,180,560
Flask app throws 403 Forbidden after refreshing the page
<p>My React app starts smoothly on first opening, but after any refresh I start getting a 403 Forbidden error, which makes my development super slow. Any solution is welcome!</p> <p>The backend runs on Python Flask; these are the CORS configs:</p> <pre><code>CORS(app, resources={r&quot;/*&quot;: {&quot;origins&quot;: [&quot;http://localhost:3000&quot;,&quot;http://127.0.0.1:3000/&quot;]}})
socketio = SocketIO(app, cors_allowed_origins=&quot;*&quot;)
</code></pre> <p><a href="https://i.sstatic.net/pLprr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pLprr.png" alt="Error" /></a></p> <p>Running on a Mac M2.</p>
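One thing worth checking (an assumption about the cause, not certain from the screenshot): browsers send the `Origin` header without a trailing slash, so the configured origin `"http://127.0.0.1:3000/"` can never match it. A small sketch of normalizing origins before comparing:

```python
# Browsers send Origin headers without a trailing slash, so an allowed
# origin configured as "http://127.0.0.1:3000/" never matches requests.
allowed = ["http://localhost:3000", "http://127.0.0.1:3000/"]
normalized = [o.rstrip("/") for o in allowed]

def origin_allowed(origin: str) -> bool:
    return origin.rstrip("/") in normalized
```

If the origins do match and the 403 only appears on refresh, it may instead be session/cookie state, which is a separate thing to rule out.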
<python><reactjs><flask><cors>
2023-12-17 18:43:07
1
1,416
Amir Doreh
77,675,439
3,623,537
Kill python process from itself and run code after
<p>I have a use case where, during one script, I want to use one version of a .pyd library, but generally I want to use another version of the same .pyd.</p> <p>To accomplish this I switch the .pyd files when the Python script starts, and the import then loads the correct version.</p> <p>At the end of the Python script (using atexit) I've attached a script that will presumably switch the library back. The catch is that you cannot change a .pyd while it's in use by some process (here, the current Python instance), and it cannot be unloaded (<a href="https://discuss.python.org/t/how-to-unload-native-extensions-pyd-on-windows/3749" rel="nofollow noreferrer">I've found a note that it's impossible to unload loaded .dlls or .pyds at runtime</a>). So I need to kill python.exe and then run my script.</p> <p>When I kill python.exe, it seems to terminate the <code>os.system</code> call I tried at first, and it never gets to running the .bat file that would perform the library switch. I'm trying to work around that.</p> <p>LONG STORY SHORT - I want to kill the Python process and run some code after.</p> <p>Example code is below - it never prints the process id to <code>test_my_case.txt</code>. If I remove <code>&amp;&amp; timeout 3</code> it works fine - so I think the problem is that the <code>subprocess.call</code> process is killed as the Python process gets killed, and it isn't able to print if there is any delay before the echo.</p> <p>What I've tried - <code>os.startfile</code>, <code>os.system</code>, <code>subprocess.Popen</code>, <code>subprocess.call</code>.</p> <pre class="lang-py prettyprint-override"><code>import os
import subprocess
import atexit

# never prints PID to the test_my_case.txt
def switch_library():
    command = f&quot;taskkill /F /PID {os.getpid()} &amp;&amp; timeout 3 &amp;&amp; echo {os.getpid()} &gt; test_my_case.txt&quot;
    subprocess.run(command, shell=True)

atexit.register(switch_library)
exit()
</code></pre>
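Since `taskkill` terminates the console process tree, a helper started with plain `subprocess.run` shares the parent's fate. A hedged sketch of launching the follow-up command detached so it can outlive the interpreter (`run_after_exit` is my name; on Windows this relies on `DETACHED_PROCESS`, shown here with a POSIX fallback so it runs anywhere):

```python
import os
import subprocess

def run_after_exit(command):
    """Start `command` so it can survive this interpreter being killed."""
    kwargs = {}
    if os.name == "nt":
        # Detach from the parent's console and process group on Windows
        kwargs["creationflags"] = (subprocess.DETACHED_PROCESS
                                   | subprocess.CREATE_NEW_PROCESS_GROUP)
    else:
        # POSIX equivalent: a new session, outside the parent's process group
        kwargs["start_new_session"] = True
    return subprocess.Popen(command, shell=True, **kwargs)
```

With this, the atexit hook could start the .bat detached and let the .bat itself wait for the PID to disappear before swapping the .pyd, instead of killing the interpreter and hoping the child lives on.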
<python><process><system>
2023-12-17 18:01:48
1
469
FamousSnake
77,675,385
587,650
Sqlalchemy order_by hybrid_property orders by something else
<p>So, I have a Product class and a User class. A User is the seller of a Product and can have many products. <code>Product.price</code> is the price without VAT, and each User has a flag in the User table, <code>is_vat</code>, stating whether the externally shown price should have VAT added.</p> <p>So I wanted to create a query ordered by price, and decided to create a hybrid_property (<code>show_price</code>) so that I could order_by the externally shown price. The hybrid_property checks whether the seller has the <code>is_vat</code> flag set, and then calculates the outward-facing price depending on whether VAT should be added.</p> <p><strong>The problem is, the query results come out ordered by <code>Product.price</code> instead of <code>Product.show_price</code>, and I don't understand what on earth I am doing wrong.</strong></p> <p>This is the query I am using (Flask-SQLAlchemy and old SQLAlchemy syntax):</p> <pre><code>products_query = db.session.query(Product)
products = products_query.order_by(Product.show_price.desc()).all()
</code></pre> <p>This is the Product class with the hybrid_property:</p> <pre><code>class Product(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    price = db.Column(db.Integer)
    seller_id = db.Column(db.Integer, db.ForeignKey('user.id'))

    @hybrid_property
    def show_price(self):
        is_vat = db.session.query(User.is_vat).filter(User.id == self.seller_id).first()[0]
        if is_vat == True:
            return (self.price * 1.25)
        if is_vat == False or is_vat == None:
            return (self.price * 1.0)
</code></pre> <p>Edit: Based on the response from @detlef I found out I had to create an expression for the hybrid_property. To first try a solution without the association_proxy, I made the following expression, which allows me to use Product.show_price in order_by. I believe the <code>expression</code> has to return a query object which calculates the value, not the value itself.</p> <pre><code>@show_price.expression
def show_price(cls):
    return db.session.query(((db.func.coalesce(User.is_vat, False).cast(db.Integer) * 0.25) + 1) * cls.price).where(User.id == cls.seller_id)
</code></pre>
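The edit lands on the key point: without an `expression`, class-level access to the hybrid evaluates the Python getter against the class, which here degenerated into ordering by `price`. The instance/class split can be sketched without SQLAlchemy at all (a toy descriptor, not the real `hybrid_property`):

```python
class toy_hybrid:
    """Instance access runs Python; class access returns the SQL-side form."""
    def __init__(self, fget):
        self.fget = fget
        self.expr = fget          # fallback when no expression is registered

    def expression(self, expr):
        self.expr = expr
        return self

    def __get__(self, obj, cls):
        if obj is None:           # accessed as Product.show_price
            return self.expr(cls)
        return self.fget(obj)     # accessed as product.show_price

class Product:
    def __init__(self, price, is_vat):
        self.price, self.is_vat = price, is_vat

    @toy_hybrid
    def show_price(self):
        return self.price * (1.25 if self.is_vat else 1.0)

    @show_price.expression
    def show_price(cls):
        # stand-in for the SQL expression the real hybrid would emit
        return "((coalesce(is_vat, false)::int * 0.25) + 1) * price"
```

In the real library the expression returns a SQLAlchemy clause, not a string, but the dispatch mechanism is the same: `order_by(Product.show_price)` sees whatever the class-level access returns.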
<python><flask><sqlalchemy><flask-sqlalchemy>
2023-12-17 17:42:47
1
719
Drublic
77,675,126
18,125,313
How does numpy gain speed when the number of elements is less than 3-5 million?
<p>Some numpy operations suddenly improve performance when the number of elements drops below a certain amount.</p> <p>This is a function that uses numpy to find the maximum value of an array.</p> <pre class="lang-py prettyprint-override"><code>def np_max(arr):
    return np.max(arr)
</code></pre> <p>For comparison, here is a function that uses numba to find the maximum value as well.</p> <pre class="lang-py prettyprint-override"><code>@numba.njit
def nb_max(arr):
    a_max = arr[0]
    for a in arr[1:]:
        a_max = max(a_max, a)
    return a_max
</code></pre> <p>And this is the runtime measured with perfplot.</p> <p>Code:</p> <pre class="lang-py prettyprint-override"><code>def setup(n):
    rng = np.random.default_rng(0)
    return rng.random(n)


def benchmark_timeit():
    for f in [np_max, nb_max]:
        f(setup(100))
    n_run = 100
    for n in np.logspace(6, 7, num=30, dtype=int).tolist():
        arr = setup(n)
        np_max_time = timeit(lambda: np_max(arr), number=n_run) / n_run
        nb_max_time = timeit(lambda: nb_max(arr), number=n_run) / n_run
        print(f&quot;n={n,}:&quot;
              f&quot; np_max={np_max_time * 1000:.2f}&quot;
              f&quot;, nb_max={nb_max_time * 1000:.2f}&quot;
              f&quot;, np/nb={np_max_time / nb_max_time:.2f}&quot;)


def benchmark_perfplot():
    for f in [np_max, nb_max]:
        f(setup(100))
    data = perfplot.bench(
        n_range=np.logspace(1, 8, num=8 + 7 * 9, dtype=int).tolist(),
        # n_range=np.logspace(6, 7, num=30, dtype=int).tolist(),
        setup=setup,
        kernels=[np_max, nb_max],
        equality_check=np.allclose,
        target_time_per_measurement=1.0,
    )
    data.save(&quot;./temp2.png&quot;)


if __name__ == &quot;__main__&quot;:
    benchmark_perfplot()
    benchmark_timeit()
</code></pre> <p>Result:</p> <p><a href="https://i.sstatic.net/mM1wB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mM1wB.png" alt="numpy vs numba" /></a></p> <p>As you can see, there is a big jump between 10^6 and 10^7. 
The results measured with timeit also confirm this.</p> <pre><code>n=(1000000,): np_max=0.11, nb_max=0.27, np/nb=0.42 &lt;-- np/nb = numpy-runtime / numba-runtime
n=(1082636,): np_max=0.12, nb_max=0.30, np/nb=0.40
n=(1172102,): np_max=0.13, nb_max=0.32, np/nb=0.40
n=(1268961,): np_max=0.15, nb_max=0.46, np/nb=0.31
n=(1373823,): np_max=0.16, nb_max=0.38, np/nb=0.41
n=(1487352,): np_max=0.17, nb_max=0.41, np/nb=0.41
n=(1610262,): np_max=0.18, nb_max=0.45, np/nb=0.41
n=(1743328,): np_max=0.19, nb_max=0.48, np/nb=0.40
n=(1887391,): np_max=0.21, nb_max=0.52, np/nb=0.41
n=(2043359,): np_max=0.23, nb_max=0.56, np/nb=0.41
n=(2212216,): np_max=0.25, nb_max=0.61, np/nb=0.41
n=(2395026,): np_max=0.30, nb_max=0.69, np/nb=0.44
n=(2592943,): np_max=0.31, nb_max=0.72, np/nb=0.42
n=(2807216,): np_max=0.33, nb_max=0.77, np/nb=0.43
n=(3039195,): np_max=0.38, nb_max=0.88, np/nb=0.43 &lt;-- 0.4
n=(3290344,): np_max=0.50, nb_max=0.97, np/nb=0.51  |
n=(3562247,): np_max=0.65, nb_max=1.07, np/nb=0.61  |
n=(3856620,): np_max=0.80, nb_max=1.19, np/nb=0.67  | 2x difference
n=(4175318,): np_max=1.00, nb_max=1.35, np/nb=0.74  |
n=(4520353,): np_max=1.11, nb_max=1.49, np/nb=0.75  |
n=(4893900,): np_max=1.25, nb_max=1.59, np/nb=0.78  |
n=(5298316,): np_max=1.44, nb_max=1.79, np/nb=0.81 &lt;-- 0.8
n=(5736152,): np_max=1.57, nb_max=1.95, np/nb=0.80
n=(6210169,): np_max=1.71, nb_max=2.08, np/nb=0.82
n=(6723357,): np_max=1.85, nb_max=2.27, np/nb=0.81
n=(7278953,): np_max=2.02, nb_max=2.49, np/nb=0.81
n=(7880462,): np_max=2.17, nb_max=2.67, np/nb=0.81
n=(8531678,): np_max=2.44, nb_max=2.91, np/nb=0.84
n=(9236708,): np_max=2.61, nb_max=3.17, np/nb=0.82
n=(10000000,): np_max=2.81, nb_max=3.50, np/nb=0.80
</code></pre> <p>My questions are:</p> <ol> <li>What optimization made such a big difference?</li> <li>Is it possible to reproduce it in numba?</li> </ol> <p>Please note that what I want to accomplish is to gain knowledge about this optimization, not to beat numpy with out-of-the-box approaches like
parallelization.</p> <hr /> <p>Specs:</p> <ul> <li>AMD Ryzen 9 5900X</li> <li>RAM 64 GB</li> <li>Windows 10</li> <li>Python 3.10.11</li> <li>numpy 1.26.2</li> <li>numba 0.58.1</li> </ul> <hr /> <p>Here is the LLVM IR and assembly code for n=10**6, where the performance difference with numpy is still large.</p> <pre class="lang-py prettyprint-override"><code>def inspect(): n_run = 100 arr = setup(10 ** 6) nb_max(arr) print(timeit(lambda: nb_max(arr), number=n_run) / n_run) t = nb_max.inspect_asm() assert len(t) == 1 Path(&quot;inspect_asm_10-6.txt&quot;).write_text(t[list(t)[0]]) t = nb_max.inspect_llvm() assert len(t) == 1 Path(&quot;inspect_llvm_10-6.txt&quot;).write_text(t[list(t)[0]]) if __name__ == &quot;__main__&quot;: # benchmark_perfplot() # benchmark_timeit() inspect() </code></pre> <p><code>inspect_asm_10-6.txt</code>:</p> <pre><code> .text .file &quot;&lt;string&gt;&quot; .globl _ZN8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE .p2align 4, 0x90 .type _ZN8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE,@function _ZN8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE: pushq %r14 pushq %rsi pushq %rdi pushq %rbx movq 96(%rsp), %rax movq 88(%rsp), %r9 movl $1, %r8d cmpq $2, %rax jl .LBB0_10 vmovsd (%r9), %xmm0 subq %r8, %rax testq %rax, %rax jle .LBB0_9 .LBB0_2: movq 104(%rsp), %r10 movq %rax, %rdx sarq $63, %rdx andnq %rax, %rdx, %r11 movl %r11d, %edx leaq -1(%r11), %rax andl $7, %edx cmpq $7, %rax jae .LBB0_4 xorl %eax, %eax jmp .LBB0_6 .LBB0_4: movabsq $9223372036854775800, %rax leaq (%r9,%r8,8), %rdi leaq (%r10,%r10,2), %r14 andq %rax, %r11 xorl %eax, %eax .p2align 4, 0x90 .LBB0_5: vmovsd (%rdi), %xmm1 vmovsd (%r10,%rdi), %xmm2 vmovsd (%rdi,%r10,2), %xmm3 leaq (%r14,%rdi), %rbx addq $8, %rax vmaxsd %xmm0, %xmm1, %xmm0 vmaxsd %xmm0, %xmm2, %xmm0 vmovsd (%r14,%rdi), %xmm2 leaq (%r10,%rbx), %rdi vmaxsd %xmm0, %xmm3, %xmm0 
vmovsd (%r10,%rbx), %xmm3 leaq (%r10,%rdi), %rbx leaq (%r10,%rbx), %rsi vmaxsd %xmm0, %xmm2, %xmm0 vmovsd (%r10,%rdi), %xmm2 leaq (%r10,%rsi), %rdi vmaxsd %xmm0, %xmm3, %xmm0 vmovsd (%r10,%rbx), %xmm3 addq %r10, %rdi vmaxsd %xmm0, %xmm2, %xmm0 vmovsd (%r10,%rsi), %xmm2 vmaxsd %xmm0, %xmm3, %xmm0 vmaxsd %xmm0, %xmm2, %xmm0 cmpq %rax, %r11 jne .LBB0_5 .LBB0_6: testq %rdx, %rdx je .LBB0_9 imulq %r10, %rax addq %rax, %r9 leaq (%r9,%r8,8), %rax .p2align 4, 0x90 .LBB0_8: vmovsd (%rax), %xmm1 addq %r10, %rax decq %rdx vmaxsd %xmm0, %xmm1, %xmm0 jne .LBB0_8 .LBB0_9: vmovsd %xmm0, (%rcx) xorl %eax, %eax popq %rbx popq %rdi popq %rsi popq %r14 retq .LBB0_10: movq %rax, %r8 vmovsd (%r9), %xmm0 subq %r8, %rax testq %rax, %rax jg .LBB0_2 jmp .LBB0_9 .Lfunc_end0: .size _ZN8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE, .Lfunc_end0-_ZN8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE .globl _ZN7cpython8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE .p2align 4, 0x90 .type _ZN7cpython8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE,@function _ZN7cpython8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE: .cfi_startproc pushq %rbp .cfi_def_cfa_offset 16 .cfi_offset %rbp, -16 movq %rsp, %rbp .cfi_def_cfa_register %rbp pushq %rsi andq $-32, %rsp subq $224, %rsp vmovaps %xmm6, -32(%rbp) .cfi_offset %rsi, -24 .cfi_offset %xmm6, -48 leaq 88(%rsp), %rax movq %rdx, %rcx movabsq $.const.nb_max, %rdx movl $1, %r8d movl $1, %r9d movq %rax, 32(%rsp) movabsq $PyArg_UnpackTuple, %rax callq *%rax testl %eax, %eax je .LBB1_1 movabsq $_ZN08NumbaEnv8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE, %rax cmpq $0, (%rax) je .LBB1_4 movq 88(%rsp), %rcx movabsq $NRT_adapt_ndarray_from_python, %rax leaq 96(%rsp), %rdx vxorps %xmm0, %xmm0, 
%xmm0 vmovaps %ymm0, 96(%rsp) vmovups %ymm0, 120(%rsp) vzeroupper callq *%rax testl %eax, %eax jne .LBB1_8 cmpq $8, 120(%rsp) jne .LBB1_8 vmovaps 128(%rsp), %xmm0 movq 144(%rsp), %rax movq 96(%rsp), %rsi leaq 80(%rsp), %rcx movq $0, 80(%rsp) movq %rax, 64(%rsp) movabsq $_ZN8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE, %rax vmovups %xmm0, 48(%rsp) callq *%rax vmovsd 80(%rsp), %xmm6 movabsq $NRT_decref, %rax movq %rsi, %rcx callq *%rax movabsq $PyFloat_FromDouble, %rax vmovaps %xmm6, %xmm0 callq *%rax .LBB1_2: vmovaps -32(%rbp), %xmm6 leaq -8(%rbp), %rsp popq %rsi popq %rbp retq .LBB1_4: movabsq $PyExc_RuntimeError, %rcx movabsq $&quot;.const.missing Environment: _ZN08NumbaEnv8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE&quot;, %rdx jmp .LBB1_5 .LBB1_8: movabsq $PyExc_TypeError, %rcx movabsq $&quot;.const.can't unbox array from PyObject into native value. The object maybe of a different type&quot;, %rdx .LBB1_5: movabsq $PyErr_SetString, %rax callq *%rax .LBB1_1: xorl %eax, %eax jmp .LBB1_2 .Lfunc_end1: .size _ZN7cpython8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE, .Lfunc_end1-_ZN7cpython8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE .cfi_endproc .globl cfunc._ZN8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE .p2align 4, 0x90 .type cfunc._ZN8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE,@function cfunc._ZN8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE: subq $88, %rsp vmovaps 128(%rsp), %xmm0 movq 144(%rsp), %rax leaq 80(%rsp), %rcx movq $0, 80(%rsp) movq %rax, 64(%rsp) movabsq $_ZN8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE, %rax vmovups %xmm0, 48(%rsp) callq *%rax vmovsd 80(%rsp), %xmm0 
addq $88, %rsp retq .Lfunc_end2: .size cfunc._ZN8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE, .Lfunc_end2-cfunc._ZN8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE .weak NRT_decref .p2align 4, 0x90 .type NRT_decref,@function NRT_decref: .cfi_startproc testq %rcx, %rcx je .LBB3_2 #MEMBARRIER lock decq (%rcx) je .LBB3_3 .LBB3_2: retq .LBB3_3: movabsq $NRT_MemInfo_call_dtor, %rax #MEMBARRIER rex64 jmpq *%rax .Lfunc_end3: .size NRT_decref, .Lfunc_end3-NRT_decref .cfi_endproc .type .const.nb_max,@object .section .rodata,&quot;a&quot;,@progbits .const.nb_max: .asciz &quot;nb_max&quot; .size .const.nb_max, 7 .type _ZN08NumbaEnv8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE,@object .comm _ZN08NumbaEnv8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE,8,8 .type &quot;.const.missing Environment: _ZN08NumbaEnv8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE&quot;,@object .p2align 4 &quot;.const.missing Environment: _ZN08NumbaEnv8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE&quot;: .asciz &quot;missing Environment: _ZN08NumbaEnv8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE&quot; .size &quot;.const.missing Environment: _ZN08NumbaEnv8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE&quot;, 128 .type &quot;.const.can't unbox array from PyObject into native value. The object maybe of a different type&quot;,@object .p2align 4 &quot;.const.can't unbox array from PyObject into native value. The object maybe of a different type&quot;: .asciz &quot;can't unbox array from PyObject into native value. The object maybe of a different type&quot; .size &quot;.const.can't unbox array from PyObject into native value. 
The object maybe of a different type&quot;, 89 .section &quot;.note.GNU-stack&quot;,&quot;&quot;,@progbits </code></pre> <p><code>inspect_llvm_10-6.txt</code>:</p> <pre><code>; ModuleID = 'nb_max' source_filename = &quot;&lt;string&gt;&quot; target datalayout = &quot;e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128&quot; target triple = &quot;x86_64-pc-windows-msvc&quot; @.const.nb_max = internal constant [7 x i8] c&quot;nb_max\00&quot; @_ZN08NumbaEnv8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE = common local_unnamed_addr global i8* null @&quot;.const.missing Environment: _ZN08NumbaEnv8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE&quot; = internal constant [128 x i8] c&quot;missing Environment: _ZN08NumbaEnv8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE\00&quot; @PyExc_TypeError = external global i8 @&quot;.const.can't unbox array from PyObject into native value. The object maybe of a different type&quot; = internal constant [89 x i8] c&quot;can't unbox array from PyObject into native value. 
The object maybe of a different type\00&quot; @PyExc_RuntimeError = external global i8 ; Function Attrs: nofree norecurse nosync nounwind define i32 @_ZN8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE(double* noalias nocapture writeonly %retptr, { i8*, i32, i8*, i8*, i32 }** noalias nocapture readnone %excinfo, i8* nocapture readnone %arg.arr.0, i8* nocapture readnone %arg.arr.1, i64 %arg.arr.2, i64 %arg.arr.3, double* %arg.arr.4, i64 %arg.arr.5.0, i64 %arg.arr.6.0) local_unnamed_addr #0 { B0.else.endif: %.44 = load double, double* %arg.arr.4, align 8 %.93 = icmp slt i64 %arg.arr.5.0, 2 br i1 %.93, label %B0.else.endif.if, label %B0.endif, !prof !0 B24: ; preds = %B24, %B24.lr.ph.new %lsr.iv9 = phi i64 [ %8, %B24 ], [ %16, %B24.lr.ph.new ] %a_max.2.04 = phi double [ %.44, %B24.lr.ph.new ], [ %.321.7, %B24 ] %.224.03 = phi i64 [ 0, %B24.lr.ph.new ], [ %.294.7, %B24 ] %.290 = inttoptr i64 %lsr.iv9 to double* %.291 = load double, double* %.290, align 8 %.320 = fcmp ogt double %.291, %a_max.2.04 %.321 = select i1 %.320, double %.291, double %a_max.2.04 %0 = add i64 %arg.arr.6.0, %lsr.iv9 %.290.1 = inttoptr i64 %0 to double* %.291.1 = load double, double* %.290.1, align 8 %.320.1 = fcmp ogt double %.291.1, %.321 %.321.1 = select i1 %.320.1, double %.291.1, double %.321 %sunkaddr = inttoptr i64 %lsr.iv9 to double* %sunkaddr11 = mul i64 %arg.arr.6.0, 2 %1 = bitcast double* %sunkaddr to i8* %sunkaddr12 = getelementptr i8, i8* %1, i64 %sunkaddr11 %2 = bitcast i8* %sunkaddr12 to double* %.291.2 = load double, double* %2, align 8 %.320.2 = fcmp ogt double %.291.2, %.321.1 %.321.2 = select i1 %.320.2, double %.291.2, double %.321.1 %3 = add i64 %17, %lsr.iv9 %.290.3 = inttoptr i64 %3 to double* %.291.3 = load double, double* %.290.3, align 8 %.320.3 = fcmp ogt double %.291.3, %.321.2 %.321.3 = select i1 %.320.3, double %.291.3, double %.321.2 %4 = add i64 %arg.arr.6.0, %3 %.290.4 = inttoptr i64 %4 to double* %.291.4 = load double, 
double* %.290.4, align 8 %.320.4 = fcmp ogt double %.291.4, %.321.3 %.321.4 = select i1 %.320.4, double %.291.4, double %.321.3 %5 = add i64 %arg.arr.6.0, %4 %.290.5 = inttoptr i64 %5 to double* %.291.5 = load double, double* %.290.5, align 8 %.320.5 = fcmp ogt double %.291.5, %.321.4 %.321.5 = select i1 %.320.5, double %.291.5, double %.321.4 %6 = add i64 %arg.arr.6.0, %5 %.290.6 = inttoptr i64 %6 to double* %.291.6 = load double, double* %.290.6, align 8 %.320.6 = fcmp ogt double %.291.6, %.321.5 %.321.6 = select i1 %.320.6, double %.291.6, double %.321.5 %7 = add i64 %arg.arr.6.0, %6 %.290.7 = inttoptr i64 %7 to double* %.291.7 = load double, double* %.290.7, align 8 %.294.7 = add nuw i64 %.224.03, 8 %.320.7 = fcmp ogt double %.291.7, %.321.6 %.321.7 = select i1 %.320.7, double %.291.7, double %.321.6 %niter.ncmp.7 = icmp eq i64 %unroll_iter, %.294.7 %8 = add i64 %arg.arr.6.0, %7 br i1 %niter.ncmp.7, label %B38.loopexit.unr-lcssa, label %B24 B38.loopexit.unr-lcssa: ; preds = %B24, %B24.lr.ph %.321.lcssa.ph = phi double [ undef, %B24.lr.ph ], [ %.321.7, %B24 ] %a_max.2.04.unr = phi double [ %.44, %B24.lr.ph ], [ %.321.7, %B24 ] %.224.03.unr = phi i64 [ 0, %B24.lr.ph ], [ %.294.7, %B24 ] %lcmp.mod.not = icmp eq i64 %xtraiter, 0 br i1 %lcmp.mod.not, label %B38, label %B24.epil.preheader B24.epil.preheader: ; preds = %B38.loopexit.unr-lcssa %9 = ptrtoint double* %arg.arr.4 to i64 %10 = mul i64 %.224.03.unr, %arg.arr.6.0 %11 = add i64 %9, %10 %12 = shl i64 %.73.sroa.0.0, 3 %13 = add i64 %11, %12 br label %B24.epil B24.epil: ; preds = %B24.epil.preheader, %B24.epil %lsr.iv7 = phi i64 [ %xtraiter, %B24.epil.preheader ], [ %lsr.iv.next8, %B24.epil ] %lsr.iv = phi i64 [ %13, %B24.epil.preheader ], [ %lsr.iv.next, %B24.epil ] %a_max.2.04.epil = phi double [ %.321.epil, %B24.epil ], [ %a_max.2.04.unr, %B24.epil.preheader ] %.290.epil = inttoptr i64 %lsr.iv to double* %.291.epil = load double, double* %.290.epil, align 8 %.320.epil = fcmp ogt double %.291.epil, 
%a_max.2.04.epil %.321.epil = select i1 %.320.epil, double %.291.epil, double %a_max.2.04.epil %lsr.iv.next = add i64 %lsr.iv, %arg.arr.6.0 %lsr.iv.next8 = add nsw i64 %lsr.iv7, -1 %epil.iter.cmp.not = icmp eq i64 %lsr.iv.next8, 0 br i1 %epil.iter.cmp.not, label %B38, label %B24.epil, !llvm.loop !1 B38: ; preds = %B24.epil, %B38.loopexit.unr-lcssa, %B0.endif %a_max.2.0.lcssa = phi double [ %.44, %B0.endif ], [ %.321.lcssa.ph, %B38.loopexit.unr-lcssa ], [ %.321.epil, %B24.epil ] store double %a_max.2.0.lcssa, double* %retptr, align 8 ret i32 0 B0.endif: ; preds = %B0.else.endif, %B0.else.endif.if %.73.sroa.0.0 = phi i64 [ %arg.arr.5.0, %B0.else.endif.if ], [ 1, %B0.else.endif ] %.161 = sub i64 %arg.arr.5.0, %.73.sroa.0.0 %.168.inv = icmp sgt i64 %.161, 0 %.170 = select i1 %.168.inv, i64 %.161, i64 0 br i1 %.168.inv, label %B24.lr.ph, label %B38 B24.lr.ph: ; preds = %B0.endif %.186 = getelementptr double, double* %arg.arr.4, i64 %.73.sroa.0.0 %14 = add nsw i64 %.170, -1 %xtraiter = and i64 %.170, 7 %15 = icmp ult i64 %14, 7 br i1 %15, label %B38.loopexit.unr-lcssa, label %B24.lr.ph.new B24.lr.ph.new: ; preds = %B24.lr.ph %16 = ptrtoint double* %.186 to i64 %unroll_iter = and i64 %.170, 9223372036854775800 %17 = mul i64 %arg.arr.6.0, 3 br label %B24 B0.else.endif.if: ; preds = %B0.else.endif br label %B0.endif } define i8* @_ZN7cpython8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE(i8* nocapture readnone %py_closure, i8* %py_args, i8* nocapture readnone %py_kws) local_unnamed_addr { entry: %.5 = alloca i8*, align 8 %.6 = call i32 (i8*, i8*, i64, i64, ...) 
@PyArg_UnpackTuple(i8* %py_args, i8* getelementptr inbounds ([7 x i8], [7 x i8]* @.const.nb_max, i64 0, i64 0), i64 1, i64 1, i8** nonnull %.5) %.7 = icmp eq i32 %.6, 0 %.21 = alloca { i8*, i8*, i64, i64, double*, [1 x i64], [1 x i64] }, align 8 %.43 = alloca double, align 8 br i1 %.7, label %common.ret, label %entry.endif, !prof !0 common.ret: ; preds = %entry.endif.endif.endif.thread, %entry, %entry.endif.endif.endif.endif, %entry.endif.if %common.ret.op = phi i8* [ null, %entry.endif.if ], [ %.67, %entry.endif.endif.endif.endif ], [ null, %entry ], [ null, %entry.endif.endif.endif.thread ] ret i8* %common.ret.op entry.endif: ; preds = %entry %.11 = load i8*, i8** @_ZN08NumbaEnv8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE, align 8 %.16 = icmp eq i8* %.11, null br i1 %.16, label %entry.endif.if, label %entry.endif.endif, !prof !0 entry.endif.if: ; preds = %entry.endif call void @PyErr_SetString(i8* nonnull @PyExc_RuntimeError, i8* getelementptr inbounds ([128 x i8], [128 x i8]* @&quot;.const.missing Environment: _ZN08NumbaEnv8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE&quot;, i64 0, i64 0)) br label %common.ret entry.endif.endif: ; preds = %entry.endif %.20 = load i8*, i8** %.5, align 8 %.24 = bitcast { i8*, i8*, i64, i64, double*, [1 x i64], [1 x i64] }* %.21 to i8* %0 = bitcast { i8*, i8*, i64, i64, double*, [1 x i64], [1 x i64] }* %.21 to i8* call void @llvm.memset.p0i8.i64(i8* noundef nonnull align 8 dereferenceable(56) %0, i8 0, i64 56, i1 false) %.25 = call i32 @NRT_adapt_ndarray_from_python(i8* %.20, i8* nonnull %.24) %1 = bitcast { i8*, i8*, i64, i64, double*, [1 x i64], [1 x i64] }* %.21 to i8* %sunkaddr = getelementptr inbounds i8, i8* %1, i64 24 %2 = bitcast i8* %sunkaddr to i64* %.29 = load i64, i64* %2, align 8 %.30 = icmp ne i64 %.29, 8 %.31 = icmp ne i32 %.25, 0 %.32 = or i1 %.31, %.30 br i1 %.32, label %entry.endif.endif.endif.thread, label 
%entry.endif.endif.endif.endif, !prof !0 entry.endif.endif.endif.thread: ; preds = %entry.endif.endif call void @PyErr_SetString(i8* nonnull @PyExc_TypeError, i8* getelementptr inbounds ([89 x i8], [89 x i8]* @&quot;.const.can't unbox array from PyObject into native value. The object maybe of a different type&quot;, i64 0, i64 0)) br label %common.ret entry.endif.endif.endif.endif: ; preds = %entry.endif.endif %3 = bitcast { i8*, i8*, i64, i64, double*, [1 x i64], [1 x i64] }* %.21 to i8** %.36.fca.0.load = load i8*, i8** %3, align 8 %4 = bitcast { i8*, i8*, i64, i64, double*, [1 x i64], [1 x i64] }* %.21 to i8* %sunkaddr3 = getelementptr inbounds i8, i8* %4, i64 32 %5 = bitcast i8* %sunkaddr3 to double** %.36.fca.4.load = load double*, double** %5, align 8 %6 = bitcast { i8*, i8*, i64, i64, double*, [1 x i64], [1 x i64] }* %.21 to i8* %sunkaddr4 = getelementptr inbounds i8, i8* %6, i64 40 %7 = bitcast i8* %sunkaddr4 to i64* %.36.fca.5.0.load = load i64, i64* %7, align 8 %8 = bitcast { i8*, i8*, i64, i64, double*, [1 x i64], [1 x i64] }* %.21 to i8* %sunkaddr5 = getelementptr inbounds i8, i8* %8, i64 48 %9 = bitcast i8* %sunkaddr5 to i64* %.36.fca.6.0.load = load i64, i64* %9, align 8 store double 0.000000e+00, double* %.43, align 8 %.49 = call i32 @_ZN8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE(double* nonnull %.43, { i8*, i32, i8*, i8*, i32 }** nonnull undef, i8* undef, i8* undef, i64 undef, i64 undef, double* %.36.fca.4.load, i64 %.36.fca.5.0.load, i64 %.36.fca.6.0.load) #1 %.59 = load double, double* %.43, align 8 call void @NRT_decref(i8* %.36.fca.0.load) %.67 = call i8* @PyFloat_FromDouble(double %.59) br label %common.ret } declare i32 @PyArg_UnpackTuple(i8*, i8*, i64, i64, ...) 
local_unnamed_addr declare void @PyErr_SetString(i8*, i8*) local_unnamed_addr declare i32 @NRT_adapt_ndarray_from_python(i8* nocapture, i8* nocapture) local_unnamed_addr declare i8* @PyFloat_FromDouble(double) local_unnamed_addr ; Function Attrs: nofree norecurse nosync nounwind define double @cfunc._ZN8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE({ i8*, i8*, i64, i64, double*, [1 x i64], [1 x i64] } %.1) local_unnamed_addr #0 { entry: %.3 = alloca double, align 8 store double 0.000000e+00, double* %.3, align 8 %extracted.data = extractvalue { i8*, i8*, i64, i64, double*, [1 x i64], [1 x i64] } %.1, 4 %extracted.shape = extractvalue { i8*, i8*, i64, i64, double*, [1 x i64], [1 x i64] } %.1, 5 %.7 = extractvalue [1 x i64] %extracted.shape, 0 %extracted.strides = extractvalue { i8*, i8*, i64, i64, double*, [1 x i64], [1 x i64] } %.1, 6 %.8 = extractvalue [1 x i64] %extracted.strides, 0 %.9 = call i32 @_ZN8__main__6nb_maxB2v1B38c8tJTIcFKzyF2ILShI4CrgQElQb6HczSBAA_3dE5ArrayIdLi1E1C7mutable7alignedE(double* nonnull %.3, { i8*, i32, i8*, i8*, i32 }** nonnull undef, i8* undef, i8* undef, i64 undef, i64 undef, double* %extracted.data, i64 %.7, i64 %.8) #1 %.19 = load double, double* %.3, align 8 ret double %.19 } ; Function Attrs: noinline define linkonce_odr void @NRT_decref(i8* %.1) local_unnamed_addr #1 { .3: %.4 = icmp eq i8* %.1, null br i1 %.4, label %common.ret1, label %.3.endif, !prof !0 common.ret1: ; preds = %.3, %.3.endif ret void .3.endif: ; preds = %.3 fence release %.8 = bitcast i8* %.1 to i64* %.4.i = atomicrmw sub i64* %.8, i64 1 monotonic, align 8 %.10 = icmp eq i64 %.4.i, 1 br i1 %.10, label %.3.endif.if, label %common.ret1, !prof !0 .3.endif.if: ; preds = %.3.endif fence acquire tail call void @NRT_MemInfo_call_dtor(i8* nonnull %.1) ret void } declare void @NRT_MemInfo_call_dtor(i8*) local_unnamed_addr ; Function Attrs: argmemonly nofree nounwind willreturn writeonly declare void @llvm.memset.p0i8.i64(i8* 
nocapture writeonly, i8, i64, i1 immarg) #2 attributes #0 = { nofree norecurse nosync nounwind } attributes #1 = { noinline } attributes #2 = { argmemonly nofree nounwind willreturn writeonly } !0 = !{!&quot;branch_weights&quot;, i32 1, i32 99} !1 = distinct !{!1, !2} !2 = !{!&quot;llvm.loop.unroll.disable&quot;} </code></pre>
<python><numpy><performance><optimization><numba>
2023-12-17 16:15:13
1
3,446
ken
77,674,793
6,713,310
"Error: ENOENT: no such file or directory" but file exists
<p>I am trying to download a <code>.wav</code> file from an Azure server (Ubuntu 20.04.6 LTS) using Visual Studio, but I get</p> <p><code>&quot;Error: ENOENT: no such file or directory</code></p> <p>In addition, the Python code I am running cannot load this file, as I get</p> <p><code>FileNotFoundError: [Errno 2] No such file or directory:</code></p> <p>The file is there. The file is created by the code itself, so I am not sure if this is some sort of encoding issue or anything else.</p> <pre><code>#! python import os import sys import argparse if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('-d_i', help='MAESTRO original corpus directory (input)', default='/mnt/hdd1/AMT/corpus/MAESTRO/MAESTRO') parser.add_argument('-d_o', help='MAESTRO renamed corpus directory (output)', default='/mnt/hdd1/AMT/corpus/MAESTRO') parser.add_argument('-d_list', help='corpus list directory') args = parser.parse_args() print('** rename MAESTRO wav/mid file **') a_attribute = ['train', 'valid', 'test'] for attribute in a_attribute: with open(args.d_list.rstrip('/')+'/'+attribute+'.tsv', 'r', encoding='utf-8') as f: a_in = f.readlines() for i in range(1, len(a_in)): fname_wav = a_in[i].rstrip('\n').split('\t')[5] fname_mid = a_in[i].rstrip('\n').split('\t')[4] number = a_in[i].rstrip('\n').split('\t')[7] os.symlink(args.d_i.rstrip('/')+'/'+fname_wav, args.d_o.rstrip('/')+'/wav/'+attribute+'_'+number+'.wav') os.symlink(args.d_i.rstrip('/')+'/'+fname_mid, args.d_o.rstrip('/')+'/midi/'+attribute+'_'+number+'.mid') print('** done **') </code></pre>
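A hypothetical sketch, not from the original post: since the script above creates symlinks, one common cause of this exact symptom is a *dangling* link — `os.symlink` happily creates a link whose target does not exist, the link shows up in a directory listing ("the file is there"), yet opening it raises `ENOENT`.

```python
import os
import tempfile

# A dangling symlink: the link entry exists, but its target does not,
# so any attempt to open it fails with ENOENT / FileNotFoundError.
d = tempfile.mkdtemp()
target = os.path.join(d, "missing.wav")   # never actually created
link = os.path.join(d, "clip.wav")
os.symlink(target, link)

print(os.path.lexists(link))  # the link itself is there
print(os.path.exists(link))   # but it points at nothing
```

Checking each path with `os.path.exists()` right after the `os.symlink` calls would reveal whether the paths assembled from the `.tsv` columns actually resolve.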
<python><visual-studio>
2023-12-17 14:38:50
1
528
Phys
77,674,636
19,125,840
Overload a Function that accepts Any Number of Positional Arguments and returns int or Tuple[int]
<p>I am trying to type hint a function that accepts any number of positional arguments and returns int if size is 1 and returns tuple of same size if it is more than 1.</p> <p>This is what I have right now:</p> <pre class="lang-py prettyprint-override"><code>def timestamp(*date: Union[datetime, str, int]) -&gt; int | Tuple[int, ...]: &quot;&quot;&quot; It converts a date to a timestamp :param date: The date to be converted to timestamp :return: The number of seconds since the epoch. &quot;&quot;&quot; if len(date) == 1: return timestamp_(date[0]) return tuple([timestamp_(d) for d in date]) </code></pre> <p>mypy does not complain about this, but I want to overload it in a correct way because it will be a little bit hard to use this function with correct types</p>
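One common pattern (a sketch, not necessarily the only correct answer) is two `@overload` signatures over a single implementation — one positional argument returns `int`, two or more return a tuple. The `_to_ts` helper below is a hypothetical stand-in for the question's `timestamp_`:

```python
from datetime import datetime, timezone
from typing import Tuple, Union, overload

DateLike = Union[datetime, str, int]

def _to_ts(d: DateLike) -> int:
    # hypothetical stand-in for the question's timestamp_()
    if isinstance(d, datetime):
        return int(d.replace(tzinfo=timezone.utc).timestamp())
    if isinstance(d, str):
        return _to_ts(datetime.fromisoformat(d))
    return int(d)

@overload
def timestamp(date: DateLike) -> int: ...
@overload
def timestamp(date: DateLike, date2: DateLike, *dates: DateLike) -> Tuple[int, ...]: ...

def timestamp(*dates: DateLike):
    """Convert one or more dates to timestamps."""
    if len(dates) == 1:
        return _to_ts(dates[0])
    return tuple(_to_ts(d) for d in dates)
```

With this shape, callers get `int` from `timestamp(x)` and `Tuple[int, ...]` from `timestamp(x, y)` without casts.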
<python><mypy><python-typing>
2023-12-17 13:49:10
2
460
demetere._
77,674,542
354,051
tkinter scrolled window in dynamic(add/delete widgets at runtime) environment
<p><a href="https://i.sstatic.net/mOXWV.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mOXWV.jpg" alt="enter image description here" /></a></p> <pre class="lang-py prettyprint-override"><code>import tkinter as tk from tkinter import ttk import random import string def id_generator(size=6, chars=string.ascii_uppercase + string.digits): return ''.join(random.choice(chars) for _ in range(size)) class ScrollableFrame(tk.Frame): def __init__(self, master, *args, **kwargs): tk.Frame.__init__(self, master, *args, **kwargs) self.canvas = tk.Canvas(self) self.scrollbar_y = tk.Scrollbar(self, orient=&quot;vertical&quot;, command=self.canvas.yview) self.scrollbar_x = tk.Scrollbar(self, orient=&quot;horizontal&quot;, command=self.canvas.xview) self.scrollable_frame = tk.Frame(self.canvas) self.canvas.create_window((0, 0), window=self.scrollable_frame, anchor=&quot;nw&quot;) self.canvas.configure(yscrollcommand=self.scrollbar_y.set, xscrollcommand=self.scrollbar_x.set) self.scrollbar_x.pack(side=&quot;bottom&quot;, fill=&quot;x&quot;, expand=False) self.scrollbar_y.pack(side=&quot;right&quot;, fill=&quot;y&quot;, expand=False) self.canvas.pack(side=&quot;left&quot;, fill=&quot;both&quot;, expand=True) self.scrollable_frame.bind(&quot;&lt;Configure&gt;&quot;, self.on_frame_configure) self.canvas.bind_all(&quot;&lt;Up&gt;&quot;, lambda event: self.canvas.yview_scroll(-1, &quot;units&quot;)) self.canvas.bind_all(&quot;&lt;Down&gt;&quot;, lambda event: self.canvas.yview_scroll(1, &quot;units&quot;)) self.canvas.bind_all(&quot;&lt;Left&gt;&quot;, lambda event: self.canvas.xview_scroll(-1, &quot;units&quot;)) self.canvas.bind_all(&quot;&lt;Right&gt;&quot;, lambda event: self.canvas.xview_scroll(1, &quot;units&quot;)) self.pack(side=&quot;top&quot;, expand=True, fill=&quot;both&quot;) def on_frame_configure(self, event): self.canvas.configure(scrollregion=self.canvas.bbox(&quot;all&quot;)) def update_scrollbars(self): self.update_idletasks() 
self.canvas.configure(scrollregion=self.canvas.bbox(&quot;all&quot;)) class StringWidget(tk.Frame): '''''' def __init__(self, master=None, **kwargs): '''''' tk.Frame.__init__(self, master, kwargs) entry = tk.Entry(self) entry.insert(0, f&quot;This is entry number = {id_generator()}&quot;) #entry.pack(pady=5, fill=&quot;both&quot;, padx=5, expand=True) entry.grid(row=0, column=0, padx=5, pady=5, sticky=&quot;ew&quot;) self.grid_columnconfigure(0, weight=1) # Make the entry stretchable self.pack(side=&quot;top&quot;, fill=&quot;x&quot;, expand=True, padx=5, pady=5) class Editor(tk.Frame): '''''' def __init__(self, master=None, **kwargs): '''''' tk.Frame.__init__(self, master, kwargs) self.combobox = ttk.Combobox(self, values=['10','25', '50'], state=&quot;readonly&quot;) self.combobox.current(0) self.combobox.bind('&lt;&lt;ComboboxSelected&gt;&gt;', self.change_callback) self.combobox.pack(side=&quot;top&quot;, padx=10, pady=5, anchor=tk.NW) self.widgets = [] self.change_callback() self.pack(fill=&quot;both&quot;, expand=True, padx=5, pady=5) def change_callback(self, event=None): # Delete existing widgets for widget in self.widgets: widget.destroy() # Clear widgets list self.widgets = [] # Create widgets for i in range(int(self.combobox.get())): sw = StringWidget(self) self.widgets.append(sw) # Example usage: root = tk.Tk() root.title(&quot;Scrollable Frame Example&quot;) sf = ScrollableFrame(root) #sf.pack(side=&quot;top&quot;, fill=&quot;both&quot;, expand=True) ae = Editor(sf.scrollable_frame) sf.update_scrollbars() #sf.scrollable_frame.pack(expand=True, fill=&quot;both&quot;) root.mainloop() </code></pre> <p>I'm creating a custom scrolled window widget in tkinter. The above code is working fine except two problems.</p> <ol> <li><p>The <em>StringWidget</em> is not expanded till the end of the window. 
To see the desired effect, uncomment the last two commented lines; however, the scrollbars then stop working.</p> </li> <li><p>The border-overriding problem, as shown in the image. When the contents of the window are longer than the window itself, you can see the problem.</p> </li> </ol>
<python><tkinter>
2023-12-17 13:14:57
0
947
Prashant
77,674,409
6,439,229
How to fully implement custom right mousebutton functionality for QCheckBox
<p>When a normal <code>QCheckbox</code> is in <code>PartiallyChecked</code> state and you left-click on it, it will change to <code>Checked</code> state. Right-clicking on a checkbox does nothing.</p> <p>I want to modify a checkbox so that right-clicking does the same as left-clicking except when clicking in <code>PartiallyChecked</code> state the state will change to <code>Unchecked</code> instead of <code>Checked</code>.</p> <p>Here's my attempt to achieve that:</p> <pre><code>class MyCheckBox(QCheckBox): def __init__(self): super().__init__() self.clicked.connect(lambda: self.setTristate(False)) def mousePressEvent(self, event: QMouseEvent): if event.button() == Qt.MouseButton.RightButton: event = QMouseEvent(event.type(), event.position(), Qt.MouseButton.LeftButton, event.buttons(), event.modifiers()) super().mousePressEvent(event) def mouseReleaseEvent(self, event: QMouseEvent): if event.button() == Qt.MouseButton.RightButton: event = QMouseEvent(event.type(), event.position(), Qt.MouseButton.LeftButton, event.buttons(), event.modifiers()) if self.checkState() == Qt.CheckState.PartiallyChecked: self.setCheckState(Qt.CheckState.Checked) super().mouseReleaseEvent(event) </code></pre> <p>This works as intended except when you press down on the checkbox &gt; move off the checkbox while holding the button down &gt; release the button. The custom part of the <code>mouseReleaseEvent</code> is being executed while the 'native' part (in the super() call) is not. For instance, the <code>clicked</code> signal is not emitted.</p> <p>How would you make it so that the custom part of the release-event is executed in the same way as the native part i.e. not when the release-event is off the checkbox.</p> <p>An observation that may be related:<br /> When you press-down on the a checkbox, the box gets shaded. 
When you move off the box (while holding down), that shading disappears, and it reappears when moving back onto the box.<br /> This behaviour is not replicated in my custom checkbox for the right button: the shading stays when moving off the box.</p>
<python><pyqt><pyqt6><qcheckbox>
2023-12-17 12:34:26
1
1,016
mahkitah
77,674,134
501,266
Where can I find a Python method to clearly and concisely initialize lists?
<p>Creating lists is a very common task for me, and when those lists need to be initialized, the Python code needed to do so feels overly complex and low-level. I would like to find a method something like this:</p> <pre class="lang-py prettyprint-override"><code>def create_list(size = 0, initial_value = None, initializer = None): &quot;&quot;&quot; Creates a list of the specified `size`. If `initial_value` is specified, then all elements will be assigned that value. If `initializer` is specified, it is assumed to be a callable (function or lambda) and will be called for each element, passing to it the array index. If both `initial_value` and `initializer` are specified, an error is raised. &quot;&quot;&quot; </code></pre> <p>How can I implement this, and am I duplicating a similar method elsewhere in the Python ecosystem?</p>
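A minimal sketch of the docstring above — it only wraps the two standard idioms, `[value] * n` and a list comprehension over `range`:

```python
def create_list(size=0, initial_value=None, initializer=None):
    """Create a list of `size` elements.

    At most one of `initial_value` or `initializer` may be given;
    `initializer` is called with each index to produce that element.
    """
    if initial_value is not None and initializer is not None:
        raise ValueError("specify either initial_value or initializer, not both")
    if initializer is not None:
        return [initializer(i) for i in range(size)]
    return [initial_value] * size
```

As far as the ecosystem goes, there is no single stdlib function with this exact signature; the comprehension and repetition idioms it wraps (plus `itertools.repeat` for lazy variants) are the usual answer.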
<python><list><function>
2023-12-17 11:01:04
2
5,010
Keith Bennett
77,674,033
969,724
How to pass a callback function from Rust to Python with pyo3
<p>Here is an example of what I would like to achieve; of course, this is just pseudocode that doesn't work.</p> <p>A Python class with a function that should be called from Rust with a callback parameter.</p> <pre><code>class Test: def test_fn(callback): a = 1 print(f&quot;value a from python {a}&quot;) callback(a) b = 2 print(f&quot;value b from python {b}&quot;) callback(b) </code></pre> <p>Rust code that just loads a class and calls a method, passing our callback function.</p> <pre><code>fn my_callback(value: u32) { println!(&quot;value from rust: {}&quot;, value); } pub fn cllback_test() -&gt; PyResult&lt;()&gt; { let py_test_path = include_str!(concat!( env!(&quot;CARGO_MANIFEST_DIR&quot;), &quot;/test.py&quot; )); Python::with_gil(|py| -&gt; PyResult&lt;()&gt; { let test_class = PyModule::from_code(py, py_test_path, &quot;&quot;, &quot;&quot;)? .getattr(&quot;Test&quot;)?; test_class.call_method1(&quot;test_fn&quot;, (my_callback))?; Ok(()) })?; Ok(()) } </code></pre> <p>The main idea is that I want to be able to emit values via a callback at various points from Python; I don't want to return them at the end of the function, but use a callback so I can emit multiple values at different times. Maybe this is not the correct approach, but I wanted to give an example of what I would like to achieve.</p> <p>Also, I am new to pyo3, so currently I am not using maturin, just the pyo3 library. If possible, I would like to stick to something similar to what I am showing; if that is not possible, I would appreciate any tips.</p>
<python><rust><pyo3>
2023-12-17 10:23:15
0
1,793
V.Rashkov
77,673,792
10,200,497
groupby streak of numbers and one row after it then check the first value of a column for each group and create a new column
<p>This is my dataframe:</p> <pre><code>df = pd.DataFrame( { 'a': [0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0], 'b': [1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, -1, 1, 1, -1, 1, 1] } ) </code></pre> <p>And this is the output that I want. I want to create column <code>c</code>:</p> <pre><code> a b c 0 0 1 0 1 0 -1 0 2 1 -1 0 3 1 -1 0 4 1 1 1 5 1 -1 1 6 0 1 0 7 0 1 0 8 0 -1 0 9 1 1 1 10 1 1 1 11 1 -1 1 12 0 1 0 13 0 1 0 14 1 -1 0 15 1 -1 1 16 0 1 0 </code></pre> <p>This is basically an extension to this <a href="https://stackoverflow.com/questions/77372530/groupby-streak-of-numbers-and-one-row-after-it">post</a>. The highlighted rows below summarize the way that this needs to be done.</p> <p><a href="https://i.sstatic.net/cnlwf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cnlwf.png" alt="enter image description here" /></a></p> <p>First of all, in column <code>a</code>, groups are created by each streak of 1s plus one row after the streak ends. The highlighted rows in column <code>a</code> are these groups. The solution for this step is <a href="https://stackoverflow.com/a/77372543/10200497">here</a>.</p> <p>Now what I need is to check column <code>b</code> for each group in <code>a</code>. Find the first value that is 1 in <code>b</code> for each group. Then any value that comes before it becomes 0. This is how column <code>c</code> is created.</p> <p>For example, for the first group in <code>a</code>, the first row where column <code>b</code> is 1 is row <code>4</code>. The previous values in that group become 0. The result is the first highlighted group in column <code>c</code>.</p> <p>Note that if, for a group, none of the values in <code>b</code> is 1, the corresponding group in <code>c</code> becomes all 0s.</p> <p>This is what I have tried but I can't find the complete solution:</p> <pre><code>g = df.loc[::-1, 'a'].eq(0).cumsum() x = df.groupby(g).filter(lambda x: x.b.iloc[0] == 1) </code></pre>
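For what it's worth, one vectorised sketch that reproduces the expected `c` on this sample: a group id that increments at each streak start, then a grouped `cummax` that flags everything from the first `b == 1` onwards, multiplied by `a` so rows before the flag (and the trailing `a == 0` row) become 0.

```python
import pandas as pd

df = pd.DataFrame(
    {
        'a': [0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0],
        'b': [1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, -1, 1, 1, -1, 1, 1],
    }
)

# new group id wherever a streak of 1s in `a` starts
gid = (df['a'].eq(1) & df['a'].shift(fill_value=0).ne(1)).cumsum()

# within each group, True from the first b == 1 onwards;
# multiplying by `a` zeroes everything before it and the trailing row
df['c'] = df['a'] * df['b'].eq(1).groupby(gid).cummax()
```

This is only verified against the sample above; edge cases (e.g. a streak ending at the last row) would deserve their own tests.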
<python><pandas><dataframe>
2023-12-17 08:52:32
2
2,679
AmirX
77,673,595
11,831,138
Problems serving staticfiles(css and javascripts) on django project using digital ocean spaces
<p>I built a project using Django 4.2 and configured django-storages and DigitalOcean Spaces on it. The project is also hosted on DigitalOcean using Droplets. The problem I'm having now is that I'm able to run collectstatic and all files are uploaded to the DigitalOcean Space, and the images display, but the CSS and JavaScript are not served. I've been on this for over 2 weeks, checking many suggestions online, but none of them solved my problem. Below is my current configuration for Django 4.2.</p> <p>Please kindly help if you come across this.</p> <p>This is my current configuration. My media folder resides in my static folder as well.</p> <pre><code>AWS_ACCESS_KEY_ID = env('aws_s3_access_key') AWS_SECRET_ACCESS_KEY = env('aws_s3_secret_key') AWS_S3_REGION_NAME = env('aws_s3_region_name') AWS_STORAGE_BUCKET_NAME = env('aws_storage_bucket_name') AWS_S3_ENDPOINT_URL = &quot;https://mogdynamics.nyc3.digitaloceanspaces.com&quot; AWS_STATIC_LOCATION = 'static' AWS_DEFAULT_ACL = &quot;public-read&quot; AWS_S3_OBJECT_PARAMETERS = {'CacheControl': 'max-age=86400'} # DIGITIALOCEAN SPACES CONFIGURATION STORAGES = { &quot;default&quot;: { &quot;BACKEND&quot;: &quot;storages.backends.s3.S3Storage&quot;, &quot;OPTIONS&quot;: { &quot;bucket_name&quot;: AWS_STORAGE_BUCKET_NAME, &quot;region_name&quot;: AWS_S3_REGION_NAME, &quot;default_acl&quot;: AWS_DEFAULT_ACL }, }, &quot;staticfiles&quot;: { &quot;BACKEND&quot;: &quot;storages.backends.s3.S3Storage&quot;, &quot;OPTIONS&quot;: { &quot;bucket_name&quot;: AWS_STORAGE_BUCKET_NAME, &quot;region_name&quot;: AWS_S3_REGION_NAME, &quot;default_acl&quot;: AWS_DEFAULT_ACL }, }, } STATICFILES_DIRS = [os.path.join(BASE_DIR, 'static'),] STATIC_URL = f'https://{AWS_S3_ENDPOINT_URL}/{AWS_STATIC_LOCATION}/' </code></pre> <p>When I viewed the page source and clicked on the CSS link, I got the error below:</p> <pre><code>This XML file does not appear to have any style information associated with it. The document tree is shown below.
&lt;Error&gt; &lt;Code&gt;SignatureDoesNotMatch&lt;/Code&gt; &lt;Message/&gt; &lt;RequestId&gt;tx000003614bd933f0201e8-00657ea2de-4e1a3-nyc3d&lt;/RequestId&gt; &lt;HostId&gt;4e1a3-nyc3d-nyc3-zg04&lt;/HostId&gt; &lt;/Error&gt; </code></pre>
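Not a confirmed fix, but a settings sketch worth trying: `SignatureDoesNotMatch` suggests the CSS/JS URLs are being signed, and signed URLs fail when the key, region, or endpoint used to sign differs from the one serving the object. Note also that the `STATIC_URL` in the question embeds `AWS_S3_ENDPOINT_URL`, which already contains `https://`, so the generated prefix has a doubled scheme. `AWS_QUERYSTRING_AUTH` and `AWS_S3_CUSTOM_DOMAIN` are real django-storages settings; the domain below is just the question's own bucket, used for illustration:

```python
# Hypothetical django-storages settings sketch: serve static files as
# plain public URLs instead of signed ones (objects must be public-read).
AWS_QUERYSTRING_AUTH = False  # no signature query string on generated URLs
AWS_S3_CUSTOM_DOMAIN = "mogdynamics.nyc3.digitaloceanspaces.com"  # no scheme here
STATIC_URL = f"https://{AWS_S3_CUSTOM_DOMAIN}/static/"
```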
<python><django><digital-ocean><digital-ocean-spaces>
2023-12-17 07:21:17
0
349
Abayomi Olowu
77,673,353
9,357,484
How to use adapter transformers with a Huggingface Pipeline
<p>I tried to run the model &quot;AdapterHub/bert-base-uncased-pf-conll2003&quot; (<a href="https://huggingface.co/AdapterHub/bert-base-uncased-pf-conll2003" rel="nofollow noreferrer">Model description here</a>) for token classification in NLP.</p> <p>First I tried to install the adapter transformers</p> <pre><code>pip install -U adapter-transformers </code></pre> <p>The output of the above command was</p> <pre><code>Collecting adapter-transformers [... see edit history for skipped lines ...] Installing collected packages: tokenizers, huggingface-hub, adapter-transformers Attempting uninstall: tokenizers Found existing installation: tokenizers 0.15.0 Uninstalling tokenizers-0.15.0: Successfully uninstalled tokenizers-0.15.0 Attempting uninstall: huggingface-hub Found existing installation: huggingface-hub 0.19.4 Uninstalling huggingface-hub-0.19.4: Successfully uninstalled huggingface-hub-0.19.4 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. transformers 4.35.2 requires huggingface-hub&lt;1.0,&gt;=0.16.4, but you have huggingface-hub 0.13.4 which is incompatible. transformers 4.35.2 requires tokenizers&lt;0.19,&gt;=0.14, but you have tokenizers 0.13.3 which is incompatible. Successfully installed adapter-transformers-3.2.1.post0 huggingface-hub-0.13.4 tokenizers-0.13.3 </code></pre> <hr /> <p>I tried to load the model like this into the pipeline:</p> <pre class="lang-py prettyprint-override"><code>from transformers import AutoModelWithHeads from transformers import pipeline token_classification = pipeline(&quot;token-classification&quot;, model = &quot;AdapterHub/bert-base-uncased-pf-conll2003&quot;) res = token_classification(&quot;Take out the trash bag from the bin and replace it.&quot;) print(res) </code></pre> <p>I received the errors</p> <pre><code>EntryNotFoundError: 404 Client Error. 
(Request ID: Root=1-657e793c-0ce0c1936aff5e5741676650) Entry Not Found for url: https://huggingface.co/AdapterHub/bert-base-uncased-pf-conll2003/resolve/main/config.json. During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) &lt;ipython-input-3-030dfe0e128d&gt; in &lt;cell line: 3&gt;() 1 from transformers import AutoModelWithHeads 2 from transformers import pipeline ----&gt; 3 token_classification = pipeline(&quot;token-classification&quot;, model = &quot;AdapterHub/bert-base-uncased-pf-conll2003&quot;) 4 res = token_classification(&quot;Take out the trash bag from the bin and replace it.&quot;) 5 print(res) /usr/local/lib/python3.10/dist-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs) 673 hub_kwargs[&quot;_commit_hash&quot;] = config._commit_hash 674 elif config is None and isinstance(model, str): --&gt; 675 config = AutoConfig.from_pretrained(model, _from_pipeline=task, **hub_kwargs, **model_kwargs) 676 hub_kwargs[&quot;_commit_hash&quot;] = config._commit_hash 677 [... see edit history for skipped lines ...] 
/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py in _get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 624 try: 625 # Load from local folder or from cache or download from model Hub and cache --&gt; 626 resolved_config_file = cached_file( 627 pretrained_model_name_or_path, 628 configuration_file, /usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash) 452 if revision is None: 453 revision = &quot;main&quot; --&gt; 454 raise EnvironmentError( 455 f&quot;{path_or_repo_id} does not appear to have a file named {full_filename}. Checkout &quot; 456 f&quot;'https://huggingface.co/{path_or_repo_id}/{revision}' for available files.&quot; OSError: AdapterHub/bert-base-uncased-pf-conll2003 does not appear to have a file named config.json. Checkout 'https://huggingface.co/AdapterHub/bert-base-uncased-pf-conll2003/main' for available files. </code></pre> <p>How do I correctly load this adapter model?</p>
<python><machine-learning><nlp><huggingface-transformers>
2023-12-17 04:49:28
1
3,446
Encipher
77,673,233
1,131,293
What is more pythonic: attribute querying or subclassing?
<p>I want to create a class that can take either a float for epsilion or an epsilion object that has some decay method.</p> <pre class="lang-py prettyprint-override"><code>class DoSomething: def __init__(self, epsilion): self.epsilion = epsilion def something(self): # other code # then call decay decay(self.epsilion) ds1 = DoSomething(0.2) ds1.something() ds2 = DoSomething(DecayingEpsilion(0.2)) ds2.something() </code></pre> <p>Which is more pythonic?</p> <p>Subclassing like this</p> <pre class="lang-py prettyprint-override"><code>class EpsilionWithDecay(ABC): @abstractmethod def decay(self): ... def decay(ep): if isinstance(ep, EpsilionWithDecay): ep.decay() </code></pre> <p>or checking the variable for a callable method</p> <pre class="lang-py prettyprint-override"><code>def decay(ep): if isinstance(ep, object) and hasattr(ep, 'decay') and callable(ep.decay): ep.decay() </code></pre>
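A third option worth weighing (a sketch, not from the original post): a runtime-checkable `typing.Protocol` expresses "anything with a `decay()` method" without forcing callers to subclass, and reads less clunkily than the `hasattr`/`callable` chain — note that, like `hasattr`, a runtime Protocol `isinstance` check only verifies the method exists, not its signature.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class SupportsDecay(Protocol):
    def decay(self) -> None: ...

def decay(ep) -> None:
    # plain floats simply pass through; anything with decay() gets decayed
    if isinstance(ep, SupportsDecay):
        ep.decay()

class DecayingEpsilion:  # no subclassing of an ABC required
    def __init__(self, value: float) -> None:
        self.value = value
    def decay(self) -> None:
        self.value *= 0.9

eps = DecayingEpsilion(0.2)
decay(0.2)   # no-op for a plain float
decay(eps)   # calls eps.decay()
```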
<python>
2023-12-17 03:33:12
2
543
jcjustesen
77,673,182
6,676,101
What Python script will find the locations of every backslash inside of an HTML tag and replace the backslash with a forward slash?
<p>Input:</p> <pre class="lang-html prettyprint-override"><code>&lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;This is a title&lt;/title&gt; &lt;\head&gt; &lt;body&gt; &lt;div&gt; &lt;p&gt;H/e/l/l/o \a\b\c\d\e\f\gw/o/r/l/d!&lt;/p&gt; &lt;/div&gt; &lt;\body&gt; &lt;\html&gt; </code></pre> <p>Output:</p> <pre class="lang-html prettyprint-override"><code>&lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;This is a title&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;div&gt; &lt;p&gt;H/e/l/l/o \a\b\c\d\e\f\gw/o/r/l/d!&lt;/p&gt; &lt;/div&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>We do not want to replace every single backslash <code>\</code> with a forward slash <code>/</code>. Some of the backslashes are <em>intended</em> to be backslashes. We wish only to replace a backslash <code>\</code> with a forward slash <code>/</code> if the backslash <code>\</code> is inside of a misspelled HTML tag such as <code>&lt;\kbd&gt;</code>.</p>
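A hedged sketch of one regex approach: only a backslash that immediately follows `<` and is followed by a tag name gets rewritten, so backslashes in text content survive. (This handles simple tags like `<\kbd>`; tag names with hyphens or attributes would need a broader pattern.)

```python
import re

html = r"<html> <p>H/e/l/l/o \a\b\c w/o/r/l/d!</p> <\body> <\html>"

# rewrite <\tag> to </tag>; backslashes elsewhere are untouched
fixed = re.sub(r"<\\(\w+)>", r"</\1>", html)
print(fixed)  # <html> <p>H/e/l/l/o \a\b\c w/o/r/l/d!</p> </body> </html>
```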
<python><html><regex>
2023-12-17 03:02:16
2
4,700
Toothpick Anemone
77,672,938
826
pip unusable in Python 3.12 virtual environment
<p>I've created a blank, brand-new Python project using PyCharm. From the command line, going into that project correctly activates the virtual environment.</p> <pre><code>$ [~/Developer] cd pythonProject Switching pipenv: pythonProject-IbA-tQW [🐍Python 3.12.0] $ [~/Developer/pythonProject] </code></pre> <p>In that virtual environment, <code>python</code> and <code>pipenv</code> both work:</p> <pre><code>$ [~/Developer/pythonProject] python --version Python 3.12.0 $ [~/Developer/pythonProject] pipenv --version pipenv, version 2022.6.7 $ [~/Developer/pythonProject] </code></pre> <p>However, <code>pip</code> refuses to run, which makes it impossible to install packages.</p> <pre><code>$ [~/Developer/pythonProject] pip install icecream Traceback (most recent call last): File &quot;/Users/andrew/.local/share/virtualenvs/pythonProject-IbA-tQW-/bin/pip&quot;, line 5, in &lt;module&gt; from pip._internal.cli.main import main ... File &quot;/Users/andrew/.local/share/virtualenvs/pythonProject-IbA-tQW-/lib/python3.12/site-packages/pip/_internal/locations/_distutils.py&quot;, line 9, in &lt;module&gt; from distutils.cmd import Command as DistutilsCommand ModuleNotFoundError: No module named 'distutils' $ [~/Developer/pythonProject] </code></pre> <p>Maybe it's picking up the wrong <code>pip</code>. Let's see what version it is…</p> <pre><code>$ [~/Developer/pythonProject] pip --version Traceback (most recent call last): File &quot;/Users/andrew/.local/share/virtualenvs/pythonProject-IbA-tQW-/bin/pip&quot;, line 5, in &lt;module&gt; from pip._internal.cli.main import main ... 
File &quot;/Users/andrew/.local/share/virtualenvs/pythonProject-IbA-tQW-/lib/python3.12/site-packages/pip/_internal/locations/_distutils.py&quot;, line 9, in &lt;module&gt; from distutils.cmd import Command as DistutilsCommand ModuleNotFoundError: No module named 'distutils' $ [~/Developer/pythonProject] </code></pre> <p>So I'm not actually trying to do anything with <code>pip</code> — just see what version it is. Apparently I can't do that.</p> <p>Outside the virtual environment, it's fine:</p> <pre><code>$ [~/Developer] pip --version pip 23.3.1 from /usr/local/lib/python3.11/site-packages/pip (python 3.11) $ [~/Developer] </code></pre> <p>I'm really starting to be quite bothered by the difficulty in Python virtual environments. It's a bit of a trainwreck, to be honest. Every time I need to upgrade a Python version it's a major issue.</p> <p>What do I have to do to install a package in a Python 3.12 virtual environment when <code>pip</code> refuses to work?</p> <hr /> <p>Edit: Poetry doesn't help, as it defers to <code>pip</code> for package removal. Partial poetry output:</p> <pre><code>The following error occurred when trying to handle this error: EnvCommandError Command ['/Users/andrew/.local/share/virtualenvs/pythonProject-IbA-tQW-/bin/python', '-m', 'pip', 'uninstall', 'icecream', '-y'] errored with the following return code 1 Output: Traceback (most recent call last): File &quot;&lt;frozen runpy&gt;&quot;, line 198, in _run_module_as_main ... File &quot;/Users/andrew/.local/share/virtualenvs/pythonProject-IbA-tQW-/lib/python3.12/site-packages/pip/_internal/locations/_distutils.py&quot;, line 9, in &lt;module&gt; from distutils.cmd import Command as DistutilsCommand ModuleNotFoundError: No module named 'distutils' </code></pre>
<python><python-3.x><pip><python-3.12>
2023-12-17 00:11:25
0
12,020
Andrew
77,672,931
9,135,359
How to get KerasRegressor to work? I keep getting an AttributeError
<p>I'm trying to optimize hyperparameters for a simple sequential neural-network using the KerasRegressor as part of a learning exercise. This is my code:</p> <pre><code>from sklearn.model_selection import GridSearchCV, RandomizedSearchCV from scipy.stats import randint as sp_randint from keras.wrappers.scikit_learn import KerasRegressor from sklearn.metrics import mean_squared_error, make_scorer '''CREATE THE MODEL''' def design_model(features): model = Sequential(name = &quot;My_Sequential_Model&quot;) model.add(InputLayer(input_shape=(features.shape[1],))) model.add(Dense(128, activation='relu')) model.add(Dense(1)) opt = Adam(learning_rate=0.01) model.compile(loss='mse', metrics=['mae'], optimizer=opt) return model '''TEST/PLOT THE MODEL: GRID SEARCH''' def do_grid_search(): batch_size = [6, 64] epochs = [10, 30, 61] model = KerasRegressor(build_fn=design_model, features=features_train) # KerasRegressor expects a function and not the model param_grid = dict(batch_size=batch_size, epochs=epochs) grid = GridSearchCV(estimator = model, param_grid=param_grid, scoring = make_scorer(mean_squared_error, greater_is_better=False),return_train_score = True) grid_result = grid.fit(features_train, labels_train, verbose = 0) print(grid_result) print(&quot;Best: %f using %s&quot; % (grid_result.best_score_, grid_result.best_params_)) means = grid_result.cv_results_['mean_test_score'] stds = grid_result.cv_results_['std_test_score'] params = grid_result.cv_results_['params'] for mean, stdev, param in zip(means, stds, params): print(&quot;%f (%f) with: %r&quot; % (mean, stdev, param)) print(&quot;Traininig&quot;) means = grid_result.cv_results_['mean_train_score'] stds = grid_result.cv_results_['std_train_score'] for mean, stdev, param in zip(means, stds, params): print(&quot;%f (%f) with: %r&quot; % (mean, stdev, param)) print(&quot;-------------- GRID SEARCH --------------------&quot;) do_grid_search() </code></pre> <p>But I keep getting the following error:</p> 
<pre><code>Traceback (most recent call last): File &quot;C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\sklearn\model_selection\_validation.py&quot;, line 732, in _fit_and_score estimator.fit(X_train, y_train, **fit_params) File &quot;C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\wrappers\scikit_learn.py&quot;, line 167, in fit AttributeError: module 'keras.losses' has no attribute 'is_categorical_crossentropy' </code></pre> <p>What do I do? I am using Tensorflow 2.15 and Keras 2.15.</p>
<python><tensorflow><keras>
2023-12-17 00:08:41
1
844
Code Monkey
77,672,825
12,853,184
SQLAlchemy relationship or association_proxy through several models(tables)
<p>I have several tables, connected as a chain with one-to-many relationships (names are changed to emphasise that there is a chain):</p> <p>Country &lt;- City &lt;- Street &lt;- House</p> <p>I need an attribute in the House model to access its one Country.</p> <p>Street is connected to House using a relationship, as City is to Street and Country to City. There is an absolutely clear chain. I already had a problem connecting House to City, but solved it using association_proxy:</p> <pre><code>class House(db.Model): ... street= db.relationship('Street', foreign_keys='House.street_id', backref='streets') city = association_proxy('street', 'city') class Street(db.Model): ... city = db.relationship('City', foreign_keys='Street.city_id', backref='streets') class City(db.Model): ... country = db.relationship('Country', foreign_keys='City.country_id', backref='cities') </code></pre> <p>But none of this helped me to connect the Country to House. I need to filter by Country, so @property does not help. Yes, it seems that filtering by association_proxy or relationship is not implemented in SQLAlchemy (I got an exception), but I already solved that too by creating a special method to filter. association_proxy seems to use only two model names: the first as an immediate target and the second as the final target. Or is there a way to use more?</p> <p>I also did find the <a href="https://docs.sqlalchemy.org/en/20/orm/join_conditions.html#composite-secondary-joins" rel="nofollow noreferrer">&quot;Composite “Secondary” Joins&quot;</a> part of the Relationships docs, but it seems to be used in more difficult cases (I don't get why there should be an a_id in the C table; I don't have a House_id in City and don't find it necessary). Or did I miss something?</p> <p>So now <strong>I need a way to get Country from House</strong> using one of these methods. Is there a way to use several models as a proxy, or a straightforward relationship through them?
I did not find a good explanation in the docs. Is it possible?</p>
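For the instance-level attribute at least, association proxies can be chained through an intermediate proxy: Street proxies country through city, and House proxies country through Street's proxy. A sketch in plain SQLAlchemy rather than Flask-SQLAlchemy (filtering by Country would still need an explicit multi-join query, e.g. `query(House).join(Street).join(City).join(Country).filter(...)`):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Country(Base):
    __tablename__ = 'country'
    id = Column(Integer, primary_key=True)

class City(Base):
    __tablename__ = 'city'
    id = Column(Integer, primary_key=True)
    country_id = Column(ForeignKey('country.id'))
    country = relationship(Country)

class Street(Base):
    __tablename__ = 'street'
    id = Column(Integer, primary_key=True)
    city_id = Column(ForeignKey('city.id'))
    city = relationship(City)
    country = association_proxy('city', 'country')    # one hop: street -> city -> country

class House(Base):
    __tablename__ = 'house'
    id = Column(Integer, primary_key=True)
    street_id = Column(ForeignKey('street.id'))
    street = relationship(Street)
    city = association_proxy('street', 'city')
    country = association_proxy('street', 'country')  # proxy through Street's proxy

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

session = Session(engine)
country = Country()
house = House(street=Street(city=City(country=country)))
session.add(house)
session.flush()
print(house.country is country)
```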
<python><flask-sqlalchemy><relationship>
2023-12-16 23:12:12
2
674
Anton Makarov
77,672,702
46,503
Flask app: how to handle multiple requests that should be responded in no time
<p>My Flask application is built around a third-party API, and I have hundreds of thousands of requests. The problem is that their API requires responses to be sent very fast, whereas I need to do many things: check the request parameters, verify the webhook request, search for a user in our database, etc., which takes time. Even if it looks pretty fast, it usually causes a timeout error for that API.</p> <p>So, to overcome this requirement, I create a new thread to handle the request and send an immediate response:</p> <pre><code>@public_route('/some/api')
class User_request(Resource):
    def post(self):
        request_headers = request.headers
        data = request_payload()
        event = data.get('event')
        content = get_field_value(data, 'data.content')
        handle_request(data, event, content)
        return jsonify('ok')

###

def handle_request(data, event, content):
    app = current_app._get_current_object()
    thr = Thread(target=_handle_request_in_thread, args=[app, data, event, content])
    thr.start()

def _handle_request_in_thread(app, data, event, content):
    # Here is where I actually handle the request
    pass
</code></pre> <p>As a result, I don't get the timeout error from that API anymore, but I often get an error on my side: &quot;RuntimeError: can't start new thread&quot;. I was thinking of using a queue in some way, but I'm not sure whether I have to create a new thread for every new task in the queue, and whether I should use the Python Queue or some library like RQ.</p>
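One way to avoid unbounded thread creation — the usual cause of &quot;RuntimeError: can't start new thread&quot; under load — is a single bounded `ThreadPoolExecutor` created at startup: excess tasks wait in the executor's internal queue instead of spawning new OS threads. A sketch (the handler names are placeholders for the question's functions):

```python
from concurrent.futures import ThreadPoolExecutor

# One shared, bounded pool instead of a new Thread per request; work beyond
# max_workers queues inside the executor rather than exhausting OS threads.
executor = ThreadPoolExecutor(max_workers=8)

def handle_request(app, data, event, content):
    # submit() returns immediately; the Future can be ignored for
    # fire-and-forget use, or kept to inspect errors later.
    return executor.submit(_handle_request_in_thread, app, data, event, content)

def _handle_request_in_thread(app, data, event, content):
    # Placeholder for the real processing (webhook verification, DB lookups, ...)
    return (event, content)
```

For work that must survive process restarts (relevant on Elastic Beanstalk, where instances recycle), an external queue such as RQ or Celery is the sturdier choice; the in-process executor only fixes the thread-exhaustion error.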
<python><amazon-web-services><multithreading><flask><amazon-elastic-beanstalk>
2023-12-16 22:20:00
0
5,287
mimic
77,672,656
6,237,093
How to update pip in all environments created by pyenv
<p>Python warns me to update the <code>pip</code> version when I try to upgrade <code>jupyterlab</code> in my <code>jupyter3</code> environment created using <code>pyenv</code>. I followed the instructions in the output log:</p> <pre><code>[notice] A new release of pip is available: 23.2.1 -&gt; 23.3.1
[notice] To update, run: python3.12 -m pip install --upgrade pip
</code></pre> <p>When I run that outside of the environments, it doesn't update pip within the <code>jupyter3</code> environment. Following the recommendation from <a href="https://stackoverflow.com/questions/49868654/how-to-update-pip-version-installed-by-pyenv">How to update pip version installed by pyenv</a> (see the comment from @JeffreyGoldberg on the answer provided by @TwistedSim), it is suggested to run instead:</p> <pre><code>python -m pip install --upgrade pip
</code></pre> <p>but <code>pip</code> does not get updated inside the environment:</p> <pre><code>dleal:~$ pyenv versions
  system
* 3.12.1 (set by /Users/dleal/.pyenv/version)
  3.12.1/envs/jupyter3
* jupyter3 --&gt; /Users/dleal/.pyenv/versions/3.12.1/envs/jupyter3 (set by /Users/dleal/.pyenv/version)
</code></pre> <p>Now:</p> <pre><code>dleal:~$ pip --version
pip 23.3.1 from /Users/dleal/.pyenv/versions/3.12.1/lib/python3.12/site-packages/pip (python 3.12)
</code></pre> <p>but inside the environment:</p> <pre><code>dleal:~$ pyenv activate jupyter3
pyenv-virtualenv: prompt changing will be removed from future release. configure `export PYENV_VIRTUALENV_DISABLE_PROMPT=1' to simulate the behavior.
(jupyter3) dleal:~$ pip --version
pip 23.2.1 from /Users/dleal/.pyenv/versions/3.12.1/envs/jupyter3/lib/python3.12/site-packages/pip (python 3.12)
</code></pre> <p><strong>Notes</strong>:</p> <ul> <li>I am trying to set up a Python workspace on my new MacBook Air 2023 M2, following the steps in <a href="https://medium.com/@henriquebastos/the-definitive-guide-to-setup-my-python-workspace-628d68552e14" rel="nofollow noreferrer">The definitive guide to setting up my Python workspace</a> by Henrique Bastos.</li> <li>From this SO question: <a href="https://stackoverflow.com/questions/68852128/how-can-i-manually-update-pip-on-all-pyenv-virtualenvs">How can I manually update pip on all pyenv virtualenvs?</a>, the answer suggested by @phd is to do a <code>for</code>-loop, but I guess there should be a better way to achieve it.</li> </ul>
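There is no built-in pyenv command for this, so a scripted loop over `pyenv versions --bare` is the usual answer. A sketch (it assumes `pyenv` is on PATH and that setting `PYENV_VERSION` makes `pyenv exec` pick that environment's interpreter, which is standard pyenv behaviour):

```python
import os
import subprocess

def upgrade_commands(versions):
    """One `python -m pip install --upgrade pip` invocation per env.

    Running pip as a module of each environment's own python upgrades pip
    *inside* that env, not in whatever interpreter happens to be global.
    """
    return [(v, ['pyenv', 'exec', 'python', '-m', 'pip', 'install', '--upgrade', 'pip'])
            for v in versions]

def upgrade_pip_everywhere():
    # --bare prints one version/virtualenv name per line, no decorations
    listing = subprocess.run(['pyenv', 'versions', '--bare'],
                             capture_output=True, text=True, check=True).stdout
    for version, cmd in upgrade_commands(v for v in listing.splitlines() if v.strip()):
        # PYENV_VERSION overrides the selected env for this one invocation
        subprocess.run(cmd, check=True, env={**os.environ, 'PYENV_VERSION': version})
```

The plain-pip note in the log appears because each virtualenv carries its own copy of pip, so upgrading one env (or the base 3.12.1 install) never touches the others — hence the loop.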
<python><python-3.x><pip><pyenv>
2023-12-16 22:02:39
0
6,798
David Leal
77,672,632
7,930,118
RuntimeError When Modifying Node Count in Networkx Graph for Graph Neural Network Training
<p>I'm encountering a runtime error while manipulating the node count in a Networkx-generated graph passed through a Graph Neural Network (GNN). Here's my GNN code, which seems independent of the graph's node count:</p> <pre><code>class GCN(nn.Module): def __init__(self, input_size, hidden_size, num_classes): super(GCN, self).__init__() self.layer1 = GCNConv(input_size, hidden_size) self.layer2 = GCNConv(hidden_size, hidden_size) self.layer3 = GCNConv(hidden_size, num_classes) self.softmax = nn.Softmax(dim=0) def forward(self, node_features, edge_index): output = self.layer1(node_features, edge_index) output = torch.relu(output) output = self.layer2(output, edge_index) output = torch.relu(output) output = self.layer3(output, edge_index) output = self.softmax(output) return output </code></pre> <p>This is how I am creating the graph and removing a node from it.</p> <pre><code>def generate_graph(num_nodes): # generate weighted and connected graph Graph = nx.gnm_random_graph(num_nodes, random.randint(num_nodes, num_nodes*2), seed=42) while not nx.is_connected(Graph): Graph = nx.gnm_random_graph(num_nodes, random.randint(num_nodes, num_nodes*2), seed=42) # add features to nodes # node 0 will be the source node # each node will have a feature of 3 # first feature will represent the node's bias (a random value between 0 and 1) # second feature will represent if the node is a source node (0 or 1, 1 if the node is the source node) # third feature will represent the node's degree for node in Graph.nodes: Graph.nodes[node]['feature'] = [random.random(), 1 if node == 0 else 0, Graph.degree[node]] node_features = Graph.nodes.data('feature') node_features = torch.tensor([node_feature[1] for node_feature in node_features]) edge_index = torch.tensor(list(Graph.edges)).t().contiguous() return Graph, node_features, edge_index def remove_node_from_graph(Graph, node): # remove the node from the graph Graph.remove_node(node) # update the features of the nodes for node in Graph.nodes: 
Graph.nodes[node]['feature'][2] = Graph.degree[node] node_features = Graph.nodes.data('feature') node_features = torch.tensor([node_feature[1] for node_feature in node_features]) edge_index = torch.tensor(list(Graph.edges)).t().contiguous() return Graph, node_features, edge_index </code></pre> <p>Training my GCN with a 10-node graph succeeds, but when I remove one node and pass the modified graph through the GCN, I encounter the error:</p> <blockquote> <p>RuntimeError: index 9 is out of bounds for dimension 0 with size 9</p> </blockquote> <p>Surprisingly, the process works fine when I generate a new 9-node graph after the initial training step. I'm struggling to pinpoint where I might be making a mistake. Any insights would be greatly appreciated!</p>
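The symptom fits stale node labels rather than the GCN itself: after `Graph.remove_node(5)` on a 10-node graph, the remaining labels still include 9, but the feature matrix now has only 9 rows (indices 0–8), so any surviving edge that references label 9 trips the bounds check. A sketch of the usual fix — relabel to consecutive integers before rebuilding `edge_index` (shown with plain lists; the torch tensor conversion stays as in the question):

```python
import networkx as nx

def remove_node_and_relabel(G, node):
    G.remove_node(node)
    # After removal the labels have a gap (e.g. 0..9 minus `node`), but the
    # GNN indexes feature rows 0..N-1, so map labels back to 0..N-1.
    G = nx.convert_node_labels_to_integers(G, ordering='sorted')
    for n in G.nodes:
        G.nodes[n]['feature'][2] = G.degree[n]  # refresh the degree feature
    node_features = [G.nodes[n]['feature'] for n in G.nodes]
    edge_list = list(G.edges)  # -> torch.tensor(edge_list).t().contiguous()
    return G, node_features, edge_list
```

This also explains why generating a fresh 9-node graph works: `gnm_random_graph(9, ...)` labels its nodes 0–8 from the start.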
<python><networkx><graph-neural-network>
2023-12-16 21:49:54
0
529
Protik Nag
77,672,545
3,142,695
Cannot run python3 script, which uses another package
<p>I am trying to run a simple python script on my macOS terminal. The scripts starts with</p> <pre><code>import wfdb #WaveForm-Database package. A library of tools for reading, writing, and processing WFDB signals and annotations. import pandas as pd import numpy as np import glob </code></pre> <p>But it fails with the error <code>No module named 'wfdb'</code>which doesn't make sense to me as I just installed the package:</p> <pre><code>user in ~/ecg &gt; pip3 install wfdb Requirement already satisfied: wfdb in /usr/local/lib/python3.11/site-packages (4.1.2) Requirement already satisfied: SoundFile&gt;=0.10.0 in /usr/local/lib/python3.11/site-packages (from wfdb) (0.12.1) Requirement already satisfied: matplotlib&gt;=3.2.2 in /usr/local/lib/python3.11/site-packages (from wfdb) (3.8.2) Requirement already satisfied: numpy&gt;=1.10.1 in /usr/local/lib/python3.11/site-packages (from wfdb) (1.26.2) Requirement already satisfied: pandas&gt;=1.3.0 in /usr/local/lib/python3.11/site-packages (from wfdb) (2.1.4) Requirement already satisfied: requests&gt;=2.8.1 in /usr/local/lib/python3.11/site-packages (from wfdb) (2.31.0) Requirement already satisfied: scipy&gt;=1.0.0 in /usr/local/lib/python3.11/site-packages (from wfdb) (1.11.4) Requirement already satisfied: contourpy&gt;=1.0.1 in /usr/local/lib/python3.11/site-packages (from matplotlib&gt;=3.2.2-&gt;wfdb) (1.2.0) Requirement already satisfied: cycler&gt;=0.10 in /usr/local/lib/python3.11/site-packages (from matplotlib&gt;=3.2.2-&gt;wfdb) (0.12.1) Requirement already satisfied: fonttools&gt;=4.22.0 in /usr/local/lib/python3.11/site-packages (from matplotlib&gt;=3.2.2-&gt;wfdb) (4.46.0) Requirement already satisfied: kiwisolver&gt;=1.3.1 in /usr/local/lib/python3.11/site-packages (from matplotlib&gt;=3.2.2-&gt;wfdb) (1.4.5) Requirement already satisfied: packaging&gt;=20.0 in /usr/local/lib/python3.11/site-packages (from matplotlib&gt;=3.2.2-&gt;wfdb) (23.2) Requirement already satisfied: pillow&gt;=8 in 
/usr/local/lib/python3.11/site-packages (from matplotlib&gt;=3.2.2-&gt;wfdb) (10.1.0) Requirement already satisfied: pyparsing&gt;=2.3.1 in /usr/local/lib/python3.11/site-packages (from matplotlib&gt;=3.2.2-&gt;wfdb) (3.1.1) Requirement already satisfied: python-dateutil&gt;=2.7 in /usr/local/lib/python3.11/site-packages (from matplotlib&gt;=3.2.2-&gt;wfdb) (2.8.2) Requirement already satisfied: pytz&gt;=2020.1 in /usr/local/lib/python3.11/site-packages (from pandas&gt;=1.3.0-&gt;wfdb) (2023.3.post1) Requirement already satisfied: tzdata&gt;=2022.1 in /usr/local/lib/python3.11/site-packages (from pandas&gt;=1.3.0-&gt;wfdb) (2023.3) Requirement already satisfied: charset-normalizer&lt;4,&gt;=2 in /usr/local/lib/python3.11/site-packages (from requests&gt;=2.8.1-&gt;wfdb) (3.3.2) Requirement already satisfied: idna&lt;4,&gt;=2.5 in /usr/local/lib/python3.11/site-packages (from requests&gt;=2.8.1-&gt;wfdb) (3.6) Requirement already satisfied: urllib3&lt;3,&gt;=1.21.1 in /usr/local/lib/python3.11/site-packages (from requests&gt;=2.8.1-&gt;wfdb) (2.1.0) Requirement already satisfied: certifi&gt;=2017.4.17 in /usr/local/lib/python3.11/site-packages (from requests&gt;=2.8.1-&gt;wfdb) (2023.11.17) Requirement already satisfied: cffi&gt;=1.0 in /usr/local/lib/python3.11/site-packages (from SoundFile&gt;=0.10.0-&gt;wfdb) (1.16.0) Requirement already satisfied: pycparser in /usr/local/lib/python3.11/site-packages (from cffi&gt;=1.0-&gt;SoundFile&gt;=0.10.0-&gt;wfdb) (2.21) Requirement already satisfied: six&gt;=1.5 in /usr/local/lib/python3.11/site-packages (from python-dateutil&gt;=2.7-&gt;matplotlib&gt;=3.2.2-&gt;wfdb) (1.16.0) user in ~/ecg &gt; python3 convert.py Traceback (most recent call last): File &quot;/Users/user/ecg/convert.py&quot;, line 1, in &lt;module&gt; import wfdb #WaveForm-Database package. A library of tools for reading, writing, and processing WFDB signals and annotations. ^^^^^^^^^^^ ModuleNotFoundError: No module named 'wfdb' </code></pre>
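A classic cause on macOS with several interpreters installed is that the `pip3` shim and `python3` resolve to different Pythons — here pip reports `/usr/local/lib/python3.11/site-packages`, while `python3` may be a different install entirely. Running pip as a module of the interpreter that will execute the script sidesteps the mismatch; a sketch (`wfdb` used as the example package):

```python
import subprocess
import sys

def pip_install_cmd(package):
    # sys.executable is the interpreter running *this* script, so the install
    # is guaranteed to land in the site-packages that interpreter searches.
    return [sys.executable, '-m', 'pip', 'install', package]

def install_for_this_interpreter(package):
    subprocess.run(pip_install_cmd(package), check=True)

# Quick diagnosis in a terminal — if these two paths differ, that's the bug:
#   python3 -c "import sys; print(sys.executable)"
#   pip3 --version
```

Equivalently, `python3 -m pip install wfdb` on the command line installs for whatever `python3` actually is.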
<python><macos>
2023-12-16 21:19:21
0
17,484
user3142695
77,672,116
11,908,063
Python TypeVar bound to a class is not compatible with bound class
<p>This is a fairly contrived example, but in summary:</p> <ul> <li>I have a <code>Store</code> responsible for returning values of type <code>T</code> (a specific sub-class of <code>Model</code></li> <li>The <code>Store</code> can contain any subclass of <code>Model</code> but allows you to register coverters such that it always returns a class of type <code>T</code></li> </ul> <pre class="lang-py prettyprint-override"><code>from collections.abc import Callable, Generator from dataclasses import dataclass from typing import Generic, TypeVar @dataclass class Model: pass @dataclass class EntryV1(Model): field: int @dataclass class EntryV2(Model): field: str T = TypeVar(&quot;T&quot;, bound=Model) U = TypeVar(&quot;U&quot;, bound=Model) class Store(Generic[T]): def __init__(self, model: type[T], entries: list[Model]) -&gt; None: self.model = model self.entries = entries self.converters: dict[str, Callable[[Model], T]] = {} def register_converter(self, old: type[U], converter: Callable[[U], T]) -&gt; None: self.converters[old.__name__] = converter def _convert(self, entry: Model) -&gt; T: if isinstance(entry, self.model): return entry else: converter = self.converters[entry.__class__.__name__] return converter(entry) def get(self, idx: int) -&gt; T: return self._convert(self.entries[idx]) def get_all(self) -&gt; Generator[T, None, None]: return (self._convert(entry) for entry in self.entries) store = Store(EntryV2, [EntryV1(field=1), EntryV2(field=&quot;2&quot;)]) store.register_converter(EntryV1, lambda entry: EntryV2(field=str(entry.field))) print(store.get(0)) print(list(store.get_all())) </code></pre> <p>When run this code outputs:</p> <pre class="lang-py prettyprint-override"><code>EntryV2(field='1') [EntryV2(field='1'), EntryV2(field='2')] </code></pre> <p>The problem I have is that <code>mypy</code> isn't happy with is code despite <code>U</code> being bound to <code>Model</code></p> <pre><code>file.py:32: error: Incompatible types in assignment (expression has 
type &quot;Callable[[U], T]&quot;, target has type &quot;Callable[[Model], T]&quot;) [assignment] </code></pre> <p>I'm using <code>U</code> in <code>register_converter</code> because I want to ensure the type of <code>old</code> is compatible with the type of <code>converter</code>:</p> <pre class="lang-py prettyprint-override"><code> def register_converter(self, old: type[U], converter: Callable[[U], T]) -&gt; None: </code></pre> <hr /> <p><strong>Edit</strong></p> <p>Interestingly, if I pass the wrong type of converter to <code>register_converter</code>, mypy is able to spot that it's incorrect and infer the relevant types (e.g. <code>register_converter(EntryV1, converter)</code>):</p> <pre><code>Argument 2 to &quot;register_converter&quot; of &quot;Store&quot; has incompatible type &quot;Callable[[EntryV2], EntryV2]&quot;; expected &quot;Callable[[EntryV1], EntryV2]&quot;
</code></pre>
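The root issue is that `Callable` is contravariant in its argument type: a `Callable[[U], T]` is not a `Callable[[Model], T]`, because the dict's declared value type promises a function that accepts *any* `Model`. One common pattern — a sketch, not the only option — is to erase the argument type only at the storage boundary with `cast`, which is safe here because `_convert` looks converters up by the exact class name they were registered under:

```python
from typing import Any, Callable, Dict, Generic, Type, TypeVar, cast

class Model: ...
class EntryV1(Model): ...
class EntryV2(Model): ...

T = TypeVar('T', bound=Model)
U = TypeVar('U', bound=Model)

class Store(Generic[T]):
    def __init__(self, model: Type[T]) -> None:
        self.model = model
        # Stored with the argument type erased; safe because each converter is
        # only ever called with an instance of the class it was keyed under.
        self.converters: Dict[str, Callable[[Any], T]] = {}

    def register_converter(self, old: Type[U], converter: Callable[[U], T]) -> None:
        # The public signature keeps U, so mismatched (class, converter) pairs
        # are still rejected at the call site; only the storage is widened.
        self.converters[old.__name__] = cast(Callable[[Any], T], converter)

    def _convert(self, entry: Model) -> T:
        if isinstance(entry, self.model):
            return entry
        return self.converters[type(entry).__name__](entry)
```

This keeps the call-site check the edit describes (wrong converters are still flagged) while silencing only the assignment into the dict.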
<python><generics><mypy><python-typing>
2023-12-16 18:50:22
1
632
Sam Broster
77,672,055
1,473,517
How to shade above a plotted line?
<p>I have a grid with a line drawn on it as follows.</p> <pre><code>import matplotlib.pyplot as plt
import numpy as np

# Draw squares with random integers between -3 and 3 in each square
np.random.seed(42)  # Setting a seed for reproducibility
square = np.zeros((10, 10), dtype=np.int_)
for x in np.arange(0.5, 10, 1):
    for y in np.arange(0.5, 10, 1):
        value = np.random.randint(-3, 4)
        square[int(x), int(y)] = value
        plt.text(x-0.2, y-0.2, str(value), ha='center', va='center', fontsize=8, color='black')

# Calculate the center coordinates and add blue dots
for x in np.arange(0.5, 10, 1):
    for y in np.arange(0.5, 10, 1):
        plt.scatter(x, y, color='blue', s=1)  # Adjust the size (s) as needed

x_points = [7, 2]
y_points = [0, 10]
plt.plot(x_points, y_points)

# Customize the plot
plt.xlim(0, 10)
plt.ylim(0, 10)
plt.yticks(np.arange(0, 10.01, 1))
plt.xticks(np.arange(0, 10.01, 1))
plt.gca().invert_yaxis()
plt.gca().set_aspect('equal', adjustable='box')
plt.grid()
</code></pre> <p>It looks like:</p> <p><a href="https://i.sstatic.net/5MBM4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5MBM4.png" alt="enter image description here" /></a></p> <p>I would like to shade the region that is above the line (that is, the region that includes the top-left square). How can I do that?</p>
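One approach (a sketch reusing the same endpoints) is to express the boundary as x in terms of y and use `fill_betweenx`, shading everything between the left edge and the line — the side containing the top-left square once the y-axis is inverted:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt
import numpy as np

# The plotted line passes through (7, 0) and (2, 10): x = 7 - 0.5 * y.
y = np.linspace(0, 10, 100)
x_line = 7 - 0.5 * y

fig, ax = plt.subplots()
ax.plot(x_line, y)
# Shade between the left edge (x=0) and the line, for every y value.
ax.fill_betweenx(y, 0, x_line, color='tab:blue', alpha=0.3)

ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.invert_yaxis()
ax.set_aspect('equal', adjustable='box')
ax.grid(True)
```

`fill_betweenx` is the natural fit because the line is steep enough that the boundary is single-valued in y; for a shallow line, `fill_between(x, y_line, 10)` with the roles swapped would work the same way.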
<python><matplotlib>
2023-12-16 18:31:41
1
21,513
Simd
77,671,877
7,959,614
How to merge two xarray.Datasets based on common coordinates
<p>I want to merge two <code>xarray.Dataset</code>-objects.</p> <p>My code looks as follows:</p> <pre><code>import numpy as np
import pandas as pd
import xarray as xr

N_CHAINS = 4
N_DRAWS = 1000
N_PLAYERS = 5

player_idx = [1, 1, 2, 3, 4, 4, 0, 0, 2, 2]
opponent_idx = [0, 3, 1, 4, 1, 1, 1, 4, 3, 3]
h2h_idx = pd.MultiIndex.from_tuples(
    tuple(zip(player_idx, opponent_idx)),
    names=('player_id', 'opponent_id')
)

obs = xr.Dataset(
    data_vars=dict(
        n_points_won=(['h2h_id'], np.array([11, 11, 8, 9, 4, 11, 7, 11, 11, 11])),
        n_points_lost=(['h2h_id'], np.array([9, 9, 11, 11, 11, 1, 11, 2, 3, 6])),
    ),
    coords=dict(
        h2h_id=(['h2h_id'], h2h_idx),
    )
)

alpha = np.random.rand(N_CHAINS, N_DRAWS, N_PLAYERS, N_PLAYERS) * 100
beta = np.random.rand(N_CHAINS, N_DRAWS, N_PLAYERS, N_PLAYERS) * 100
pos = xr.Dataset(
    data_vars=dict(
        alpha=(['chain', 'draw', 'player_id', 'opponent_id'], alpha),
        beta=(['chain', 'draw', 'player_id', 'opponent_id'], beta),
    ),
    coords=dict(
        chain=(['chain'], list(range(N_CHAINS))),
        draw=(['draw'], list(range(N_DRAWS))),
        player_id=(['player_id'], list(range(N_PLAYERS))),
        opponent_id=(['opponent_id'], list(range(N_PLAYERS))),
    ),
)

combined = xr.combine_nested([obs, pos], concat_dim=['player_id', 'opponent_id'])  # this results in an error
</code></pre> <p>I am getting the following error:</p> <blockquote> <p>ValueError: concat_dims has length 2 but the datasets passed are nested in a 1-dimensional structure</p> </blockquote> <p>How can I merge the two datasets? The goal is to create an <code>xarray.Dataset</code> with coordinates <code>h2h_id</code>, <code>player_id</code>, <code>opponent_id</code>, <code>chain</code> and <code>draw</code>. Basically, for each data variable in <code>obs</code>, I want to attach the corresponding <code>alpha</code>- and <code>beta</code>-values from <code>pos</code> for that particular observation (through <code>player_id</code> and <code>opponent_id</code>).</p> <p>Thanks in advance</p>
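`combine_nested` concatenates whole datasets along shared dimensions, which is not what the `h2h_id` pairs call for; vectorized (pointwise) indexing is the usual tool here. In a sketch with shrunken toy sizes, indexers that share a new `h2h_id` dimension pick exactly one `(player_id, opponent_id)` cell per observation, after which the result aligns 1:1 with `obs`:

```python
import numpy as np
import xarray as xr

n_chains, n_draws, n_players = 2, 3, 5
player_idx = np.array([1, 1, 2, 3, 4])
opponent_idx = np.array([0, 3, 1, 4, 1])

pos = xr.Dataset(
    {
        'alpha': (('chain', 'draw', 'player_id', 'opponent_id'),
                  np.random.rand(n_chains, n_draws, n_players, n_players)),
        'beta': (('chain', 'draw', 'player_id', 'opponent_id'),
                 np.random.rand(n_chains, n_draws, n_players, n_players)),
    }
)

# Because both indexers share dims=('h2h_id',), xarray does pointwise
# selection: the player_id/opponent_id dims collapse into one h2h_id dim
# instead of producing a full player x opponent cross product.
picked = pos.isel(
    player_id=xr.DataArray(player_idx, dims='h2h_id'),
    opponent_id=xr.DataArray(opponent_idx, dims='h2h_id'),
)
# `picked` now has dims (chain, draw, h2h_id) and can be attached to the
# question's observation dataset with e.g.
# obs.assign(alpha=picked['alpha'], beta=picked['beta'])
```

With coordinate labels (rather than positions), the same pattern works via `.sel` on `player_id`/`opponent_id` coordinates.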
<python><python-xarray>
2023-12-16 17:38:37
1
406
HJA24
77,671,804
13,801,302
How to solve: RuntimeError: CUDA error: device-side assert triggered?
<p>I want to use the <em>paraphrase-multilingual-mpnet-base-v2</em> model to build embeddings and I got this error:</p> <p><strong>RuntimeError: CUDA error: device-side assert triggered</strong></p> <p>The error occurs by executing <code>string = {k: v.to(device=device) for k, v in string.items()}</code>.</p> <p><strong>Why do I get the error?</strong></p> <p>I work in a Google Colab with 12.7 GB RAM and 16 GB GPU-RAM</p> <p>The goal of the code is to generate sentence embeddings. With some customizing is a chunk-wise execution also possible.</p> <p>The complete error message:</p> <pre><code>RuntimeError Traceback (most recent call last) &lt;ipython-input-17-8e6bf00d9e24&gt; in &lt;cell line: 104&gt;() 102 return np.nan 103 --&gt; 104 processed_data = processDataRAG(df[5000:], tokenizer, model) 4 frames &lt;ipython-input-17-8e6bf00d9e24&gt; in processDataRAG(data, tokenizer, model) 10 sents = [str(sentences[0]) for sentences in article_sentences] 11 number_of_article =[sentences[1] for sentences in article_sentences] ---&gt; 12 embedded_sentencs = [embeddChunkwise(sentence, tokenizer, model, 512) for sentence in tqdm(sents, desc = &quot;Create chunk-wise embeddings&quot;)] 13 return pd.DataFrame({ 14 &quot;sentences&quot;: sents, &lt;ipython-input-17-8e6bf00d9e24&gt; in &lt;listcomp&gt;(.0) 10 sents = [str(sentences[0]) for sentences in article_sentences] 11 number_of_article =[sentences[1] for sentences in article_sentences] ---&gt; 12 embedded_sentencs = [embeddChunkwise(sentence, tokenizer, model, 512) for sentence in tqdm(sents, desc = &quot;Create chunk-wise embeddings&quot;)] 13 return pd.DataFrame({ 14 &quot;sentences&quot;: sents, &lt;ipython-input-17-8e6bf00d9e24&gt; in embeddChunkwise(string, tokenizer, model, chunk_size) 55 #encoded_input = tokenizer(tokenizer.detokenize(tokenized_chunk)) 56 if len(encoded_chunk) &gt; 0: ---&gt; 57 embedded_chunk = createEmbeddings( 58 tokenizer(tokenizer.decode(encoded_chunk, skip_special_tokens = True), 
return_tensors='pt', add_special_tokens=False), 59 model &lt;ipython-input-17-8e6bf00d9e24&gt; in createEmbeddings(string, model) 77 #print(&quot;Length of input_ids: &quot;, len(string[&quot;input_ids&quot;][0])) 78 if &quot; input_ids&quot; in string.keys(): ---&gt; 79 string = {k: v.to(device=device) for k, v in string.items()} 80 with torch.no_grad(): 81 &lt;ipython-input-17-8e6bf00d9e24&gt; in &lt;dictcomp&gt;(.0) 77 #print(&quot;Length of input_ids: &quot;, len(string[&quot;input_ids&quot;][0])) 78 if &quot;input_ids&quot; in string.keys(): ---&gt; 79 string = {k: v.to(device=device) for k, v in string.items()} 80 with torch.no_grad(): 81 RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. </code></pre> <p>I run this code:</p> <pre class="lang-py prettyprint-override"><code>from transformers import AutoTokenizer, AutoModel import torch from torch import cuda def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Select device globally device = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu' # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-multilingual-mpnet-base-v2') model = AutoModel.from_pretrained('sentence-transformers/paraphrase-multilingual-mpnet-base-v2', device_map = device) df = pd.read_json(file_path) def processDataRAG(data, tokenizer, model): article_sentences = data.content.progress_apply(lambda x: list(nlp_de(x).sents)) 
#tokenized_articles = data.content.progress_apply(lambda article: tokenizeChunkwise(article, tokenizer, 512)) article_sentences = [ (sentences, idx) for idx, article in tqdm(enumerate(list(article_sentences)), desc=&quot;Loop over articles with index&quot;) for sentences in article ] sents = [str(sentences[0]) for sentences in article_sentences] number_of_article =[sentences[1] for sentences in article_sentences] embedded_sentencs = [embeddChunkwise(sentence, tokenizer, model, 512) for sentence in tqdm(sents, desc = &quot;Create chunk-wise embeddings&quot;)] return pd.DataFrame({ &quot;sentences&quot;: sents, &quot;embeddings&quot;: embedded_sentencs, &quot;article&quot;: number_of_article }) def embeddChunkwise(string, tokenizer, model, chunk_size): decreasing_by_special_tokens = 0 # Because of speical tokens at the beginning and end encoded_string = tokenizer(string, add_special_tokens=False) if len(encoded_string[&quot;input_ids&quot;])/chunk_size &gt; 1: print(&quot;Tokenized_string:&quot;, encoded_string) print(&quot;Total tokens: &quot;, str(len(encoded_string[&quot;input_ids&quot;]))) print(&quot;Tokenized string in chunks: &quot;, str(len(encoded_string[&quot;input_ids&quot;])/chunk_size), &quot; --- &quot; , str(len(encoded_string[&quot;input_ids&quot;])//chunk_size +1)) embedded_chunks = [] for idx in list(range(len(encoded_string[&quot;input_ids&quot;])//chunk_size +1 )): encoded_chunk=None if (chunk_size-decreasing_by_special_tokens)*(idx+1) &lt; len(encoded_string[&quot;input_ids&quot;]): # sentences with 1000 words as instances start_idx, end_idx = (chunk_size*idx - decreasing_by_special_tokens*idx, chunk_size*(idx+1) - decreasing_by_special_tokens*(idx+1)) encoded_chunk = encoded_string[&quot;input_ids&quot;][start_idx:end_idx] else: # If it is a sentences with 20 words as instance if chunk_size-decreasing_by_special_tokens &gt; len(encoded_string[&quot;input_ids&quot;]): encoded_chunk = encoded_string[&quot;input_ids&quot;][chunk_size*(idx) - 
decreasing_by_special_tokens*(idx):] else: encoded_chunk = encoded_string[&quot;input_ids&quot;][-(chunk_size*(idx) - decreasing_by_special_tokens*(idx)):] if len(encoded_chunk) &gt; 0: embedded_chunk = createEmbeddings( tokenizer(tokenizer.decode(encoded_chunk, skip_special_tokens = True), return_tensors='pt', add_special_tokens=False), model ) if isinstance(embedded_chunk, list): embedded_chunks.append(embedded_chunk[0]) if len(embedded_chunks) &gt; 1: return embedded_chunks elif len(embedded_chunks) == 0: return np.nan else: return embedded_chunks[0] def createEmbeddings(string, model): if &quot;input_ids&quot; in string.keys(): string = {k: v.to(device=device) for k, v in string.items()} with torch.no_grad(): try: model_output = model(**string) except Exception as ex: print(&quot;--- Error by creating Embeddings ---&quot;) print(&quot;Error: &quot;, str(ex)) return np.nan # Perform pooling. In this case, average pooling try: sentence_embeddings = mean_pooling(model_output, string['attention_mask']) except Exception as ex: print(&quot;--- Error by pooling embeddings ---&quot;) print(&quot;Model output: &quot;, str(model_output)) print(&quot;Attention_mask: &quot;, str(string['attention_mask'])) print(&quot;Error: &quot;, str(ex)) return np.nan sentence_embeddings = sentence_embeddings.detach().cpu().numpy() return sentence_embeddings else: return np.nan </code></pre>
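A device-side assert from an embedding model is almost always an out-of-range index — a token id at or above the vocabulary size, or a sequence longer than the model's maximum positions — and the reported line is misleading because CUDA errors surface asynchronously. Two standard debugging moves, sketched without torch so the validation logic stays plain (with transformers, the bounds would come from `model.config.vocab_size` and `model.config.max_position_embeddings`):

```python
import os

# Must be set before CUDA is initialised: makes the failing kernel raise at
# its real call site instead of at a later, unrelated `.to(device)` call.
# (Running the model once on CPU gives a readable IndexError the same way.)
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'

def validate_encoding(encoded, vocab_size, max_positions):
    """Check token ids *before* moving tensors to the GPU.

    Works on plain lists or on tensors (anything with .flatten().tolist()).
    """
    ids = encoded['input_ids']
    flat = ids.flatten().tolist() if hasattr(ids, 'flatten') else list(ids)
    out_of_range = [t for t in flat if not 0 <= t < vocab_size]
    if out_of_range:
        raise ValueError(f'token ids outside [0, {vocab_size}): {out_of_range[:5]}')
    if len(flat) > max_positions:
        raise ValueError(f'sequence length {len(flat)} exceeds {max_positions}')
    return True
```

Re-encoding decoded chunks (as the question's `tokenizer(tokenizer.decode(...))` round trip does) can change token boundaries and lengths, so validating each chunk right before `model(**string)` is the quickest way to catch the offending input.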
<python><pytorch><transformer-model>
2023-12-16 17:17:20
1
621
Christian01
77,671,478
1,180,930
How can I detect and "fix" URLs in a body of text if they have spaces?
<p>Suppose I have the following submitted text. Note the spaces in most of the URLs:</p> <blockquote> <p>According to NASA (htt ps://www.nasa. gov) and the New York Times (<a href="https://www.nytimes.com/topic/organization/national-aeronautics-" rel="nofollow noreferrer">https://www.nytimes.com/topic/organization/national-aeronautics-</a> and-space -administration), scientists are making lots of new discoveries! There are all kinds of exciting new findings. The <code>astro-ph</code> category of ArXiv (https:// arxiv.org /list/astro-ph.GA/new) lists a bunch of new research that is going on. Lucky me! A Google search (<a href="https://www.google.com/" rel="nofollow noreferrer">https://www.google.com/</a>) turned up more new discoveries!</p> </blockquote> <p>I want to detect the URLs and replace them with the corrected URL using Python.</p> <p>The fixed URLs in this text are:</p> <ul> <li><a href="https://www.nasa.gov" rel="nofollow noreferrer">https://www.nasa.gov</a></li> <li><a href="https://www.nytimes.com/topic/organization/national-aeronautics-and-space-administration" rel="nofollow noreferrer">https://www.nytimes.com/topic/organization/national-aeronautics-and-space-administration</a></li> <li><a href="https://arxiv.org/list/astro-ph.GA/new" rel="nofollow noreferrer">https://arxiv.org/list/astro-ph.GA/new</a></li> <li><a href="https://www.google.com/" rel="nofollow noreferrer">https://www.google.com/</a></li> </ul> <p>Not all the URLs have spaces. Some URLs have more than one space. The spaces may be in any part of the URL. I can assume the URLs are web URLs (http/https). I think I can assume there will only be spaces (no tabs or newlines). I think I can assume there will not be more than one consecutive space. I think I can assume that tokens/words will not be broken by a space — in other words, spaces will be next to punctuation marks. 
I cannot assume that all URLs are enclosed in parentheses.</p> <p>Note: My question is similar to <a href="https://stackoverflow.com/questions/53009784/remove-spaces-only-in-urls">this one</a>, except that the URLs I am hoping to fix are in written text, the spaces may be in any part of the URL, and I am limiting myself to web URLs.</p> <p>Note: I am currently using the excellent (if overkill) Liberal Regex Pattern for Web URLs <a href="https://gist.github.com/gruber/8891611" rel="nofollow noreferrer">here</a>, but it seems insufficient for this job.</p> <p>Note: I need to both detect <em>and</em> replace the URLs. For my own use, I scan the text and convert it to LaTeX. The URLs get converted to hyperlinks via the <code>\href{}{}</code> command. In doing so, I need to detect good and bad URLs, fix any bad URLs, create a hyperlink using the correct URL, and then replace the original good or bad URL with the corrected URL inside the text body.</p>
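A heuristic sketch that leans on the stated assumptions (at most one consecutive space, and stray spaces sit next to punctuation): tolerate single spaces inside the scheme, and bridge a space in the URL body only when it touches typical URL punctuation, so the match stops before ordinary following words:

```python
import re

# Scheme: each character of http(s):// may be followed by one stray space.
# Body: a space is absorbed only when it is preceded or followed by URL
# punctuation (. / - : _), which is where the question says spaces occur.
URL_RX = re.compile(
    r'h\s?t\s?t\s?p\s?s?\s?:\s?/\s?/'
    r'(?:[^\s()<>]|(?<=[./\-:_])\s(?=\S)|\s(?=[./\-:_]))+'
)

def fix_urls(text):
    # Collapse all whitespace inside each matched candidate URL.
    return URL_RX.sub(lambda m: re.sub(r'\s+', '', m.group(0)), text)
```

This handles the sample text, but it is genuinely heuristic: a URL whose last character is `/` or `.` immediately before a space will greedily absorb the next word, so a validation pass (e.g. with `urllib.parse`) or a sentence-boundary check is advisable before emitting the `\href{}{}` commands.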
<python><string><replace><python-re><url-parsing>
2023-12-16 15:31:01
2
1,971
jvriesem
77,671,360
4,556,556
Docker: Can't Find my_app Module
<p>I am creating an flask app to demo Cloudbuild and Docker.</p> <p>Here is my file structure:</p> <p><a href="https://i.sstatic.net/ZoUb2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZoUb2.png" alt="file_structure" /></a></p> <p>When I build and try to run my docker image, I keep getting an error:</p> <p><strong>ModuleNotFoundError: No module named 'my_app'</strong></p> <p>Here is my main.py file:</p> <pre><code>from flask import Flask, render_template, redirect, url_for from faker import Faker import random app = Flask(__name__) fake = Faker() # Function to generate fake employee data def create_employee(): return { &quot;id&quot;: random.randint(1000, 9999), &quot;name&quot;: fake.name(), &quot;address&quot;: fake.address(), &quot;email&quot;: fake.email(), &quot;job_title&quot;: fake.job() } # In-memory list to hold employees employees = [create_employee() for _ in range(10)] # Route to display a list of employees @app.route('/') def index(): return render_template('index.html', employees=employees) # Route to add a new employee and redirect to index @app.route('/add-employee', methods=['GET']) def add_employee(): employees.append(create_employee()) return redirect(url_for('index')) if __name__ == '__main__': app.run(host='0.0.0.0', port=8080) </code></pre> <p>And here is my DockerFile:</p> <pre><code># Use an official Python runtime as a parent image FROM python:3.8-slim # Set environment variables ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 # Set the working directory in the container to /app WORKDIR /my_app/ # Copy the current directory contents into the container at /app COPY ./my_app /app # Install any needed packages specified in requirements.txt COPY requirements.txt requirements.txt RUN pip install --no-cache-dir -r requirements.txt # Make port 5000 available to the world outside this container EXPOSE 8080 # Define environment variable ENV NAME World # Run app.py when the container launches # CMD [&quot;python&quot;, 
&quot;app.py&quot;] CMD [&quot;gunicorn&quot;, &quot;--bind&quot;, &quot;0.0.0.0:8080&quot;, &quot;--workers&quot;, &quot;1&quot;, &quot;--threads&quot;, &quot;8&quot;, &quot;my_app.main:main&quot;] </code></pre> <p>For more context: Originally, the main.py file was at the root folder. I was having issues with finding modules in the main.py file for testing, so I decided to move the main.py file and other folders into the &quot;my_app&quot; folder.</p>
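For reference, the Dockerfile copies `./my_app` to `/app` while the working directory is `/my_app/`, so the package lands outside the import path, and the Gunicorn target `my_app.main:main` names a WSGI object that doesn't exist (the Flask instance in `main.py` is `app`). A hedged rearrangement along these lines (paths and names taken from the question, so adjust to the real layout; `my_app` also needs an `__init__.py` to be importable as a package):

```dockerfile
FROM python:3.8-slim

ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

WORKDIR /app

# Install dependencies first so this layer is cached across code changes
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Copy the package to a path that matches the Gunicorn module target below
COPY ./my_app ./my_app

EXPOSE 8080

# module:attribute — the WSGI object is the Flask instance `app`, not `main`
CMD ["gunicorn", "--bind", "0.0.0.0:8080", "--workers", "1", "--threads", "8", "my_app.main:app"]
```

Equivalently, `--chdir my_app main:app` would work without treating `my_app` as a package; the essential fixes are the consistent copy destination and the `module:app` target.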
<python><docker><flask>
2023-12-16 14:55:24
0
502
Patrick Bentley
77,671,277
5,687,866
ValueError: Invalid pattern: '**' can only be an entire path component
<p>I am trying to fine tune a LLM</p> <p>My code so far:</p> <pre><code>from datasets import load_dataset, DatasetDict, Dataset from transformers import ( AutoTokenizer, AutoConfig, AutoModelForSequenceClassification, DataCollatorWithPadding, TrainingArguments, Trainer) from peft import PeftModel, PeftConfig, get_peft_model, LoraConfig import evaluate import torch import numpy as np # load dataset dataset = load_dataset('TokenBender/code_instructions_122k_alpaca_style') dataset </code></pre> <p>Error:</p> <pre><code> --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In [12], line 2 1 # load dataset ----&gt; 2 dataset = load_dataset('TokenBender/code_instructions_122k_alpaca_style') 3 dataset File /usr/local/lib/python3.9/dist-packages/datasets/load.py:1664, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1661 ignore_verifications = ignore_verifications or save_infos 1663 # Create a dataset builder -&gt; 1664 builder_instance = load_dataset_builder( 1665 path=path, 1666 name=name, 1667 data_dir=data_dir, 1668 data_files=data_files, 1669 cache_dir=cache_dir, 1670 features=features, 1671 download_config=download_config, 1672 download_mode=download_mode, 1673 revision=revision, 1674 use_auth_token=use_auth_token, 1675 **config_kwargs, 1676 ) 1678 # Return iterable dataset in case of streaming 1679 if streaming: File /usr/local/lib/python3.9/dist-packages/datasets/load.py:1490, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1488 download_config = download_config.copy() if download_config else DownloadConfig() 1489 download_config.use_auth_token = use_auth_token -&gt; 1490 dataset_module = dataset_module_factory( 1491 
path, 1492 revision=revision, 1493 download_config=download_config, 1494 download_mode=download_mode, 1495 data_dir=data_dir, 1496 data_files=data_files, 1497 ) 1499 # Get dataset builder class from the processing script 1500 builder_cls = import_main_class(dataset_module.module_path) File /usr/local/lib/python3.9/dist-packages/datasets/load.py:1242, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1237 if isinstance(e1, FileNotFoundError): 1238 raise FileNotFoundError( 1239 f&quot;Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. &quot; 1240 f&quot;Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}&quot; 1241 ) from None -&gt; 1242 raise e1 from None 1243 else: 1244 raise FileNotFoundError( 1245 f&quot;Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory.&quot; 1246 ) File /usr/local/lib/python3.9/dist-packages/datasets/load.py:1223, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1215 return HubDatasetModuleFactoryWithScript( 1216 path, 1217 revision=revision, (...) 1220 dynamic_modules_path=dynamic_modules_path, 1221 ).get_module() 1222 else: -&gt; 1223 return HubDatasetModuleFactoryWithoutScript( 1224 path, 1225 revision=revision, 1226 data_dir=data_dir, 1227 data_files=data_files, 1228 download_config=download_config, 1229 download_mode=download_mode, 1230 ).get_module() 1231 except Exception as e1: # noqa: all the attempts failed, before raising the error we should check if the module is already cached. 
1232 try: File /usr/local/lib/python3.9/dist-packages/datasets/load.py:846, in HubDatasetModuleFactoryWithoutScript.get_module(self) 836 token = self.download_config.use_auth_token 837 hfh_dataset_info = HfApi(config.HF_ENDPOINT).dataset_info( 838 self.name, 839 revision=self.revision, 840 token=token, 841 timeout=100.0, 842 ) 843 patterns = ( 844 sanitize_patterns(self.data_files) 845 if self.data_files is not None --&gt; 846 else get_patterns_in_dataset_repository(hfh_dataset_info) 847 ) 848 data_files = DataFilesDict.from_hf_repo( 849 patterns, 850 dataset_info=hfh_dataset_info, 851 allowed_extensions=ALL_ALLOWED_EXTENSIONS, 852 ) 853 infered_module_names = { 854 key: infer_module_for_data_files(data_files_list, use_auth_token=self.download_config.use_auth_token) 855 for key, data_files_list in data_files.items() 856 } File /usr/local/lib/python3.9/dist-packages/datasets/data_files.py:471, in get_patterns_in_dataset_repository(dataset_info) 469 resolver = partial(_resolve_single_pattern_in_dataset_repository, dataset_info) 470 try: --&gt; 471 return _get_data_files_patterns(resolver) 472 except FileNotFoundError: 473 raise FileNotFoundError( 474 f&quot;The dataset repository at '{dataset_info.id}' doesn't contain any data file.&quot; 475 ) from None File /usr/local/lib/python3.9/dist-packages/datasets/data_files.py:99, in _get_data_files_patterns(pattern_resolver) 97 try: 98 for pattern in patterns: ---&gt; 99 data_files = pattern_resolver(pattern) 100 if len(data_files) &gt; 0: 101 non_empty_splits.append(split) File /usr/local/lib/python3.9/dist-packages/datasets/data_files.py:303, in _resolve_single_pattern_in_dataset_repository(dataset_info, pattern, allowed_extensions) 301 data_files_ignore = FILES_TO_IGNORE 302 fs = HfFileSystem(repo_info=dataset_info) --&gt; 303 glob_iter = [PurePath(filepath) for filepath in fs.glob(PurePath(pattern).as_posix()) if fs.isfile(filepath)] 304 matched_paths = [ 305 filepath 306 for filepath in glob_iter 307 if filepath.name 
not in data_files_ignore and not filepath.name.startswith(&quot;.&quot;) 308 ] 309 if allowed_extensions is not None: File /usr/local/lib/python3.9/dist-packages/fsspec/spec.py:606, in AbstractFileSystem.glob(self, path, maxdepth, **kwargs) 602 depth = None 604 allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs) --&gt; 606 pattern = glob_translate(path + (&quot;/&quot; if ends_with_sep else &quot;&quot;)) 607 pattern = re.compile(pattern) 609 out = { 610 p: info 611 for p, info in sorted(allpaths.items()) (...) 618 ) 619 } File /usr/local/lib/python3.9/dist-packages/fsspec/utils.py:734, in glob_translate(pat) 732 continue 733 elif &quot;**&quot; in part: --&gt; 734 raise ValueError( 735 &quot;Invalid pattern: '**' can only be an entire path component&quot; 736 ) 737 if part: 738 results.extend(_translate(part, f&quot;{not_sep}*&quot;, not_sep)) ValueError: Invalid pattern: '**' can only be an entire path component </code></pre> <p>I tried to find something online the closet I found is this article <a href="https://github.com/coala/coala/issues/401" rel="noreferrer">https://github.com/coala/coala/issues/401</a></p> <p>but I could not understand their solution. Can anyone help me in understanding the solution for the error I am facing. Thanks.</p> <p>My library versions:</p> <ul> <li>peft : '0.6.0'</li> <li>torch : '2.1.2+cu121'</li> <li>datasets : '2.1.0'</li> <li>transformers : '4.21.3'</li> </ul>
<python><large-language-model><huggingface-datasets>
2023-12-16 14:30:38
5
1,080
Hitesh Somani
77,671,246
5,986,907
jax.jit my whole program or just parts of it?
<p>Suppose I have a JAX program like</p> <pre><code>def f(x: jnp.array) -&gt; jnp.array: ... def g(x: jnp.array) -&gt; jnp.array: # use f lots of times # do other stuff ... </code></pre> <p>where <code>f</code> and <code>g</code> are fully <code>jit</code>-compatible, what should I <code>jit</code>?</p> <ol> <li><code>g</code> and let JAX/XLA optimize the multiple uses of <code>f</code>?</li> <li><code>f</code> but not <code>g</code>?</li> <li><code>f</code> and <code>g</code>?</li> </ol> <p>and why?</p> <p>I'm trying to understand what <code>jit</code> does. For example, the difference between <code>jit</code> and <code>XlaBuilder.Build</code>, whether <code>jit</code> is more of Python/JAX thing and is necessary in XLA usage in general, and what other uses of XLA the <code>jit</code> pattern would be applicable to.</p> <p>I have also asked <a href="https://stackoverflow.com/q/77671327/5986907">this</a> XLA-specific, but very related, question.</p>
<python><jit><jax>
2023-12-16 14:23:17
1
8,082
joel
77,670,915
1,473,517
How to place the tick label between grid lines
<p>I am plotting a grid and I want two changes. The first is that I want the y-axis tick labels to be descending starting from the top. I have managed to do it, but I was wondering if there was a simpler way. The second is that I want the tick labels to be between the grid lines and not at the location of the grid lines. This is my current MWE:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np plt.yticks(np.arange(0, 10.01, 1)) plt.xticks(np.arange(0, 10.01, 1)) plt.xlim(0,10) plt.ylim(0,10) plt.gca().invert_yaxis() # Set aspect ratio to be equal plt.gca().set_aspect('equal', adjustable='box') plt.grid() </code></pre> <p>In this case I would like the tick labels to be from 0 to 9 and to sit in the middle between the grid lines on the x and y axes.</p> <p><a href="https://i.sstatic.net/oC2IK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oC2IK.png" alt="enter image description here" /></a></p>
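One common approach (a sketch, not verified against this exact figure) is to keep unlabeled major ticks on the grid lines and put labeled minor ticks halfway between them. The positions themselves are simple arithmetic, independent of matplotlib:

```python
def between_ticks(n):
    """Return grid-line positions 0..n, label positions between them,
    and the labels 0..n-1 that should sit at those midpoints."""
    majors = list(range(n + 1))               # where the grid lines go
    minors = [m + 0.5 for m in majors[:-1]]   # halfway between grid lines
    labels = [str(m) for m in majors[:-1]]    # 0..n-1 (invert_yaxis handles descending)
    return majors, minors, labels

# Assumed matplotlib side (the `labels` argument of set_xticks needs
# matplotlib >= 3.5):
# majors, minors, labels = between_ticks(10)
# ax = plt.gca()
# ax.set_xticks(majors, [""] * len(majors))   # grid lines, no labels
# ax.set_xticks(minors, labels, minor=True)   # labels between the lines
# ax.tick_params(which="minor", length=0)     # hide the minor tick marks
```

The same three calls with `set_yticks` handle the y axis.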
<python><matplotlib>
2023-12-16 12:29:12
1
21,513
Simd
77,670,629
2,919,585
Python type hints: List of allowed objects (not literals)
<p>Say I have a function <code>foo(f)</code> where <code>f</code> should be one of <code>np.sin</code> or <code>np.cos</code>. What would be the correct way to type hint this?</p> <pre class="lang-py prettyprint-override"><code>from typing import Literal import numpy as np def foo(f: Literal[np.sin, np.cos]): ... </code></pre> <p>looks about right, but <code>np.sin</code> and <code>np.cos</code> are not in fact literals, so my linter complains.</p>
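The linter is right: PEP 586 restricts `Literal[...]` to literal values (strings, ints, bools, `None`, enum members), so function objects can never appear in it. One hedged sketch of an alternative is a plain `Callable` hint plus a runtime check against the allowed set (shown with `math.sin`/`math.cos` to keep the example dependency-free; with numpy you would list `np.sin` and `np.cos` instead):

```python
import math
from typing import Callable

# The functions foo() actually accepts (an assumption standing in for np.sin/np.cos).
ALLOWED = (math.sin, math.cos)

def foo(f: Callable[[float], float]) -> float:
    # Type checkers only see "float -> float callable"; the runtime check
    # narrows it to the two functions we accept.
    if f not in ALLOWED:
        raise ValueError(f"f must be one of {ALLOWED}")
    return f(0.0)
```

An alternative a checker *can* enforce statically is an `enum.Enum` whose members wrap the two functions, since enum members are valid `Literal` values.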
<python><python-typing>
2023-12-16 10:49:06
5
571
schtandard
77,670,495
1,413,798
Vertex AI - Sample Notebook Authentication
<p>Vertex AI created the below sample notebook. When I run it I get an unauthorized error. I don't see any examples of what the authentication setup would be like. Would it be an API key, username/password, etc.? What would the code consist of?</p> <pre><code>!pip install --upgrade google-cloud-aiplatform import base64 import vertexai from vertexai.preview.generative_models import GenerativeModel def generate(): model = GenerativeModel(&quot;gemini-pro-vision&quot;) responses = model.generate_content( [&quot;&quot;&quot;What is the date today?&quot;&quot;&quot;], generation_config={ &quot;max_output_tokens&quot;: 2048, &quot;temperature&quot;: 0.4, &quot;top_p&quot;: 1, &quot;top_k&quot;: 32 }, ) print(responses) generate() </code></pre>
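For context (a sketch, not a verified answer): the Vertex AI SDK does not use an API key or username/password; it picks up Application Default Credentials (ADC). The two pieces usually involved look like this — the key-file path and project/region are placeholders:

```python
import os

# 1) When running outside Google Cloud, point ADC at a service-account key
#    file. Locally you could instead run
#    `gcloud auth application-default login` once and skip this line.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account.json"  # placeholder

# 2) Initialise the SDK with your project and region before creating models:
# import vertexai
# vertexai.init(project="my-project-id", location="us-central1")
# ...after which the sample's generate() should authenticate via ADC.
```

The service account (or user) also needs a Vertex AI IAM role on the project, otherwise the same unauthorized error appears.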
<python><google-cloud-platform><jupyter-notebook><google-cloud-vertex-ai>
2023-12-16 09:55:27
2
947
Pie
77,670,084
10,654,749
How to modify a structured numpy array using pybind11
<p>I've got a few structs</p> <pre><code>/* Ray structure for a single ray */ struct RTC_ALIGN(16) RTCRay { float org_x; // x coordinate of ray origin float org_y; // y coordinate of ray origin float org_z; // z coordinate of ray origin float tnear; // start of ray segment float dir_x; // x coordinate of ray direction float dir_y; // y coordinate of ray direction float dir_z; // z coordinate of ray direction float time; // time of this ray for motion blur float tfar; // end of ray segment (set to hit distance) unsigned int mask; // ray mask unsigned int id; // ray ID unsigned int flags; // ray flags }; /* Hit structure for a single ray */ struct RTC_ALIGN(16) RTCHit { float Ng_x; // x coordinate of geometry normal float Ng_y; // y coordinate of geometry normal float Ng_z; // z coordinate of geometry normal float u; // barycentric u coordinate of hit float v; // barycentric v coordinate of hit unsigned int primID; // primitive ID unsigned int geomID; // geometry ID unsigned int instID[1]; // instance ID unsigned int instPrimID[1]; // instance primitive ID }; /* Combined ray/hit structure for a single ray */ struct RTCRayHit { struct RTCRay ray; struct RTCHit hit; }; </code></pre> <p>And I have defined the datatypes using pybind11</p> <pre><code> PYBIND11_NUMPY_DTYPE(RTCRay, org_x, org_y, org_z, tnear, dir_x, dir_y, dir_z, time, tfar, mask, id, flags); PYBIND11_NUMPY_DTYPE(RTCHit, Ng_x, Ng_y, Ng_z, u, v, primID, geomID, instID, instPrimID); PYBIND11_NUMPY_DTYPE(RTCRayHit, ray, hit); </code></pre> <p>So that I can pass in a structured numpy array to this function</p> <pre><code>void ray_intersect(py::array_t&lt;RTCRayHit, py::array::c_style&gt;&amp; rays) { auto buf = rays.request(); RTCRayHit* ptr = static_cast&lt;RTCRayHit*&gt;(buf.ptr); std::cout &lt;&lt; ptr[0].ray.org_x &lt;&lt; std::endl; ptr-&gt;ray.org_x = 2.0; ptr[0].ray.org_x = 2.0; std::cout &lt;&lt; ptr[0].ray.org_x &lt;&lt; std::endl; } </code></pre> <p>allowing me to call the code from python</p> 
<pre><code>RTC_MAX_INSTANCE_LEVEL_COUNT = 1 dt_ray = np.dtype([ ('org_x', 'f4'), ('org_y', 'f4'), ('org_z', 'f4'), ('tnear', 'f4'), ('dir_x', 'f4'), ('dir_y', 'f4'), ('dir_z', 'f4'), ('time', 'f4'), ('tfar', 'f4'), ('mask', 'u4'), ('id', 'u4'), ('flags', 'u4') ]) # Define the RTCHit structure dt_hit = np.dtype([ ('Ng_x', 'f4'), ('Ng_y', 'f4'), ('Ng_z', 'f4'), ('u', 'f4'), ('v', 'f4'), ('primID', 'u4'), ('geomID', 'u4'), ('instID', ('u4', (RTC_MAX_INSTANCE_LEVEL_COUNT,))), ('instPrimID', ('u4', (RTC_MAX_INSTANCE_LEVEL_COUNT,))) ]) # Define the RTCRayHit structure dt_rayhit = np.dtype([ ('ray', dt_ray), ('hit', dt_hit) ]) rayhit = np.zeros(1, dtype=dt_rayhit) rayhit[0][&quot;ray&quot;][&quot;org_x&quot;] = 1.0 print(rayhit) ray_intersect(rayhit) print(rayhit) </code></pre> <p>The numpy array seems to be successfully passed to the ray_intersect function, since it prints &quot;1&quot;, the original rays[0].ray.org_x, and then &quot;2&quot;, the modified one; but when we get back into Python code the rayhit array appears unchanged.</p> <p>The output looks like:</p> <pre><code>[((1., 0., 0., 0., 0., 0., 0., 0., 0., 0, 0, 0), (0., 0., 0., 0., 0., 0, 0, [0], [0]))] 1 2 [((1., 0., 0., 0., 0., 0., 0., 0., 0., 0, 0, 0), (0., 0., 0., 0., 0., 0, 0, [0], [0]))] </code></pre> <p>I feel like I may have specified the struct improperly to pybind11 and it's silently copying it, which means my mutation of rays[0].ray.org_x has no effect.</p>
<python><c++><arrays><numpy><pybind11>
2023-12-16 07:06:22
0
525
Dave Fol
77,669,790
13,097,194
Why do Backspace and Ctrl + Backspace produce different bytestrings within Linux and Windows, and how can this discrepancy be resolved within my code?
<p>I am working on a typing game in Python that uses <code>getch()</code> to read in characters, then performs different actions depending on which character has been entered.</p> <p>The code for reading in characters and assigning them to bytestrings (if needed) is as follows:</p> <pre><code>character = getch() # Imported via pip install py-getch # character is returned as a bytestring on # Windows but a string in Linux, so I added in # the following if statement to resolve this # difference. if type(character) == str: character = character.encode() </code></pre> <p>In the Windows terminal, if I enter a backspace, the value of <code>character</code> becomes <code>b'\x08'</code>. Meanwhile, if I hit Ctrl + Backspace, <code>b'\x7f'</code> gets stored in <code>character</code>.</p> <p>However, I found that, in my Ubuntu terminal, these bytestrings are reversed: a regular backspace (after it gets encoded) returns <code>b'\x7f'</code> whereas Ctrl + Backspace returns <code>b'\x08'</code>. This means that, without adjusting for the player's OS, hitting backspace will cause my program to delete one character in Windows but an entire word in Linux.</p> <p>Does this reflect normal behavior on these two operating systems, or could there be something unusual about my setup or code that is causing this difference?</p> <p>I'm currently resolving this issue by checking the player's platform, then using that information to determine which bytestrings to map to 'character_backspace' and 'word_backspace'. 
I can then have the script delete single characters or entire words, respectively, if the character input by the player matches one of these variables.</p> <p>Here's the code I'm using:</p> <pre><code>import platform if platform.system() == 'Linux': character_backspace = b'\x7f' word_backspace = b'\x08' else: character_backspace = b'\x08' word_backspace = b'\x7f' </code></pre> <p><em>(I haven't tested this behavior on a Mac, so I may need to update the code depending on which bytestrings get produced by Backspace and Command + Backspace on that system.)</em></p> <p>This approach works for my needs, but is there a more elegant solution that prevents the code from having to check the player's platform in the first place?</p>
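One way to keep the platform check in a single place is to build a key-to-action table once and look raw bytes up in it. The byte values below simply mirror the observations in the question (macOS is unverified, and strictly speaking the difference comes from each terminal's configured erase character, so keying off the OS is a heuristic):

```python
import platform
from typing import Optional

def backspace_keymap(system: Optional[str] = None) -> dict:
    """Map raw getch() bytes to editing actions for the given OS name."""
    system = system or platform.system()
    if system == "Linux":
        # Observed: Unix terminals typically send DEL (0x7f) for Backspace.
        return {b"\x7f": "char_backspace", b"\x08": "word_backspace"}
    # Observed on Windows: Backspace sends 0x08, Ctrl+Backspace sends 0x7f.
    return {b"\x08": "char_backspace", b"\x7f": "word_backspace"}

# In the game loop a single lookup replaces the scattered if/else checks:
# action = backspace_keymap().get(character)  # None for non-backspace keys
```

When macOS results come in, only the table needs a new branch, not the rest of the game logic.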
<python><linux><windows><encoding><getch>
2023-12-16 04:14:36
1
974
KBurchfiel
77,669,753
11,280,068
python loguru outputs error to terminal but not to log file
<p>I'm running into an issue with loguru for logging where everything works for stdout but only partially for stderr.</p> <p>The problem is:</p> <ul> <li><p>For my Local Terminal:</p> <ul> <li>Regular logs - do output</li> <li>Errors - do output</li> </ul> </li> <li><p>For the log file</p> <ul> <li>Regular logs - do display</li> <li>Errors - do not display!</li> </ul> </li> </ul> <p>I have a cron job that runs the below bash script. It essentially just activates the python env, timestamps the log file, and runs a python script. Ubuntu is the OS.</p> <pre><code>#!/bin/bash log_file=&quot;backend/cron_run.log&quot; # get the dir of this script and cd to the Winions.gg dir CURR_DIR=&quot;$( cd &quot;$( dirname &quot;${BASH_SOURCE[0]}&quot; )&quot; &gt;/dev/null 2&gt;&amp;1 &amp;&amp; pwd )&quot; cd $CURR_DIR/.. source backend/venv/bin/activate # output the date in EST encased in brackets to the log file after 2 new lines # output the same, except with 0 new lines if the file is empty if [ -s $log_file ] then echo -e &quot;\n\n[$(TZ=America/Los_Angeles date)]&quot; &gt;&gt; $log_file else echo -e &quot;[$(TZ=America/Los_Angeles date)]&quot; &gt;&gt; $log_file fi python3 backend/Scripts/recap/recap.py </code></pre> <p>This is the loguru python config I have set up.</p> <pre><code>logger.remove(0) log_format = &quot;&lt;green&gt;{time:YYYY-MM-DD HH:mm:ss.SSS zz}&lt;/green&gt; | &lt;level&gt;{level: &lt;8}&lt;/level&gt; | &lt;yellow&gt;Line {line: &gt;4} ({file}):&lt;/yellow&gt; &lt;b&gt;{message}&lt;/b&gt;&quot; logger.add(sys.stdout, level=log_level, format=log_format, colorize=True, backtrace=True, diagnose=True) logger.add(root_dir + '/backend/cron_run.log', rotation='2 MB', level=log_level, format=log_format, colorize=False, backtrace=True, diagnose=True) </code></pre> <p>As you can see, I've set it up with the aim of logging everything to stdout and the log file simultaneously. 
The problem is as I've described above: regular logging happens for my terminal and the log file, but errors only get outputted to my terminal and not the log file.</p> <p>This is true whether I run the bash file, run the python file directly, or have the cron job run the bash file.</p> <p>I've also tried playing around with the last line of the bash file, adding <code>2&gt;&amp;1</code> or other variations to output it to the log file, but I want loguru to be able to handle it for continuity and formatting reasons.</p> <p>I've tried adding another sink with sys.stderr, but nothing changes. I think this is either me not understanding loguru or stderr/stdout.</p>
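One detail that may explain the symptom (offered as a hypothesis, not a confirmed diagnosis): an *uncaught* exception's traceback is printed by the interpreter itself through `sys.excepthook`, straight to stderr — it never passes through any logging sink, which would explain why the terminal shows it but the file sink does not. loguru's `logger.catch()` exists for exactly this; the stdlib mechanism it builds on looks like:

```python
import sys

captured = []  # stand-in for what a log-file sink would receive

def logging_excepthook(exc_type, exc, tb):
    # Record the failure where a file sink would see it, then fall back to
    # the default hook so the terminal output is unchanged.
    captured.append(f"{exc_type.__name__}: {exc}")
    sys.__excepthook__(exc_type, exc, tb)

sys.excepthook = logging_excepthook

# With loguru, the equivalent is simply decorating the entry point:
# @logger.catch
# def main(): ...
```

So wrapping the script's entry point in `@logger.catch` (or a `try/except` that calls `logger.exception`) should route tracebacks to every configured sink, including the file.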
<python><python-3.x><bash><logging><loguru>
2023-12-16 03:49:31
1
1,194
NFeruch - FreePalestine
77,669,723
7,050,789
numpy: elision versus in-place
<p>I'm aware of the <a href="https://numpy.org/doc/stable/release/1.13.0-notes.html" rel="nofollow noreferrer">elision that numpy can do</a>, but in the following situation I'd expect that (at best) neither approach has to allocate any new storage and hence they'd perform at the same speed:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import cProfile # generate random input for the 'blue' channel of an image im_shape = (900, 1600, 3) foo_B_chan = np.random.randint(low=0, high=256, size=im_shape[:2], dtype=np.uint8) # pre-allocated buffer for output y_buf = np.ndarray(im_shape, dtype=np.uint8) # the 'blue' channel alone y_buf_B_chan = y_buf[:,:,2] def elided_then_copyto(x_B_chan, y_B_chan): z = x_B_chan + 1 np.copyto(y_B_chan, z) def copyto_then_incr(x_B_chan, y_B_chan): np.copyto(y_B_chan, x_B_chan) y_B_chan += 1 with cProfile.Profile() as pr: for j in np.arange(1000): elided_then_copyto(foo_B_chan, y_buf_B_chan) copyto_then_incr(foo_B_chan, y_buf_B_chan) pr.print_stats(sort=2) </code></pre> <p>However, the <code>elided_then_copyto</code> takes about 60% of the time that <code>copy_then_inplace</code> uses.</p> <p>The bytecode of the elided-then-copy method looks about what I'd expect:</p> <pre><code># dis.dis(elided_then_copyto) 2 0 LOAD_FAST 0 (x_B_chan) 2 LOAD_CONST 1 (1) 4 BINARY_ADD 6 STORE_FAST 2 (z) 3 8 LOAD_GLOBAL 0 (np) 10 LOAD_METHOD 1 (copyto) 12 LOAD_FAST 1 (y_B_chan) 14 LOAD_FAST 2 (z) 16 CALL_METHOD 2 18 POP_TOP 20 LOAD_CONST 0 (None) 22 RETURN_VALUE </code></pre> <p>As does the copy-then-inplace:</p> <pre><code># dis.dis(copyto_then_inplace) 2 0 LOAD_GLOBAL 0 (np) 2 LOAD_METHOD 1 (copyto) 4 LOAD_FAST 1 (y_B_chan) 6 LOAD_FAST 0 (x_B_chan) 8 CALL_METHOD 2 10 POP_TOP 3 12 LOAD_FAST 1 (y_B_chan) 14 LOAD_CONST 1 (1) 16 INPLACE_ADD 18 STORE_FAST 1 (y_B_chan) 20 LOAD_CONST 0 (None) 22 RETURN_VALUE </code></pre> <p>Does anyone know what the extra overhead of the in-place operation is?</p> <p>When I looked at the code via 
<code>pprof</code>, the main difference appears to be a much <em>much</em> slower application of the addition (better vectorisation and maybe cache-hit benefits in the elided case), but even the array copies are slower.</p> <p>Here's the 'elided' output in pprof</p> <p><a href="https://i.sstatic.net/44P5q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/44P5q.png" alt="pprof snippet for elided - add_AVX2=233ms" /></a></p> <p>Versus the 'in place' (+=) pprof output:</p> <p><a href="https://i.sstatic.net/LFQ0u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LFQ0u.png" alt="pprof snippet for 'in-place' - add_AVX2=1996ms" /></a></p>
<python><numpy><performance>
2023-12-16 03:32:21
0
894
stephematician
77,669,659
10,200,497
Replacing second row of each group with first row of another dataframe
<p>I have two dataframes:</p> <pre><code>import pandas as pd df1 = pd.DataFrame( { 'sym': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'c'], 'open': [99, 22, 34, 63, 75, 86, 1800, 82], 'high': [3987, 41123, 46123, 6643, 75, 3745, 72123, 74], 'x': ['gd', 'ed', 'we', 'vt', 'de', 'sw', 'ee', 'et'], } ) df2 = pd.DataFrame( { 'sym': ['a', 'a', 'b', 'b', 'c', 'c', 'c'], 'open': [77, 232, 434, 33, 55, 66, 1000], 'high': [177, 11123, 1123, 343, 55, 3545, 21323], 'x': ['g', 'e', 'w', 'v', 'd', 's', 'g'], } ) </code></pre> <p>And this is the output that I want:</p> <pre><code> sym open high x 0 a 99 3987 gd 1 a 77 177 ed 2 a 34 46123 we 3 a 63 6643 vt 4 b 75 75 de 5 b 434 1123 sw 6 b 1800 72123 ee 7 c 82 74 et </code></pre> <p>These are the steps needed. Groups are defined by <code>sym</code>:</p> <p>a) Select the first row of each group in <code>df2</code></p> <p>b) Only <code>open</code> and <code>high</code> is needed for the previous step.</p> <p>c) Replace these values with the values from the second row of each group in <code>df1</code>.</p> <p>So for example for group <code>a</code>:</p> <p>a) <code>df2</code>: row <code>0</code> is selected</p> <p>b) <code>df2</code>: <code>open</code> is 77 and <code>high</code> is 177</p> <p>c) from row <code>1</code> of <code>df1</code> 22 and 41123 are replaced with 77 and 177.</p> <p>This is what I have tried. It gives me an <code>IndexError</code>. 
But even if it does not give me that error, it feels like this is not the way:</p> <pre><code>def replace_second_row(df): selected_sym = df.sym.iloc[0] row = df2.loc[df2.sym == selected_sym] row = row[['open', 'high']].iloc[0] df.iloc[1, df.columns.get_loc('open'): df.columns.get_loc('open') + 2] = row return df output = df1.groupby('sym').apply(replace_second_row) </code></pre> <p>The traceback of above<code>IndexError</code>:</p> <pre><code>Traceback (most recent call last): File &quot;D:\python\py_files\example_df.py&quot;, line 1618, in &lt;module&gt; x = df1.groupby('sym').apply(replace_second_row) File &quot;C:\Users\AF\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\core\groupby\groupby.py&quot;, line 894, in apply result = self._python_apply_general(f, self._selected_obj) File &quot;C:\Users\AF\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\core\groupby\groupby.py&quot;, line 928, in _python_apply_general keys, values, mutated = self.grouper.apply(f, data, self.axis) File &quot;C:\Users\AF\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\core\groupby\ops.py&quot;, line 238, in apply res = f(group) File &quot;D:\python\py_files\example_df.py&quot;, line 1614, in replace_second_row df.iloc[1, df.columns.get_loc('open'): df.columns.get_loc('open') + 2] = row File &quot;C:\Users\AF\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\core\indexing.py&quot;, line 689, in __setitem__ self._has_valid_setitem_indexer(key) File &quot;C:\Users\AF\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\core\indexing.py&quot;, line 1401, in _has_valid_setitem_indexer raise IndexError(&quot;iloc cannot enlarge its target object&quot;) IndexError: iloc cannot enlarge its target object </code></pre> <p>For more clarification of the process, I have uploaded an image. 
The highlighted rows are the rows that are needed to be selected/changed.</p> <p><a href="https://i.sstatic.net/8BH0z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8BH0z.png" alt="enter image description here" /></a></p>
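The core transformation is independent of pandas: take the first row of each `sym` group in the second table and write its `open`/`high` into the second row of the matching group in the first table. A plain-Python sketch of that logic (in pandas, `df1.groupby('sym').cumcount() == 1` gives the same "second row of each group" mask):

```python
def patch_second_rows(rows1, rows2):
    """rows1/rows2: lists of dicts with at least 'sym', 'open', 'high' keys,
    in original row order."""
    first_in_2 = {}
    for r in rows2:
        first_in_2.setdefault(r["sym"], r)     # first row of each group in df2
    count = {}
    out = []
    for r in rows1:
        r = dict(r)                            # don't mutate the input rows
        count[r["sym"]] = count.get(r["sym"], 0) + 1
        if count[r["sym"]] == 2 and r["sym"] in first_in_2:
            src = first_in_2[r["sym"]]         # replace the second row's values
            r["open"], r["high"] = src["open"], src["high"]
        out.append(r)
    return out
```

Groups with fewer than two rows (like `c` in the example) pass through unchanged, which matches the expected output.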
<python><pandas>
2023-12-16 03:02:09
4
2,679
AmirX
77,669,563
699,211
Pandas: apply date_range() to a group of rows that share a fixed date?
<p>I have a csv file of WTA matches that are structured like this:</p> <pre><code>season,date,name,location,level,surface,draw,round,winner,loser,status 1987,1986-12-01,Argentine Open,Buenos Aires,C1,clay,56,R64,Maria Lindstrom,Patrizia Murgo,f 1987,1986-12-01,Argentine Open,Buenos Aires,C1,clay,56,R64,Gabriela Mosca,Pascale Etchemendy,f 1987,1986-12-01,Argentine Open,Buenos Aires,C1,clay,56,R64,Vicki Nelson,Xochitl Escobedo,f ... 1987,1986-12-01,Argentine Open,Buenos Aires,C1,clay,56,SF,Gabriela Sabatini,Lori Mcneil,f 1987,1986-12-01,Argentine Open,Buenos Aires,C1,clay,56,SF,Arantxa Sanchez Vicario,Mariana Perez Roldan,f 1987,1986-12-01,Argentine Open,Buenos Aires,C1,clay,56,F,Gabriela Sabatini,Arantxa Sanchez Vicario,f ... 2023,2023-10-16,Transylvania Open,Cluj-Napoca,WTA250,hard,32,R32,Eva Lys,Sorana Cirstea,f 2023,2023-10-16,Transylvania Open,Cluj-Napoca,WTA250,hard,32,R32,Jaqueline Cristian,Celine Naef,f 2023,2023-10-16,Transylvania Open,Cluj-Napoca,WTA250,hard,32,R32,Patricia Maria Tig,Martha Matoula,f ... 
2023,2023-10-16,Transylvania Open,Cluj-Napoca,WTA250,hard,32,SF,Tamara Korpatsch,Eva Lys,f 2023,2023-10-16,Transylvania Open,Cluj-Napoca,WTA250,hard,32,SF,Elena-Gabriela Ruse,Rebeka Masarova,f 2023,2023-10-16,Transylvania Open,Cluj-Napoca,WTA250,hard,32,F,Tamara Korpatsch,Elena-Gabriela Ruse,f </code></pre> <p>After reading the contents into a DataFrame, I would like to transform the DataFrame in such a way that for each tournament, which are unique in the combination of <code>['date','name']</code>, the date is advanced by one day after the initial <code>round</code> is completed.</p> <pre><code>date,round 1986-12-01,R64 1986-12-02,R32 1986-12-03,R16 1986-12-04,QF 1986-12-05,SF 1986-12-06,F </code></pre> <p>The result should look like this:</p> <pre><code>season,date,name,location,level,surface,draw,round,winner,loser,status 1987,1986-12-01,Argentine Open,Buenos Aires,C1,clay,56,R64,Maria Lindstrom,Patrizia Murgo,f 1987,1986-12-01,Argentine Open,Buenos Aires,C1,clay,56,R64,Gabriela Mosca,Pascale Etchemendy,f 1987,1986-12-01,Argentine Open,Buenos Aires,C1,clay,56,R64,Vicki Nelson,Xochitl Escobedo,f ... 1987,1986-12-05,Argentine Open,Buenos Aires,C1,clay,56,SF,Gabriela Sabatini,Lori Mcneil,f 1987,1986-12-05,Argentine Open,Buenos Aires,C1,clay,56,SF,Arantxa Sanchez Vicario,Mariana Perez Roldan,f 1987,1986-12-06,Argentine Open,Buenos Aires,C1,clay,56,F,Gabriela Sabatini,Arantxa Sanchez Vicario,f ... 2023,2023-10-16,Transylvania Open,Cluj-Napoca,WTA250,hard,32,R32,Eva Lys,Sorana Cirstea,f 2023,2023-10-16,Transylvania Open,Cluj-Napoca,WTA250,hard,32,R32,Jaqueline Cristian,Celine Naef,f 2023,2023-10-16,Transylvania Open,Cluj-Napoca,WTA250,hard,32,R32,Patricia Maria Tig,Martha Matoula,f ... 
2023,2023-10-19,Transylvania Open,Cluj-Napoca,WTA250,hard,32,SF,Tamara Korpatsch,Eva Lys,f 2023,2023-10-19,Transylvania Open,Cluj-Napoca,WTA250,hard,32,SF,Elena-Gabriela Ruse,Rebeka Masarova,f 2023,2023-10-20,Transylvania Open,Cluj-Napoca,WTA250,hard,32,F,Tamara Korpatsch,Elena-Gabriela Ruse,f </code></pre> <p>The <code>date_range()</code> function seems like a good fit, but as I'm new to Pandas, I don't see a straightforward way to use it. What is a good solution to this?</p>
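Before reaching for `date_range()`, the per-row rule is just "offset the tournament's start date by the round's position in the draw, relative to the first round played". A dependency-free sketch of that mapping (the round progression is assumed from the sample data; extend it if the file contains R128, round-robin, etc.):

```python
from datetime import date, timedelta

# Assumed round progression, earliest to latest.
ROUND_ORDER = ["R64", "R32", "R16", "QF", "SF", "F"]
ORDER = {r: i for i, r in enumerate(ROUND_ORDER)}

def round_date(start, first_round, this_round):
    """Date of `this_round` for a tournament whose `first_round` was played
    on `start`, advancing one day per completed round."""
    return start + timedelta(days=ORDER[this_round] - ORDER[first_round])
```

In pandas the same idea applies per `['date', 'name']` group: map `round` through `ORDER`, subtract the group's minimum, and add the result to `date` as a day-valued `timedelta`.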
<python><pandas><dataframe>
2023-12-16 01:52:11
0
709
vmorph
77,669,406
8,661,071
SharePoint programatically access error: AADSTS65001 DelegationDoesNotExist The user or administrator has not consented to use the app with ID X
<p>I’m using office365-rest-python-client to programmatically access a SharePoint site in Office365 online.</p> <p>The application I’m acceding this is just a Python script that will run in a backend 
For that, I went to the <a href="https://entra.microsoft.com/" rel="nofollow noreferrer">https://entra.microsoft.com/</a> admin center and did the following:</p> <ul> <li>1- Create an app in Applications &gt; App Registration</li> <li>2- Added a Secret to that application</li> <li>3- Granted Permission to that application to access SharePoint API</li> <li>4- Granted admin consent</li> </ul> <p>Then, using office365-rest-python-client, I used the application clientId and the SecretValue to send a request to add a page.</p> <p>The request fails with the following error showing in the Audit Logs in Microsoft Office365 Entra:</p> <pre><code>Based on the information you provided we have identified following issue and recommend taking the action to resolve the issue. Error Code: 65001 Message: The user or administrator has not consented to use the application with ID ''(). Send an interactive authorization request for this user and resource. </code></pre> <p>Going deeper into this error, the Microsoft documentation states:</p> <pre><code>AADSTS65001 DelegationDoesNotExist - The user or administrator has not consented to use the application with ID X. Send an interactive authorization request for this user and resource. </code></pre> <p>The client side shows 
</p> <p><code>POST /sites/security/_api/contextInfo</code></p> <p>and</p> <p><code>POST /sites/security/_api/sitePages/pages</code></p> <p>returning</p> <p><code>401 client error “Unauthorized for url : https://my-sharepoint/sites/security/_api/sitePages/pages</code></p> <p>What could be the cause of this issue?</p> <p>Thanks in advance</p>
<python><sharepoint><office365>
2023-12-16 00:27:14
1
1,349
MasterOfTheHouse
77,669,288
4,251,301
How to execute GPU-related heavy task in the background?
<p>I have a <strong>server-client</strong> project with <code>/analyze</code> URL to analyze something with <em><strong>GPU</strong></em>. The <strong>client</strong> application sends a request with the name of a file to the server through Python's <code>requests</code> module.</p> <pre><code># client.py import requests def send_request(host, port, file): ''' host (str): Hostname / IP address, port (int): Port ''' url = f'http://{host}:{port}/analyze' body = {'file': file} response = requests.post(url, data = body) status = response.json()['status'] print(status) if __name__ == &quot;__main__&quot;: send_request(&quot;localhost&quot;, 5000, &quot;test.h5&quot;) </code></pre> <p>When the server gets the request, it starts the analysis which takes around 77-80 seconds per request. I wanted to start the analysis in the background as it takes a lot of time. The server built with <code>flask</code> framework.</p> <pre><code># server.py import json, logging from concurrent.futures import ProcessPoolExecutor from flask import Flask, request logging.basicConfig(format='[%(asctime)s] %(message)s', datefmt='%Y-%m-%d %H:%M:%S') app = Flask(__name__) EXECUTOR = ProcessPoolExecutor() def apply_algorithm(file): # GPU-related algorithm -- image/video-based analysis (very heavy task) @app.route('/analyze', methods = ['POST']) def analyze(): file = request.form['file'] message = None try: message = f'Processing started for {file}!' EXECUTOR.submit(apply_algorithm, file) except Exception as error: message = f'Error: Unable to analyze {file}!' logging.warning(f&quot;Error: {error}&quot;) status = {'status': message} return json.dumps(status) if __name__ == &quot;__main__&quot;: app.run(debug=True, host='0.0.0.0', port=5000) </code></pre> <p><strong>Problems:</strong> The client waits for the server's response (for 77-80 seconds as said earlier) before sending the next request to the server. 
I already tried <a href="https://stackoverflow.com/questions/59213379/make-function-not-to-wait-for-other-function-inside-it">this</a>, but it did not help. I also tried an <code>async</code> request-response, but that did not help either.</p> <ul> <li>Why is the parallel processing (with <code>ProcessPoolExecutor</code>) not working here? (Is it because the task is already running on the GPU, or is there another reason?) I also tried <code>ThreadPoolExecutor</code>, but it did not work.</li> <li>Why is the <code>async</code> request-response also not working?</li> </ul>
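As a sanity check (a hedged sketch — it involves no Flask or GPU): `submit()` itself returns immediately, and only `future.result()` blocks. If that holds but the client still waits, the delay is likely elsewhere — common suspects (assumptions, not verified for this setup) include the debug reloader running the module twice, or a CUDA context that cannot survive `fork` into pool workers, for which a `spawn` start method (`ProcessPoolExecutor(mp_context=multiprocessing.get_context("spawn"))`) is the usual workaround. The stdlib demonstration, using `ThreadPoolExecutor` only because it shares the same API:

```python
import time
from concurrent.futures import ThreadPoolExecutor  # same interface as ProcessPoolExecutor

def slow_task(x):
    time.sleep(0.5)          # stand-in for the 77-80 s GPU analysis
    return x * 2

executor = ThreadPoolExecutor()
t0 = time.perf_counter()
future = executor.submit(slow_task, 21)   # returns at once; work runs in background
submit_time = time.perf_counter() - t0
result = future.result()                  # only this call blocks, for ~0.5 s
```

If the equivalent timing inside the Flask handler shows `submit` returning fast, the 77-80 s is being spent before or after the handler, not in the executor.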
<python><multithreading><flask><parallel-processing><multiprocessing>
2023-12-15 23:35:19
1
852
jislam
77,669,205
226,081
pytest 5.x+ upgrade: using a command-line flag to run/skip decorated tests
<p>I'm trying to upgrade from pytest 4.x to something newer. However, I'm running into issues where <code>pytest.config</code> was removed and I'm not sure how to accommodate the new syntax. Specifically, I'm trying to specify a command line flag that can run or skip specific decorated tests depending on whether or not the flag is present.</p> <p>This is how it works in pytest 4.x.</p> <pre class="lang-py prettyprint-override"><code># common.py integration = pytest.mark.skipif( not pytest.config.getoption('--integration', False), ) </code></pre> <p>Then, elsewhere in the tests..</p> <pre class="lang-py prettyprint-override"><code># test_something.py from .common import integration @integration def test_mytest(): 1 == 1 @integration def test_other_mytest(): 2 == 2 </code></pre> <p>In newer versions of pytest, the above fails with:</p> <p><code>AttributeError: module 'pytest' has no attribute 'config'</code></p> <p>In pytest 5.x+, how can I achieve the same result as above - how can I pass in flag to pytest and run/skip certain tests based on that flag? Preferably, it would be nice to decorate them as in the example, as I have a lot of tests using the decorator syntax and it could be difficult to update them all. Any advice is much appreciated.</p>
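For reference, the usual pytest 5+ replacement moves the flag handling into `conftest.py` hooks. A sketch (the marker name `integration` comes from the question; everything else is illustrative):

```python
# conftest.py -- sketch of the pytest 5+ approach to a run/skip flag.
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--integration", action="store_true", default=False,
        help="also run tests marked @pytest.mark.integration",
    )

def pytest_configure(config):
    # Register the marker so --strict-markers does not reject it.
    config.addinivalue_line(
        "markers", "integration: test needs --integration to run"
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--integration"):
        return  # flag present: run everything
    skip = pytest.mark.skip(reason="pass --integration to run")
    for item in items:
        if "integration" in item.keywords:
            item.add_marker(skip)
```

Tests then use `@pytest.mark.integration` directly; if rewriting many decorators is a concern, `integration = pytest.mark.integration` in `common.py` keeps the existing import sites working unchanged.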
<python><python-3.x><pytest><pytest-fixtures>
2023-12-15 22:58:27
1
10,861
Joe Jasinski
77,669,146
8,100,895
Maximum sum, min length subset
<p>I am trying to solve this popular problem:</p> <p>&quot;Given an integer array, divide the array into two subsets, A and B, while respecting the following conditions.</p> <ol> <li><p>the intersection of A and B is null.</p> </li> <li><p>the union of A and B is equal to the original array.</p> </li> <li><p>the number of elements in subset A is minimal.</p> </li> <li><p>the sum of A's elements is greater than the sum of B's elements.</p> </li> </ol> <p>Return the subset A in increasing order, where the sum of A's elements is greater than the sum of B's elements. If more than one subset exists, return the one with the maximal sum.&quot;</p> <p>I found several solutions online, but none of them solves it when the test case is <code>[2,2,2,5]</code>.</p> <p>I wrote the following code:</p> <pre><code>def subsetA(nums): nums.sort(reverse=True) subset_a = [] sum_a = 0 sum_b = 0 for num in nums: if sum_a &lt;= sum_b: sum_a += num subset_a.append(num) else: sum_b += num return sorted(subset_a) </code></pre> <p>This solution works for a lot of cases, but it doesn't work for the test case I mentioned above.</p> <p>The answer should be <code>[2,2,2]</code>. But I cannot think of any solution that will return that answer.</p> <p>How can I improve this code?</p>
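For comparison, the textbook greedy for this kind of problem sorts descending and keeps taking the largest remaining element until A's sum exceeds half the total; taking the largest elements first is what keeps A's size minimal and its sum maximal. Note that on `[2,2,2,5]` this returns `[2,5]` (two elements, sum 7 > 4), which also satisfies all four stated conditions with a smaller A than `[2,2,2]`:

```python
def subset_a(nums):
    # Greedy sketch: take the largest remaining element until
    # sum(A) exceeds half of the total, i.e. sum(A) > sum(B).
    total = sum(nums)
    picked, running = [], 0
    for n in sorted(nums, reverse=True):
        picked.append(n)
        running += n
        if 2 * running > total:
            break
    return sorted(picked)
```

Whether this matches the expected output depends on the judge; as argued above, a size-2 subset exists for `[2,2,2,5]`, so an expected answer of `[2,2,2]` would contradict condition 3 as written.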
<python><algorithm><computation-theory>
2023-12-15 22:39:15
2
1,966
Jonathan
77,668,958
13,078,279
Can a Flask (or other SSR) app receive client-side events without JS?
<p>I am making an app (as an educational exercise) that is purely rendered server-side and uses no client-side JS whatsoever. A key part of the app is that the server needs to be able to intercept <em>client-side</em> events - keyboard input, click, drag, cursor location, etc.</p> <p>A rough sketch would be something like this -</p> <p><code>app.py</code></p> <pre class="lang-py prettyprint-override"><code>from flask import Flask, request, render_template app = Flask(__name__) counter = 0 @app.route(&quot;/&quot;, methods=[&quot;GET&quot;, &quot;POST&quot;]) def hello_world(): global counter if request.method == &quot;POST&quot;: counter += 1 return render_template(&quot;index.html&quot;, counter=counter) </code></pre> <p><code>templates/index.html</code>:</p> <pre class="lang-html prettyprint-override"><code> &lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;UTF-8&quot;&gt; &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1.0&quot;&gt; &lt;title&gt;Flask SSR demo&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Test Flask no JS app&lt;/h1&gt; &lt;p&gt;You have clicked {{ counter }} times!&lt;/p&gt; &lt;script&gt; document.addEventListener(&quot;click&quot;, (e) =&gt; { fetch(&quot;/&quot;, { &quot;method&quot;: &quot;POST&quot;, }); }) &lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>However, this method requires JS to register event listeners. Is there any way to do it purely from the server-side?</p>
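The classic JS-free mechanism is a plain HTML `<form>`: the browser itself sends the POST when the button is clicked, so the server sees the "event" with zero client-side script. (This only covers submit/click-type events; cursor position or drag cannot reach the server without script.) Here is a stdlib-only WSGI sketch of the idea, a stand-in for the Flask app in the question; in the Flask version the template change is the same, replacing the `<script>` block with the `<form>`:

```python
# Minimal WSGI app (stdlib-only stand-in for the Flask example): the HTML
# <form> makes the browser POST on click, so no JavaScript is needed.
counter = 0

def app(environ, start_response):
    global counter
    if environ["REQUEST_METHOD"] == "POST":
        counter += 1
    body = (
        "<!DOCTYPE html><html><body>"
        f"<p>You have clicked {counter} times!</p>"
        '<form method="post" action="/">'
        '<button type="submit">Click me</button>'
        "</form></body></html>"
    ).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

The Python side of the Flask example stays exactly as in the question; only the template's `<script>` is swapped for the `<form>` above.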
<javascript><python><flask>
2023-12-15 21:43:20
0
416
JS4137
77,668,905
17,889,328
PyCharm bug? Inspection false positives when importing an asynccontextmanager
<p><strong>EDIT</strong>: now a reproduced issue <a href="https://youtrack.jetbrains.com/issue/PY-65385" rel="nofollow noreferrer">https://youtrack.jetbrains.com/issue/PY-65385</a></p> <p><strong>EDIT</strong>: I made a more minimal reproduction and discovered it only happens when importing.</p> <p>As of today, PyCharm is highlighting calls to an asynccontextmanager as missing a non-existent param &quot;_P&quot;, but only if I import the manager from another module.</p> <p>There are no runtime errors - just PyCharm highlighting and complaining.</p> <pre class="lang-py prettyprint-override"><code>a.py @asynccontextmanager async def cont_manager(): try: ... yield finally: ... async def get_man(): async with cont_manager() as con_man: ... </code></pre> <p>works fine, with no highlighting or inspection errors.</p> <p>but</p> <pre><code>a.py from b import cont_manager async def get_man(): async with cont_manager() as con_man: ... b.py from contextlib import asynccontextmanager @asynccontextmanager async def cont_manager(): try: ... yield finally: ... </code></pre> <p>PyCharm highlights the closing paren of <code>async with cont_manager()</code> and says &quot;Parameter '_P' unfilled&quot;.</p> <p>A context manager that does require params gets the same inspection error:</p> <pre><code>a.py from b import cont_man_params async with cont_man_params(param1='pppprrrrr1') as con_man_params: ... b.py @asynccontextmanager async def cont_man_params(param1, param2): print(param1, param2) try: yield finally: ...
</code></pre> <p>I get <code>unexpected argument</code> for <code>param1='...'</code>, <code>unfulfilled param '_P'</code>, and no mention of the missing param2.</p> <p>I tried invalidating all caches etc., deleted .idea, reverted to a known good revision, created a new project to make minimal examples, and tried 3.11 and 3.12 - the issue persists.</p> <blockquote> <p>**** original message:</p> <p>I'm relatively green, but I think this is the IDE, not me?</p> <p>I'm on PyCharm 2023.3.1 (Professional Edition). I have AI Assistant - not that that should be relevant, but when something odd happens I tend to look at the new tech in the stack. I've updated PyCharm and all plugins, invalidated all caches and restarted, and checked out known good revisions of the repo, and in any case my tests all still pass - it's just the IDE that's complaining.</p> <p>Yesterday everything was fine; today I opened PyCharm, made no changes, and in loads of places the IDE is complaining that params are either unfilled or unexpected.</p> <p>E.g. here in <code>test_reddit_cm()</code> the IDE says the call <code>async with reddit_cm() as reddit:</code> suffers <code>Parameter '_P' unfulfilled</code> in reddit_cm().</p> <p>I have never heard of &quot;_P&quot;; I searched the codebase, and apart from it being included in other strings, e.g. &quot;EPISODE_PAGE_TITLE&quot;, it is not a param,
and even if it were, it clearly is not in the async def of reddit_cm().</p> <p>There are other issues too, like funcs which do require params drawing IDE complaints that the param is unexpected.</p> <pre class="lang-py prettyprint-override"><code>from contextlib import asynccontextmanager from asyncpraw.reddit import Reddit, Subreddit from data.consts import GURU_SUB, REDDIT_CLIENT_ID, REDDIT_CLIENT_SEC, \ REDDIT_REF_TOK, REDIRECT, TEST_SUB, TEST_WIKI, USER_AGENT @asynccontextmanager async def reddit_cm() -&gt; Reddit: try: async with Reddit( client_id=REDDIT_CLIENT_ID, client_secret=REDDIT_CLIENT_SEC, user_agent=USER_AGENT, redirect_uri=REDIRECT, refresh_token=REDDIT_REF_TOK ) as reddit: yield reddit finally: await reddit.close() ... import pytest from asyncpraw.models import WikiPage from asyncpraw.reddit import Reddit, Subreddit from data.consts import EPISODES_WIKI, GURU_SUB, TEST_SUB, TEST_WIKI from gurupod.redditbot.managers import reddit_cm, \ subreddit_cm, wiki_page_cm from gurupod.redditbot.monitor import flair_submission_write_optional, run_jobs from gurupod.redditbot.subred import submission_in_stream_by_id from gurupod.redditbot.wrrite_to_web import edit_reddit_wiki, submit_episode_subreddit from gurupod.writer import RWikiWriter @pytest.mark.asyncio async def test_reddit_cm(): async with reddit_cm() as reddit: assert isinstance(reddit, Reddit) </code></pre> <p>It's not really a functional issue, except that if I start ignoring the highlights I'm going to miss real bugs.</p> </blockquote>
<python><import><pycharm><praw><asyncpraw>
2023-12-15 21:27:23
0
704
prosody
77,668,682
2,683,447
Referencing a downstream class in type hinting for a class method
<p>I am building a Python game using pygame and struggling a bit to add typing to things. I have an abstract group class Mobs, which I define with:</p> <pre><code>class Mobs(pygame.sprite.Group): </code></pre> <p>I then create individual monsters later in the same file with:</p> <pre><code>class Mob(pygame.sprite.Sprite): </code></pre> <p>The trouble I'm running into is in creating a method <code>render_image</code> of <code>Mobs</code> to which I'm trying to add type hinting.</p> <pre><code>def render_image(self, sprite: Mob) -&gt; tuple[Surface, Rect]: </code></pre> <p>I get the following error:</p> <p><a href="https://i.sstatic.net/BDSwX.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BDSwX.jpg" alt="error message: Unresolved reference" /></a></p> <p>It seems the reference is unresolved because the class <code>Mob</code> is defined further down in the file. Is there a way I can fix this and add type hinting the way I'm trying to?</p>
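This is the standard forward-reference situation: either quote the name (`sprite: "Mob"`) or put `from __future__ import annotations` (PEP 563) at the top of the module so all annotations become lazy. A pygame-free sketch with stand-in classes (the attributes here are made up for illustration):

```python
class Mobs:
    # "Mob" is only defined further down the file; quoting the annotation
    # defers its lookup until it is actually needed. Alternatively,
    # `from __future__ import annotations` at the top of the module
    # makes every annotation lazy (PEP 563) and the quotes can be dropped.
    def render_image(self, sprite: "Mob") -> tuple[str, int]:
        return (sprite.name, sprite.size)

class Mob:
    def __init__(self, name: str, size: int):
        self.name = name
        self.size = size
```

Both forms are understood by PyCharm and by static checkers, so the same trick should apply to the real `pygame.sprite.Group`/`Sprite` subclasses.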
<python><class><methods><types><pycharm>
2023-12-15 20:32:07
1
4,187
Math chiller
77,668,595
10,997,667
Matplotlib pcolormesh() attributes
<p>If I have an existing <code>pcolormesh()</code> object, is it possible to change the <code>vmin</code> and <code>vmax</code> values? In other words, are <code>vmin</code> and <code>vmax</code> attributes of the <code>pcolormesh</code> object?</p>
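They are not plain attributes; they live on the colormapping machinery, and a `QuadMesh` (like any `ScalarMappable`) exposes them through `set_clim()`/`get_clim()`. A sketch:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for the sketch
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
mesh = ax.pcolormesh(np.arange(16.0).reshape(4, 4), vmin=0.0, vmax=15.0)

# Update the limits on the existing artist; an attached colorbar,
# if any, follows automatically on the next draw.
mesh.set_clim(vmin=2.0, vmax=10.0)
print(mesh.get_clim())
```

Under the hood these are stored on `mesh.norm` (`mesh.norm.vmin` / `mesh.norm.vmax`), which `set_clim` updates for you.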
<python><matplotlib>
2023-12-15 20:04:41
1
787
osprey
77,668,372
9,840,684
Creating a function to standardize labels for each ID
<p>I am attempting to create a function that standardizes a label column for a given ID according to a given criterion.</p> <p>I would like to standardize the label based on the <em>most commonly used label</em> for that ID, and if there is no common/majority label, then just take the first observation as the default standard.</p> <p>The function I have so far is below:</p> <pre><code>def standardize_labels(df, id_col, label_col): # Function to find the most common label or the first one if there's a tie def most_common_label(group): labels = group[label_col].value_counts() # Check if the top two labels have the same count if len(labels) &gt; 1 and labels.iloc[0] == labels.iloc[1]: return group[label_col].iloc[0] return labels.idxmax() # Group by the ID column and apply the most_common_label function common_labels = df.groupby(id_col).apply(most_common_label) # Map the IDs in the original DataFrame to their common labels df['standardized_label'] = df[id_col].map(common_labels) return df </code></pre> <p>It <em>mostly</em> works; however, I've noticed a quirk: when there is a shift in the trend of the labels, the labels then change within a given ID like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">ID</th> <th style="text-align: center;">raw_label</th> <th style="text-align: right;">standardized_label</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">222</td> <td style="text-align: center;">LA Metro</td> <td style="text-align: right;">LA Metro</td> </tr> <tr> <td style="text-align: left;">222</td> <td style="text-align: center;">LA Metro</td> <td style="text-align: right;">LA Metro</td> </tr> <tr> <td style="text-align: left;">222</td> <td style="text-align: center;">Los Angeles Metro</td> <td style="text-align: right;">Los Angeles Metro</td> </tr> <tr> <td style="text-align: left;">222</td> <td style="text-align: center;">LA Metro</td> <td style="text-align: right;">Los Angeles Metro</td>
</tr> <tr> <td style="text-align: left;">222</td> <td style="text-align: center;">Los Angeles Metro</td> <td style="text-align: right;">Los Angeles Metro</td> </tr> </tbody> </table> </div> <p>Instead, I'm hoping for all of the standardized_label values to just be <strong>LA Metro</strong>, since that is the majority label for that ID.</p>
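For reference, here is a self-contained sketch of the same per-ID majority logic (same tie-break: first observed label). On the table above it maps every row for ID 222 to LA Metro, so if mixed output appears in practice it may be worth checking whether the mapping is being recomputed against an already-modified frame rather than the logic itself failing:

```python
import pandas as pd

def standardize_labels(df, id_col, label_col):
    def pick(labels):
        counts = labels.value_counts()
        # Top-two tie -> fall back to the first observed label.
        if len(counts) > 1 and counts.iloc[0] == counts.iloc[1]:
            return labels.iloc[0]
        return counts.idxmax()

    # One representative label per ID, then broadcast it back to every row.
    mapping = df.groupby(id_col)[label_col].agg(pick)
    out = df.copy()
    out["standardized_label"] = out[id_col].map(mapping)
    return out
```

`groupby(...)[col].agg(func)` hands each group's label Series to `pick`, so this is the same structure as the original, just without `apply` on the whole frame.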
<python><pandas><dataframe><numpy><group-by>
2023-12-15 19:11:35
1
373
JLuu
77,668,199
856,070
How to have python create log entries when called by postfix
<p>I have configured postfix with <code>mailbox_command = /path/to/my_python_script.py</code>.</p> <p>If my Python script fails, I see the results of the exception in /var/log/mail.log.</p> <p>When I call <code>logging.error(message)</code>, however, that message does not appear in the mail.log file.</p> <p>Is there a way to make that happen without just raising a SystemExit exception?</p>
<python><logging><postfix-mta>
2023-12-15 18:32:34
0
3,148
jcfollower
77,668,127
10,545,426
Find pytest tests that are not marked
<p>We have so many tests in our project that it takes forever for them to complete on a single runner. So, I use pytest.mark to group the tests, and in GitHub Actions I build a matrix based on these marks. The problem is that sometimes we forget to mark tests and they never run. Is there a way to detect tests that do not have a mark? Maybe create a warning or error? I want to build a CI script that can detect this.</p>
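One way to fail fast in CI is a `conftest.py` hook that errors out when a collected test carries none of the known group marks. A sketch (the group names here are made up; substitute the marks your matrix uses):

```python
import pytest

# Hypothetical group marks -- replace with the marks your CI matrix uses.
GROUP_MARKS = {"fast", "slow", "integration"}

def pytest_collection_modifyitems(config, items):
    unmarked = [
        item.nodeid
        for item in items
        if not GROUP_MARKS.intersection(m.name for m in item.iter_markers())
    ]
    if unmarked:
        raise pytest.UsageError(
            "tests missing a group mark:\n" + "\n".join(unmarked)
        )
```

`pytest --collect-only` exercises the check without running anything, which makes it cheap as a standalone CI step; pairing it with `--strict-markers` also catches misspelled marks.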
<python><python-3.x><pytest>
2023-12-15 18:14:27
1
399
Stalin Thomas
77,667,916
731,351
Convert Polars string columns into list of integers
<p>I am trying to convert a TSV file in <a href="https://felixfan.github.io/bedtools/" rel="nofollow noreferrer">BED12 format</a> to a polars data frame. I get two columns encoded as strings containing comma-separated integers. My solution (simplified) involves going through structs:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame( { &quot;chrom&quot;: [&quot;1&quot;, &quot;1&quot;, &quot;2&quot;, &quot;X&quot;], &quot;blockSizes&quot;: [&quot;10,29,&quot;, &quot;20,22,&quot;, &quot;30,25,&quot;, &quot;40,23,&quot;], &quot;blockStarts&quot;: [&quot;0,50,&quot;, &quot;0,45,&quot;, &quot;0,60,&quot;, &quot;0,70,&quot;] }) df.with_columns(pl.col(&quot;blockSizes&quot;).str.split(by=&quot;,&quot;) .list.slice(0, 2) .list.to_struct(upper_bound=2) ).unnest(&quot;blockSizes&quot;).with_columns( pl.col(&quot;field_0&quot;).cast(pl.Int32).alias(&quot;blockSizes_0&quot;), pl.col(&quot;field_1&quot;).cast(pl.Int32).alias(&quot;blockSizes_1&quot;), ).drop(&quot;field_0&quot;, &quot;field_1&quot;) </code></pre> <pre><code>shape: (4, 4) ┌───────┬─────────────┬──────────────┬──────────────┐ │ chrom ┆ blockStarts ┆ blockSizes_0 ┆ blockSizes_1 │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i32 ┆ i32 │ ╞═══════╪═════════════╪══════════════╪══════════════╡ │ 1 ┆ 0,50, ┆ 10 ┆ 29 │ │ 1 ┆ 0,45, ┆ 20 ┆ 22 │ │ 2 ┆ 0,60, ┆ 30 ┆ 25 │ │ X ┆ 0,70, ┆ 40 ┆ 23 │ └───────┴─────────────┴──────────────┴──────────────┘ </code></pre> <p>Obviously I can just do the same with the <code>blockStarts</code> column, but I hope there is a simpler way to do it.</p>
<python><dataframe><python-polars>
2023-12-15 17:32:16
1
529
darked89
77,667,669
2,256,700
Starting or resuming async method/coroutine without awaiting it in Python
<p>In JavaScript (and other languages) an <code>async</code> function is immediately executed upon its invocation until the first <code>await</code> is encountered. This is not the case in Python. From the <a href="https://docs.python.org/3/library/asyncio-task.html#coroutines" rel="nofollow noreferrer"><code>asyncio</code> documentation</a>:</p> <blockquote> <p>Note that simply calling a coroutine will not schedule it to be executed:</p> </blockquote> <p>I find this very confusing, because my mental model of asynchronous programming is like this:</p> <ul> <li><strong>S</strong>etup/Initialization/Queuing of asynchronous task as soon as possible (usually quick and nonblocking)</li> <li><strong>A</strong>synchronous task happens in background</li> <li>Do other stuff</li> <li><strong>W</strong>aiting for task to complete as late as possible. Maybe its already completed, so no waiting occurs.</li> </ul> <pre><code>| S | AAAAAAAAAAAAAAAAAAAAA | W | | S | AAAAAAAAAAAAA | W | | S | AAAAAAAAAA | WWWWWW | </code></pre> <p>To reduce waiting time the Setup should be executed as soon as possible. But if the method is <code>async</code> in Python the &quot;Setup&quot; part is not executed by calling the method at all. Only if <code>await</code> (or <code>gather</code>, etc.) is used is it actually started. 
That makes it very hard to start a bunch of coroutines and wait for all of them or batches to finish, because if you use <code>asyncio.gather</code> they are all started at the very end and at the same time, which can waste a lot of time.</p> <p>One way to execute the setup part immediately without awaiting the method is this:</p> <pre class="lang-py prettyprint-override"><code>async def work(): print(&quot;Setup...&quot;) await asyncio.sleep(1) async def main(): task = asyncio.create_task(work()) await asyncio.sleep(0) # do something else await task </code></pre> <p>But that is quite cumbersome (and ugly) and also puts me at the whim of the event loop to actually choose the newly created task and not just swap back to the <code>main</code> method or something entirely different.</p> <p>I managed to come up with this method decorator that immediately executes an async method by using <code>send(None)</code> and wraps the <code>async</code> method into a normal method that returns an <code>Awaitable</code>:</p> <pre class="lang-py prettyprint-override"><code>def js_like_await(coro): class JSLikeCoroutine: def __init__(self, first_return, coro_obj): self.first_return = first_return self.coro_obj = coro_obj def __await__(self): try: yield self.first_return while True: yield self.coro_obj.send(None) except StopIteration as e: return e.value def coroutine_with_js_like_await(*args, **kwargs): coro_obj = coro(*args, **kwargs) first_return = None try: first_return = coro_obj.send(None) return JSLikeCoroutine(first_return, coro_obj) except StopIteration as e: result = asyncio.Future() result.set_result(e.value) return result return coroutine_with_js_like_await </code></pre> <p>But for this to have an effect it would also have to wrap possible inner asynchronous function calls as well, which might not be modifiable for outside callers of the method.</p> <p>I thought I could make this a lot simpler if there was a way to &quot;attempt&quot; a coroutine to start or resume it. 
So switching control flow to the coroutine (similar to what <code>send()</code> does), but only if the async function is currently not waiting for something. Returning immediately, if it does.</p> <p>So I want a method <code>attempt</code> that can be used like this to control the execution flow:</p> <pre class="lang-py prettyprint-override"><code>def attempt(task): while not task.is_waiting: task.continue_to_next_await() async def work(): print(&quot;Setup&quot;) await asyncio.sleep(10) print(&quot;After first sleep&quot;) await asyncio.sleep(10) print(&quot;Done&quot;) return &quot;Result&quot; task = work() # Nothing is printed, because the coroutine is only created. attempt(task) # &quot;Setup&quot; is printed and method returns immediately attempt(task) # Nothing happens, because task is still sleeping time.sleep(15) # Block CPU to wait for sleeping to finish attempt(task) # &quot;After first sleep&quot; is printed, because the sleep time is over attempt(task) # Nothing happens, because task is still sleeping time.sleep(15) # Block CPU to wait for second sleeping to finish attempt(task) # &quot;Done&quot; is printed. attempt(task) # Nothing happens, because task is finished r = await task # Get result. 
</code></pre> <p>Is there a way to achieve something like this?</p> <p>I couldn't get <code>send(None)</code> to work multiple times, because after the second <code>send</code> it crashes with <a href="https://github.com/python/cpython/blob/d05a180350fe20d5fde56c7e525e394a0b282703/Modules/_asynciomodule.c#L1626" rel="nofollow noreferrer"><code>await wasn't used with future</code></a> if I discard the Futures that are returned from <code>send(None)</code>:</p> <pre class="lang-py prettyprint-override"><code>async def work(): print(&quot;Setup&quot;) await asyncio.sleep(10) print(&quot;Done&quot;) async def main(): task = work() _ = task.send(None) # Returns &lt;Future pending&gt; of type &lt;class '_asyncio.Future'&gt; _ = task.send(None) # RuntimeError: await wasn't used with future </code></pre> <p>I feel like I have to somehow give the returned Future back to the event loop, but I don't know how. If I make <code>attempt</code> an <code>__await__</code>-object that just <code>yield</code>s the <code>Future</code> from <code>send</code> and <code>await</code>s it, it will wait for the <code>sleep</code> and the <code>attempt</code> becomes blocking.</p> <p><sub>Note that I'm aware of the obvious solution of &quot;don't make your method <code>async</code>&quot;. But sadly, it is too late to make everyone adhere to that.</sub></p>
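Worth noting: Python 3.12 added exactly the JS-like behaviour described at the top as an opt-in, `asyncio.eager_task_factory`. With it installed, `create_task()` runs the coroutine synchronously up to its first suspension point, so the "Setup" part happens at task-creation time. A guarded sketch (the factory only exists on 3.12+):

```python
import asyncio
import sys

events = []

async def work():
    events.append("setup")      # runs inside create_task() with an eager factory
    await asyncio.sleep(0)      # first real suspension point
    return "Result"

async def main():
    events.clear()
    loop = asyncio.get_running_loop()
    loop.set_task_factory(asyncio.eager_task_factory)  # new in Python 3.12
    task = loop.create_task(work())
    ran_eagerly = events == ["setup"]  # body already ran up to the first await
    return ran_eagerly, await task

if sys.version_info >= (3, 12):
    print(asyncio.run(main()))  # on 3.12+: (True, 'Result')
```

This does not provide the non-blocking `attempt()`/resume-if-ready primitive asked for later in the question, but it removes the "setup is deferred until the first await" pain that motivates it.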
<python><asynchronous><async-await><python-asyncio><coroutine>
2023-12-15 16:46:31
3
989
MaPePeR
77,667,634
445,810
String interning in Python during JSON and XML parsing
<p>Does Python perform string interning to save memory on duplicate keys when deserializing large arrays of dicts read from XML/JSON serialized to disk (e.g. during <code>xml.dom.minidom.parse</code> / <code>json.load</code>)?</p> <p>Thanks!</p>
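A quick empirical check for the JSON side. This probes CPython behaviour, which is an implementation detail rather than a language guarantee: CPython's C decoder memoizes dict keys within a single `loads()`/`load()` call, so repeated keys share one string object (key sharing, not true `sys.intern` interning), while `xml.dom.minidom` makes no comparable promise:

```python
import json

# Two dicts with the same repeated key, parsed in one call.
data = json.loads('[{"name": "a"}, {"name": "b"}]')
key_a = next(iter(data[0]))
key_b = next(iter(data[1]))

# Always equal; identical (`is`) on CPython when the C decoder's
# per-parse key memo is active.
print(key_a == key_b, key_a is key_b)
```

So for the "large array of dicts with duplicate keys" case in the question, `json.load` should not duplicate the key strings on CPython, but values and minidom-parsed text are a different story.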
<python><string><deserialization><json-deserialization><xml-deserialization>
2023-12-15 16:40:34
0
1,164
Vadim Kantorov
77,667,537
6,759,459
arm64 - How do I install SudachiPy - needed for Japanese SpaCy
<p>I should preface by saying that I did follow the <code>SpaCy</code> documentation to install the <code>SpaCy</code> library and the models of interest.</p> <pre><code>pip install -U pip setuptools wheel pip install -U 'spacy[apple]' python -m spacy download zh_core_web_sm python -m spacy download en_core_web_sm python -m spacy download fr_core_news_sm python -m spacy download de_core_news_sm python -m spacy download ja_core_news_sm python -m spacy download es_core_news_sm </code></pre> <p>Currently I am stuck at installing <code>ja_core_news_sm</code> with a <code>python:3.11-slim</code> image in my Docker environment. I am installing a few other <code>SpaCy</code> pipeline models in tandem on my <code>arm64</code> Docker image, and this is the source of the conflict.</p> <p>I get the following error:</p> <pre><code> ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 71.7/71.7 MB 4.4 MB/s eta 0:00:00 Building wheels for collected packages: sudachipy Building wheel for sudachipy (pyproject.toml): started Building wheel for sudachipy (pyproject.toml): finished with status 'error' error: subprocess-exited-with-error × Building wheel for sudachipy (pyproject.toml) did not run successfully.
│ exit code: 1 ╰─&gt; [39 lines of output] running bdist_wheel running build running build_py creating build creating build/lib.linux-aarch64-cpython-311 creating build/lib.linux-aarch64-cpython-311/sudachipy copying py_src/sudachipy/config.py -&gt; build/lib.linux-aarch64-cpython-311/sudachipy copying py_src/sudachipy/__init__.py -&gt; build/lib.linux-aarch64-cpython-311/sudachipy copying py_src/sudachipy/command_line.py -&gt; build/lib.linux-aarch64-cpython-311/sudachipy copying py_src/sudachipy/errors.py -&gt; build/lib.linux-aarch64-cpython-311/sudachipy creating build/lib.linux-aarch64-cpython-311/sudachipy/dictionary copying py_src/sudachipy/dictionary/__init__.py -&gt; build/lib.linux-aarch64-cpython-311/sudachipy/dictionary creating build/lib.linux-aarch64-cpython-311/sudachipy/tokenizer copying py_src/sudachipy/tokenizer/__init__.py -&gt; build/lib.linux-aarch64-cpython-311/sudachipy/tokenizer creating build/lib.linux-aarch64-cpython-311/sudachipy/morphemelist copying py_src/sudachipy/morphemelist/__init__.py -&gt; build/lib.linux-aarch64-cpython-311/sudachipy/morphemelist creating build/lib.linux-aarch64-cpython-311/sudachipy/morpheme copying py_src/sudachipy/morpheme/__init__.py -&gt; build/lib.linux-aarch64-cpython-311/sudachipy/morpheme copying py_src/sudachipy/sudachipy.pyi -&gt; build/lib.linux-aarch64-cpython-311/sudachipy creating build/lib.linux-aarch64-cpython-311/sudachipy/resources copying py_src/sudachipy/resources/sudachi.json -&gt; build/lib.linux-aarch64-cpython-311/sudachipy/resources copying py_src/sudachipy/resources/rewrite.def -&gt; build/lib.linux-aarch64-cpython-311/sudachipy/resources copying py_src/sudachipy/resources/unk.def -&gt; build/lib.linux-aarch64-cpython-311/sudachipy/resources copying py_src/sudachipy/resources/char.def -&gt; build/lib.linux-aarch64-cpython-311/sudachipy/resources warning: build_py: byte-compiling is disabled, skipping. 
running build_ext running build_rust error: can't find Rust compiler If you are using an outdated pip version, it is possible a prebuilt wheel is available for this package but pip is not able to install from it. Installing from the wheel would avoid the need for a Rust compiler. To update pip, run: pip install --upgrade pip and then retry package installation. If you did intend to build this package from source, try installing a Rust compiler from your system package manager and ensure it is on the PATH during installation. Alternatively, rustup (available at https://rustup.rs) is the recommended way to download and update the Rust compiler toolchain. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for sudachipy Failed to build sudachipy ERROR: Could not build wheels for sudachipy, which is required to install pyproject.toml-based projects </code></pre> <p>Here's my Dockerfile:</p> <pre><code># Use the official Python image, with Python 3.11 FROM python:3.11-slim # Set environment variables to reduce Python bytecode generation and buffering ENV PYTHONUNBUFFERED=1 \ PYTHONDONTWRITEBYTECODE=1 # Set working directory WORKDIR /app # Install essential dependencies including Python development headers and GCC RUN apt-get update &amp;&amp; \ apt-get install -y --no-install-recommends \ libpq-dev \ gcc \ ffmpeg \ python3-dev \ libc-dev \ &amp;&amp; apt-get clean &amp;&amp; \ rm -rf /var/lib/apt/lists/* # Update pip and install Python packages COPY ./docker-requirements.txt /app/ RUN pip install --upgrade pip &amp;&amp; \ pip install --no-cache-dir -r docker-requirements.txt # Copy application code to container COPY . 
/app # Expose the port the app runs on EXPOSE 5000 # Make the entrypoint script executable RUN chmod +x /app/shell_scripts/entrypoint.sh /app/shell_scripts/wait-for-it.sh /app/shell_scripts/docker-ngrok-tunnel.sh # Define entrypoint ENTRYPOINT [&quot;/app/shell_scripts/entrypoint.sh&quot;] </code></pre> <p>Reading further, it's an <a href="https://github.com/WorksApplications/SudachiPy" rel="nofollow noreferrer">archived module</a>.</p> <p>Here's the note from PyPI, which explains the issue, given that my Mac has an M2 chip:</p> <pre><code>Binary wheels We provide binary builds for macOS (10.14+), Windows and Linux only for x86_64 architecture. x86 32-bit architecture is not supported and is not tested. MacOS source builds seem to work on ARM-based (Aarch64) Macs, but this architecture also is not tested and require installing Rust toolchain and Cargo. </code></pre> <p>I cannot seem to install all of the models.</p>
<python><docker><spacy><rust-cargo><arm64>
2023-12-15 16:20:47
1
926
Ari
77,667,507
12,775,432
Iteration not working when using a sampler in Pytorch's data loader
<p>I have this sampler class that allows me to sample my data with a different batch size for each batch.</p> <pre><code>class VaribleBatchSampler(Sampler): def __init__(self, dataset_len: int, batch_sizes: list): self.dataset_len = dataset_len self.batch_sizes = batch_sizes self.batch_idx = 0 self.start_idx = 0 self.end_idx = self.batch_sizes[self.batch_idx] def __iter__(self): return self def __next__(self): if self.start_idx &gt;= self.dataset_len: raise StopIteration() batch_indices = torch.arange(self.start_idx, self.end_idx, dtype=torch.long) self.start_idx += (self.end_idx - self.start_idx) self.batch_idx += 1 try: self.end_idx += self.batch_sizes[self.batch_idx] except IndexError: self.end_idx = self.dataset_len return batch_indices </code></pre> <p>But I can't manage to iterate it in an epoch loop. It only works for one epoch.</p> <pre><code>batch_sizes = [4, 10, 7, ..., 2] train_dataset = TensorDataset(x_train, y_train) sampler = VaribleBatchSampler(dataset_len=len(x_train), batch_sizes=batch_sizes) dataloader_train = DataLoader(train_dataset, sampler=sampler) for epoch in np.arange(1, max_epoch): model.train() for x_batch, y_batch in dataloader_train: ... </code></pre>
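The one-epoch behaviour follows directly from `__iter__` returning `self`: the cursor (`start_idx`, `batch_idx`, `end_idx`) is exhausted after the first pass and nothing resets it, so the second epoch hits `StopIteration` immediately. Making `__iter__` a generator gives every epoch a fresh cursor. A dependency-free sketch (plain `range` standing in for `torch.arange`):

```python
class VariableBatchSampler:
    """Each call to __iter__ yields fresh index batches, so the sampler
    survives multiple epochs (unlike an __iter__ that returns self)."""

    def __init__(self, dataset_len, batch_sizes):
        self.dataset_len = dataset_len
        self.batch_sizes = batch_sizes

    def __iter__(self):
        start = 0
        for size in self.batch_sizes:
            if start >= self.dataset_len:
                return
            end = min(start + size, self.dataset_len)
            yield list(range(start, end))  # torch.arange(start, end) in the real class
            start = end
        if start < self.dataset_len:       # tail not covered by batch_sizes
            yield list(range(start, self.dataset_len))
```

When wiring this into `DataLoader`, a sampler that yields whole index lists normally goes in as `batch_sampler=sampler` rather than `sampler=sampler`, so that each yielded list becomes exactly one batch.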
<python><pytorch><pytorch-dataloader>
2023-12-15 16:15:44
1
640
pyaj
77,667,478
6,995,188
PySpark: Having trouble matching a date column to a dictionary
<p>I am certain this is a simple change, but I am having issues matching a date column to a list of items. The example uses the holidays package, but the problem applies to other use cases; my experience in PySpark is just not great!</p> <p>Using the holidays package (<a href="https://pypi.org/project/holidays/" rel="nofollow noreferrer">https://pypi.org/project/holidays/</a>) I retrieve a dict of holidays:</p> <pre><code>nyse_holidays=holidays.financial.ny_stock_exchange.NewYorkStockExchange(years=2018) print(nyse_holidays) </code></pre> <pre><code>{datetime.date(2018, 12, 5): 'Day of Mourning for President George H.W. Bush', datetime.date(2018, 1, 1): &quot;New Year's Day&quot;, datetime.date(2018, 1, 15): 'Martin Luther King Jr. Day', datetime.date(2018, 2, 19): &quot;Washington's Birthday&quot;, datetime.date(2018, 3, 30): 'Good Friday', datetime.date(2018, 5, 28): 'Memorial Day', datetime.date(2018, 7, 4): 'Independence Day', datetime.date(2018, 9, 3): 'Labor Day', datetime.date(2018, 11, 22): 'Thanksgiving Day', datetime.date(2018, 12, 25): 'Christmas Day'} </code></pre> <p>I also have another Spark data frame with the following schema:</p> <pre><code>root |-- id: long (nullable = false) |-- date: timestamp (nullable = false) |-- year: integer (nullable = false) |-- month: integer (nullable = false) |-- day: string (nullable = false) |-- day_of_year: string (nullable = false) |-- hour: string (nullable = false) |-- minute: string (nullable = false) |-- is_weekend: boolean (nullable = false) |-- only_date: date (nullable = false) </code></pre> <p>I simply want to add another field saying whether the date for that row is a holiday or not.</p> <p>The following code never matches any dates:</p> <pre><code>from pyspark.sql.functions import col, create_map, lit from itertools import chain mapping_expr = create_map([lit(x) for x in chain(*nyse_holidays.items())]) #search_date display(df.withColumn(&quot;value&quot;,
mapping_expr[&quot;only_date&quot;]).filter(col(&quot;value&quot;).isNotNull())) </code></pre> <p>If I change the code to use a fixed value to check that the mapping_expr works, then it works fine.</p> <pre><code>search_date = datetime.strptime('2018-01-01', '%Y-%m-%d') display(df.withColumn(&quot;value&quot;, mapping_expr[search_date]).filter(col(&quot;value&quot;).isNotNull())) </code></pre> <p>Preferably the code would just use the 'date' field, but I thought I would create an only_date field.</p> <p>Any recommendations? I am sure I am just missing something silly. I assume it's the conversion of the field being passed into the mapping_expr.</p>
<python><pyspark><jupyter-notebook>
2023-12-15 16:08:54
1
470
user5004137
77,667,269
11,483,674
AWS Python CDK - define the launch template for Batch Compute Environment
<p>In my Python CDK Stack, I'm trying to configure an AWS Batch Compute Environment to be able to use a <code>Launch Template</code> I'm creating in the same stack. Example:</p> <pre class="lang-py prettyprint-override"><code>launch_template = ec2.CfnLaunchTemplate( scope=self, id=f&quot;{stack_id}-EC2LaunchTemplate&quot;, launch_template_name=f&quot;{stack_id}-EC2LaunchTemplate&quot;, launch_template_data=ec2.CfnLaunchTemplate.LaunchTemplateDataProperty( image_id=&quot;ami-xxxxxxxxxxx&quot;, instance_type=&quot;xxxxxxxxxx&quot;, ebs_optimized=True, user_data=Fn.base64(...) ) ) compute_env_gpu = batch.ManagedEc2EcsComputeEnvironment( scope=self, id=&quot;compute_env_id&quot;, ... launch_template=launch_template ) </code></pre> <p>My Stack without the launch template works well. The definition as above passes <code>cdk synth</code>, but fails <code>cdk deploy</code> with the following error:</p> <pre><code>Resource handler returned message: &quot;Error executing request, Exception : LaunchTemplateId or LaunchTemplateName is required. </code></pre> <p>If instead I use this:</p> <pre class="lang-py prettyprint-override"><code>launch_template=batch.CfnComputeEnvironment.LaunchTemplateSpecificationProperty( launch_template_name=f&quot;{stack_id}-EC2LaunchTemplate&quot;, ) </code></pre> <p>I get this error:</p> <pre><code>RuntimeError: Passed to parameter props of new aws-cdk-lib.aws_batch.ManagedEc2EcsComputeEnvironment: Unable to deserialize value as aws-cdk-lib.aws_batch.ManagedEc2EcsComputeEnvironmentProps ├── 🛑 Failing value is an object │ { '$jsii.struct': [Object] } ╰── 🔍 Failure reason(s): ╰─ Key 'launchTemplate': Unable to deserialize value as aws-cdk-lib.aws_ec2.ILaunchTemplate | undefined ├── 🛑 Failing value is an object │ { '$jsii.struct': [Object] } ╰── 🔍 Failure reason(s): ╰─ Value does not have the &quot;$jsii.byref&quot; key </code></pre> <p>How can I fix this?</p>
<python><amazon-ec2><aws-cdk><aws-batch>
2023-12-15 15:29:48
1
632
Luiz Tauffer
77,667,098
13,492,584
osmnx query OpenStreetMap
<p>I am using <strong>Python</strong> and I am trying to use <strong>osmnx</strong> to query <strong>OpenStreetMap</strong> and get data about the Italian city of <a href="https://www.openstreetmap.org/relation/45144#map=12/45.5354/10.2236" rel="nofollow noreferrer"><strong>Brescia</strong></a>.</p> <p>Here's the code I copied from the docs.</p> <pre><code>import osmnx as ox place = &quot;Brescia, Italy&quot; osmnx_buffer = 10000 # in metres g = ox.graph.graph_from_place(place, buffer_dist=osmnx_buffer) g = ox.projection.project_graph(g) ox.plot_graph(g) </code></pre> <p>The only thing I changed is the content of the string <code>place</code>.</p> <p>The problem is that when I run the script, it never finishes (and therefore never plots anything). I tried to read the documentation and change my query, but I do not know what is wrong with it. Can you please suggest how to make this script work?</p> <p>My objective is to work on it with <code>networkx</code>.</p> <p>Thank you in advance.</p>
<python><graph><networkx><openstreetmap><osmnx>
2023-12-15 14:57:09
1
644
hellomynameisA
77,667,026
13,100,938
AttributeError: 'tuple' object has no attribute 'encode' during stateful DoFn
<p>I am using a stateful and timely DoFn to process data 2 seconds after the end of the fixed window I am implementing.</p> <p>I have tested a reproducible example of my code inside the Apache Beam playground. My data is in <code>KV[str, str]</code> format as an input to the DoFn. The only difference I can think of between the playground and my code is that the playground uses a DirectRunner and I am using a DataflowRunner.</p> <p>Before my DoFn I have another DoFn that mutates the input PCollection into the format the stateful DoFn is expecting:</p> <pre class="lang-py prettyprint-override"><code>class AddKeys(beam.DoFn): def __init__(self, settings): self.settings = settings def process(self, element): data = element[&quot;data&quot;] for setting in self.settings: if setting[&quot;data&quot;] == data: yield [ (&quot;tuple key 1&quot;, setting[&quot;tuple key 1&quot;]), (&quot;tuple key 2&quot;, setting[&quot;tuple key 2&quot;]), (&quot;tuple key 3&quot;, setting[&quot;tuple key 3&quot;]), (&quot;element&quot;, str(element)) ] </code></pre> <p>Then my stateful DoFn should take the output and process it:</p> <pre class="lang-py prettyprint-override"><code>class ProcessCollection(beam.DoFn): EXPIRY_TIMER = TimerSpec('expiry', TimeDomain.WATERMARK) BUFFER_STATE = BagStateSpec( 'buffer', ListCoder(StrUtf8Coder())) def process(self, element, timer=beam.DoFn.TimerParam(EXPIRY_TIMER), window=beam.DoFn.WindowParam, buffer=beam.DoFn.StateParam(BUFFER_STATE)): timer.set(window.end + Duration(seconds=2)) buffer.add(str(element)) @on_timer(EXPIRY_TIMER) def expiry(self, buffer=beam.DoFn.StateParam(BUFFER_STATE)): events = buffer.read() for event in events: yield ''.join(event) buffer.clear() </code></pre> <p>Calling DoFn method:</p> <pre class="lang-py prettyprint-override"><code># continue with processing &amp; branch to stateful DoFn extra_processing = ( raw_data_processing | &quot;Add Group Keys&quot; &gt;&gt; beam.Map( lambda message: add_group_key( message, SETTINGS) ) | 
&quot;Fixed Window&quot; &gt;&gt; beam.WindowInto( window.FixedWindows(self.window_length), # if message late by 700ms, still accept allowed_lateness=window.Duration(seconds=0.7) ) | &quot;Group&quot; &gt;&gt; beam.GroupByKey() | &quot;Process Further&quot; &gt;&gt; beam.ParDo(OtherDoFn(SETTINGS, CONFIG)) ) # process data with stateful DoFn ( extra_processing | &quot;Add Keys&quot; &gt;&gt; beam.ParDo(AddKeys(SETTINGS)).with_output_types(KV[str, str]) | &quot;Process Collection&quot; &gt;&gt; beam.ParDo(ProcessCollection()) | 'Log' &gt;&gt; beam.LogElements(with_timestamp=True) ) </code></pre> <p>The error I receive from Google Cloud is:</p> <pre><code>File &quot;/usr/local/lib/python3.10/site-packages/apache_beam/coders/coders.py&quot;, line 429, in encode return value.encode('utf-8') AttributeError: 'tuple' object has no attribute 'encode' [while running 'Add Keys-ptransform-51'] </code></pre> <p><a href="https://pastebin.com/xCxQYq5X" rel="nofollow noreferrer">Full stack trace in pastebin</a> due to size</p> <p>Can anyone determine why this might be occurring?</p>
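A hedged reading of the traceback (an editor's sketch, not a confirmed fix): `with_output_types(KV[str, str])` tells the runner that every output element is a single (key, value) pair of strings, so `StrUtf8Coder` is applied to each half. `AddKeys.process` instead yields one *list* of four tuples, and the string coder is then handed a tuple, which matches the `'tuple' object has no attribute 'encode'` error. The DirectRunner is laxer about coder enforcement than Dataflow, which would explain why the playground run passed. A plain-Python miniature of the difference:

```python
# Plain-Python stand-ins for the DoFn (names mirror the question; not Beam code).
def add_keys_bad(element):
    # Emits ONE element: a list of 2-tuples. Declared as KV[str, str], the
    # string coder is then handed a tuple instead of a str.
    yield [("tuple key 1", "a"), ("element", str(element))]

def add_keys_good(element):
    # Emits one KV pair per yield, which actually matches KV[str, str].
    yield ("tuple key 1", "a")
    yield ("element", str(element))

bad = list(add_keys_bad({"data": 1}))
good = list(add_keys_good({"data": 1}))
assert isinstance(bad[0], list)   # the whole list was a single output element
assert all(isinstance(kv, tuple) and len(kv) == 2 for kv in good)
```

If the downstream `ProcessCollection` genuinely needs all four tuples as one unit, declaring the output type as a list (or serializing it to one string) would be the alternative to splitting the yields.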
<python><google-cloud-platform><encoding><google-cloud-dataflow><apache-beam>
2023-12-15 14:43:54
1
2,023
Joe Moore
77,666,811
4,715,905
python logger - log extra arguments
<p>I have got a logger set up in databricks that saves the output as json files in blob storage. This works fine but I would like to add the data I pass into &quot;extra&quot; into the output JSON as top level keys.</p> <p>This was the suggestion by the databricks AI assistant but extra_fields doesn't seem to ever return anything. Can you suggest what I'm doing wrong?</p> <pre><code>import json import logging import uuid from datetime import datetime from azure.storage.filedatalake import DataLakeServiceClient class AzureDataLakeLogHandler(logging.Handler): def __init__(self, account_name, credential, file_system_name, directory_name): super().__init__() self.account_name = account_name self.credential = credential self.file_system_name = file_system_name self.directory_name = directory_name self.file_name = str(uuid.uuid4().hex) + &quot;.txt&quot; self.client = DataLakeServiceClient( account_url=f&quot;https://{account_name}.dfs.core.windows.net&quot;, credential=credential, ) self.file_system_client = self.client.get_file_system_client( file_system=file_system_name ) def emit(self, record): # Convert timestamp to ISO format iso_timestamp = datetime.utcfromtimestamp(record.created).isoformat() log_entry = record extra_fields = log_entry.__dict__.get(&quot;extra&quot;, None) extra_fields_str = str(extra_fields) if extra_fields else &quot;&quot; # Add each field in extra_fields to the log_data dictionary log_data = { &quot;timestamp&quot;: iso_timestamp, &quot;message&quot;: log_entry.msg, } print(extra_fields) if extra_fields is not None: for key, value in extra_fields.items(): if isinstance(value, str): log_data[key] = value else: log_data[f'{key}_json'] = json.dumps(value) log_json = json.dumps(log_data, default=str) self.upload_to_azure_datalake(log_json) def upload_to_azure_datalake(self, log_json): file_path = f&quot;{self.directory_name}/{self.file_name}&quot; file_client = self.file_system_client.get_file_client(file_path) if not file_client.exists(): 
file_client.create_file() file_client.upload_data(log_json, overwrite=True) if __name__ == &quot;__main__&quot;: # Replace the following values with your Azure Data Lake Storage details account_name, credential, file_system_name, directory_name = ( &quot;testbrh&quot;, &quot;XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==&quot;, &quot;logs&quot;, &quot;logs&quot;, ) # Create the log handler log_handler = AzureDataLakeLogHandler( account_name, credential, file_system_name, directory_name ) # Create a logger and add the custom log handler logger = logging.getLogger(&quot;azure_datalake_logger&quot;) logger.setLevel(logging.DEBUG) logger.handlers.clear() logger.addHandler(log_handler) logger.debug(&quot;This is a debug message.&quot;,extra={&quot;Test&quot;:&quot;testvalue&quot;}) </code></pre>
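One likely reason `extra_fields` is always `None` (an editor's sketch of standard-library behavior, not Databricks-specific): `logger.debug(..., extra={...})` does not store the dict under a record attribute named `extra` — `logging` merges each key of the dict directly into `record.__dict__`. So `record.__dict__.get("extra")` can never find anything; the extras have to be recovered by diffing against the attributes every `LogRecord` carries. A self-contained sketch, with a hypothetical `CapturingHandler` standing in for the Azure upload:

```python
import logging

# Attributes present on every LogRecord; anything beyond these arrived via `extra`.
_DEFAULT_ATTRS = set(vars(logging.LogRecord("", 0, "", 0, "", (), None))) | {"message", "asctime"}

class CapturingHandler(logging.Handler):
    """Collects each record's extra fields (sketch of the emit() fix)."""
    def __init__(self):
        super().__init__()
        self.captured = []

    def emit(self, record):
        # `extra={...}` kwargs are merged straight into record.__dict__,
        # so record.__dict__.get("extra") is always None -- diff instead.
        extras = {k: v for k, v in record.__dict__.items() if k not in _DEFAULT_ATTRS}
        self.captured.append({"message": record.getMessage(), **extras})

logger = logging.getLogger("extra_demo")
logger.setLevel(logging.DEBUG)
handler = CapturingHandler()
logger.handlers = [handler]
logger.debug("This is a debug message.", extra={"Test": "testvalue"})
print(handler.captured[0])
```

The same `extras` dict can then be merged into `log_data` inside `AzureDataLakeLogHandler.emit` in place of the `extra_fields` lookup.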
<python><python-logging>
2023-12-15 14:08:08
0
1,059
Bee_Riii
77,666,517
1,448,641
Assign slice to another slice of same data frame
<p>Consider the following pandas dataframe:</p> <pre><code>df = pd.DataFrame({'a': [1, 2, None, 3, None]}, index=[0, 1, 2, 3, 4]) </code></pre> <p>I want to replace the nan values with some other values from the same data frame, for example:</p> <pre><code>df.loc[[2, 4]] = df.loc[[0, 1]] </code></pre> <p>But that does not work. The result is the exact same data frame as before the assignment. What I can do, however, is</p> <pre><code>df.loc[[2, 4]] = 999 </code></pre> <p>and</p> <pre><code>df.loc[[2, 4]] = [999, 777] </code></pre> <p>and consequently also</p> <pre><code>df.loc[[2, 4]] = df.loc[[0, 1]].to_numpy().tolist() </code></pre> <p>All these examples replace the indexed values with the values given on the rhs, as expected.</p> <p>The question is: why does the first assignment not work in the same manner? If it is a no-op, I think there should at least be a warning.</p>
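An editor's note on why the first assignment appears to do nothing (standard pandas alignment behavior, sketched below): when the right-hand side is a DataFrame, `.loc` assignment aligns it on index labels. The rhs carries labels 0 and 1, which do not match the targets 2 and 4, so the aligned rhs is all-NaN and the already-NaN cells are simply rewritten as NaN. Stripping the rhs index, or relabelling it to match, avoids the alignment:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, None, 3, None]}, index=[0, 1, 2, 3, 4])

# DataFrame-to-DataFrame assignment aligns on the rhs labels (0, 1), which
# do not exist among the targets (2, 4) -> NaN is written over NaN, a no-op.
df.loc[[2, 4]] = df.loc[[0, 1]]
assert df['a'].isna().sum() == 2

# Either strip the index ...
df.loc[[2, 4]] = df.loc[[0, 1]].to_numpy()
assert df.loc[2, 'a'] == 1 and df.loc[4, 'a'] == 2

# ... or relabel the rhs so the labels line up (equivalent alternative):
df2 = pd.DataFrame({'a': [1, 2, None, 3, None]}, index=[0, 1, 2, 3, 4])
df2.loc[[2, 4]] = df2.loc[[0, 1]].set_axis([2, 4])
assert df2.loc[2, 'a'] == 1 and df2.loc[4, 'a'] == 2
```

This also explains why the scalar, list, and `.to_numpy().tolist()` variants all work: none of them carries an index, so there is nothing to align.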
<python><pandas><dataframe><assign>
2023-12-15 13:18:58
0
5,519
MaxPowers
77,666,292
11,901,834
Airflow dependencies with Task decorator
<p>I have an Airflow DAG in which I can't figure out how to implement one particular dependency.</p> <p>The code looks like this:</p> <pre><code># Setup DAG @dag( dag_id=&quot;dag_1&quot;, description=&quot;Dag 1 desc&quot;, schedule=DAG_SCHEDULE, start_date=days_ago(0), catchup=False, default_args={ &quot;retries&quot;: 0, }, ) def dag_runner(): @task(task_id=&quot;task_1&quot;) def task_1(): # some logic return task_1_val @task(task_id=&quot;task_2&quot;) def task_2(input1): # some logic using input1 return task_2_val @task(task_id=&quot;task_3&quot;) def task_3(): # some logic (no return value) @task(task_id=&quot;task_4&quot;) def task_4(): # some logic return task_4_val @task(task_id=&quot;task_5&quot;) def task_5(input1, input2): # some logic using input1 &amp; input2 return task_5_val task_1_val = task_1() task_2_val = task_2(task_1_val) task_3() task_4_val = task_4() task_5_val = task_5(task_2_val, task_4_val) </code></pre> <p>Currently, <code>task_1</code>, <code>task_3</code> and <code>task_4</code> all run in parallel when the DAG starts.</p> <p><code>task_2</code> will start when <code>task_1</code> completes due to it requiring the return value of <code>task_1</code>.</p> <p>I am happy with <code>task_3</code> running at the same time as <code>task_1</code>, but I want <code>task_4</code> to run once <code>task_3</code> has completed (even though it doesn't depend on a return value).</p> <p>I know I could return a value from <code>task_3</code> and pass it to <code>task_4</code>, but that seems hacky when I won't use the value.</p> <p>I've tried adding this below the above code, but it doesn't apply any dependencies:</p> <pre><code>task_3 &gt;&gt; task_4 </code></pre> <p>When I tried moving it above the <code>task_1_val = task_1()</code> line, I got the following error:</p> <pre><code>unsupported operand type(s) for &gt;&gt;: '_TaskDecorator' and '_TaskDecorator' </code></pre> <p>Does anyone know what I am doing wrong?</p>
<python><airflow>
2023-12-15 12:34:31
1
1,579
nimgwfc
77,666,182
23,106,915
Pystray icon inappropriate behavior with Tkinter window withdraw
<p><strong>Description:</strong> So I am using customtkinter library instead of tkinter, the issue I am facing is that I want my tkinter app to be withdrawn when closed if the minimize window checkbox is checked in settings of the app. The first time I perform this action the app withdraws as well as pystray icon appears in the taskbar, when I open the window again using the &quot;open app&quot; button the app deiconifies but when I try the same method second time I get an error.</p> <p><strong>Source Code:</strong> Below is my code through which I am implementing this operation</p> <pre><code>def save_checkbox_state(state): with open('.\\_internal\\src\\checkbox_state.pickle', 'wb') as file: pickle.dump(state, file) def read_checkbox_state(): try: with open('.\\_internal\\src\\checkbox_state.pickle', 'rb') as file: state = pickle.load(file) # Modify this section according to your UI framework if state == 'on': checkbox.select() check_var.set('on') else: checkbox.deselect() check_var.set('off') except FileNotFoundError: return None def app_destroy(): icon.stop() app.destroy() def open_app(): icon.stop() app.after(0,app.deiconify()) image = Image.open(&quot;.\\_internal\\assets\\ico.png&quot;) menu=pystray.Menu(pystray.MenuItem(&quot;Open enigma:guard&quot;,open_app),pystray.MenuItem(&quot;Exit&quot;, app_destroy)) icon = pystray.Icon('enigma:guard', image, &quot;My system tray icon&quot;, menu) def destroy_window(): checkbox_state = check_var.get() save_checkbox_state(checkbox_state) if checkbox_state == 'on': app.withdraw() icon.run() else: app.destroy() icon.stop() settings_minimize_frame = customtkinter.CTkLabel(settings_frame_win,text=&quot;&quot;,image=settings_minimize) settings_minimize_frame.place(x=420,y=121) check_var = customtkinter.StringVar(value=&quot;off&quot;) checkbox = customtkinter.CTkCheckBox(settings_minimize_frame, text=&quot;&quot;, variable=check_var, onvalue=&quot;on&quot;, 
offvalue=&quot;off&quot;,width=30,height=35,checkbox_height=35,checkbox_width=35,border_color=&quot;#1FFFA9&quot;,bg_color=&quot;#000000&quot;) checkbox.place(x=1078,y=33) app.protocol(&quot;WM_DELETE_WINDOW&quot;, destroy_window) </code></pre> <p><strong>Error:</strong> Below is the error I am facing when I close the window (so it minimizes) and then reopen it a second time:</p> <pre><code>Exception in Tkinter callback Traceback (most recent call last): File &quot;C:\Users\[hidden]\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py&quot;, line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ File &quot;D:\python_encryptor_decryptor\project2\enigma_guard.py&quot;, line 1228, in destroy_window icon.run() File &quot;C:\Users\[hidden]\AppData\Local\Programs\Python\Python311\Lib\site-packages\pystray\_base.py&quot;, line 212, in run self._run() File &quot;C:\Users\[hidden]\AppData\Local\Programs\Python\Python311\Lib\site-packages\pystray\_win32.py&quot;, line 120, in _run self._hwnd = self._create_window(self._atom) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\[hidden]\AppData\Local\Programs\Python\Python311\Lib\site-packages\pystray\_win32.py&quot;, line 244, in _create_window hwnd = win32.CreateWindowEx( ^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\[hidden]\AppData\Local\Programs\Python\Python311\Lib\site-packages\pystray\_util\win32.py&quot;, line 204, in _err raise ctypes.WinError() OSError: [WinError 6] The handle is invalid. </code></pre> <p><strong>Already tried:</strong> I tried to destroy the window instead of withdrawing it, but that makes the usability rather awkward, since I show a splash screen at startup.</p>
<python><tkinter><customtkinter><pystray>
2023-12-15 12:12:42
1
546
AshhadDevLab
77,666,163
10,106,578
Scrapy crawler, 403 error for crawling south wales courses
<p>I have been bashing my head against this for a while and figured I would turn it over to the experts of the internet for a bit of aid.</p> <p>I am trying to use scrapy to crawl a list of courses from the university of south wales (all public information of course). However whenever I do I get met with a 403 that stops me from getting any information.</p> <p>Here is my spider code:</p> <pre><code>import scrapy class CrawlingSpider(scrapy.Spider): name = &quot;southwalescrawler&quot; start_urls = [&quot;https://www.southwales.ac.uk/courses/&quot;] download_delay = 2 def parse(self, response): pass def start_requests(self): headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) ' 'Chrome/58.0.3029.110 Safari/537.3', 'Referer': 'https://www.southwales.ac.uk/' } cookies = {'cookie_name': 'cookie_value'} for url in self.start_urls: yield scrapy.Request(url, headers=headers, cookies=cookies, callback=self.parse) </code></pre> <p>You'll see that I am handling cookies, delaying requests, and applying a User Agent and Referrer. In spite of that here is the result I get:</p> <pre><code>2023-12-15 11:51:45 [scrapy.core.engine] INFO: Spider opened 2023-12-15 11:51:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 2023-12-15 11:51:45 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023 2023-12-15 11:51:45 [scrapy.core.engine] DEBUG: Crawled (403) &lt;GET https://www.southwales.ac.uk/robots.txt&gt; (referer: None) 2023-12-15 11:51:45 [protego] DEBUG: Rule at line 1 without any user agent to enforce it on. 
2023-12-15 11:51:48 [scrapy.core.engine] DEBUG: Crawled (403) &lt;GET https://www.southwales.ac.uk/courses/&gt; (referer: https://www.southwales.ac.uk/) 2023-12-15 11:51:48 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response &lt;403 https://www.southwales.ac.uk/courses/&gt;: HTTP status code is not handled or not allowed 2023-12-15 11:51:48 [scrapy.core.engine] INFO: Closing spider (finished) </code></pre>
<python><web-scraping><scrapy>
2023-12-15 12:08:16
1
483
Kron
77,666,026
8,099,689
Length of groupby result in pandas
<h2>Problem</h2> <p>I want to calculate the <strong>number of unique combinations of two columns</strong>. What is the most performant way to achieve this using pandas?</p> <p>For me, the most intuitive way is to group by the columns and take the length of the resulting object. However, for a large dataframe, I find its performance to be about 5x slower than the second variant below.</p> <pre class="lang-py prettyprint-override"><code># Quick len1 = lambda: len(df[['a', 'b']].groupby(['a', 'b'], observed=True).nunique()) # Slow len2 = lambda: len(df[['a', 'b']].groupby(['a', 'b'], observed=True)) </code></pre> <h2>Questions</h2> <ol> <li>Why is the second variant faster?</li> <li>What is the <strong>fastest</strong> way for this calculation?</li> </ol> <h2>MWE to reproduce</h2> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd import timeit # Set the number of rows and columns num_rows = 10000000 num_cols = 5 # Generate random numbers data = np.random.rand(num_rows, num_cols) # Convert columns a and b to categorical values data[:, 0] = pd.Categorical(np.random.randint(0, 10, size=num_rows)) data[:, 1] = pd.Categorical(np.random.randint(0, 10, size=num_rows)) # Create a DataFrame with the random numbers and categorical values df = pd.DataFrame(data, columns=['a', 'b', 'c', 'd', 'e']) len1 = lambda: len(df[['a', 'b']].groupby(['a', 'b'], observed=True).nunique()) len2 = lambda: len(df[['a', 'b']].groupby(['a', 'b'], observed=True)) time1 = timeit.timeit(len1, number=10) print(f&quot;Execution time of len1: {time1:.5f} seconds&quot;) time2 = timeit.timeit(len2, number=10) print(f&quot;Execution time of len2: {time2:.5f} seconds&quot;) </code></pre> <p>Output:</p> <pre class="lang-bash prettyprint-override"><code>Execution time of len1: 3.16599 seconds Execution time of len2: 17.47438 seconds </code></pre>
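An editor's sketch of the usual explanation, worth benchmarking on the real data: `len()` on a raw `GroupBy` object has to materialize the full label-to-row-indices mapping (effectively iterating the groups), which is what makes `len2` slow; the `.nunique()` variant goes through the aggregation path, which builds only the group index. `GroupBy.ngroups` or `drop_duplicates` typically avoids even more of that work:

```python
import numpy as np
import pandas as pd

# Smaller stand-in for the 10M-row frame in the question.
df = pd.DataFrame({
    'a': np.random.randint(0, 10, size=100_000),
    'b': np.random.randint(0, 10, size=100_000),
})

# Both count unique (a, b) combinations without materializing the groups dict:
n1 = df.groupby(['a', 'b'], observed=True).ngroups
n2 = len(df[['a', 'b']].drop_duplicates())
assert n1 == n2
```

On large frames, `ngroups` and `drop_duplicates` are usually the candidates to time against `len1`; which wins can depend on dtypes (categorical vs. integer) and cardinality.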
<python><pandas><dataframe><group-by><aggregate>
2023-12-15 11:40:40
2
366
joba2ca
77,665,894
11,999,452
VS Code numpy array endlessly expanding in debugger variables section
<p>Given a simple python program like</p> <pre><code>import numpy as np my_list = [1,2,3,4,5] my_array = np.array(my_list) </code></pre> <p>I can now use break points and the debugger to watch the variables</p> <p><a href="https://i.sstatic.net/PCYYF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PCYYF.png" alt="enter image description here" /></a></p> <p>I can also expand them</p> <p><a href="https://i.sstatic.net/aQiyi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aQiyi.png" alt="enter image description here" /></a></p> <p>For a list it shows all the elements neatly listed. This is especially helpful if I want to look at nested lists. However, the array does not expand like the list does. It used to do so in the past. I can expand, for example, the <code>T</code> variable, but it just expands into itself: I get the same thing as when I expanded the array in the first place. Also, I can repeat that endlessly</p> <p><a href="https://i.sstatic.net/CDPMW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CDPMW.png" alt="enter image description here" /></a></p> <p>This all started when <a href="https://stackoverflow.com/questions/77187399/vs-code-drive-c-does-not-exist">I had to switch to the pre-release version of the Python extension</a>. I've switched back now, but it didn't help.</p> <p>My VS Code is:</p> <pre><code>Version: 1.85.0 (user setup) Commit: af28b32d7e553898b2a91af498b1fb666fdebe0c Date: 2023-12-06T20:48:09.019Z Electron: 25.9.7 ElectronBuildId: 25551756 Chromium: 114.0.5735.289 Node.js: 18.15.0 V8: 11.4.183.29-electron.0 OS: Windows_NT x64 10.0.19045 </code></pre> <p>Also, these weird error messages started to show up at the same time as the behavior of the debugger changed, i.e. when I had to switch to the pre-release version of the Python extension. So maybe they are related to this problem.</p> <pre><code> 2023-12-15 11:41:41.416 [info] [Error - 11:41:41 AM] Server initialization failed.
2023-12-15 11:41:41.417 [info] Message: Pending response rejected since connection got disposed Code: -32097 2023-12-15 11:41:41.417 [info] [Info - 11:41:41 AM] Connection to server got closed. Server will restart. 2023-12-15 11:41:41.417 [info] true 2023-12-15 11:41:41.418 [info] [Error - 11:41:41 AM] Python Jedi client: couldn't create connection to server. 2023-12-15 11:41:41.418 [info] Message: Pending response rejected since connection got disposed Code: -32097 2023-12-15 11:41:41.419 [info] [Error - 11:41:41 AM] Restarting server failed 2023-12-15 11:41:41.419 [info] Message: Pending response rejected since connection got disposed Code: -32097 2023-12-15 11:41:41.419 [info] [Error - 11:41:41 AM] Server process exited with code 1. 2023-12-15 11:41:41.797 [info] Traceback (most recent call last): File &quot;c:\Users\felix\.vscode\extensions\ms-python.python-2023.22.1\pythonFiles\run-jedi-language-server.py&quot;, line 9, in &lt;module&gt; from jedi_language_server.cli import cli File &quot;c:\Users\felix\.vscode\extensions\ms-python.python-2023.22.1\pythonFiles\lib\jedilsp\jedi_language_server\__init__.py&quot;, line 5, in &lt;module&gt; from importlib_metadata import version File &quot;c:\Users\felix\.vscode\extensions\ms-python.python-2023.22.1\pythonFiles\lib\jedilsp\importlib_metadata\__init__.py&quot;, line 6, in &lt;module&gt; import zipp File &quot;c:\Users\felix\.vscode\extensions\ms-python.python-2023.22.1\pythonFiles\lib\jedilsp\zipp\__init__.py&quot;, line 9, in &lt;module&gt; from .py310compat import text_encoding File &quot;c:\Users\felix\.vscode\extensions\ms-python.python-2023.22.1\pythonFiles\lib\jedilsp\zipp\py310compat.py&quot;, line 5 def _text_encoding(encoding, stacklevel=2, /): # pragma: no cover ^ SyntaxError: invalid syntax 2023-12-15 11:41:41.817 [info] [Error - 11:41:41 AM] Server process exited with code 1. 2023-12-15 11:41:41.819 [info] [Error - 11:41:41 AM] Server initialization failed. 
2023-12-15 11:41:41.819 [info] Message: Pending response rejected since connection got disposed Code: -32097 2023-12-15 11:41:41.819 [info] [Error - 11:41:41 AM] The Python Jedi server crashed 5 times in the last 3 minutes. The server will not be restarted. See the output for more information. 2023-12-15 11:41:41.819 [info] [Error - 11:41:41 AM] Python Jedi client: couldn't create connection to server. 2023-12-15 11:41:41.819 [info] Message: Pending response rejected since connection got disposed Code: -32097 2023-12-15 11:41:41.819 [info] [Error - 11:41:41 AM] Restarting server failed 2023-12-15 11:41:41.819 [info] Message: Pending response rejected since connection got disposed Code: -32097 </code></pre>
<python><numpy><visual-studio-code><debugpy>
2023-12-15 11:13:21
1
400
Akut Luna
77,665,538
5,589,640
BERTopic: add legend to term score decline
<p>I plot the <a href="https://maartengr.github.io/BERTopic/api/plotting/term.html" rel="nofollow noreferrer">term score decline</a> for a topic model I created on Google Colab with <a href="https://maartengr.github.io/BERTopic/index.html" rel="nofollow noreferrer">BERTopic</a>. Great function. Works neat! But I need to add a legend. This parameter is not specified in the <code>topic_model.visualize_term_rank()</code> function. Only the figure title, width, height, log transformation and topics to be plotted can be adjusted.</p> <p>The term score decline function outputs a plotly figure and is based on <a href="https://wzbsocialsciencecenter.github.io/tm_corona/tm_analysis.html" rel="nofollow noreferrer">tmtoolkit</a>. So I tried to tweak it there. But I cannot load tmtoolkit in colab. Tried <code>import tmtoolkit</code> and <code>from tmtoolkit import topicmod</code> patching things together from <a href="https://tmtoolkit.readthedocs.io/en/latest/api.html#tmtoolkit.topicmod.visualize.plot_heatmap" rel="nofollow noreferrer">the reference</a>. I found another <a href="https://colab.research.google.com/github/alvinntnu/NTNU_ENC2045_LECTURES/blob/main/nlp/topic-modeling-naive.ipynb#scrollTo=NmsLizIM4rNv" rel="nofollow noreferrer">colab file that uses tmtoolkit</a> but it gave me the same error. Colab does not find <a href="https://tmtoolkit.readthedocs.io/en/latest/topic_modeling.html" rel="nofollow noreferrer">tmtoolkit</a>. So not a problem with my file but seems to be a general issue?</p> <p>Another solution is to update the plotly figure. But how? 
Had a ponder in the <a href="https://plotly.com/python-api-reference/search.html?q=figure.update_&amp;check_keywords=yes&amp;area=default" rel="nofollow noreferrer">reference</a> and tried</p> <pre><code>import plotly.graph_objects as go import plotly.express as px labels = topic_model.topic_labels_ fig = model.visualize_term_rank(title = 'Topic Coherence', custom_labels = True) fig.update_legends(patch = labels) fig </code></pre> <p>This throws the error below.</p> <p><code>TypeError: object of type 'int' has no len()</code> What does it mean? It seems illogical to me that the length cannot be computed for an integer variable.</p> <p>When I omit the line <code>fig.update_legends(patch = labels)</code> it produces the figure, but without the legend I need.</p> <p><strong>Update 2023-12-15</strong> It occurred to me that maybe tmtoolkit has to be installed like BERTopic. I was convinced it could be loaded like tm or nltk. Be that as it may, the solution below works and is fast. It took less than a second in Colab to produce the figure.</p>
<python><plotly><google-colaboratory><bert-language-model><topic-modeling>
2023-12-15 10:07:26
1
625
Simone
77,665,453
8,278,075
TypeError: 'DataFrame' object is not callable when using `df = DataFrame()`
<p>I have a Streamlit application which uses Pandas dataframes to show graphs in Altair.</p> <p>I've exhausted the most likely causes but I still get the following errors.</p> <p>At first the error was showing up with my code:</p> <pre><code> File &quot;/mnt/c/Users/xxx/Documents/xxx/deps/charts/charts.py&quot;, line 131, in show_financial_metrics_competitors_chart combined_df: pd.DataFrame = pd.DataFrame() TypeError: 'DataFrame' object is not callable </code></pre> <p>So to fix this, I changed <code>combined_df: pd.DataFrame = pd.DataFrame()</code> to <code>combined_df = None</code>, even though the former seems to be valid syntax.</p> <p>The errors went away until they appeared in dependent libraries:</p> <pre><code>TypeError: 'DataFrame' object is not callable ... File &quot;/mnt/c/Users/xxx/Documents/xxx/env/lib/python3.9/site-packages/pandas/io/formats/style_render.py&quot;, line 1957, in Tooltips tooltips: DataFrame = DataFrame(), </code></pre> <p>I've been trying for a couple of days now to figure out the root cause. I've tried:</p> <ul> <li>Downgrade to Python 3.8 -&gt; same issue</li> <li>Upgrade to Python 3.10 -&gt; same issue</li> <li>Check source data is valid and populating the DataFrames -&gt; can confirm data eventually fills these DataFrames</li> <li>Delete <code>env/</code> and reinstall requirements.txt -&gt; did this for every change in Python version and a dozen times more</li> </ul> <p>Here are my versions of key dependencies:</p> <ul> <li>Python 3.9.2</li> <li>pandas==2.0.3</li> <li>streamlit==1.29.0</li> <li>numpy==1.25.2</li> <li>matplotlib==3.7.2</li> </ul> <p>Given the latest error, I'm thinking it's not my code but some version of a dependency which is the root cause. The strangest thing is that my versions of Python and the deps have not changed, and this started happening in the last week.</p>
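One plausible root cause worth hunting for (editor's speculation consistent with the traceback, not a confirmed diagnosis): somewhere in the long-lived Streamlit process the name `DataFrame` gets rebound to a DataFrame *instance* — e.g. a stray assignment like `pd.DataFrame = pd.DataFrame(...)` in app or cached-module code. Because Streamlit reruns the script inside one process, a single such assignment poisons every later call; if it happens before pandas lazily imports its styling internals, it would also explain the error inside `style_render.py`. The symptom is easy to reproduce:

```python
import pandas as pd

# Reproduce the symptom: once the class name is rebound to an *instance*,
# every later pd.DataFrame(...) call raises the reported TypeError.
real_dataframe = pd.DataFrame        # keep a handle to the real class
pd.DataFrame = pd.DataFrame()        # hypothetical stray assignment to hunt for
try:
    pd.DataFrame({"a": [1]})
    raise AssertionError("expected TypeError")
except TypeError as exc:
    assert "not callable" in str(exc)
finally:
    pd.DataFrame = real_dataframe    # restore so nothing else breaks
```

Grepping the codebase (and any monkeypatching test fixtures) for assignments to `DataFrame` or `pd.DataFrame` would be a cheap first check before blaming dependency versions.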
<python><pandas><streamlit>
2023-12-15 09:50:07
1
3,365
engineer-x
77,665,417
5,005,808
How to find the load directory of keras_nlp model tokenizer?
<p>I'm new to <code>keras_nlp</code>. I failed to run the following code, which automatically downloads a file from this <a href="https://storage.googleapis.com/keras-nlp/models/deberta_v3_base_en/v1/vocab.spm" rel="nofollow noreferrer">url</a>.</p> <pre><code>preprocessor = keras_nlp.models.DebertaV3Preprocessor.from_preset( preset=CFG.preset, # Name of the model sequence_length=CFG.sequence_length, # Max sequence length, will be padded if shorter ) </code></pre> <p>The failure reason is that my server is blocked from external URLs, so I have to manually download the file to my PC and upload it to the server. I have now downloaded the file, but I don't know where I should put it. I tried to find the location in the source code but failed.</p> <p>In which directory does keras_nlp search for the tokenizer file?</p> <p>Thank you all for helping me!!!</p>
<python><keras>
2023-12-15 09:44:26
0
1,930
pfc
77,665,388
354,051
Expanding tk.Frame and its children for an Attribute Editor-like GUI
<p>Python 3.12 Windows 10</p> <p><a href="https://i.sstatic.net/9XOmC.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9XOmC.jpg" alt="enter image description here" /></a></p> <p>I'm trying to create a property editor using tkinter. Here is the code:</p> <pre class="lang-py prettyprint-override"><code>import tkinter as tk from tkinter import colorchooser from tkinter.filedialog import asksaveasfile from tkinter import ttk # Third party from tkscrolledframe import ScrolledFrame import os if os.name == 'nt': from ctypes import windll windll.shcore.SetProcessDpiAwareness(1) LABEL_WIDTH = 14 PADX = 5 PADY = 3 class StringWidget(tk.Frame): '''''' def __init__(self, parent, label:str, default=&quot;&quot;): '''''' tk.Frame.__init__(self, parent) self.label = tk.Label(self, text=label, anchor=&quot;e&quot;, width=LABEL_WIDTH) self.entry = tk.Entry(self) self.entry.insert(0, default) self.label.pack(side=&quot;left&quot;, padx=PADX) self.entry.pack(side=&quot;left&quot;, padx=PADX) class FileWidget(tk.Frame): '''''' def __init__(self, parent, label:str, default=&quot;&quot;): '''''' print(parent.winfo_width()) tk.Frame.__init__(self, parent, width=parent.winfo_width()) self.label = tk.Label(self, text=label, anchor=&quot;e&quot;, width=LABEL_WIDTH) self.entry = tk.Entry(self) self.entry.insert(0, default) self.btn = tk.Button(self, text=&quot;Browse&quot;) self.label.pack(side=&quot;left&quot;, padx=PADX) self.entry.pack(side=&quot;left&quot;, padx=PADX) self.btn.pack(side=&quot;left&quot;, padx=PADX) class Point3DWidget(tk.Frame): '''''' def __init__(self, parent, label:str, minv:int=0, maxv:int=10, default:int=0, incr:int=1, format=&quot;%.0f&quot;): '''''' tk.Frame.__init__(self, parent) self.strv = tk.StringVar() self.strv.set(default) self.label = tk.Label(self, text=label, anchor='e', width=LABEL_WIDTH) self.spinbox1 = ttk.Spinbox(self, from_=minv, to=maxv, increment=incr, format=format, textvariable=self.strv, wrap=True) self.spinbox2 = 
ttk.Spinbox(self, from_=minv, to=maxv, increment=incr, format=format, textvariable=self.strv, wrap=True) self.spinbox3 = ttk.Spinbox(self, from_=minv, to=maxv, increment=incr, format=format, textvariable=self.strv, wrap=True) self.label.pack(side=&quot;left&quot;, padx=PADX) self.spinbox1.pack(side=&quot;left&quot;, padx=PADX) self.spinbox2.pack(side=&quot;left&quot;, padx=PADX) self.spinbox3.pack(side=&quot;left&quot;, padx=PADX) class AttributeEditor(tk.Frame): '''''' def __init__(self, parent): tk.Frame.__init__(self, parent) self.label = tk.Label(self, text=&quot;Theme&quot;, font=('bold')) self.sw = StringWidget(self, &quot;String Widget&quot;, &quot;Inigo&quot;) self.fw = FileWidget(self, &quot;File Widget&quot;, &quot;Inigo&quot;) self.pw = Point3DWidget(self, &quot;Point 3D Widget&quot;, 0, 10, 6) self.label.grid(row=0, column=0, sticky=&quot;w&quot;, padx=PADX, pady=PADY) self.sw.grid(row=1, column=0, sticky=&quot;ew&quot;, pady=PADY) self.fw.grid(row=2, column=0, sticky=&quot;ew&quot;, pady=PADY) self.pw.grid(row=3, column=0, sticky=&quot;ew&quot;, pady=PADY) separator = ttk.Separator(self, orient='horizontal') separator.grid(row=9, column=0, sticky=&quot;ew&quot;, pady=PADX, padx=PADY) if __name__ == &quot;__main__&quot;: root = tk.Tk() root.title(&quot;Attribute Editor&quot;) root.geometry(&quot;380x360&quot;) # Create a ScrolledFrame widget sf = ScrolledFrame(root) sf.pack(side=&quot;top&quot;, expand=1, fill=&quot;both&quot;) # Bind the arrow keys and scroll wheel sf.bind_arrow_keys(root) sf.bind_scroll_wheel(root) # Create a frame within the ScrolledFrame inner_frame = sf.display_widget(AttributeEditor) root.mainloop() </code></pre> <p>When ever I resize the main window, the desired expending behavior I'm looking for is:</p> <ol> <li>In StringWidget <em>entry</em> should stretch till end of the window.</li> <li>In Filewidget the entry should take the remaining space between label and button and stretched to cover the space.</li> <li>The 3 spinners 
should be stretched till the end of the window and should take equal space between label and window end.</li> </ol> <p>For reference</p> <p><a href="https://i.sstatic.net/IVvBC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IVvBC.png" alt="enter image description here" /></a></p>
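The stretching behaviour described above is usually obtained with `pack`'s `fill`/`expand` options plus grid column weights. A minimal sketch (not the full AttributeEditor; widget names mirror the question, and no window is created so it can be imported headless):

```python
import tkinter as tk

LABEL_WIDTH = 14
PADX = 5

class StringWidget(tk.Frame):
    """Label + entry; the entry stretches to the right edge of the frame."""
    def __init__(self, parent, label, default=""):
        super().__init__(parent)
        tk.Label(self, text=label, anchor="e", width=LABEL_WIDTH).pack(
            side="left", padx=PADX)
        self.entry = tk.Entry(self)
        self.entry.insert(0, default)
        # fill="x" lets the entry grow horizontally; expand=True hands it
        # the leftover space inside the frame.
        self.entry.pack(side="left", padx=PADX, fill="x", expand=True)

class FileWidget(tk.Frame):
    """Label + entry + button; only the entry absorbs extra width."""
    def __init__(self, parent, label, default=""):
        super().__init__(parent)
        tk.Label(self, text=label, anchor="e", width=LABEL_WIDTH).pack(
            side="left", padx=PADX)
        # Pack the button first, from the right, so it keeps its natural
        # size while the entry fills whatever remains in the middle.
        self.btn = tk.Button(self, text="Browse")
        self.btn.pack(side="right", padx=PADX)
        self.entry = tk.Entry(self)
        self.entry.insert(0, default)
        self.entry.pack(side="left", padx=PADX, fill="x", expand=True)

# For Point3DWidget, pack all three spinboxes with fill="x", expand=True so
# they split the remaining width equally.  In AttributeEditor, the rows are
# gridded with sticky="ew", so also call self.columnconfigure(0, weight=1)
# there -- without a column weight, sticky="ew" has nothing to stretch into.
```

The same rule applies one level up: the frame returned by `tkscrolledframe` must itself be allowed to widen (column weight on its container) for any of the inner widgets to see extra space.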
<python><tkinter>
2023-12-15 09:38:31
1
947
Prashant
77,665,223
19,041,863
Change ticks on colorbar (manually)
<p>I would like to display Change cards. I have this database (ChangeEU)</p> <pre><code> Latitude Longitude Altitude Value O18d 0 30.0 -30.0 0.0 1.262094 1.262094 1 30.0 -29.9 0.0 1.269815 1.269815 2 30.0 -29.8 0.0 1.277522 1.277522 3 30.0 -29.7 0.0 1.285215 1.285215 4 30.0 -29.6 0.0 1.292893 1.292893 ... ... ... ... ... ... 359995 69.9 59.5 68.0 -1.237269 -1.373269 359996 69.9 59.6 74.0 -1.235324 -1.383324 359997 69.9 59.7 52.0 -1.233395 -1.337395 359998 69.9 59.8 71.0 -1.231478 -1.373478 359999 69.9 59.9 68.0 -1.229575 -1.365575 </code></pre> <p>And I display it with this code</p> <pre><code>import numpy as np from matplotlib import pyplot as plt from mpl_toolkits import basemap from scipy.stats import norm from mpl_toolkits.basemap import Basemap from scipy.stats import norm from matplotlib import colors from matplotlib.patches import Path, PathPatch from matplotlib.patches import Polygon from matplotlib.collections import PatchCollection from matplotlib.patches import PathPatch import pandas as pd from Data_Prep_Paper import * def load_ChangeEU(): ChangeEU = dflediffan return ChangeEU def base_data(ChangeEU): return{ 'late':np.array(ChangeEU['Latitude']), 'lone':np.array(ChangeEU['Longitude']), 'vale':np.array(ChangeEU['O18d']), } def plot_Diff(data): fig = plt.figure(figsize=(10, 10)) ax =fig.add_subplot(221) ax.set_title(&quot;Annual Change EU PRÜFEN OB DATEN DIE RICHTIGEN SIND&quot;, fontsize = 10) m = basemap.Basemap(llcrnrlon=-30, llcrnrlat=30,urcrnrlon=60,urcrnrlat=70, resolution='l',area_thresh=1000) m.drawcoastlines(color = 'black') x,y = m(data['lone'],data['late']) divnorm=colors.TwoSlopeNorm(vmin=-8., vcenter=0., vmax=3) m.contourf(x,y,data['vale'],32, tri=True,norm=divnorm, cmap = 'PuBu_r') plt.colorbar(location='bottom',pad=0.04,fraction=0.08) x0,x1 = ax.get_xlim() y0,y1 = ax.get_ylim() map_edges = np.array([[x0,y0],[x1,y0],[x1,y1],[x0,y1]]) polys = [p.boundary for p in m.landpolygons] polys = [map_edges]+polys[:] codes = [ 
[Path.MOVETO]+[Path.LINETO for p in p[1:]] for p in polys ] polys_lin = [v for p in polys for v in p] codes_lin = [xx for cs in codes for xx in cs] path = Path(polys_lin, codes_lin) patch = PathPatch(path,facecolor='white', lw=0) ax.add_patch(patch) ChangeEU = load_ChangeEU() base_data = base_data(ChangeEU) plot_Diff(base_data) plt.show() </code></pre> <p>The output is this:</p> <p><a href="https://i.sstatic.net/LMNI5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LMNI5.png" alt="enter image description here" /></a></p> <p>As you can see, the colour bar is labelled with decimal numbers. However, I would like the dark blue in the colour bar to be -10 and the red end to be 2. How can I set this manually without changing the colour scale? I have already tried using ticks but that didn't change the result and manuel.ticks didn't work for me either.</p> <p>What I like to have:</p> <p><a href="https://i.sstatic.net/IFeNJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IFeNJ.png" alt="" /></a></p>
<python><matplotlib><colors>
2023-12-15 09:00:24
1
303
Weiss
77,665,136
2,876,079
How to set font for tick labels with openpyxl?
<p>I created an Excel chart using openpyxl. Now I try configure the font for the tick labels of the x axis. How to do it correctly?</p> <p>Here is my current code:</p> <pre><code>from openpyxl import Workbook from openpyxl.chart import BarChart, Reference from openpyxl.chart.text import RichText from openpyxl.drawing.text import Font, CharacterProperties def main(): wb = Workbook() chart = _create_openpyxl_chart(wb) _format_chart(chart) wb.save(&quot;example.xlsx&quot;) def _format_chart(chart): font_type = 'Arial' font_size = 12 font_style = 'bold' font = Font(typeface=font_type) character_properties = CharacterProperties( sz=font_size * 100, b=(font_style == 'bold'), latin=font, ) axis = chart.x_axis if axis.textProperties is None: axis.textProperties = RichText() paragraph = axis.textProperties.paragraphs[0] text_run = paragraph.text[0] text_run.properties = character_properties def _create_openpyxl_chart(wb): sheet = wb.active _create_example_data(sheet) chart = _create_stacked_bar_chart() _connect_chart_to_example_data(chart, sheet) sheet.add_chart(chart) return chart def _connect_chart_to_example_data(chart, sheet): values = Reference( sheet, min_col=1, max_col=4, min_row=2, max_row=4, ) chart.add_data( values, titles_from_data=True, from_rows=True, ) categories = Reference( sheet, min_col=2, max_col=4, min_row=1, max_row=1, ) chart.set_categories(categories) def _create_stacked_bar_chart(): chart = BarChart() chart.x_axis.title = &quot;Year&quot; chart.type = &quot;col&quot; chart.grouping = &quot;stacked&quot; chart.overlap = 100 return chart def _create_example_data(sheet): data = [ [&quot;Category&quot;, 2017, 2018, 2019], [&quot;Apples&quot;, 10, 7, 12], [&quot;Oranges&quot;, 5, 3, 4], [&quot;Bananas&quot;, 8, 6, 9], ] for row in data: sheet.append(row) if __name__ == '__main__': main() </code></pre>
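In openpyxl's text model, tick labels take their style from the paragraph's *default run properties* (`defRPr`), not from a text run — the question's code sets `text_run.properties`, which never reaches the axis labels. A minimal sketch of the usual pattern (class names per `openpyxl.drawing.text`; font size is in hundredths of a point):

```python
from openpyxl.chart import BarChart
from openpyxl.chart.text import RichText
from openpyxl.drawing.text import (
    CharacterProperties, Font, Paragraph, ParagraphProperties,
)

chart = BarChart()

# 12 pt bold Arial; sz is expressed in 1/100 pt, so 12 pt -> 1200.
props = CharacterProperties(sz=1200, b=True, latin=Font(typeface="Arial"))

# Tick-label styling lives in the paragraph's *default* run properties
# (defRPr), so build a paragraph around them rather than a text run.
chart.x_axis.txPr = RichText(
    p=[Paragraph(pPr=ParagraphProperties(defRPr=props), endParaRPr=props)]
)
```

`txPr` is the same attribute as the `textProperties` alias used in the question, so this slots directly into `_format_chart`.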
<python><excel><charts><openpyxl><styling>
2023-12-15 08:42:21
1
12,756
Stefan
77,664,680
10,829,044
Pandas: convert raw hyperlink to clickable display in Excel file
<p>I have a pandas dataframe with the below data</p> <pre><code>data = {'product_id': [11,11,11,11], 'rec_prod_id':[12,13,14,15], 'product_URL': ['www.example.com','www.example4.com','www.example3.com','www.example2.com']} data = pd.DataFrame(data) </code></pre> <p>My objective is to</p> <ol> <li><p>group the product_id and perform aggregation - nunique and list</p> </li> <li><p>convert the product_id as clickable for each rec_prod_id (which will take me to the respective webpage) when I export the dataframe to excel file</p> </li> </ol> <p>So, I tried the below by referring the SO post <a href="https://stackoverflow.com/questions/31820069/add-hyperlink-to-excel-sheet-created-by-pandas-dataframe-to-excel-method">here</a></p> <pre><code>def create_hyperlink(row): return '&lt;a href=&quot;{0}&quot;&gt;{1}&lt;/a&gt;'.format(row['product_id'], row['product_URL']) data['product_hyperlinks'] = data.apply(create_hyperlink, axis=1) data.groupby('product_id').agg{('# of recs'=product_id:nunique,product_links:list)} </code></pre> <p>But the above results in unexpected and incorrect output (I get raw urls like below in the output) when I export the excel file</p> <p><a href="https://i.sstatic.net/AfRhr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AfRhr.png" alt="enter image description here" /></a></p> <p><strong>But I expect my output to be like below</strong></p> <p><a href="https://i.sstatic.net/1FYOZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1FYOZ.png" alt="enter image description here" /></a></p>
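The raw-URL output happens because the cell ends up holding plain text (the HTML `<a href>` string from the linked answer is for notebook display, not Excel). One common route is to write an Excel `HYPERLINK` formula string instead — a sketch, with the column name `product_link` being illustrative:

```python
import pandas as pd

def make_hyperlink(url, label):
    """Build an Excel HYPERLINK formula so the cell renders as a link."""
    return f'=HYPERLINK("{url}", "{label}")'

data = pd.DataFrame({
    "product_id": [11, 11, 11, 11],
    "rec_prod_id": [12, 13, 14, 15],
    "product_URL": ["www.example.com", "www.example4.com",
                    "www.example3.com", "www.example2.com"],
})

# One formula per recommended product, labelled with its id.
data["product_link"] = [
    make_hyperlink(url, pid)
    for url, pid in zip(data["product_URL"], data["rec_prod_id"])
]

out = data.groupby("product_id").agg(
    n_recs=("rec_prod_id", "nunique"),
    product_links=("product_link", list),
)
```

One caveat: `HYPERLINK` only renders as clickable when the formula is the *entire* cell value, so a list of several formulas in one aggregated cell will not work — for clickable output, keep one link per cell (e.g. explode the list to one row per recommendation before `to_excel`).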
<python><pandas><excel><dataframe><list>
2023-12-15 06:49:58
1
7,793
The Great
77,664,652
4,808,079
How can I read input from a numpad while blocking its keystrokes in macOS?
<p>This person has a similar project but for linux: <a href="https://unix.stackexchange.com/questions/343305/disable-keyboard-but-still-allow-reading-from-it">https://unix.stackexchange.com/questions/343305/disable-keyboard-but-still-allow-reading-from-it</a></p> <p>What I'm trying to do is make a numpad act like a launchpad. My goal is to consume the keystrokes coming from it, blocking them from going to the currently focused software.</p> <p>I tried <a href="https://pypi.org/project/keyboard/" rel="nofollow noreferrer"><code>keyboard</code></a> and <code>pynput</code> but neither could block the key events from being read by the currently focused application. I also looked into a nodejs approach to no avail.</p> <p>I have no desire to use this numpad as a regular keyboard, so it would be okay if it were incapable of it as long as I can still respond to inputs in some background software.</p> <p>When I asked ChatGPT, it said this might not be the kind of project that's doable on MacOS, which seems believable. I've never seen a piece of software that consumes key events while not in focus.</p> <p><strong>Is it possible? If so, how can I read input from a numpad while blocking its keystrokes in mac os?</strong></p> <p>Nodejs or Python would be preferable, but if I have to go through a compiled language and XCode, it is what it is.</p>
<python><node.js><macos><keystroke><event-capturing>
2023-12-15 06:42:57
1
11,464
Seph Reed
77,664,391
9,542,989
Python Mocking in Complex Class Structure
<p>I am writing a unit test (with mocking) that looks like this,</p> <pre><code>def test_1(self): # define the chat data to return chat_data = { '@odata.context': 'test_context', 'id': 'test_id', 'topic': None, 'createdDateTime': '2023-11-20T10:25:19.553Z', 'lastUpdatedDateTime': '2023-11-20T10:25:19.553Z', 'chatType': 'oneOnOne', 'webUrl': 'https://teams.test', 'tenantId': 'test_tenant_id', 'onlineMeetingInfo': None, 'viewpoint': { 'isHidden': False, 'lastMessageReadDateTime': '2023-12-08T17:09:34.214Z' }, 'lastMessagePreview@odata.context': 'https://graph.test', 'lastMessagePreview': { 'id': '1702055374214', 'createdDateTime': '2023-12-08T17:09:34.214Z', 'isDeleted': False, 'messageType': 'message', 'eventDetail': None, 'body': { 'contentType': 'text', 'content': '\n\nTest message.' }, 'from': { 'application': None, 'device': None, 'user': {} } } } # mock the api handler api_handler = Mock(MSTeamsHandler) with patch.object(api_handler.connect(), 'get_chat', return_value=chat_data): chats_table = ChatsTable(api_handler) # the rest of my test logic </code></pre> <p>This works fine, however, right now, I am testing the <code>get_chat</code> function of the object that is returned by <code>api_handler.connect()</code>, but what I actually want to mock is the <code>requests.get()</code> call that is made to the API. 
This is done via the <code>get_chat</code> function.</p> <p>This is what those functions look like within the class,</p> <pre><code>Class Client: def _make_request(self, api_url: str, params: Optional[Dict] = None, data: Optional[Dict] = None, method: str = &quot;GET&quot;) -&gt; Union[Dict, object]: headers = {&quot;Authorization&quot;: f&quot;Bearer {self.access_token}&quot;} if method == &quot;GET&quot;: response = requests.get(api_url, headers=headers, params=params) elif method == &quot;POST&quot;: response = requests.post(api_url, headers=headers, json=data) else: raise NotImplementedError(f&quot;Method {method} not implemented&quot;) if response.status_code == 429: if &quot;Retry-After&quot; in response.headers: pause_time = float(response.headers[&quot;Retry-After&quot;]) time.sleep(pause_time) response = requests.get(api_url, headers=headers, params=params) if response.status_code not in [200, 201]: raise requests.exceptions.RequestException(response.text) if response.headers[&quot;Content-Type&quot;] == &quot;application/octet-stream&quot;: raw_response = response.content else: raw_response = response.json() return raw_response def get_chat(self, chat_id: Text) -&gt; Dict: &quot;&quot;&quot; Get a chat by its ID. Parameters ---------- chat_id : str The ID of the chat. &quot;&quot;&quot; api_url = self._get_api_url(f&quot;chats/{chat_id}&quot;) # expand the response with the last message preview chat = self._make_request(api_url, params={&quot;$expand&quot;: &quot;lastMessagePreview&quot;}) return chat </code></pre> <p>How can I mock the <code>requests.get()</code> call in <code>_make_request()</code> instead?</p>
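The standard answer is to patch `requests.get` *where it is looked up* — i.e. the `requests` name imported by the module that defines `Client` — rather than mocking the handler object. A self-contained sketch (the stub-module lines exist only so this runs where `requests` is not installed; in a real test suite the patch target would be something like `patch("your_package.client.requests.get", ...)`, with `your_package.client` being whatever module defines `Client`):

```python
import sys
import types
from unittest.mock import MagicMock, patch

# Stand-in so the sketch also runs where `requests` is absent; when the
# real library is installed, the real module is used instead.
sys.modules.setdefault("requests", types.ModuleType("requests"))
import requests
if not hasattr(requests, "get"):  # give the stub a patchable attribute
    requests.get = lambda *args, **kwargs: None

class Client:
    """Trimmed-down version of the class in the question."""
    def _make_request(self, api_url):
        response = requests.get(api_url)  # the call we want to intercept
        if response.status_code not in (200, 201):
            raise RuntimeError("request failed")
        return response.json()

    def get_chat(self, chat_id):
        return self._make_request(f"https://graph.example.invalid/chats/{chat_id}")

def test_get_chat():
    chat_data = {"id": "test_id"}
    # Fake response object exposing only what _make_request touches.
    mock_response = MagicMock()
    mock_response.status_code = 200
    mock_response.json.return_value = chat_data
    # Patch `get` on the module object that Client resolves at call time.
    with patch.object(requests, "get", return_value=mock_response) as mock_get:
        assert Client().get_chat("test_id") == chat_data
        mock_get.assert_called_once()
```

Because `get_chat` calls `_make_request` unmocked, this exercises the retry/status-code plumbing too — which is the advantage over stubbing `get_chat` itself.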
<python><unit-testing><mocking><python-unittest>
2023-12-15 05:23:25
1
2,115
Minura Punchihewa
77,664,365
23,002,898
In Tkinter, I can't see the vertical scrollbar in an external file
<p>If i select <code>Tab2</code>, i would like to see the vertical scrollbar in the my <code>Verticalbar</code>, but i can't see the vertical scrollbar (in an external file <code>x.py</code>). I specify that the verticalbar is located outside the tabcontrol. Must stay out of tabcontrol. I don't get any errors</p> <p><a href="https://i.sstatic.net/YtI6y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YtI6y.png" alt="enter image description here" /></a> Let me explain: my <code>Verticalbar</code> is a white sidebar located on the right. I would like to use it as a <strong>container</strong> for all the tabs, but I would only like to modify the contents inside it. For example, when I select <code>Tab2</code>, I would like to display a frame (the <strong>pink frame</strong>) inside the <code>Verticalbar</code> and this happens correctly.</p> <p>The problem is that I can't see the vertical scrollbar for the entire Verticalbar. What am I doing wrong? How to solve?</p> <p><strong>EDIT CODE</strong></p> <p><strong>main.py</strong></p> <pre><code>import tkinter as tk from tkinter import Frame, Label, ttk from page1 import Page1 from page2 import Page2 from x import * root = tk.Tk() root.geometry('705x325') style = ttk.Style(root) style.theme_use('default') #TABCONTROL style.configure(&quot;TNotebook.Tab&quot;, padding= 7) style.configure('lefttab.TNotebook', tabposition=tk.W + tk.N, tabplacement=tk.N + tk.EW, background='black') style.configure('lefttab.TNotebook.Tab', background='black', width=10, focuscolor='#efefef', foreground='#96de3d') style.map('lefttab.TNotebook.Tab', background=[('selected', '#96de3d')], foreground=[('selected', 'white')]) notebook = ttk.Notebook(root, style='lefttab.TNotebook') intro = tk.Frame(notebook, bg='white', width=270, height=600) tab1 = Page1(notebook, bg='white', width=270, height=600) tab2 = Page2(notebook, bg='white', width=270, height=600) notebook.add(tab1, text='Tab 1') notebook.add(tab2, text='Tab 2') 
notebook.place(x=0, y=0) #VERTICALBAR (RIGHT) Verticalbar = Frame(root, width=200, height=400, background=&quot;white&quot;, highlightthickness=0) Verticalbar.place(x=498, y=0) def on_tab_selection_changed(event): # First, get the selected tab id tab_id = notebook.select() # Now we can get the tab text using the tab id tab_text = notebook.tab(tab_id, &quot;text&quot;) if tab_text == &quot;Tab 1&quot;: newlabel1 = Label(Verticalbar, text='TAB 1 EXAMPLE ', background=&quot;white&quot;, fg=&quot;#78b130&quot;, font='Arial 10 bold') newlabel1.place(x=2, y=3) if tab_text == &quot;Tab 2&quot;: myfunction(Verticalbar) # Bind to a virtual event to detect when the tab selection changes notebook.bind(&quot;&lt;&lt;NotebookTabChanged&gt;&gt;&quot;, on_tab_selection_changed) root.mainloop() </code></pre> <p><strong>x.py</strong></p> <pre><code>from tkinter import Frame, Label, Text, Tk, ttk import tkinter as tk def myfunction(Verticalbar): newlabel2 = Label(Verticalbar, text='TAB 2 EXAMPLE', background=&quot;pink&quot;, fg=&quot;black&quot;, font='Arial 10 bold') newlabel2.place(x=10, y=10) # scrollbar canvas = tk.Canvas(Verticalbar) scrollbar_frame = ttk.Scrollbar(Verticalbar, orient=&quot;vertical&quot;, command=canvas.yview) scrollable_frame = ttk.Frame(canvas, width=200, height=400) scrollable_frame.bind( &quot;&lt;Configure&gt;&quot;, lambda e: e.widget.master.configure( scrollregion=e.widget.master.bbox(&quot;all&quot;) ) ) canvas.create_window((0, 0), window=scrollable_frame, anchor=&quot;nw&quot;) canvas.configure(yscrollcommand=scrollbar_frame.set) #All_Page All_Page = Frame(scrollable_frame, width=200, height=400, background=&quot;pink&quot;, highlightbackground=&quot;#96de3d&quot;, highlightthickness=0) All_Page.place(x=0, y=-12) textbox = Text(All_Page, height = 5, width = 10) textbox.place(x=10, y=10) canvas.pack(side=&quot;left&quot;, fill=&quot;both&quot;, expand=True) scrollbar_frame.pack(side=&quot;right&quot;, fill=&quot;y&quot;) </code></pre> 
<p><strong>page2.py</strong></p> <pre><code>import tkinter as tk from tkinter import ttk class Page2(tk.Frame): def __init__(self, master, **kw): super().__init__(master, **kw) #Don't put any code here, because more code will go here </code></pre> <p><strong>page1.py</strong></p> <pre><code>import tkinter as tk from tkinter import ttk class Page1(tk.Frame): def __init__(self, master, **kw): super().__init__(master, **kw) #Don't put any code here, because more code will go here </code></pre>
<python><python-3.x><tkinter><canvas>
2023-12-15 05:11:47
1
307
Nodigap
77,664,284
414,127
Flask blueprints with SQLAlchemy for APIs -- what can I do to simplify?
<p>I am designing an app that has a database, a CRUD web interface to manage its content, and exposing a REST API to be called by an external UI written in React. I am using native Flask Blueprints, SQLAlchemy, along with Marshmallow for API serialization.</p> <p>The way I have structured Blueprints, based on what I learned from the Flask docs, seems unnecessarily complicated:</p> <ul> <li><code>app.py</code> -&gt; <code>__init__.py</code>.<code>create_app()</code> registers blueprints <ul> <li>&quot;factory&quot; Flask pattern</li> <li>each blueprint imports from a <code>routes</code> module</li> <li><code>register_blueprint</code> specifies two blueprints one for HTML, another for API as each has a different <code>url_prefix</code></li> </ul> </li> <li>each <code>routes</code> module <code>__init__.py</code> <ul> <li>imports Blueprint, then</li> <li>creates the blueprint instances for html and api, then</li> <li>imports the actual routes for the module (e.g. <code>from routes.user import routes</code>)</li> </ul> </li> <li>each <code>routes</code> implementation <ul> <li>imports the blueprints created in the module</li> <li>imports the SQLAlchemy model and Marshmallow schema</li> <li>creates the code for the routes, and decorates with the correct blueprint depending on HTML or API</li> </ul> </li> <li>each model imports required elements for SQLAlchemy</li> </ul> <p>So the SQLAlchemy approach seems just right: in one place I define all the characteristics of the model and thus the associated database tables, as well as the schema for JSON output of my API.</p> <p>But the Blueprints approach seems convoluted, with dependencies scattered about -- to define one new route, I typically have to modify three files:</p> <ul> <li><code>app</code> module (<code>__init.py__</code>)</li> <li><code>routes</code> module for the specific route (<code>routes/user/__init.py__</code>)</li> <li><code>routes</code> implementation for the specific route 
(<code>routes/user/routes.py</code>)</li> <li>and each of these implicitly depend on the model for the entity (e.g. <code>user</code>)</li> </ul> <p>All of this is functional, but I am assuming there's a simpler way to approach the problem that I have not yet fully grasped. Is there a simpler, cleaner way to implement this?</p>
<python><rest><flask><sqlalchemy>
2023-12-15 04:37:09
1
14,208
Tom Harrison
77,664,264
4,451,521
Is there a way to set the legends of matplotlib subplots outside the plot?
<p>Right now I have a plot with two subplots The problem is that in each subplot I put many traces with different colors so when I put the legend, it covers up some of the points.</p> <p><a href="https://i.sstatic.net/TZDbr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TZDbr.png" alt="enter image description here" /></a></p> <p>The plot covers a lot of the plot space, and the legend has many traces so it seems there is no space for both</p> <p>Is there a way to solve this? Perhaps (but I open to other ways) putting the legend outside?</p>
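The usual fix is `bbox_to_anchor` with an anchor point past the axes edge, so the legend is drawn outside the plotting area. A minimal sketch with synthetic traces (trace count and figure layout are illustrative):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax in axes:
    for i in range(8):
        ax.plot([0, 1], [i, i + 1], label=f"trace {i}")

# bbox_to_anchor is in axes coordinates, so (1.02, 1) sits just past the
# right edge of the second subplot; loc says which legend corner to pin.
leg = axes[1].legend(bbox_to_anchor=(1.02, 1), loc="upper left",
                     borderaxespad=0)

# Reserve figure space on the right so the legend is not clipped.
fig.subplots_adjust(right=0.82)
```

If both subplots share the same traces, a single `fig.legend(handles, labels, ...)` with the same anchoring avoids duplicating the legend twice.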
<python><matplotlib>
2023-12-15 04:30:52
1
10,576
KansaiRobot
77,664,095
2,955,827
What are the differences between `pipx` and `pip --user`?
<p>I have tried to install any Python packages on <a href="https://en.wikipedia.org/wiki/Ubuntu_version_history#Ubuntu_24.04_LTS_(Noble_Numbat)" rel="nofollow noreferrer">Ubuntu 24.04</a> (Noble Numbat), but I found I cannot do that as in <a href="https://en.wikipedia.org/wiki/Ubuntu_version_history#Ubuntu_22.04_LTS_(Jammy_Jellyfish)" rel="nofollow noreferrer">Ubuntu 22.04</a> (Jammy Jellyfish).</p> <p><a href="https://peps.python.org/pep-0668/" rel="nofollow noreferrer">PEP 668</a> said it is for avoiding package conflict between system-wide package and user installed package.</p> <p>But what's the differences between using <code>pipx</code> and <code>pip --user</code>? And why does the <code>--user</code> option not work? It also installs packages to the user's own home.</p> <p>Example:</p> <pre class="lang-none prettyprint-override"><code>pip install setuptools --user </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code>error: externally-managed-environment × This environment is externally managed ╰─&gt; To install Python packages system-wide, try apt install python3-xyz, where xyz is the package you are trying to install. If you wish to install a non-Debian-packaged Python package, create a virtual environment using python3 -m venv path/to/venv. Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make sure you have python3-full installed. If you wish to install a non-Debian packaged Python application, it may be easiest to use pipx install xyz, which will manage a virtual environment for you. Make sure you have pipx installed. See /usr/share/doc/python3.11/README.venv for more information. note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. hint: See PEP 668 for the detailed specification. 
</code></pre> <p>But if I do that with <code>pipx</code>:</p> <pre class="lang-none prettyprint-override"><code>pipx install pip </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code>No apps associated with package pip or its dependencies. If you are attempting to install a library, pipx should not be used. Consider using pip or a similar tool instead. </code></pre> <p>I am really confused with current rules.</p> <p>How can I manage my user global environment now? And how can I use the latest <code>pip</code> (not Linux distribution version) and other packages by default for current user?</p> <p>My environment:</p> <pre class="lang-none prettyprint-override"><code>FROM ubuntu:24.04 # Add Python RUN apt install -y python3-pip python3-venv python-is-python3 pipx USER ubuntu WORKDIR /app </code></pre>
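The short version: `pipx` creates one venv *per application* and exposes only its entry-point scripts, which is why installing a library like `pip` through it fails — while `pip --user` installs *libraries* into the shared user site, which PEP 668 now blocks on Debian/Ubuntu. To get an updatable user-level environment back, the built-in route is a personal venv put first on `PATH`. A hedged sketch (the venv path is illustrative; `--without-pip` here only keeps the sketch offline — normally omit it and let the venv seed its own pip):

```shell
# One personal venv playing the role of the old user site:
python3 -m venv --without-pip "$HOME/.venvs/default"

# Normally (without --without-pip) you would then upgrade past the
# distro-pinned pip freely, since the venv is not externally managed:
#   "$HOME/.venvs/default/bin/pip" install --upgrade pip

# Make it the default for this user, e.g. in ~/.bashrc:
#   export PATH="$HOME/.venvs/default/bin:$PATH"
```

With that `PATH` entry, `python` and `pip` resolve to the venv for every shell, which is the closest built-in equivalent of the pre-PEP-668 `--user` workflow.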
<python><pip><pipx>
2023-12-15 03:14:54
2
3,295
PaleNeutron
77,664,085
2,414,957
Installing easydict in bundled Python 3.5 in Blender 2.79a in Ubuntu 22.04 using pip
<p>I am not sure if this is a fixable error in Ubuntu 22.04? Blender 2.79a is the non-negotiable here but I am open to using older versions of Ubuntu in docker. Originally, [pvnet-rendering repo][1] used Ubuntu 16.04 which is now EOL.</p> <pre><code>(pvnet-rendering) (base) mona@ada:~/pvnet-rendering$ pip install easydict Exception: Traceback (most recent call last): File &quot;/home/mona/pvnet-rendering/pvnet-rendering/lib/python3.5/site-packages/pip/basecommand.py&quot;, line 215, in main status = self.run(options, args) File &quot;/home/mona/pvnet-rendering/pvnet-rendering/lib/python3.5/site-packages/pip/commands/install.py&quot;, line 272, in run with self._build_session(options) as session: File &quot;/home/mona/pvnet-rendering/pvnet-rendering/lib/python3.5/site-packages/pip/basecommand.py&quot;, line 69, in _build_session if options.cache_dir else None File &quot;/home/mona/blender-2.79a-linux-glibc219-x86_64/2.79/python/lib/python3.5/posixpath.py&quot;, line 89, in join genericpath._check_arg_types('join', a, *p) File &quot;/home/mona/blender-2.79a-linux-glibc219-x86_64/2.79/python/lib/python3.5/genericpath.py&quot;, line 143, in _check_arg_types (funcname, s.__class__.__name__)) from None TypeError: join() argument must be str or bytes, not 'int' Traceback (most recent call last): File &quot;/home/mona/pvnet-rendering/pvnet-rendering/bin/pip&quot;, line 11, in &lt;module&gt; sys.exit(main()) File &quot;/home/mona/pvnet-rendering/pvnet-rendering/lib/python3.5/site-packages/pip/__init__.py&quot;, line 233, in main return command.main(cmd_args) File &quot;/home/mona/pvnet-rendering/pvnet-rendering/lib/python3.5/site-packages/pip/basecommand.py&quot;, line 251, in main timeout=min(5, options.timeout)) as session: File &quot;/home/mona/pvnet-rendering/pvnet-rendering/lib/python3.5/site-packages/pip/basecommand.py&quot;, line 69, in _build_session if options.cache_dir else None File 
&quot;/home/mona/blender-2.79a-linux-glibc219-x86_64/2.79/python/lib/python3.5/posixpath.py&quot;, line 89, in join genericpath._check_arg_types('join', a, *p) File &quot;/home/mona/blender-2.79a-linux-glibc219-x86_64/2.79/python/lib/python3.5/genericpath.py&quot;, line 143, in _check_arg_types (funcname, s.__class__.__name__)) from None TypeError: join() argument must be str or bytes, not 'int' </code></pre> <p>Also, steps to get to this point:</p> <pre><code>(base) mona@ada:~$ wget https://download.blender.org/release/Blender2.79/blender-2.79a-linux-glibc219-x86_64.tar.bz2 (base) mona@ada:~$ tar -xvf blender-2.79a-linux-glibc219-x86_64.tar.bz2 (base) mona@ada:~$ git clone https://github.com/zju3dv/pvnet-rendering (base) mona@ada:~$ cd pvnet-rendering/ /home/mona/blender-2.79a-linux-glibc219-x86_64/2.79/python/bin/python3.5m -m venv pvnet-rendering source pvnet-rendering/bin/activate </code></pre> <p>The instructions states:</p> <pre><code>(pvnet-rendering) (base) mona@ada:~/pvnet-rendering$ python run.py --type rendering Traceback (most recent call last): File &quot;run.py&quot;, line 1, in &lt;module&gt; from config import cfg File &quot;/home/mona/pvnet-rendering/config.py&quot;, line 1, in &lt;module&gt; from easydict import EasyDict ImportError: No module named 'easydict' </code></pre> <p>Hence I am trying to install the easydict package. [1]: <a href="https://github.com/zju3dv/pvnet-rendering" rel="nofollow noreferrer">https://github.com/zju3dv/pvnet-rendering</a></p>
<python><python-3.x><linux><pip><blender>
2023-12-15 03:11:20
0
38,867
Mona Jalal
77,663,997
5,896,591
pip2.7 in virtualenv: Python 2.7 reached the end of its life
<p>I created a virtual environment using this command:</p> <pre><code>/usr/bin/python2.7 -m virtualenv venv </code></pre> <p>It mostly works, but using <code>venv/bin/pip2.7</code> prints the following warning:</p> <blockquote> <p>DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip can be found at <a href="https://pip.pypa.io/en/latest/development/release-process/#python-2-support" rel="nofollow noreferrer">https://pip.pypa.io/en/latest/development/release-process/#python-2-support</a> pip 21.0 will remove support for this functionality.</p> </blockquote> <p>regardless of the command options, in both Ubuntu16 and Ubuntu20.</p> <p>Including the above warning in a product is unacceptable and can only lead to confusion. (The warning is factually incorrect because Ubuntu20 supports Python 2.7 and <code>pip2.7</code> through at least 2030. As with all popular programming languages, Python 2.7 is likely to be supported far into the future, until a newer compatible language comes along to supercede it.)</p> <p>How do I disable the warning?</p> <p><strong>Update</strong></p> <p>It appears that this is a bug in pip versions 19+. How do I tell <code>virtualenv</code> to install pip 18?</p>
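Two hedged options, depending on the pip version actually seeded into the venv (both flag names should be verified against the installed versions, since pip only grew the switch in 20.0 and virtualenv only grew the pin in its 20.x rewrite):

```shell
# Option 1: pip >= 20.0 has a dedicated switch for this warning; the
# matching environment variable follows pip's option-to-env-var naming:
export PIP_NO_PYTHON_VERSION_WARNING=1
# (equivalently: pip install --no-python-version-warning ...)

# Option 2: seed an older pip that predates the warning; recent
# virtualenv releases accept a version pin when creating the env:
#   /usr/bin/python2.7 -m virtualenv --pip 18.1 venv
```

Option 1 leaves the modern pip in place and only silences the message; option 2 answers the update in the question directly by pinning the seeded pip.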
<python><python-2.7><pip><virtualenv>
2023-12-15 02:31:19
2
4,630
personal_cloud
77,663,904
4,451,521
How to appropriately set the limits of the axes in matplotlib
<p>I have a dataframe</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import numpy as np data = { 'id': [1, 2, 3, 4, 5,6,7,8,9,10], 'LeftError': [0.1, 0.2, 0.15, 0.3, 0.25,-0.1, -0.2, -0.15, -0.3, -0.25], 'RightError': [0.2, 0.3, 0.25, 0.4, 0.35,-0.2, -0.3, -0.25, -0.4, -0.35], 'NCL': [1, 2, 1, 3, 2,-1, -2, -1, -3, -2], 'NCR': [2, 3, 2, 4, 3,-2, -3, -2, -4, -3], } df = pd.DataFrame(data) </code></pre> <p>I want to plot it</p> <pre><code> fig, axes = plt.subplots(1, 2, figsize=(12, 5)) # Plot for Left side axes[0].scatter(df['NCL'], df['LeftError'], color='blue') axes[0].set_xlabel('NCL') axes[0].set_ylabel('Left Error') # Plot for Right side axes[1].scatter(df['NCR'], df['RightError'], color='green') axes[1].set_xlabel('NCR') axes[1].set_ylabel('Right Error') plt.show() </code></pre> <p>However when I do this I get <a href="https://i.sstatic.net/J9c15.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J9c15.png" alt="enter image description here" /></a></p> <p>You can see that there is a good thing and a bad thing here. The good thing is that the Y axis goes a bit more and less than the range of values (it does not cross the points) . The bad thing is that they show different values. So to correct the bad thing I do</p> <pre><code>axes[0].set_ylim((min(df['LeftError'].min(), df['RightError'].min())), (max(df['LeftError'].max(), df['RightError'].max()))) axes[1].set_ylim((min(df['LeftError'].min(), df['RightError'].min())), (max(df['LeftError'].max(), df['RightError'].max()))) </code></pre> <p>and now I get</p> <p><a href="https://i.sstatic.net/35to7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/35to7.png" alt="enter image description here" /></a></p> <p>So the bad thing was corrected, but the good thing became bad. See that the first and last green dot got crossed by the axis</p> <p>How can I make the axis have some pad as originally but keeping the same values for both?</p>
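The default padding that `set_ylim` removes comes from matplotlib's margins, so there are two direct routes: pass `sharey=True` to `plt.subplots` (both axes then autoscale over the union of the data, margins included), or compute shared limits with an explicit pad yourself. A small helper for the second route (the 5% pad mirrors matplotlib's default margin):

```python
def padded_limits(*series, pad=0.05):
    """Common axis limits for several series, with a margin on each side."""
    lo = min(min(s) for s in series)
    hi = max(max(s) for s in series)
    span = (hi - lo) or 1.0  # avoid a zero span for constant data
    return lo - pad * span, hi + pad * span
```

Usage would then be `ylim = padded_limits(df['LeftError'], df['RightError'])` followed by `axes[0].set_ylim(*ylim)` and `axes[1].set_ylim(*ylim)`, giving identical ranges on both subplots without the axis crossing the outermost points.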
<python><matplotlib>
2023-12-15 01:52:51
1
10,576
KansaiRobot
77,663,885
7,380,827
Strange interaction of time.monotonic_ns and time.sleep
<p>I have noticed that there is some strange behaviour when <code>time.monotonic_ns</code> and <code>time.sleep</code> interact with each other in a small time period.</p> <p>To investigate this I have wrote the following &quot;test&quot;.</p> <pre><code>import time def main(): while True: start = time.monotonic_ns() time.sleep(0.01) # 10 milliseconds end = time.monotonic_ns() diff = end - start print(&quot;diff: {}&quot;.format(diff)) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>This code most of the times prints a value of about <code>15000000</code> (actually exactly 15000000 ns or 16000000 ns) or <code>15</code> milliseconds, but sometimes it is just <code>0</code>. If I decrease the sleep-time more it has the exact same behaviour (i.e. 15 ms or 0 ns).</p> <p>When I use <code>time.time_ns</code> instead it will be non-zero every time (also more random and it sometimes even goes down to ~11 ms), <code>time.perf_counter_ns</code> also works fine. I am aware that <code>time.process_time</code> will be always 0 because it does not count sleep time. <code>timeit.timit</code> also shows a minimal sleep time of ~8 ms on my system.</p> <p>I am on a Windows 11 device. According to the <a href="https://docs.python.org/3/library/time.html#time.sleep" rel="nofollow noreferrer">python docs for time</a> sleep has a resolution of 100 nanoseconds (which is clearly less than 10 milliseconds). For <code>time.monotonic_ms()</code> I was not able to find a resolution.</p> <p>I can understand how it could be more than the 10 milliseconds specified but I have no idea how it could be 0 nanoseconds. Python Version: 3.9.13.</p> <p>I have also tested this on 3.12 with the only difference that IDLE freezes when I try to stop the execution with <kbd>CTRL</kbd><kbd>C</kbd>.</p>
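The 0-or-15 ms pattern points at clock *granularity* rather than `sleep` itself: on Windows before Python 3.11, `time.monotonic` wrapped `GetTickCount64`, which advances in ~15.6 ms steps, so any interval shorter than one step reads as either 0 or one full step. Each clock's actual promise can be inspected directly:

```python
import time

# Per-platform resolution and backing implementation of each clock.
for name in ("monotonic", "perf_counter", "time"):
    info = time.get_clock_info(name)
    print(f"{name}: resolution={info.resolution}, impl={info.implementation}")

# perf_counter is the high-resolution choice for short intervals:
start = time.perf_counter_ns()
time.sleep(0.01)  # 10 ms
elapsed = time.perf_counter_ns() - start
print(f"slept for {elapsed} ns")
```

On the Windows/3.9 setup in the question, `monotonic` should report a resolution around 0.0156 s while `perf_counter` reports microseconds or better — which matches the observed behaviour, and explains why `perf_counter_ns` and `time_ns` "work fine". (Python 3.11 changed `time.monotonic` on Windows to use `QueryPerformanceCounter`, so the effect disappears there.)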
<python><time><sleep>
2023-12-15 01:45:28
1
534
Twistios_Player
77,663,860
1,509,264
Override return value in str subclass
<p>I am trying to write a string-like class that acts like <code>str</code> but takes the value from the attribute of an object where that value has not been set yet but will have been set before the string-like object is read from.</p> <pre class="lang-python prettyprint-override"><code>from __future__ import annotations from functools import wraps from inspect import getmembers, isroutine from typing import Any, Callable, Dict, Final, Optional, Self, Tuple, Type, override def _wrapper(func: Callable[..., Any]) -&gt; Callable[..., Any]: @wraps(func) def method(self: DeferredStr, *args: Any, **kwargs: Any) -&gt; Any: self._load() # pyright: ignore[reportPrivateUsage] value = self._value # pyright: ignore[reportPrivateUsage] assert value is not None return func(value, *args, **kwargs) return method class _DeferredStrMetaclass(type): NON_WRAPPED_METHODS: Final[Tuple[str, ...]] = ( &quot;__class__&quot;, &quot;__class_getitem__&quot;, &quot;__delattr__&quot;, &quot;__dir__&quot;, &quot;__doc__&quot;, &quot;__getattribute__&quot;, &quot;__getstate__&quot;, &quot;__init__&quot;, &quot;__init_subclass__&quot;, &quot;__new__&quot;, &quot;__repr__&quot;, &quot;__setattr__&quot;, &quot;__subclasshook__&quot;, ) def __new__( cls, clsname: str, bases: Tuple[Type[Any], ...], attrs: Dict[str, Any], **kwargs: Any, ) -&gt; Self: for attr, value in getmembers(str, predicate=isroutine): if ( attr not in attrs and attr not in _DeferredStrMetaclass.NON_WRAPPED_METHODS ): attrs[attr] = _wrapper(value) return super().__new__(cls, clsname, bases, attrs, **kwargs) class DeferredStr(str, metaclass=_DeferredStrMetaclass): _value: Optional[str] _instance: object _attributes: Tuple[str, ...] 
_expect_str: bool def __new__( cls, __instance: object, *__attributes: str, __expect_str: bool = True, ) -&gt; Self: return super().__new__(cls, &quot;not this&quot;) def __init__( self, __instance: object, *__attributes: str, __expect_str: bool = True, ) -&gt; None: super().__init__() self._value = None self._instance = __instance self._attributes = __attributes self._expect_str = __expect_str def _load(self) -&gt; None: if self._value is not None: return value = self._instance for attribute in self._attributes: value = getattr(value, attribute) if isinstance(value, str): value = str(value) elif self._expect_str: raise TypeError(f&quot;{repr(self)} is not a str.&quot;) else: value = repr(value) self._value = value @override def __repr__(self) -&gt; str: return &quot;&lt;{name}: {instance}, {attributes} = {value}&gt;&quot;.format( name=type(self).__name__, instance=repr(self._instance), attributes=repr(self._attributes), value=repr(self._value), ) @override def __eq__(self, __value: object) -&gt; bool: if isinstance(__value, DeferredStr): return ( self._instance == __value._instance and self._attributes == __value._attributes ) self._load() value = self._value assert value is not None return value == __value </code></pre> <p>This &quot;works&quot; up to a certain point:</p> <pre class="lang-python prettyprint-override"><code>class Test: value: Optional[str] = None instance = Test() deferred_str = DeferredStr(instance, &quot;value&quot;) # sometime later ... 
instance.value = &quot;1&quot; print(deferred_str + &quot;2&quot;) print(&quot;&quot;.join(deferred_str)) </code></pre> <p>Outputs:</p> <blockquote> <pre class="lang-none prettyprint-override"><code>12 1 </code></pre> </blockquote> <p>Which is as-expected; but, if the sub-class is on the right-hand side of the operator:</p> <pre class="lang-python prettyprint-override"><code>print(&quot;2&quot; + deferred_str) print(&quot;&quot;.join((deferred_str,))) </code></pre> <p>Then the output is:</p> <blockquote> <pre class="lang-none prettyprint-override"><code>2not this not this </code></pre> </blockquote> <p>I understand why this is happening as the function calls are the equivalent of:</p> <pre class="lang-python prettyprint-override"><code>print(DeferredStr.__add__(deferred_str, &quot;2&quot;)) print(&quot;&quot;.join(DeferredStr.__iter__(deferred_str))) print(str.__add__(&quot;2&quot;, deferred_str)) print(str.join(&quot;&quot;, (deferred_str,))) </code></pre> <p>and when a <code>DeferredStr</code> method is called, the deferred value is substituted for the super-class's value, and when the <code>str</code> method is called, it uses the underlying value passed from the <code>DeferredStr</code> constructor to the <code>str</code> constructor.</p> <p>I know that I can fix it by wrapping the sub-class in a <code>str()</code> call everywhere it is going to be used:</p> <pre class="lang-python prettyprint-override"><code>print(&quot;2&quot; + str(deferred_str)) </code></pre> <p>Is there any way to fix <code>DeferredStr</code> (or rewrite it) so that the code works on both the left- and right-hand sides of operators without having to wrap it in <code>str()</code>?</p> <p><em>(Note: any modifications should also pass type-checking in MyPy and Pyright.)</em></p> <p>My current expectation is that this is not possible and that, if it is going to be necessary to wrap the class in <code>str()</code> then, it would be better to not inherit from <code>str</code> (and maybe
inherit from <code>collections.UserString</code> instead) so that the type-checkers will pick up when it is used as an argument for something that expects a <code>str</code> and hasn't been wrapped in a call to <code>str()</code>. However, I'm (faintly) hoping that someone familiar with the inner workings of Python might have a suggestion for an alternative solution.</p>
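For comparison, a much smaller sketch of the `collections.UserString` route mentioned above. Every `UserString` method reads `self.data`, so overriding `data` as a lazily-resolving property defers the lookup on both sides of an operator. This is an illustrative alternative (class and attribute names are mine), not a drop-in replacement for `DeferredStr`:

```python
from collections import UserString

class LazyStr(UserString):
    """A string whose value is resolved from a zero-argument callable on first use."""

    def __init__(self, resolve):
        # Deliberately skip UserString.__init__: `data` is computed on demand.
        self._resolve = resolve

    @property
    def data(self):
        # UserString operators rebuild results via self.__class__(plain_str),
        # so accept a plain string as well as a callable.
        r = self._resolve
        return str(r() if callable(r) else r)

class Holder:
    value = None

h = Holder()
s = LazyStr(lambda: h.value)
h.value = "1"              # set *after* the lazy string is created
print(s + "2")             # 12
print("2" + s)             # 21 -- the right-hand side works too, via UserString.__radd__
```

The trade-offs match the question's expectation: `str.join` still rejects non-`str` items (so `"".join((s,))` needs `str(s)`), and type-checkers will not treat `LazyStr` as a `str` — which is exactly what makes un-wrapped uses visible.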
<python><python-3.x>
2023-12-15 01:34:49
1
172,539
MT0
77,663,834
825,227
Issue with import tensorflow: SystemError: initialization of _pywrap_checkpoint_reader raised unreported exception
<p>Working on an Ubuntu 22.04 machine, within Spyder 3.9, with tensorflow 2.15 installed.</p> <p>Simply trying to import the package, I get the following errors:</p> <pre><code>import tensorflow as tf RuntimeError: module compiled against API version 0x10 but this version of numpy is 0xe Traceback (most recent call last): File &quot;/tmp/ipykernel_275746/3793406994.py&quot;, line 1, in &lt;module&gt; import tensorflow as tf File &quot;/home/chris/anaconda3/lib/python3.9/site-packages/tensorflow/__init__.py&quot;, line 48, in &lt;module&gt; from tensorflow._api.v2 import __internal__ File &quot;/home/chris/anaconda3/lib/python3.9/site-packages/tensorflow/_api/v2/__internal__/__init__.py&quot;, line 11, in &lt;module&gt; from tensorflow._api.v2.__internal__ import distribute File &quot;/home/chris/anaconda3/lib/python3.9/site-packages/tensorflow/_api/v2/__internal__/distribute/__init__.py&quot;, line 8, in &lt;module&gt; from tensorflow._api.v2.__internal__.distribute import combinations File &quot;/home/chris/anaconda3/lib/python3.9/site-packages/tensorflow/_api/v2/__internal__/distribute/combinations/__init__.py&quot;, line 8, in &lt;module&gt; from tensorflow.python.distribute.combinations import env # line: 456 File &quot;/home/chris/anaconda3/lib/python3.9/site-packages/tensorflow/python/distribute/combinations.py&quot;, line 33, in &lt;module&gt; from tensorflow.python.distribute import collective_all_reduce_strategy File &quot;/home/chris/anaconda3/lib/python3.9/site-packages/tensorflow/python/distribute/collective_all_reduce_strategy.py&quot;, line 25, in &lt;module&gt; from tensorflow.python.distribute import cross_device_ops as cross_device_ops_lib File &quot;/home/chris/anaconda3/lib/python3.9/site-packages/tensorflow/python/distribute/cross_device_ops.py&quot;, line 28, in &lt;module&gt; from tensorflow.python.distribute import cross_device_utils File
&quot;/home/chris/anaconda3/lib/python3.9/site-packages/tensorflow/python/distribute/cross_device_utils.py&quot;, line 22, in &lt;module&gt; from tensorflow.python.distribute import values as value_lib File &quot;/home/chris/anaconda3/lib/python3.9/site-packages/tensorflow/python/distribute/values.py&quot;, line 23, in &lt;module&gt; from tensorflow.python.distribute import distribute_lib File &quot;/home/chris/anaconda3/lib/python3.9/site-packages/tensorflow/python/distribute/distribute_lib.py&quot;, line 206, in &lt;module&gt; from tensorflow.python.data.ops import dataset_ops File &quot;/home/chris/anaconda3/lib/python3.9/site-packages/tensorflow/python/data/ops/dataset_ops.py&quot;, line 33, in &lt;module&gt; from tensorflow.python.data.ops import iterator_ops File &quot;/home/chris/anaconda3/lib/python3.9/site-packages/tensorflow/python/data/ops/iterator_ops.py&quot;, line 41, in &lt;module&gt; from tensorflow.python.training.saver import BaseSaverBuilder File &quot;/home/chris/anaconda3/lib/python3.9/site-packages/tensorflow/python/training/saver.py&quot;, line 50, in &lt;module&gt; from tensorflow.python.training import py_checkpoint_reader File &quot;/home/chris/anaconda3/lib/python3.9/site-packages/tensorflow/python/training/py_checkpoint_reader.py&quot;, line 19, in &lt;module&gt; from tensorflow.python.util._pywrap_checkpoint_reader import CheckpointReader SystemError: initialization of _pywrap_checkpoint_reader raised unreported exception </code></pre> <p>I do see the first line <code>RuntimeError</code> regarding <code>numpy</code>, but the numpy version was checked and the requirement was met as part of the tensorflow install. Any ideas what I'm missing?</p>
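That first <code>RuntimeError</code> line is the real clue: the compiled extensions in this TensorFlow build expect a newer NumPy C-API (0x10) than the NumPy the interpreter actually imports (0xe), which typically means an older NumPy shadows the one checked at install time. A quick way to confirm which interpreter and package versions are actually resolved (a generic diagnostic sketch, not TensorFlow-specific):

```python
import sys
from importlib.metadata import PackageNotFoundError, version

print(sys.executable)  # confirm which environment Spyder is really running
for pkg in ("numpy", "tensorflow"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "is not installed in this environment")
```

If the printed NumPy is older than what TensorFlow 2.15 requires, upgrading NumPy inside that same environment (e.g. `pip install --upgrade numpy` with the interpreter shown above) should resolve the ABI mismatch.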
<python><tensorflow>
2023-12-15 01:22:01
1
1,702
Chris
77,663,828
17,021,151
Regex Difference between Ruby & Python
<p>I just started Advent of Code 2023 and am trying to use it to learn a few new programming languages. I have (some) familiarity with python, and literally just installed ruby today.</p> <p>Day 1, part 2, I am using a Regex to search for digits as well as their spelled out versions. The regex in python (which yields the correct result): <code>(?=(0|1|2|3|4|5|6|7|8|9|zero|one|two|three|four|five|six|seven|eight|nine))</code></p> <p>When I use this exact regex in Ruby, I get a nil result. Interestingly, when I use this regex, I do get the exact same result in both python and ruby, but it is the incorrect answer: <code>r&quot;0|1|2|3|4|5|6|7|8|9|zero|one|two|three|four|five|six|seven|eight|nine&quot;</code></p> <p>So I believe the answer has to do with the positive lookahead assertion, but I don't know why, and what it is doing differently.</p> <p>Below are both of the files.</p> <p>Python:</p> <pre><code>import re input = open(&quot;../resources/input.txt&quot;,&quot;r&quot;) lines = input.readlines() targets = [ '0','1','2','3','4','5','6','7','8','9', 'zero','one','two','three','four','five','six','seven','eight','nine' ] values = { '0': 0, '1': 1, '2': 2, '3': 3, '4': 4, '5': 5, '6': 6, '7': 7, '8': 8, '9': 9, 'zero': 0, 'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5, 'six': 6, 'seven': 7, 'eight': 8, 'nine': 9 } sum = 0 for line in lines: numbers = re.findall(r&quot;(?=(&quot;+'|'.join(targets)+r&quot;))&quot;, line) firstDigitValue = values[numbers[0]] * 10 lastDigitValue = values[numbers[-1]] sum += (firstDigitValue+lastDigitValue) print(sum) </code></pre> <p>Ruby:</p> <pre><code># Init vars sum = 0 reg = /\d|zero|one|two|three|four|five|six|seven|eight|nine/ reg2 = /(?=(0|1|2|3|4|5|6|7|8|9|zero|one|two|three|four|five|six|seven|eight|nine))/ reg3 = /0|1|2|3|4|5|6|7|8|9|zero|one|two|three|four|five|six|seven|eight|nine/ values = { '0' =&gt; 0, '1' =&gt; 1, '2' =&gt; 2, '3' =&gt; 3, '4' =&gt; 4, '5' =&gt; 5, '6' =&gt; 6, '7' =&gt; 7, '8' =&gt; 8, '9' 
=&gt; 9, 'zero' =&gt; 0, 'one' =&gt; 1, 'two' =&gt; 2, 'three' =&gt; 3, 'four' =&gt; 4, 'five' =&gt; 5, 'six' =&gt; 6, 'seven' =&gt; 7, 'eight' =&gt; 8, 'nine' =&gt; 9 } # Pipe the file line by line and do per line File.foreach(&quot;../resources/input.txt&quot;, chomp: true) do |line| # Get the first and last digits as their values numbers = line.scan(reg3) firstDigitValue = values[numbers[0]] * 10 lastDigitValue = values[numbers[-1]] # accumulate sum += (firstDigitValue+lastDigitValue) end puts sum </code></pre>
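The zero-width lookahead behaves the same way in both engines — what differs is the return shape of the scanning function. Python's `re.findall` returns each match's group text directly, while Ruby's `String#scan` wraps each match's captures in a sub-array (so `numbers[0]` is `["two"]`, and a hash lookup with an array key returns `nil`). A small Python check of the overlapping-match behaviour the lookahead enables:

```python
import re

line = "twone"  # overlapping spelled-out digits: "two" and "one" share the "o"
pattern = re.compile(r"(?=(\d|zero|one|two|three|four|five|six|seven|eight|nine))")
print(pattern.findall(line))  # ['two', 'one'] -- both overlapping matches found
```

In Ruby, `line.scan(reg2).flatten` should give the same flat list of strings, unwrapping the capture sub-arrays before the hash lookup. Without the lookahead, the plain alternation consumes each match, so "twone" yields only "two" — which is why `reg3` gives the same (incorrect) answer in both languages.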
<python><ruby>
2023-12-15 01:19:12
2
316
jcodes
77,663,755
1,657,666
How pass "reason" when we use the HTTPException with FASTAPI?
<p>I have an API based on FastAPI that, when something goes wrong, raises the following exception.</p> <pre><code> raise HTTPException(status_code=404, detail=&quot;There is no file to created a zip package. Check device name.&quot;) </code></pre> <p>As you can see, I included the detail field; however, observing the logs I see the following:</p> <pre><code> [2023-12-15 00:16:30,954]:[DEBUG]:[file_operations]:download_log_file() error: 404 reason:Not Found </code></pre> <p>Why is the reason &quot;Not Found&quot;? Is there anything wrong with the code? The code that captures this log is based on <code>requests</code>, as you can see below:</p> <pre><code> try: r = self.get( self.base_url + &quot;/downloadlog&quot;, headers=headers, params=params, timeout=100, ) # Check if the request was successful (status code 200) if r.status_code == 200: ... ... else: print(f&quot;download_log_file() error: {r.status_code} reason:{r.reason}&quot;) error_response = { &quot;error&quot;: f&quot;HTTP error occurred: {r.status_code} {r.reason}&quot;, &quot;request_id&quot;: request_id, } raise HTTPException( status_code=r.status_code, detail=error_response ) </code></pre>
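Nothing is wrong with the exception itself: `r.reason` is the HTTP reason phrase, standardized text derived from the status code ("Not Found" for 404), and it never carries a custom message. The `detail` string travels in the JSON response body, so on the client side it would be read with something like `r.json().get("detail")`. The phrase mapping can be seen with the standard library:

```python
from http import HTTPStatus

# The reason phrase is fixed per status code -- this is what r.reason reflects
print(HTTPStatus(404).phrase)  # Not Found
# The custom FastAPI message lives in the response body instead, e.g.:
# detail = r.json().get("detail")
```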
<python><fastapi><httpexception>
2023-12-15 00:47:51
2
451
user1657666
77,663,640
11,280,068
Easiest way to automatically add new columns to MySQL database before calling pandas df.to_sql?
<p>I'm encountering an issue where I'm appending a dataframe to a table in my MySQL database, but the number of columns in the dataset can be greater than or equal to the number of columns currently in the database.</p> <p>For example, the dataset and the table may both have 10 columns. The next time we want to export the data, the dataset might have 11+ columns. If I try to just run <code>df.to_sql(...)</code> it will give me a field error because one or more columns are not present in the table already.</p> <p>What would the best approach be to first add the column(s) to the table before exporting the dataframe in pandas? I thought about <code>set(columns in df) - set(columns in table)</code> and then manually running <code>ALTER TABLE ADD COLUMN...</code>, but I don't know how to dynamically map numpy/pandas datatypes to MySQL datatypes.</p> <p>What's the best approach for this?</p>
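A hedged sketch of the approach described above: compute the column difference, then map each new column's pandas dtype to a MySQL type through a small lookup table. The type choices here are illustrative defaults, not a complete mapping, and the function name is my own:

```python
import pandas as pd

# Illustrative pandas-dtype -> MySQL-type defaults; extend as needed
DTYPE_TO_MYSQL = {
    "int64": "BIGINT",
    "float64": "DOUBLE",
    "bool": "TINYINT(1)",
    "datetime64[ns]": "DATETIME",
    "object": "TEXT",
}

def alter_statements(df, existing_columns, table):
    """Build ALTER TABLE statements for columns present in df but not in the table."""
    stmts = []
    for col in df.columns:
        if col not in existing_columns:
            mysql_type = DTYPE_TO_MYSQL.get(str(df[col].dtype), "TEXT")
            stmts.append(f"ALTER TABLE `{table}` ADD COLUMN `{col}` {mysql_type}")
    return stmts

df = pd.DataFrame({"a": [1], "b": [2.5], "c": ["x"]})
print(alter_statements(df, {"a"}, "my_table"))
```

Each statement would be executed through the engine before `df.to_sql(..., if_exists='append')`; the existing column names can come from `SHOW COLUMNS FROM my_table` or SQLAlchemy's inspector.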
<python><sql><mysql><pandas><dataframe>
2023-12-14 23:56:11
1
1,194
NFeruch - FreePalestine
77,663,563
15,332,006
Create or use a function to find the repeated sequence of items in a list
<p>I am trying to write a function that takes a list/array and finds the repeated sequence of numbers.</p> <p>Examples:</p> <ul> <li><p><code>[111, 0, 3, 1, 111, 0, 3, 1, 111, 0, 3, 1]</code></p> <p><code>[111, 0, 3, 1]</code> is the block that is being repeated and is what I'm looking for.</p> </li> <li><p><code>[11, 34, 132, 54, 90, 430, 657, 689, 34, 90, 90, 90, 46, 34, 657, 689, 34, 90, 90, 90, 46, 34, 657, 689, 34, 90, 90, 90, 46, 34]</code></p> <p><code>[657, 689, 34, 90, 90, 90, 46, 34]</code> is the repeated sub-list.</p> </li> <li><p><code>[569, 374, 879, 374, 879, 460, 568, 488, 460, 568, 488, 460, 568, 488, 750, 750]</code></p> <p>This has two repeated blocks: <code>[374, 879]</code> and <code>[460, 568, 488]</code>.</p> </li> <li><p><code>[45, 98, 45, 98, 45]</code></p> <p><code>[45, 98]</code> is the block, not <code>[98, 45]</code> as it's not at the start. Blocks are identified from the start and <code>[98, 45]</code> is excluded because its beginning overlaps with <code>[45, 98]</code>.</p> </li> </ul> <p>Some constraints of the dataset:</p> <ul> <li>The entire list is not a repeating sequence</li> <li>The repeating sequence will at least exist twice fully</li> <li>It can appear in any position of the list (beginning/middle/end)</li> <li>It may not overlap with itself or other sequences; if it does, the largest one should be picked</li> <li>More than one might exist</li> <li>A block has at least two items</li> <li>Smaller blocks may make up a larger block, so the largest one should be prioritized. In <code>[230, 205, 900, 617, 821, 188, 617, 821, 205, 900]</code>, <code>[617, 821]</code> is a block but not a valid one since it's inside <code>[205, 900, 617, 821, 188, 617, 821, 205, 900]</code></li> </ul> <p>The function should return a convenient data structure such as a list of lists, key-value pairs, or a list of lists indicating the start and end of each unique repetition from its initial position to its first end. 
Initial ordering should be maintained.</p> <p>This is my attempt:</p> <pre class="lang-py prettyprint-override"><code>import itertools def get_seq_group(seq): return [(key, list(group)) for key, group in itertools.groupby(seq)] list_a = [111, 0, 3, 1, 111, 0, 3, 1, 111, 0, 3, 1] list_b = [67, 4, 67, 4, 67, 4, 67, 4, 2, 9, 0] list_c = [11, 34, 132, 54, 90, 430, 657, 689, 34, 90, 90, 90, 46, 34, 657, 689, 34, 90, 90, 90, 46, 34, 657, 689, 34, 90, 90, 90, 46, 34] list_d = [569, 374, 879, 374, 879, 460, 568, 488, 460, 568, 488, 460, 568, 488, 750, 750] print(get_seq_group(list_a)) print(get_seq_group(list_b)) print(get_seq_group(list_c)) print(get_seq_group(list_d)) </code></pre> <p>I attempted to group them, but to no avail. It just returns a key-value pair for each instance, as adjacent numbers are not the same. <code>[(111, [111]), (0, [0]), (3, [3]), (1, [1]), (111, [111]), (0, [0]), (3, [3]), (1, [1]), (111, [111]), (0, [0]), (3, [3]), (1, [1])]</code> is the output for <code>list_a</code>.</p> <p>I'm not aware of any algorithm addressing this case.</p> <p>I tried converting things to string and joining them, but going back to a list is impossible as all the digits merge. The solutions <a href="https://stackoverflow.com/questions/11385718/python-finding-repeating-sequence-in-list-of-integers">here</a> also didn't work for me.</p>
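One imperative way to sketch this (an illustrative greedy scan, not a canonical algorithm): at each index, try the longest candidate block first and accept it only if it is immediately repeated, then jump past all of its consecutive repeats. This handles the adjacent-repetition examples above; the non-adjacent overlap/containment constraints may need extra handling:

```python
def find_repeated_blocks(seq):
    """Return the repeated blocks of seq, in order of first appearance."""
    blocks = []
    i = 0
    n = len(seq)
    while i < n:
        # Longest block first, so smaller sub-blocks inside it are not reported
        for length in range((n - i) // 2, 1, -1):
            block = seq[i:i + length]
            if block == seq[i + length:i + 2 * length]:
                blocks.append(block)
                # Skip every consecutive repeat of this block
                while seq[i:i + length] == block:
                    i += length
                break
        else:
            i += 1
    return blocks

list_a = [111, 0, 3, 1, 111, 0, 3, 1, 111, 0, 3, 1]
list_d = [569, 374, 879, 374, 879, 460, 568, 488, 460, 568, 488, 460, 568, 488, 750, 750]
print(find_repeated_blocks(list_a))  # [[111, 0, 3, 1]]
print(find_repeated_blocks(list_d))  # [[374, 879], [460, 568, 488]]
```

The `for ... else` idiom advances by one element only when no block of any length repeats at the current index, so prefixes like `[569, ...]` and suffixes like `[750, 750]` (a block needs at least two items *and* two full adjacent occurrences here) are passed over.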
<python><loops><python-itertools><itertools-groupby>
2023-12-14 23:24:28
1
451
Rashiq