Columns: QuestionId (int64, 74.8M–79.8M) · UserId (int64, 56–29.4M) · QuestionTitle (string, 15–150 chars) · QuestionBody (string, 40–40.3k chars) · Tags (string, 8–101 chars) · CreationDate (date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) · AnswerCount (int64, 0–44) · UserExpertiseLevel (int64, 301–888k) · UserDisplayName (string, 3–30 chars, nullable)
78,151,710
4,732,111
How to initialise a polars dataframe with column names from database cursor description?
<p>I'm connecting to a Postgres database using the psycopg2 connector, and via the cursor I'm fetching the records along with their column names. Here is the code snippet:</p> <pre><code>rds_conn = psycopg2.connect( host=config.RDS_HOST_NAME, database=config.RDS_DB_NAME, user=config.RDS_DB_USER, password=config.RDS_DB_PASSWORD, port=config.RDS_PORT) cur = rds_conn.cursor() cur.execute(sql_query) names = [x[0] for x in cur.description] rows = cur.fetchall() cur.close() </code></pre> <p>I'm initialising a Pandas dataframe with column names from the <code>cursor.description</code> property:</p> <pre><code> df = pd.DataFrame(rows, columns=names) </code></pre> <p>How would I initialise a Polars dataframe, instead of a Pandas one, with column names from the <code>cursor.description</code> property? I don't want to convert a Pandas dataframe to Polars.</p> <p>I tried using the <code>polars.DataFrame.with_columns</code> method but it didn't work.</p>
<python><pandas><dataframe><python-polars><rust-polars>
2024-03-13 06:47:32
1
363
Balaji Venkatachalam
78,151,027
15,412,256
Polars Customized Function Returns Multiple Columns
<p><code>_func</code> is designed to return two columns:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from polars.type_aliases import IntoExpr, IntoExprColumn import polars as pl def _func(x: IntoExpr): x1 = x+1 x2 = x+2 return pl.struct([x1, x2]) df = pl.DataFrame({&quot;test&quot;: np.arange(1, 11)}) df.with_columns( _func(pl.col(&quot;test&quot;)).alias([&quot;test1&quot;, &quot;test2&quot;]) ) </code></pre> <p>I have tried to wrap the return values using <code>pl.struct</code> but it didn't work.</p> <p>Expected output:</p> <pre class="lang-py prettyprint-override"><code>shape: (10, 3) test test1 test2 i32 i32 i32 1 2 3 2 3 4 3 4 5 4 5 6 5 6 7 6 7 8 7 8 9 8 9 10 9 10 11 10 11 12 </code></pre>
<python><dataframe><python-polars>
2024-03-13 03:01:28
3
649
Kevin Li
78,151,000
1,592,380
Adding shapely polygon to geopandas
<p><a href="https://i.sstatic.net/joIfj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/joIfj.png" alt="enter image description here" /></a></p> <p>I'm working with jupyterlab . I'm trying to save the points of a Ipyleaflet polygon to geopandas (like in <a href="https://gregfeliu.medium.com/turning-coordinates-into-a-geopandas-shape-58edd55dc2c1" rel="nofollow noreferrer">https://gregfeliu.medium.com/turning-coordinates-into-a-geopandas-shape-58edd55dc2c1</a>). I have the following in a cell:</p> <pre><code>zoom = 15 import ipywidgets from __future__ import print_function import ipyleaflet import geopandas as gpd from shapely.geometry import Point, LineString, Polygon from ipyleaflet import ( Map, Marker, TileLayer, ImageOverlay, Polyline, Polygon, Rectangle, Circle, CircleMarker, GeoJSON, DrawControl) c = ipywidgets.Box() # Define the columns and their data types columns = ['geometry', 'column1', 'column2'] # Add more columns as needed data_types = [Polygon, int, str] # Example data types, adjust as needed # Create an empty GeoDataFrame empty_gdf = gpd.GeoDataFrame(columns=columns) # empty_gdf.crs = 'EPSG:4326' # For example, setting CRS to WGS84 # topo_background = True # Use topo as background rather than map? 
# if topo_background: # m = Map(width='1000px',height='600px', center=center, zoom=zoom, \ # default_tiles=TileLayer(url=u'http://otile1.mqcdn.com/tiles/1.0.0/sat/{z}/{x}/{y}.jpg')) # else: # m = Map(width='1000px',height='600px', center=center, zoom=zoom) c.children = [m] # keep track of rectangles and polygons drawn on map: def clear_m(): global rects,polys rects = set() polys = set() clear_m() rect_color = '#a52a2a' poly_color = '#00F' myDrawControl = DrawControl( rectangle={'shapeOptions':{'color':rect_color}}, polygon={'shapeOptions':{'color':poly_color}}) #,polyline=None) def handle_draw(self, action, geo_json): global rects,polys polygon=[] for coords in geo_json['geometry']['coordinates'][0][:-1][:]: print(coords) polygon.append(tuple(coords)) polygon = tuple(polygon) if geo_json['properties']['style']['color'] == '#00F': # poly if action == 'created': polys.add(polygon) polygon1 = shapely.geometry.Polygon(polygon) empty_gdf.append(polygon1) elif action == 'deleted': polys.discard(polygon) if geo_json['properties']['style']['color'] == '#a52a2a': # rect if action == 'created': rects.add(polygon) elif action == 'deleted': rects.discard(polygon) myDrawControl.on_draw(handle_draw) m.add_control(myDrawControl) </code></pre> <p>However when I run &quot;empty_gdf.head()&quot; in the next cell, I get an empty geodataframe. What am I doing wrong?</p>
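A likely culprit, judging only from the code shown: `DataFrame.append` returned a *copy* (and was removed entirely in pandas 2.0), so `empty_gdf.append(polygon1)` never modifies `empty_gdf`; it is also being called with a bare geometry rather than a row. The usual pattern is to accumulate geometries in a plain list inside the draw handler and rebuild the GeoDataFrame afterwards. A minimal sketch of that idea:

```python
import geopandas as gpd
from shapely.geometry import Polygon

drawn = []  # handle_draw would append here instead of calling .append on the frame

def on_polygon_created(coords):
    # coords: the (lon, lat) tuples collected from the ipyleaflet draw event
    drawn.append(Polygon(coords))

on_polygon_created([(0, 0), (1, 0), (1, 1)])

# Rebuild the GeoDataFrame from the accumulated geometries after drawing.
gdf = gpd.GeoDataFrame({"geometry": drawn}, crs="EPSG:4326")
```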
<python><pandas><jupyter><gis><geopandas>
2024-03-13 02:49:47
1
36,885
user1592380
78,150,997
2,307,441
pd.wide_to_long in python is slow
<p>I have a dataframe with 55049 rows and 667 columns in it.</p> <blockquote> <p>Sample dataframe structure as follows:</p> </blockquote> <pre><code> data = { 'g1': [1], 'g2': [2], 'g3': [3], 'st1_1': [1], 'st1_2': [1], 'st1_3': [1], 'st1_4': [1], 'st1_5': [5], 'st1_6': [5], 'st1_7': [5], 'st1_8': [5], 'st1_Next_1': [8], 'st1_Next_2': [8], 'st1_Next_3': [8], 'st1_Next_4': [8], 'st1_Next_5': [9], 'st1_Next_6': [9], 'st1_Next_7': [9], 'st1_Next_8': [9], 'st2_1': [2], 'st2_2': [2], 'st2_3': [2], 'st2_4': [2], 'st2_5': [2], 'st2_6': [2], 'st2_7': [2], 'st2_8': [2], 'ft_1': [1], 'ft_2': [0], 'ft_3': [1], 'ft_4': [1], 'ft_5': [1], 'ft_6': [0], 'ft_7': [0], 'ft_8': [1] } df = pd.DataFrame(data) print(df) </code></pre> <p>To get my desired output I have the following code, where I am using <code>pd.wide_to_long</code>:</p> <pre><code>ilist = ['g1','g2','g3'] stublist = ['st1','st1_Next','st2','ft'] df_long = pd.wide_to_long( df.reset_index(), i=['index']+ilist , stubnames= stublist, j='j', sep='_').reset_index() df_long = df_long[df_long['ft']==1] </code></pre> <p>The above code works fine with the expected results.</p> <blockquote> <p>I performed this wide_to_long to apply the filter <code>df_long[df_long['ft']==1]</code>, which means ft_1 needs to apply to all _1 columns, ft_2 to all _2, and so on up to _8.</p> </blockquote> <blockquote> <p>The problem is that the wide_to_long operation takes around 2 minutes. Since I have 800+ source files to process, the whole run takes about 1600 minutes, which is far too long.</p> </blockquote> <p>I am looking for alternative suggestions to reshape the data.</p> <p>I have tried <a href="https://github.com/pandas-dev/pandas/issues/49174" rel="nofollow noreferrer">this</a> but it didn't make much of a difference.</p> <blockquote> <p>As @sammywemmy suggested, I have tried the code below. 
But the output is missing <code>st1_Next</code>.</p> </blockquote> <pre class="lang-py prettyprint-override"><code> ilist = ['g1','g2','g3'] stublist = ['st1','st1_Next','st2','ft'] df_pvot = df.pivot_longer(index=ilist,names_to=stublist,names_pattern=stublist) print(df_pvot) </code></pre> <p>The output is missing <code>st1_Next</code>; its data is clubbed into <code>st1</code> instead of going to a new column.</p> <pre><code>Output: g1 g2 g3 st1 st2 ft 0 1 2 3 1 2.0 1.0 1 1 2 3 1 2.0 0.0 2 1 2 3 1 2.0 1.0 3 1 2 3 1 2.0 1.0 4 1 2 3 5 2.0 1.0 5 1 2 3 5 2.0 0.0 6 1 2 3 5 2.0 0.0 7 1 2 3 5 2.0 1.0 8 1 2 3 8 NaN NaN 9 1 2 3 8 NaN NaN 10 1 2 3 8 NaN NaN 11 1 2 3 8 NaN NaN 12 1 2 3 9 NaN NaN 13 1 2 3 9 NaN NaN 14 1 2 3 9 NaN NaN 15 1 2 3 9 NaN NaN </code></pre>
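One alternative worth benchmarking (a sketch on a trimmed-down sample, not the full 667 columns): split each value column name on its *last* underscore into a (stub, suffix) MultiIndex and `stack` the suffix level. The `rsplit` is what keeps `st1_Next` separate from `st1`, and a plain stack tends to be much faster than `pd.wide_to_long` on wide frames:

```python
import pandas as pd

# Reduced sample with the same naming scheme as the question.
df = pd.DataFrame({
    "g1": [1], "g2": [2], "g3": [3],
    "st1_1": [1], "st1_2": [1],
    "st1_Next_1": [8], "st1_Next_2": [8],
    "st2_1": [2], "st2_2": [2],
    "ft_1": [1], "ft_2": [0],
})

ilist = ["g1", "g2", "g3"]
wide = df.set_index(ilist)

# Split "st1_Next_2" into stub "st1_Next" and suffix "2": rsplit on the LAST
# underscore, so multi-underscore stubs are never clubbed into shorter ones.
wide.columns = pd.MultiIndex.from_tuples(
    [tuple(c.rsplit("_", 1)) for c in wide.columns], names=[None, "j"]
)

# Stacking the suffix level is a plain reshape -- no melt/pivot machinery.
long = wide.stack(level="j").reset_index()
long = long[long["ft"] == 1]
```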
<python><pandas><dataframe><transform>
2024-03-13 02:48:58
0
1,075
Roshan
78,150,864
9,357,484
Export displacy graphics as an image file shows an empty file
<p>I want to export displacy graphics as an image file. I run the following code block in a Jupyter Notebook and generate an SVG file named sample.svg.</p> <p>The code block is:</p> <pre><code>import spacy from spacy import displacy from pathlib import Path nlp = spacy.load('en_core_web_lg') sentence_nlp = nlp(&quot;John go home to your family&quot;) svg = displacy.render(sentence_nlp, style=&quot;ent&quot;, jupyter = False) output_path = Path(&quot;D:\\sample.svg&quot;) output_path.open(&quot;w&quot;, encoding=&quot;utf-8&quot;).write(svg) </code></pre> <p>The above code gave me an output of 393.</p> <p>When I try to open the SVG file by inserting it in PowerPoint, it shows nothing.</p> <p>When I drag and drop the SVG file into the Chrome browser, the result is as follows:</p> <p><a href="https://i.sstatic.net/1ZEio.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1ZEio.png" alt="enter image description here" /></a></p> <p>Can anyone help me export the correct SVG file format?</p> <p>Thank you.</p>
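The root cause is worth stating plainly: `displacy.render(..., style="ent")` produces **HTML** (highlighted entity spans), not SVG; only `style="dep"` (the dependency parse) emits SVG. Saving the entity markup with a `.svg` extension is why browsers show an empty file. A sketch, using a blank pipeline with a hand-made entity so it runs without `en_core_web_lg`:

```python
import spacy
from spacy import displacy
from spacy.tokens import Span
from pathlib import Path

# A blank pipeline plus a manual entity stands in for en_core_web_lg here.
nlp = spacy.blank("en")
doc = nlp("John went home")
doc.ents = [Span(doc, 0, 1, label="PERSON")]

# style="ent" returns HTML, so give it an .html extension and open it in a
# browser; only style="dep" on a parsed doc returns genuine SVG markup.
html = displacy.render(doc, style="ent", jupyter=False)
Path("entities.html").write_text(html, encoding="utf-8")
```

To get a raster image of the entity view, open the saved HTML in a browser and screenshot or print to PDF; there is no SVG for `style="ent"` to export.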
<python><jupyter-notebook><nlp><spacy><displacy>
2024-03-13 01:51:25
1
3,446
Encipher
78,150,678
637,517
Shell one liner custom curl command with if else handling
<p>I'm trying to read a URL using the <strong>curl</strong> command, and expecting the command to exit with return code 0 or 1 based on the curl response, after parsing the response JSON.</p> <p>I tried to parse the JSON response, but I am not able to make it work in an if/else condition.</p> <p><em>URL - localhost:8080/health</em></p> <p>Response of the URL:</p> <pre><code>{ &quot;db&quot;: { &quot;status&quot;: &quot;healthy&quot; }, &quot;scheduler&quot;: { &quot;fetch&quot;: &quot;2024-03-12T04:32:53.060917+00:00&quot;, &quot;status&quot;: &quot;healthy&quot; } } </code></pre> <p><strong>Expected output -</strong> <strong>a one-liner command that exits with code 0 if scheduler.status is healthy, else 1</strong></p> <p>Note - I'm not looking for the curl response to be 0 or 1, but for the command itself to exit with 0 or 1.</p> <p>Purpose - If my scheduler status is unhealthy, then my process should terminate and exit from the app.</p> <p>I'm able to parse the response message but not able to apply the condition properly on the response. Here is what I tried so far:</p> <p><strong>cmd 1 :</strong></p> <pre><code>if ((status=$(curl -s 'https://localhost:8080/health' | python -c &quot;import sys, json; print (json.load(sys.stdin) ['scheduler'] ['status'])&quot;))='healthy'); then exit 0; else 1; </code></pre> <p>It throws an error:</p> <pre><code>zsh: parse error near `='healthy'' </code></pre> <p>From the above cmd, this part <code>curl -s 'https://localhost:8080/health' | python -c &quot;import sys, json; print (json.load(sys.stdin) ['scheduler'] ['status'])&quot;</code> works fine and returns (healthy/unhealthy), while adding the condition fails.</p> <p><strong>cmd 2:</strong></p> <pre><code>/bin/sh -c &quot;status=$(curl -kf https://localhost:8080/health --no- progress-meter | grep -Eo &quot;scheduler [^}]&quot; | grep -Eo '[^{]$' | grep -Eo &quot;status [^}]&quot; | grep -Eo &quot;[^:]$&quot; | tr -d \&quot;' | tr -d '\r' | tr -d '\ '); if [ $status=='unhealthy']; then exit 1; fi;0&quot; </code></pre> <p>This
is also not working, but this part <code>curl -kf https://localhost:8080/health --no- progress-meter | grep -Eo &quot;scheduler [^}]&quot; | grep -Eo '[^{]$' | grep -Eo &quot;status [^}]&quot; | grep -Eo &quot;[^:]$&quot; | tr -d \&quot;' | tr -d '\r' | tr -d '\ '</code> works fine to return healthy/unhealthy.</p> <p>I tried all sorts of variations, but no luck; I'm not sure if there is any workaround with a single command.</p> <p>It would be great if there is an option using the shell alone, but a solution with Python would also work.</p> <p><strong>Please note - no jq or any other tool installation is allowed, just existing commands.</strong></p>
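Since Python is already available in the pipeline, the simplest route is to let Python produce the exit code itself: `sys.exit(0 or 1)` inside the `-c` string makes the whole pipeline's exit status the health signal, with no jq and no shell string comparison. A sketch (`localhost:8080/health` is the asker's endpoint; `echo` stands in for `curl` below so the snippet runs without a server):

```shell
# Exit 0 when scheduler.status is healthy, 1 otherwise -- the python process's
# exit code is the last command in the pipe, so it becomes $? for the caller.
health_filter='import sys, json
sys.exit(0 if json.load(sys.stdin)["scheduler"]["status"] == "healthy" else 1)'

echo '{"db":{"status":"healthy"},"scheduler":{"status":"healthy"}}' \
  | python3 -c "$health_filter"
status=$?

# Real usage:  curl -s localhost:8080/health | python3 -c "$health_filter"
```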
<python><linux><shell><curl><command>
2024-03-13 00:29:52
8
6,158
subodh
78,150,613
13,945,013
Setting up a Python project for packaging and distribution via Homebrew
<p>I'm trying to create a Homebrew formula for a Python project.</p> <p>Here's the Homebrew formula:</p> <pre class="lang-rb prettyprint-override"><code>class Scanman &lt; Formula include Language::Python::Virtualenv desc &quot;Using LLMs to interact with man pages&quot; url &quot;https://github.com/nikhilkmr300/scanman/archive/refs/tags/1.0.1.tar.gz&quot; sha256 &quot;93658e02082e9045b8a49628e7eec2e9463cb72b0e0e9f5040ff5d69f0ba06c8&quot; depends_on &quot;python@3.11&quot; def install virtualenv_install_with_resources bin.install &quot;scanman&quot; end test do # Simply run the program system &quot;#{bin}/scanman&quot; end end </code></pre> <p>Upon running the application with the installed scanman version, it fails to locate my custom modules housed within the src directory.</p> <pre class="lang-bash prettyprint-override"><code>ModuleNotFoundError: No module named 'src' </code></pre> <p>Any insights into why this is happening?</p> <p>Here's my directory structure if that helps:</p> <pre class="lang-bash prettyprint-override"><code>. β”œβ”€β”€ requirements.txt β”œβ”€β”€ scanman β”œβ”€β”€ scanman.rb β”œβ”€β”€ setup.py └── src β”œβ”€β”€ __init__.py β”œβ”€β”€ cli.py β”œβ”€β”€ commands.py β”œβ”€β”€ manpage.py β”œβ”€β”€ rag.py └── state.py </code></pre> <p>The main executable is <code>scanman</code>. It's a Python script that lets you interact with man pages using an LLM.</p> <p>It's worth noting the following:</p> <ul> <li>When I run the local version of <code>scanman</code> from my repository, it works absolutely fine.</li> <li>Other 3rd party packages installed from PyPI don't throw any error. I can't find them in <code>/usr/local/Cellar/scanman/1.0.1/libexec/lib/python3.11/site-packages/</code>, however.</li> </ul>
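A hedged sketch of the usual fix: the local run works because the repo root is the working directory, so `import src...` resolves from the cwd; inside Homebrew's virtualenv nothing ever installed a `src` package. Declaring the package in `setup.py` (and exposing the CLI as an entry point) is what makes pip copy it into site-packages. The `src.cli:main` target is an assumption about where the entry function lives:

```python
from setuptools import find_packages

# Arguments a scanman setup.py would pass to setuptools.setup(). Without
# "packages", pip installs only the metadata and the script, and "import src"
# fails in the Cellar virtualenv even though it works from the repo root.
setup_kwargs = {
    "name": "scanman",
    "version": "1.0.1",
    "packages": find_packages(),  # picks up src/ via its __init__.py
    "entry_points": {"console_scripts": ["scanman=src.cli:main"]},  # assumed
}
# In setup.py:  from setuptools import setup; setup(**setup_kwargs)
```

A tidier variant is renaming `src/` to `scanman/` so the installed package matches the project name, though the standalone `scanman` script would then have to move or be replaced by the entry point, since a file and a directory cannot share a name.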
<python><homebrew><python-packaging>
2024-03-13 00:07:39
1
1,262
Nikhil Kumar
78,150,478
7,571,086
Installing PyTorch with Python version 3.11 in a Docker container
<p>I see on the official PyTorch page that PyTorch supports Python versions 3.8 to 3.11.</p> <p>When I actually try to install PyTorch + CUDA in a Python 3.11 Docker image, it seems unable to find CUDA drivers, e.g.</p> <pre><code>FROM python:3.11.4 RUN --mount=type=cache,id=pip-build,target=/root/.cache/pip \ pip install torch torchaudio ENV PATH=&quot;/usr/local/nvidia/bin:${PATH}&quot; \ NVIDIA_VISIBLE_DEVICES=all \ NVIDIA_DRIVER_CAPABILITIES=all </code></pre> <p>Then, inside the container, I see that <code>torch.version.cuda</code> is <code>None</code></p> <p>Compare this to</p> <pre><code>FROM pytorch/pytorch RUN --mount=type=cache,id=pip-build,target=/root/.cache/pip \ pip install torchaudio ENV PATH=&quot;/usr/local/nvidia/bin:${PATH}&quot; \ NVIDIA_VISIBLE_DEVICES=all \ NVIDIA_DRIVER_CAPABILITIES=all </code></pre> <p>Inside the container I see that <code>torch.version.cuda</code> is <code>12.1</code></p> <p>PyTorch claims they're compatible with Python 3.11, has anybody actually been able to use PyTorch+CUDA with Python 3.11?</p> <p>Tried running Docker images with Python 3.11.4</p> <p>Tried running the Conda docker image and installing pytorch, but kept getting errors that the images couldn't be found</p>
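Two things commonly produce `torch.version.cuda is None` in a setup like this, both hedged guesses without seeing the build host: pip resolved a CPU-only wheel, which is the only kind published for linux/arm64 (e.g. when building on Apple Silicon without a platform override), or the default index fell back to a CPU build. Pinning both the platform and PyTorch's CUDA wheel index makes the intent explicit:

```dockerfile
# Sketch: force an amd64 build and install from the cu121 wheel index so a
# CPU-only wheel cannot be silently selected.
FROM --platform=linux/amd64 python:3.11.4

RUN pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu121

# Build-time sanity check: prints the CUDA version the wheel was built against.
# (Seeing an actual GPU at runtime still requires `docker run --gpus all`.)
RUN python -c "import torch; print(torch.version.cuda)"
```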
<python><docker><pytorch><python-3.11>
2024-03-12 23:14:46
1
380
Sishaar Rao
78,150,431
10,305,444
Getting `ValueError: as_list() is not defined on an unknown TensorShape.` when trying to tokenize as part of the model
<p>I am trying to do tokenization as part of my model, as it will reduce my CPU usage, and RAM, on the other hand, it will utilize my GPU more. But I am facing an issue saying <code>ValueError: as_list() is not defined on an unknown TensorShape.</code></p> <p>I have created a <code>Layer</code> called <code>TokenizationLayer</code> which takes care of the tokenization, and defines as:</p> <pre><code>class TokenizationLayer(Layer): def __init__(self, max_length, **kwargs): super(TokenizationLayer, self).__init__(**kwargs) self.max_length = max_length self.tokenizer = Tokenizer() def build(self, input_shape): super(TokenizationLayer, self).build(input_shape) def tokenize_sequences(self, x): # Tokenization function return self.tokenizer.texts_to_sequences([x.numpy()])[0] def call(self, inputs): # Use tf.py_function to apply tokenization element-wise sequences = tf.map_fn(lambda x: tf.py_function(self.tokenize_sequences, [x], tf.int32), inputs, dtype=tf.int32) # Masking step mask = tf.math.logical_not(tf.math.equal(sequences, 0)) return tf.where(mask, sequences, -1) # Using -1 as a mask value def compute_output_shape(self, input_shape): return (input_shape[0], self.max_length) # Use self.max_length instead of trying to access shape </code></pre> <p>But it keeps giving me an error saying <code>as_list() is not defined on an unknown TensorShape.</code></p> <p>Here is the complete code, if you need it:</p> <pre><code>import tensorflow as tf from tensorflow.keras.layers import Layer, Input, Embedding, LSTM, Dense, Concatenate from tensorflow.keras.models import Model from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.preprocessing.sequence import pad_sequences class TokenizationLayer(Layer): def __init__(self, max_length, **kwargs): super(TokenizationLayer, self).__init__(**kwargs) self.max_length = max_length self.tokenizer = Tokenizer() def build(self, input_shape): super(TokenizationLayer, self).build(input_shape) def 
tokenize_sequences(self, x): # Tokenization function return self.tokenizer.texts_to_sequences([x.numpy()])[0] def call(self, inputs): # Use tf.py_function to apply tokenization element-wise sequences = tf.map_fn(lambda x: tf.py_function(self.tokenize_sequences, [x], tf.int32), inputs, dtype=tf.int32) # Masking step mask = tf.math.logical_not(tf.math.equal(sequences, 0)) return tf.where(mask, sequences, -1) # Using -1 as a mask value def compute_output_shape(self, input_shape): return (input_shape[0], self.max_length) # Use self.max_length instead of trying to access shape # Build the model with the custom tokenization layer def build_model(vocab_size, max_length): input1 = Input(shape=(1,), dtype=tf.string) input2 = Input(shape=(1,), dtype=tf.string) # Tokenization layer tokenization_layer = TokenizationLayer(max_length) embedded_seq1 = tokenization_layer(input1) embedded_seq2 = tokenization_layer(input2) # Embedding layer for encoding strings embedding_layer = Embedding(input_dim=vocab_size, output_dim=128, input_length=max_length) # Encode first string lstm_out1 = LSTM(64)(embedding_layer(embedded_seq1)) # Encode second string lstm_out2 = LSTM(64)(embedding_layer(embedded_seq2)) # Concatenate outputs concatenated = Concatenate()([lstm_out1, lstm_out2]) # Dense layer for final output output = Dense(1, activation='relu')(concatenated) # Build model model = Model(inputs=[input1, input2], outputs=output) return model string1 = &quot;hello world&quot; string2 = &quot;foo bar baz&quot; max_length = max(len(string1.split()), len(string2.split())) model = build_model(vocab_size=1000, max_length=max_length) model.summary() labels = tf.random.normal((1, 5)) model.compile(optimizer='adam', loss='mse') model.fit([tf.constant([string1]), tf.constant([string2])], labels, epochs=10, batch_size=1, validation_split=0.2) </code></pre> <p>Here is the full stack-trace:</p> <pre><code>WARNING:tensorflow:From 
/usr/local/lib/python3.10/dist-packages/tensorflow/python/util/deprecation.py:660: calling map_fn_v2 (from tensorflow.python.ops.map_fn) with dtype is deprecated and will be removed in a future version. Instructions for updating: Use fn_output_signature instead --------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-1-23051fe36790&gt; in &lt;cell line: 64&gt;() 62 max_length = max(len(string1.split()), len(string2.split())) 63 ---&gt; 64 model = build_model(vocab_size=1000, max_length=max_length) 65 model.summary() 66 2 frames &lt;ipython-input-1-23051fe36790&gt; in build_model(vocab_size, max_length) 42 43 # Encode first string ---&gt; 44 lstm_out1 = LSTM(64)(embedding_layer(embedded_seq1)) 45 46 # Encode second string /usr/local/lib/python3.10/dist-packages/keras/src/utils/traceback_utils.py in error_handler(*args, **kwargs) 68 # To get the full stack trace, call: 69 # `tf.debugging.disable_traceback_filtering()` ---&gt; 70 raise e.with_traceback(filtered_tb) from None 71 finally: 72 del filtered_tb /usr/local/lib/python3.10/dist-packages/tensorflow/python/framework/tensor_shape.py in as_list(self) 1438 &quot;&quot;&quot; 1439 if self._dims is None: -&gt; 1440 raise ValueError(&quot;as_list() is not defined on an unknown TensorShape.&quot;) 1441 return list(self._dims) 1442 ValueError: as_list() is not defined on an unknown TensorShape. </code></pre>
<python><numpy><tensorflow><keras><tokenize>
2024-03-12 22:55:21
0
4,689
Maifee Ul Asad
78,150,355
3,121,548
`from typing` vs. `from collections.abc` for standard primitive type annotation?
<p>I'm looking at the <a href="https://github.com/python/cpython/blob/3.12/Lib/typing.py" rel="noreferrer">standard library source</a>, and I see that <code>from typing import Sequence</code> is just aliasing <code>collections.abc</code> under the hood.</p> <p>Now originally, there was a deprecation warning/error and a migration from the <code>collections</code> package to <code>collections.abc</code> for some abstract classes. <a href="https://stackoverflow.com/questions/55882715/difference-between-from-collections-import-container-and-from-collections-abc-im">See here</a>. However, now that the abstractions have settled in a new location, is it fine to use either? I see <code>from collections.abc import [etc]</code> in the codebase, and I wonder if it would be more practical to just import from <code>typing</code> when doing type annotations?</p> <p>CPython source code: <code>Sequence = _alias(collections.abc.Sequence, 1)</code></p>
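Since Python 3.9 (PEP 585) the `collections.abc` classes are themselves subscriptable and the `typing` aliases are documented as deprecated, so `collections.abc` is the forward-looking import for annotations. The two spellings resolve to the same class:

```python
import typing
from collections.abc import Sequence
from typing import get_origin

# Usable directly in annotations since 3.9 -- no typing import needed:
def head(xs: Sequence[int]) -> int:
    return xs[0]

# typing.Sequence[int] is a thin wrapper whose origin is the abc class:
assert get_origin(typing.Sequence[int]) is Sequence

# Runtime isinstance checks go through the same ABC either way:
assert isinstance((1, 2, 3), Sequence)
```

"Deprecated" here means discouraged rather than broken: the `typing` aliases still work and have no firm removal date, so existing annotations need not be rewritten wholesale.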
<python><python-typing><standard-library>
2024-03-12 22:32:32
1
1,164
Dave Liu
78,150,317
9,783,587
Jupyter on VScode doesn't work with pyenv+poetry
<p>I'm trying to create a new Jupyter notebook in VScode. This is what I did so far:</p> <p>Step 1: installed VScode Jupyter extension.<br /> Step 2: created a virtual environment using pyenv.<br /> Step 3: installed jupyter using poetry.</p> <p>Maybe not relevant, but I also selected the virtual environment's interpreter using the command palette command <code>Python: Select Interpreter</code>.</p> <p>The issue is that the command <code>Create: New Jupyter Notebook</code> is missing from the command palette (attached below). I also tried the <code>Select Interpreter to Start Jupyter Server</code> command but nothing happens.</p> <p><a href="https://i.sstatic.net/KFMD4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KFMD4.png" alt="enter image description here" /></a></p> <p>Also, If I run <code>jupyter lab</code> in the terminal, create a new notebook and then open the file again in VScode's file explorer, I see a JSON-like file rather than the familiar look of a Jupyter notebook.</p> <p><strong>VScode</strong><br /> Version: 1.85.2 (Universal)<br /> Commit: 8b3775030ed1a69b13e4f4c628c612102e30a681<br /> Date: 2024-01-18T06:40:32.531Z<br /> Electron: 25.9.7<br /> ElectronBuildId: 26354273<br /> Chromium: 114.0.5735.289<br /> Node.js: 18.15.0<br /> V8: 11.4.183.29-electron.0</p> <p><strong>Mac</strong><br /> Apple M1 Pro 14.2.1<br /> OS: Darwin arm64 23.2.0</p> <p>Any ideas?</p>
<python><visual-studio-code><jupyter-notebook>
2024-03-12 22:21:39
0
361
MaterialZ
78,150,295
10,083,382
Find whether a combination of column values exists in a lookup table
<p>Suppose that I have 2 dataframes <code>d1</code> and <code>d2</code> which can be generated using the code below.</p> <pre><code>d1 = pd.DataFrame({'c1':['A', 'B', 'C', 'D', 'E', 'F'], 'c2': ['G', 'H', 'I', 'J', 'K', 'L'], 'val':[10, 20, 30, 40, 50, 60]}) d2 = pd.DataFrame({'c1':['A', 'B', 'C', 'D', 'E', 'F'], 'c2': ['H', 'H', 'I', 'J', 'L', 'K'], 'c1_found' : [1, 1, 1, 1, 1, 1], 'c2_found' : [1, 1, 1, 1, 1, 1]}) </code></pre> <p>I want to create a column <code>c1_c2_found</code> by checking whether each <code>c1</code> and <code>c2</code> combination exists in table <code>d1</code>.</p> <p>I can achieve that using the code below. Is there a more optimized method (vectorized approach) that I can use to solve this problem?</p> <pre><code># Check if each ('c1', 'c2') pair in d2 exists in d1 merged_data = pd.merge(d2, d1, on=['c1', 'c2'], how='inner') d2['c1_c2_found'] = d2.apply(lambda row: 1 if (row['c1'], row['c2']) in zip(merged_data['c1'], merged_data['c2']) else 0, axis=1) </code></pre>
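A fully vectorized version of the same check: a left merge with `indicator=True` tags every `d2` row whose `('c1', 'c2')` pair also occurs in `d1`, with no row-wise `apply`:

```python
import pandas as pd

d1 = pd.DataFrame({"c1": list("ABCDEF"), "c2": list("GHIJKL"),
                   "val": [10, 20, 30, 40, 50, 60]})
d2 = pd.DataFrame({"c1": list("ABCDEF"), "c2": list("HHIJLK")})

# _merge == "both" exactly when the pair exists in d1; drop_duplicates guards
# against d1 containing a repeated pair, which would duplicate d2 rows.
flags = d2.merge(d1[["c1", "c2"]].drop_duplicates(),
                 on=["c1", "c2"], how="left", indicator=True)
d2["c1_c2_found"] = (flags["_merge"] == "both").astype(int)
```

A left merge preserves `d2`'s row order and count, so `flags` lines up index-for-index with `d2`.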
<python><python-3.x><pandas><join><lookup-tables>
2024-03-12 22:15:08
1
394
Lopez
78,150,282
7,896,849
Choices from enum in pydantic
<p>I have a pydantic (v2.6.1) class with an attribute, and I want to limit the possible choices a user can make. I can use an enum for that. But the catch is, I have multiple classes which need to enforce different choices. See the example below:</p> <pre><code>from enum import Enum from pydantic import BaseModel, ValidationError, field_validator class FruitEnum(str, Enum): pear = 'pear' banana = 'banana' grapes = 'grapes' class CookingModel_1(BaseModel): fruit: FruitEnum @field_validator('fruit') def _validate_fruit(cls, value): if value not in (FruitEnum.pear, FruitEnum.grapes): raise ValueError(f'{value} is not an allowed fruit!') class CookingModel_2(BaseModel): fruit: FruitEnum @field_validator('fruit') def _validate_fruit(cls, value): if value not in (FruitEnum.banana, FruitEnum.grapes): raise ValueError(f'{value} is not an allowed fruit!') </code></pre> <p>For my <code>CookingModel_1</code> and <code>CookingModel_2</code> classes the choices of <code>fruit</code> are different.</p> <p>I can achieve the task as shown above. 
But if my list of Cooking models is large (let's say 15 or so), I'll have to define a validator in each class, and that's a lot of redundant code doing (almost) the same thing.</p> <p>Building on that, I can define a nested function to make a parameterized field validator as shown below and reduce the redundant code.</p> <pre><code>def get_fruit_validator(allowed_fruits): def _validator(fruit): if fruit not in allowed_fruits: raise ValueError(f'{fruit} is not an allowed fruit!') return fruit return _validator class FruitEnum(str, Enum): pear = 'pear' banana = 'banana' grapes = 'grapes' class CookingModel_1(BaseModel): fruit: FruitEnum _fruit_validator = field_validator('fruit')(get_fruit_validator({FruitEnum.pear, FruitEnum.grapes})) class CookingModel_2(BaseModel): fruit: FruitEnum _fruit_validator = field_validator('fruit')(get_fruit_validator({FruitEnum.banana, FruitEnum.grapes})) </code></pre> <p>This is good enough for my use case.</p> <p>But it can lead to redundant code if I had multiple fields which expect other enums and long lists. I'm looking for a more &quot;elegant&quot; solution. Maybe a new ConstraintType that can help restrict the choices? Do we have any alternative solution for this that doesn't involve writing custom validators?</p>
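One validator-free option: when the restriction is purely "this subset of values", `typing.Literal` expresses it in the annotation itself and pydantic enforces it with its own generated error message. A sketch; it trades away the shared `FruitEnum` annotation in exchange for zero custom code:

```python
from typing import Literal

from pydantic import BaseModel, ValidationError

class CookingModel1(BaseModel):
    fruit: Literal["pear", "grapes"]

class CookingModel2(BaseModel):
    fruit: Literal["banana", "grapes"]

assert CookingModel1(fruit="pear").fruit == "pear"

try:
    CookingModel1(fruit="banana")  # not among this model's choices
except ValidationError as exc:
    print("rejected with", exc.error_count(), "error(s)")
```

If the enum must stay the single source of truth, `Literal[FruitEnum.pear, FruitEnum.grapes]` should also work with pydantic, keeping the per-model choices tied to `FruitEnum` members.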
<python><pydantic>
2024-03-12 22:08:07
1
11,971
Sreeram TP
78,150,149
110,129
Monkeypatch 'print' to 'yield' for streaming output
<p>I am using a long running function from a library that's primarily called from a CLI, and the status is shown by print statements. Something like this:</p> <pre class="lang-py prettyprint-override"><code>def long_running_function(): ... LOGGER.info(&quot;first thing done&quot;) ... LOGGER.info(&quot;second thing done&quot;) ... </code></pre> <p>But now I want to trigger that same function from a web page and stream those print statements to the browser.</p> <p>In FastAPI it looks like <a href="https://github.com/sysid/sse-starlette" rel="nofollow noreferrer">I should use Server-Sent Events</a>, but that requires a function that yields messages. Like this:</p> <pre class="lang-py prettyprint-override"><code>async def long_running_stream(): while True: yield &quot;data: message\n\n&quot; </code></pre> <p>Is there a way to monkeypatch those LOGGER statements into 'yield' statements? Will that even work if I am calling long_running_function() from inside long_running_stream() ?</p> <p>Or is there a better way to do this?</p>
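Rather than monkeypatching, the logging module already ships a handler built for exactly this handoff: attach a `QueueHandler` around the call, then drain the queue as SSE messages. A single-threaded sketch (a real FastAPI endpoint would run `long_running_function` in a worker thread and await the queue so messages stream while the job is still running):

```python
import logging
import queue
from logging.handlers import QueueHandler

LOGGER = logging.getLogger("long_job")
LOGGER.setLevel(logging.INFO)

def long_running_function():
    # Stand-in for the library function; it stays completely untouched.
    LOGGER.info("first thing done")
    LOGGER.info("second thing done")

def sse_events():
    # Every LOGGER.info issued while the handler is attached lands on q.
    q = queue.Queue()
    handler = QueueHandler(q)
    LOGGER.addHandler(handler)
    try:
        long_running_function()
    finally:
        LOGGER.removeHandler(handler)
    while not q.empty():
        yield f"data: {q.get_nowait().getMessage()}\n\n"
```

Because the handler is scoped to the logger rather than to stdout, this also works when the library logs instead of printing; for genuine `print` calls, `contextlib.redirect_stdout` to a file-like object feeding the same queue is the analogous trick.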
<python><fastapi><server-sent-events><starlette>
2024-03-12 21:32:14
1
1,750
Jono
78,149,987
12,708,740
Append lists of elements as rows in df based on groupby
<p>I have two pandas dataframes called <code>df</code> and <code>legend</code>:</p> <pre><code>df = pd.DataFrame({'object': ['dog', 'dog', 'cat', 'mouse'], 'personID': [1, 1, 2, 3], 'word': ['paw', 'head', 'whisker', 'tail'], 'included': [1, 1, 1, 1]}) legend = pd.DataFrame({'object': ['dog', 'cat', 'mouse'], 'word_lists': [ ['paw', 'head', 'nose', 'body'], ['whisker', 'ears', 'eyes'], ['ears', 'tail', 'fur']]}) </code></pre> <p>I am trying to append the words in <code>legend['word_lists']</code> based on the <code>object</code>. Specifically, I am trying to append these word_lists based on <code>df.groupby(['object', 'personID'])</code>, so that each group of object and personID gets these new words.</p> <p>I am also keeping track of which words were originally included in the column <code>df['included']</code>. All new words should receive a 0. Here's my desired output:</p> <pre><code>result_df = pd.DataFrame({ 'object': ['dog', 'dog', 'dog', 'dog', 'dog','dog','cat', 'cat','cat','cat','mouse', 'mouse', 'mouse', 'mouse'], 'personID': [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3], 'word': ['paw', 'head', 'paw', 'head', 'nose', 'body', 'whisker', 'whisker', 'ears', 'eyes', 'tail', 'ears', 'tail', 'fur'], 'included': [1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0]}) </code></pre>
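The loop-free version of this append: `explode` turns `legend` into one row per word, a merge fans those rows out to every `(object, personID)` pair present in `df`, and `assign(included=0)` marks them as new before concatenating:

```python
import pandas as pd

df = pd.DataFrame({"object": ["dog", "dog", "cat", "mouse"],
                   "personID": [1, 1, 2, 3],
                   "word": ["paw", "head", "whisker", "tail"],
                   "included": [1, 1, 1, 1]})
legend = pd.DataFrame({"object": ["dog", "cat", "mouse"],
                       "word_lists": [["paw", "head", "nose", "body"],
                                      ["whisker", "ears", "eyes"],
                                      ["ears", "tail", "fur"]]})

# One row per legend word, joined onto each (object, personID) pair that
# actually occurs in df; all of these appended rows get included=0.
pairs = df[["object", "personID"]].drop_duplicates()
extra = (legend.explode("word_lists")
               .rename(columns={"word_lists": "word"})
               .merge(pairs, on="object")
               .assign(included=0))

result = pd.concat([df, extra], ignore_index=True)
```

Row order differs from `result_df` (originals first, appended rows after); a `sort_values` on `['object', 'personID']` with `kind='stable'` can restore a grouped order if it matters.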
<python><pandas><dataframe>
2024-03-12 20:53:32
1
675
psychcoder
78,149,967
3,826,115
Use xr.apply_ufunc on a function that can only take scalars, with an input of multi-dimensional DataArrays
<p>Let's say I have some function <code>f</code> that can only take in scalar values:</p> <pre><code>def f(a, b): # Ensure inputs are scalars assert(np.isscalar(a) and np.isscalar(b)) result1 = a + b result2 = a * b return result1, result2 </code></pre> <p>I also have two 2D xarray DataArrays:</p> <pre><code>da1 = xr.DataArray(np.random.randn(3, 3), dims=('x', 'y'), name='a') da2 = xr.DataArray(np.random.randn(3, 3), dims=('x', 'y'), name='b') </code></pre> <p>I want to apply f() on every index of da1 and da2 together. Essentially, I want to do this:</p> <pre><code>result1_da = xr.zeros_like(da1) result2_da = xr.zeros_like(da2) for xi in da1['x']: for yi in da2['y']: result1, result2 = f(da1.sel(x = xi, y = yi).item(), da2.sel(x = xi, y = yi).item()) result1_da.loc[dict(x=xi, y=yi)] = result1 result2_da.loc[dict(x=xi, y=yi)] = result2 </code></pre> <p>But without looping. I think I should be able to do this using <code>xr.apply_ufunc</code>. But I can't quite get it to work. If I do this:</p> <pre><code>xr.apply_ufunc(f, da1, da2) </code></pre> <p>I get an assertion error (scalars are not being passed in):</p> <pre><code>assert(np.isscalar(a) and np.isscalar(b) ) AssertionError </code></pre> <p>I've also tried messing around with <code>input_core_dims</code> and the other <code>xr.apply_ufunc</code> parameters, but I can't get anything to work.</p>
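The missing pieces are `vectorize=True`, which wraps `f` in `np.vectorize` so it really is called once per element with scalars, and `output_core_dims` with one empty list per return value, so xarray knows `f` yields two scalar outputs:

```python
import numpy as np
import xarray as xr

def f(a, b):
    assert np.isscalar(a) and np.isscalar(b)
    return a + b, a * b

da1 = xr.DataArray(np.random.randn(3, 3), dims=("x", "y"), name="a")
da2 = xr.DataArray(np.random.randn(3, 3), dims=("x", "y"), name="b")

# vectorize=True: call f element-by-element; output_core_dims=[[], []]:
# two outputs, each with no extra (core) dimensions.
result1_da, result2_da = xr.apply_ufunc(
    f, da1, da2, vectorize=True, output_core_dims=[[], []]
)
```

Note `vectorize=True` is a convenience loop, not a speedup: for large arrays the real win is rewriting `f` in array operations, or combining this with `dask="parallelized"`.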
<python><python-xarray>
2024-03-12 20:48:48
1
1,533
hm8
78,149,918
6,238,676
Split record into two records with a calculation based on condition
<p>As title says, Let's say I have following dataframe</p> <pre><code>import pandas as pd df = pd.DataFrame({'UID':['A','B','C','D'],'FlagVal':[0,100,50,90],'TrueVal':[1000,1000,1000,1000]}) ndf = df.loc[~df['FlagVal'].between(0,100,inclusive='neither')] mdf = df.loc[df['FlagVal'].between(0,100,inclusive='neither')] </code></pre> <p>I want to split records into two where FlagVal is between 0 and 100. i.e. mdf , and perform a few calculations -</p> <pre><code>def split_record(row): gtd = row.copy() ungtd = row.copy() gtd['UID'] = row['UID'] + '_T1' gtd['Flag'] = 'Y' ungtd['UID'] = row['UID'] + '_T2' ungtd['Flag'] = '' gtd['TrueVal'] = float(gtd['TrueVal'])*(float(gtd['FlagVal'])/100.0) ungtd['TrueVal'] = float(ungtd['TrueVal'])*(1 - (float(ungtd['FlagVal'])/100.0)) gtd['FlagVal'] = 100 ungtd['FlagVal'] = 0 result_data = pd.DataFrame([gtd, ungtd]) return result_data split_df = pd.concat([split_record(row) for _, row in mdf.iterrows()], ignore_index=True) </code></pre> <p>then merge with 'ndf'. This works just fine for ~1000 records or so but it takes a toll when record count is in millions. I tried using apply function with axis=1, but not sure how to merge it with data again.</p> <p>Can you point me to a correct function to optimize it?</p>
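The per-row `split_record` can be replaced by two whole-frame `assign` calls, one per half, which is what makes it scale to millions of rows:

```python
import pandas as pd

df = pd.DataFrame({"UID": list("ABCD"), "FlagVal": [0, 100, 50, 90],
                   "TrueVal": [1000, 1000, 1000, 1000]})

mask = df["FlagVal"].between(0, 100, inclusive="neither")
ndf, mdf = df[~mask], df[mask]

frac = mdf["FlagVal"] / 100.0

# Build both halves as vectorized column operations on mdf as a whole,
# instead of copying and mutating one row at a time.
gtd = mdf.assign(UID=mdf["UID"] + "_T1", Flag="Y",
                 TrueVal=mdf["TrueVal"] * frac, FlagVal=100)
ungtd = mdf.assign(UID=mdf["UID"] + "_T2", Flag="",
                   TrueVal=mdf["TrueVal"] * (1 - frac), FlagVal=0)

split_df = pd.concat([ndf, gtd, ungtd], ignore_index=True)
```

As in the original, the untouched `ndf` rows carry no `Flag` value (they come out as NaN after the concat); fill that column afterwards if downstream code needs it.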
<python><pandas>
2024-03-12 20:38:16
2
627
Prish
78,149,798
1,278,288
pytest, xdist, and sharing generated file dependencies
<p>I have multiple tests that require an expensive-to-generate file. I'd like the file to be re-generated on every test run, but no more than once. To complicate the matter, both these tests as well as the file depend on an input parameter.</p> <pre><code>def expensive(param) -&gt; Path: # Generate file and return its path. @mark.parametrize('input', TEST_DATA) class TestClass: def test_one(self, input) -&gt; None: check_expensive1(expensive(input)) def test_two(self, input) -&gt; None: check_expensive2(expensive(input)) </code></pre> <p>How can I make sure that this file is not regenerated across workers, even when running these tests in parallel? For context, I'm porting test infrastructure that uses Makefiles to pytest.</p> <p>I'd be OK with using file-based locks to synchronize, but I'm sure someone else has had this problem and would rather use an existing solution.</p> <p>Using <code>functools.cache</code> works great for a single thread. Fixtures with <code>scope=&quot;module&quot;</code> don't work at all, because the parameter <code>input</code> is at function scope.</p>
<python><pytest><pytest-xdist>
2024-03-12 20:09:10
1
1,830
nishantjr
78,149,683
48,956
How to save what jupyter's display() shows?
<p>Jupyter's display function is great for displaying a variety of objects (e.g. a dataframe). It seems that in many (most?) cases a bitmap, SVG, or HTML asset is created internally. Is there a way to generically save to file what <code>display()</code> shows?</p> <p>I only see examples to do <em>similar</em> via 3rd party APIs (e.g. for dataframes specifically, how to use pandas APIs and <a href="https://ljmartin.github.io/sideprojects/df_to_svg.html" rel="nofollow noreferrer">third party APIs</a> to save a png or svg which doesn't have the same rendering as the output shown by Jupyter):</p> <pre><code>for x, name in things: display(makey(x), saveAs=f&quot;{name}.svg&quot;) </code></pre> <p>or</p> <pre><code>for x, name in things: display(makey(x)) saveAsJupyterShowsIt(makey(x), saveAs=f&quot;{name}.svg&quot;) </code></pre> <p>This would be more convenient and more consistent in a WYSIWYG sense than dropping down to lower-level APIs.</p>
<python><jupyter-notebook><jupyter>
2024-03-12 19:42:36
1
15,918
user48956
78,149,673
15,412,256
Polars Customized Expression Constructor Function
<p>I want to change the first <code>n</code> rows with certain values in Polars. The normal use case is solved in the <a href="https://stackoverflow.com/questions/78143771/polars-replacing-the-first-n-rows-with-certain-values">previous post</a>.</p> <p>However, I want to achieve this by constructing a customized function:</p> <pre class="lang-py prettyprint-override"><code>from polars.type_aliases import IntoExpr, IntoExprColumn import polars as pl import numpy as np df = pl.DataFrame({&quot;test&quot;: np.arange(1, 11)}) def _func(x: IntoExpr) -&gt; pl.Expr: return pl.when((x+1) &lt; 5).then(None).otherwise(x+1) df.with_columns( _func(pl.col(&quot;test&quot;)).alias(&quot;test+1&quot;) ) </code></pre> <ol> <li>How do I create the index column using a customized function (<strong>without passing the Polars DataFrame in as an input parameter</strong>)?</li> <li>Is there any way to access the Polars DataFrame that a Polars expression is used on, without passing the DataFrame in as an input parameter?</li> </ol>
<python><dataframe><numpy><python-polars>
2024-03-12 19:40:21
1
649
Kevin Li
78,149,633
3,599,856
Create a Sparkline Bar Chart Within a Pandas Dataframe Column
<p>I am looking to replicate the image below in a pandas dataframe within a Jupyter notebook. The title of orders in 2020 is not needed. I discovered this page <a href="https://github.com/crdietrich/sparklines/blob/master/Pandas%20Sparklines%20Demo.ipynb" rel="nofollow noreferrer">https://github.com/crdietrich/sparklines/blob/master/Pandas%20Sparklines%20Demo.ipynb</a> which seems to add sparklines within a dataframe, but not a bar chart. Any insight would be much appreciated! Code below provides sparklines in a dataframe column, but as shown in the image I'd like stacked bars with the below example data.</p> <pre><code># example data data = [[20, 10], [50, 15], [6, 14]] # Create the pandas DataFrame df = pd.DataFrame(data, columns=['Orchid', 'Rose']) # print dataframe. print(df) </code></pre> <p>The output I am looking for should look similar to the below image.</p> <p><a href="https://i.sstatic.net/hlYzR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hlYzR.png" alt="Sparkline Bar Chart Example" /></a></p> <p>Example code that shows sparklines in a column, but not stacked bars.</p> <pre><code>import numpy as np import pandas as pd from scipy import stats import matplotlib.pyplot as plt %matplotlib inline import sparklines # Create some data density_func = 78 mean, var, skew, kurt = stats.chi.stats(density_func, moments='mvsk') x_chi = np.linspace(stats.chi.ppf(0.01, density_func), stats.chi.ppf(0.99, density_func), 100) y_chi = stats.chi.pdf(x_chi, density_func) x_expon = np.linspace(stats.expon.ppf(0.01), stats.expon.ppf(0.99), 100) y_expon = stats.expon.pdf(x_expon) a_gamma = 1.99 x_gamma = np.linspace(stats.gamma.ppf(0.01, a_gamma), stats.gamma.ppf(0.99, a_gamma), 100) y_gamma = stats.gamma.pdf(x_gamma, a_gamma) n = 100 np.random.seed(0) # keep generated data the same for git commit data = [np.random.rand(n), np.random.randn(n), np.random.beta(2, 1, size=n), np.random.binomial(3.4, 0.22, size=n), np.random.exponential(size=n), 
np.random.geometric(0.5, size=n), np.random.laplace(size=n), y_chi, y_expon, y_gamma] function = ['rand', 'randn', 'beta', 'binomial', 'exponential', 'geometric', 'laplace', 'chi', 'expon', 'gamma'] df = pd.DataFrame(data) df['function'] = function df # Define range of data to make sparklines (.ix was removed from pandas; use .iloc) a = df.iloc[:, 0:100] # Output to new DataFrame of Sparklines df_out = pd.DataFrame() df_out['sparkline'] = sparklines.create(data=a) sparklines.show(df_out[['sparkline']]) # Insert Sparklines into source DataFrame df['sparkline'] = sparklines.create(data=a) sparklines.show(df[['function', 'sparkline']]) # Detailed Formatting df_out = pd.DataFrame() df_out['sparkline'] = sparklines.create(data=a, color='#1b470a', fill_color='#99a894', fill_alpha=0.2, point_color='blue', point_fill='none', point_marker='*', point_size=3, figsize=(6, 0.25)) sparklines.show(df_out[['sparkline']]) # Example Data and Sparklines Layout df_copy = df[['function', 'sparkline']].copy() df_copy['value'] = df.iloc[:, 99] df_copy['change'] = df.iloc[:,98] - df.iloc[:,99] df_copy['change_%'] = df_copy.change / df.iloc[:,99] sparklines.show(df_copy) </code></pre>
<python><pandas><matplotlib><plotly><sparklines>
2024-03-12 19:30:40
1
373
Joe Rivera
78,149,556
3,079,439
Unknown Document Type Error while using LLamaIndex with Azure OpenAI
<p>I'm trying to reproduce the code from the documentation: <a href="https://docs.llamaindex.ai/en/stable/examples/customization/llms/AzureOpenAI.html" rel="nofollow noreferrer">https://docs.llamaindex.ai/en/stable/examples/customization/llms/AzureOpenAI.html</a> and receive the following error after <code>index = VectorStoreIndex.from_documents(documents)</code>:</p> <pre><code>raise ValueError(f&quot;Unknown document type: {type(document)}&quot;) ValueError: Unknown document type: &lt;class 'llama_index.legacy.schema.Document'&gt; </code></pre> <p>Because these generative AI libraries are constantly updated, I had to switch the import of <code>SimpleDirectoryReader</code> to <code>from llama_index.legacy.readers.file.base import SimpleDirectoryReader</code>. All the rest is actually the same as the tutorial (using <code>llama_index==0.10.18</code> and Python <code>3.9.16</code>). I have already spent several hours on this and don't have ideas on how to proceed, so if somebody can assist with that, it would be super helpful :)</p> <p>Many thanks in advance.</p>
<python><azure><azure-openai><llama><llama-index>
2024-03-12 19:15:49
1
3,158
Keithx
78,149,546
6,145,729
MultiIndex DataFrame to a Standard index DF
<p>How do I convert a MultiIndex DataFrame to a standard-index DataFrame?</p> <pre><code>import pandas as pd df1 = pd.DataFrame({'old_code': ['00000001', '00000002', '00000003', '00000004'], 'Desc': ['99999991', '99999992 or 99999922', 'Use 99999993 or 99999933', '99999994']}, ) df1.set_index('old_code', inplace=True) df2=df1[&quot;Desc&quot;].str.extractall(r&quot;(?P&lt;new_code&gt;\d{7,9})&quot;) print(df2.head(10)) </code></pre> <p>My output looks like this:</p> <pre><code>old_code match new_code 00000001 0 99999991 00000002 0 99999992 1 99999922 00000003 0 99999993 1 99999933 00000004 0 99999994 </code></pre> <p>I'm trying to get it in a format like this:</p> <pre><code>old_code new_code 00000001 99999991 00000002 99999992 00000002 99999922 00000003 99999993 00000003 99999933 00000004 99999994 </code></pre>
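A hedged sketch of one direct route: drop the `match` level that `extractall` adds, then turn `old_code` back into a regular column:

```python
import pandas as pd

df1 = pd.DataFrame({'old_code': ['00000001', '00000002', '00000003', '00000004'],
                    'Desc': ['99999991', '99999992 or 99999922',
                             'Use 99999993 or 99999933', '99999994']})
df1.set_index('old_code', inplace=True)
df2 = df1["Desc"].str.extractall(r"(?P<new_code>\d{7,9})")

# extractall returns a MultiIndex of (old_code, match); drop the match
# level, then reset the index so old_code becomes an ordinary column.
flat = df2.droplevel('match').reset_index()
```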
<python><python-3.x><pandas>
2024-03-12 19:13:20
1
575
Lee Murray
78,149,496
1,744,491
Deploy AWS stack using pulumi error: no stack named 'dev' found
<p>I'm trying to deploy a stack using pulumi in my AWS account. My deploy.yml looks like this:</p> <pre><code>name: Pushes Glue Scripts to S3 # Controls when the workflow will run on: # Triggers the workflow on push or pull request events but only for the &quot;main&quot; branch push: branches: [ &quot;main&quot; ] pull_request: branches: [ &quot;main&quot; ] # Allows you to run this workflow manually from the Actions tab workflow_dispatch: jobs: deploy: runs-on: ubuntu-latest steps: - name: Deploy jobs uses: pulumi/actions@v5 id: pulumi env: PULUMI_CONFIG_PASSPHRASE: ${{ secrets.PULUMI_CONFIG_PASSPHRASE }} with: command: up cloud-url: s3://my-bucket/pulumi/ stack-name: dev </code></pre> <p>In my repository I have my stack file named Pulumi.dev.yaml. The file itself has just an encryption salt entry. It's important to say I configured the pulumi backend to point at my S3 bucket using the command: <code>pulumi login s3://my-bucket/pulumi</code>.</p> <p>However, when I run my deploy code, I get the following error:</p> <pre><code> StackNotFoundError: code: -2 stdout: stderr: Command failed with exit code 255: pulumi stack select --stack dev --non-interactive error: no stack named 'dev' found err?: Error: Command failed with exit code 255: pulumi stack select --stack dev --non-interactive error: no stack named 'dev' found </code></pre> <p>I believe the container that runs <code>pulumi up</code> isn't seeing my stack. So, how can I fix this? Is there any step to check when running my GitHub Actions with Pulumi?</p>
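One hedged guess worth checking: `Pulumi.dev.yaml` in the repo only holds stack *config* — the stack itself must exist in the S3 backend's state. Recent versions of `pulumi/actions` expose an `upsert` input (worth verifying against the action's README for your version) that creates the stack when `stack select` would otherwise fail:

```yaml
      - name: Deploy jobs
        uses: pulumi/actions@v5
        id: pulumi
        env:
          PULUMI_CONFIG_PASSPHRASE: ${{ secrets.PULUMI_CONFIG_PASSPHRASE }}
        with:
          command: up
          cloud-url: s3://my-bucket/pulumi/
          stack-name: dev
          upsert: true   # create 'dev' in the backend if it does not exist yet
```

The alternative is a one-off `pulumi stack init dev` run against the same `s3://` backend before the first deploy.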
<python><amazon-web-services><github-actions><pulumi>
2024-03-12 19:03:55
1
670
Guilherme Noronha
78,149,480
4,994,787
Why does Lark split names into separate characters?
<p>I'm trying to parse a simple text like this:</p> <pre><code>test abc </code></pre> <p>Lark grammar is here:</p> <pre><code>start: test test: &quot;test&quot; _WSI name _NL name: (LETTER | DIGIT | &quot;_&quot;)+ %import common.WS_INLINE -&gt; _WSI %import common.NEWLINE -&gt; _NL %import common.LETTER %import common.DIGIT </code></pre> <p>Now if I print and pretty_print it, 'name' is split into separate tokens:</p> <pre><code>Tree(Token('RULE', 'start'), [Tree(Token('RULE', 'test'), [Tree(Token('RULE', 'name'), [Token('LETTER', 'a'), Token('LETTER', 'b'), Token('LETTER', 'c')])])]) start test name a b c </code></pre> <p>Why? I want to have that name as a string, not separate characters...</p>
<python><lark>
2024-03-12 19:00:05
1
418
Grrruk
78,149,421
1,317,018
Unfolding tensor containing image into patches
<p>I have a batch of <code>4</code> single-channel images of size <code>h x w = 180 x 320</code>. I want to unfold them into a series of <code>p</code> smaller patches of shape <code>h_p x w_p</code>, yielding a tensor of shape <code>4 x p x h_p x w_p</code>. If <code>h</code> is not divisible by <code>h_p</code>, or <code>w</code> is not divisible by <code>w_p</code>, the frames will be 0-padded. I tried the following to achieve this:</p> <pre><code>import torch tensor = torch.randn(4, 180, 320) patch_size = (64, 64) #h_p = w_p = 64 unfold = torch.nn.Unfold(kernel_size=patch_size, stride=patch_size, padding=0) unfolded = unfold(tensor) print(unfolded.shape) </code></pre> <p>It prints:</p> <pre><code>torch.Size([16384, 10]) </code></pre> <p>What am I missing here?</p> <p><strong>PS:</strong></p> <p>I guess I have found the solution myself, which I have posted below. I am yet to evaluate it fully, but let me know if you find it wrong or poor in any sense, maybe performance-wise.</p>
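For what it's worth, a hedged reading of the output: `(16384, 10)` is `Unfold`'s `(C*kh*kw, L)` layout — the 3-D input was treated as one 4-channel image (`4*64*64 = 16384`), and with `padding=0` the leftover rows of the 180-pixel height were silently dropped (`L = 2*5 = 10`). A sketch of an alternative that pads first and reshapes, with no `Unfold` at all:

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 180, 320)
ph, pw = 64, 64

# Zero-pad H and W up to the next multiple of the patch size (180 -> 192).
pad_h = (-x.shape[1]) % ph
pad_w = (-x.shape[2]) % pw
xp = F.pad(x, (0, pad_w, 0, pad_h))

b, h, w = xp.shape
patches = (xp.reshape(b, h // ph, ph, w // pw, pw)
             .permute(0, 1, 3, 2, 4)
             .reshape(b, -1, ph, pw))
# patches.shape == (4, 15, 64, 64): 3 x 5 patches per image
```

The reshape/permute is a pure view manipulation until the final `reshape`, so it should be cheap compared to the im2col buffer `Unfold` builds.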
<python><python-3.x><pytorch>
2024-03-12 18:49:39
1
25,281
Mahesha999
78,149,331
381,179
Pylance (VSCode) not displaying docstrings from my Rust (PyO3, Maturin) extension
<p>In Rust</p> <pre class="lang-rust prettyprint-override"><code>use pyo3::prelude::*; #[pyfunction] /// Returns the answer to life, the universe, and all the rest pub fn get_answer() -&gt; usize { 42 } #[pymodule] fn answers(_py: Python, m: &amp;PyModule) -&gt; PyResult&lt;()&gt; { m.add_function(wrap_pyfunction!(get_answer, m)?)?; Ok(()) } </code></pre> <p>Built with <code>maturin develop</code> in my current python environment, but also checked doing</p> <p><code>maturin build &amp;&amp; pip install [wherever_that_wheel_went]</code></p> <p>In Python</p> <pre class="lang-py prettyprint-override"><code>import answers my_answer = answers.get_answer() </code></pre> <p>Now if you go to that python code in VSCode and hover over <code>get_answer</code>, the floating tip will <em>not</em> show the docstring. If I go to <code>ipython</code> and explicitly ask for <code>__doc__</code>, it's there, though.</p> <p>What am I missing?</p>
<python><visual-studio-code><rust><pylance><pyo3>
2024-03-12 18:31:56
1
9,723
cadolphs
78,149,197
1,350,796
Visual editor for PIL?
<p>Is there a way to visually edit images and generate Pillow code that will correspond to those edits, ala recording a macro? I'm interested in creating Pillow code that will do different image transformation and compositions but it is a pain to develop everything purely in code - the feedback loop of running the script to see if things are in the right place pixel-wise is really slow. Looking for a better tool.</p>
<python><image><animation><image-processing><python-imaging-library>
2024-03-12 18:04:18
0
7,040
Andrew
78,149,126
14,121,088
Wrong dates when loading from s3 to Redshift
<p>I have an automatic process that is loading data from a s3 bucket in parquet format to a table in Redshift. And it was working fine</p> <p>But now, since 2024, the dates loaded in my redshift table look something like this: '2377-03-22 02:49:51:703' and before, when it was working fine it looked for example like this: '2023-01-01 00:00:00.000'</p> <p>I checked the data from my parquet file and the dates are fine, in the format 'YYYY-MM-DD'</p> <p>The code I'm using to load into redshift is a typicall copy_from_files:</p> <pre><code>wr.redshift.copy_from_files( path = path, con = connection, use_threads = True, table = table, schema = schema, mode = 'upsert', primary_keys = pks, iam_role = ROLE_REDSHIFT, ) </code></pre> <p>The type in the ddl from the redshift table is TIMESTAMP WITHOUT TIME ZONE ENCODE az64</p>
<python><sql><amazon-web-services><amazon-s3><amazon-redshift>
2024-03-12 17:51:41
0
355
tonga
78,149,104
4,933,822
scipy butterworth filter has no delay? What's the trick?
<p><strong>On the following graph:</strong></p> <ul> <li><strong>Orange</strong>: original signal</li> <li><strong>Blue</strong>: butterworth filter from scipy applied.</li> <li><strong>Grey</strong>: a custom implementation of butterworth filter applied.</li> </ul> <p><a href="https://i.sstatic.net/63dXX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/63dXX.png" alt="enter image description here" /></a> <strong>On the following graph:</strong></p> <ul> <li><strong>Orange</strong>: same original signal</li> <li><strong>Blue</strong>: another custom implementation of butterworth filter applied.</li> </ul> <p><a href="https://i.sstatic.net/a00V1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/a00V1.png" alt="enter image description here" /></a></p> <p><strong>Question:</strong></p> <p>Both custom implementations have delay while scipy's does not. What's the trick?</p> <p>Note: The custom implementations don't give the same result because they don't have the same coefficients.</p>
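A hedged guess at the trick (the toy signal below is invented for illustration): scipy results like this usually come from `filtfilt`/`sosfiltfilt`, which run the filter forward and then *backward*, so the phase delay of the two passes cancels. A single-pass implementation — `sosfilt`, `lfilter`, or a hand-rolled loop — always has group delay:

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

fs = 100.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 1.0 * t)  # 1 Hz tone, well inside the passband

sos = butter(4, 5.0, btype='low', fs=fs, output='sos')
causal = sosfilt(sos, x)          # one pass: delayed, like the custom filters
zero_phase = sosfiltfilt(sos, x)  # forward + backward pass: no net delay
```

The price is that the zero-phase version is non-causal — it needs the whole signal — so it cannot be reproduced sample-by-sample in a real-time loop.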
<python><scipy><butterworth>
2024-03-12 17:46:10
1
890
Julien
78,149,019
4,902,934
How to parse an unpickleable object from startup event to a function used in multiprocessing pool?
<p>Here is my code:</p> <pre><code>from multiprocessing import Pool from functools import partial import time from fastapi import FastAPI clientObject = package.client() # object not pickleable app = FastAPI() def wrapper(func, retries, data_point): # wrapper function to add a retry mechanism retry = 0 while retry&lt;retries: try: result = func(data_point) except Exception as err: result = err retry += 1 time.sleep(5) else: break return result def get_response(data_point): # function uses clientObject to get data from an Azure endpoint data_point = some_other_processes(data_point) ans = clientObject.process(data_point) return ans def main(raw_data): # main function where I use multiprocessing pool list_data_point = preprocess(raw_data) with Pool() as pool: wrapped_workload = partial(wrapper, get_response, 3) results = pool.map(wrapped_workload, list_data_point) pool.close() pool.join() return results @app.post(&quot;/get_answer&quot;) def get_answer(raw_data): processed_data = main(raw_data) return processed_data </code></pre> <p>The code above works fine if I declare <code>clientObject</code> from the beginning (as a global variable). But if I store it as an object in <code>app.state</code> like this:</p> <pre><code>@app.on_event(&quot;startup&quot;) def start_connection(): app.state.clientObject = package.client() </code></pre> <p>and access it inside the <code>get_response</code> function like this:</p> <pre><code>def get_response(data_point): data_point = some_other_processes(data_point) ans = app.state.clientObject.process(data_point) return ans </code></pre> <p>it throws an error: '<strong>State</strong>' object has no attribute <code>clientObject</code>. 
However, the <code>app.state.clientObject</code> is still available inside the <code>main()</code> function.</p> <p>Also, because the <code>clientObject</code> is not pickleable, I cannot pass it as an argument to a <code>get_response(data_point, clientObject)</code> function.</p> <p>Is there any way that I could initialize the <code>clientObject</code> on startup, store it in a variable and access it from a function used in a multiprocessing pool? (without declaring it as global)</p> <p><strong>Edit</strong>: <em>This is my solution, following the suggestion of Frank Yellin below</em>:</p> <pre><code>def initialize_workers(): global clientObject clientObject = package.client() def get_response(data_point): global clientObject data_point = some_other_processes(data_point) ans = clientObject.process(data_point) return ans def main(raw_data): list_data_point = preprocess(raw_data) with Pool(initializer=initialize_workers) as pool: wrapped_workload = partial(wrapper, get_response, 3) results = pool.map(wrapped_workload, list_data_point) pool.close() pool.join() return results </code></pre>
<python><multiprocessing><fastapi><starlette>
2024-03-12 17:30:12
1
1,030
HienPham
78,148,971
4,133,524
Cannot import "deprecated" class from warnings module
<p>I am using code similar to</p> <pre><code>from warnings import deprecated @deprecated def test(): print(&quot;test&quot;) test() </code></pre> <p>I am getting an error (Python 3.11) with &quot;cannot load deprecated from warnings&quot; and I was wondering if anybody knows why that is.</p> <p>I noticed that <a href="https://github.com/python/cpython/blob/5d72b753889977fa6d2d015499de03f94e16b035/Lib/warnings.py#L518" rel="noreferrer">warnings.py does in fact have that class</a></p>
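For context, `warnings.deprecated` only landed in Python 3.13 (PEP 702) — the link above points at the development branch, not 3.11, and the real decorator also requires a message argument. A hedged sketch of a fallback for older interpreters (the maintained backport is `typing_extensions.deprecated`; this stand-in only covers the warn-on-call part):

```python
import functools
import warnings

try:
    from warnings import deprecated  # Python 3.13+ (PEP 702)
except ImportError:
    def deprecated(message):
        # Minimal stand-in: warn on each call. The stdlib version does more
        # (classes, overloads, a __deprecated__ attribute for type checkers).
        def wrap(func):
            @functools.wraps(func)
            def inner(*args, **kwargs):
                warnings.warn(f"{func.__name__} is deprecated: {message}",
                              DeprecationWarning, stacklevel=2)
                return func(*args, **kwargs)
            return inner
        return wrap

@deprecated("use new_test() instead")
def test():
    return "test"
```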
<python>
2024-03-12 17:21:37
1
426
Barry
78,148,951
12,162,229
Coloring rows of dataframe in excel using xlsxwriter
<p>Here is my code; my problem seems to be that writing the dataframe to Excel creates formatting that I cannot overwrite:</p> <pre><code>import polars as pl import xlsxwriter as writer df = pl.DataFrame({ &quot;A&quot;: [1, 2, 3, 2, 5], &quot;B&quot;: [&quot;x&quot;, &quot;y&quot;, &quot;x&quot;, &quot;z&quot;, &quot;y&quot;] }) with writer.Workbook('text_book.xlsx') as wb: worksheet = wb.add_worksheet() data_format1 = wb.add_format({'bg_color': '#FFC7CE'}) df.write_excel(wb, worksheet = 'Sheet1', autofilter= False, autofit = True, position = 'A3', include_header = False) for row in range(0,10,2): worksheet.set_row(row+2, cell_format=data_format1) </code></pre> <p>Output:</p> <p><a href="https://i.sstatic.net/zKCiQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zKCiQ.png" alt="enter image description here" /></a></p> <p>Ideally the output would be:</p> <p><a href="https://i.sstatic.net/QzDBr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QzDBr.png" alt="enter image description here" /></a></p> <p>I'm looking for a method of iterating over some list of row indices and setting the color for those rows.</p>
<python><xlsxwriter><python-polars>
2024-03-12 17:17:24
1
317
AColoredReptile
78,148,746
4,933,822
scipy butterworth filter to arm cmsis
<p>With scipy, I did some experiments with a butterworth filter like this:</p> <pre><code>sos = butter(order, normal_cutoff, btype='low', analog=False, output=&quot;sos&quot;) </code></pre> <p>I expect sos to be the coefficients of the filter.</p> <p>I need to port this filter to an ARM platform. There are several filter functions implemented in the CMSIS library, but I don't understand if butterworth falls into one of the filter families implemented in CMSIS.</p> <p>My question is: should I implement butterworth myself, or is there a CMSIS function for that?</p>
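A hedged answer sketch: a Butterworth filter in `sos` form is exactly a cascade of biquads, which is what CMSIS-DSP's `arm_biquad_cascade_df2T_f32` (or the `_df1_` variants) implements — so no custom filter is needed, only a coefficient re-layout. CMSIS expects `{b0, b1, b2, -a1, -a2}` per stage with `a0` normalized to 1 (worth double-checking against the CMSIS-DSP docs for your version, since the sign convention trips people up):

```python
from scipy.signal import butter

order, normal_cutoff = 4, 0.2
sos = butter(order, normal_cutoff, btype='low', analog=False, output='sos')

cmsis_coeffs = []
for b0, b1, b2, a0, a1, a2 in sos:
    # scipy's difference equation subtracts a1, a2 on the feedback side;
    # CMSIS adds them, hence the negation. a0 is divided out.
    cmsis_coeffs += [b0 / a0, b1 / a0, b2 / a0, -a1 / a0, -a2 / a0]
# cmsis_coeffs: 5 values per biquad stage, ready for arm_biquad_cascade_*_init
```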
<python><scipy><arm><cmsis>
2024-03-12 16:39:11
1
890
Julien
78,148,739
12,399,409
Expand unique values of a column into multiple columns, for X columns in a DataFrame
<p>I need to convert a DataFrame of the following shape:</p> <pre><code>import pandas as pd import numpy as np # input DataFrame df = pd.DataFrame({ 'foo': ['one', 'one', 'one', 'two', 'two', 'two', 'three', 'three', 'three'], 'tak': ['dgad', 'dgad', 'dgad', 'ogfagas', 'ogfagas', 'ogfagas', 'adgadg', 'adgadg', 'adgadg'], 'bar': ['B', 'B', 'A', 'C', 'A', 'C', 'C', 'C', 'C'], 'nix': ['Z', 'Z', 'Z', 'G', 'G', 'G', 'Z', 'G', 'G'] }) </code></pre> <p>... into a DataFrame where <code>foo</code> and <code>tak</code> are the index (there's never more than one unique value of <code>tak</code> for each unique value of <code>foo</code>). For <code>bar</code> and <code>nix</code> (and I actually have 10 different columns I need to do this with), I need to somehow pivot each of those columns into multiple columns, where <code>bar_one</code> would be the first unique value from <code>bar</code> for each <code>foo</code> group, <code>bar_two</code> would be the second unique value from <code>bar</code> for each <code>foo</code> group, etc. for each column. If there's only one or no unique values in <code>bar</code> or <code>nix</code> for a given <code>foo</code> group, there should be a <code>np.nan</code> inserted. 
Like this:</p> <pre><code># desired output DataFrame pd.DataFrame({ 'foo': ['one', 'two', 'three'], 'tak': ['dgad', 'ogfagas', 'adgadg'], 'bar_one': ['B', 'C', 'C'], 'bar_two': ['A', 'A', np.nan], 'nix_one': ['Z' , 'G', 'Z'], 'nix_two': [np.nan, np.nan, 'G'] }) </code></pre> <p>What I'm currently doing is using <code>.pivot_table</code> with this aggregation function:</p> <pre><code>pivot_df = df.pivot_table( index=['foo', 'tak'], values=['bar', 'nix'], aggfunc = lambda x: list(set(x)) ) </code></pre> <p>Then I'm expanding those columns of lists of unique values for each foo-tak group into multiple columns, and concatenating those together in a list comprehension:</p> <pre><code>pd.concat( [ pivot_df[column].apply(pd.Series) for column in ['bar', 'nix'] ], axis=1 ) </code></pre> <p>Is there a simpler / more direct / more Pythonic way of doing this transformation?</p>
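A hedged sketch of a slightly more direct route (suffixes `_0`/`_1` instead of `_one`/`_two`, easily renamed): aggregate each column to its ordered unique values, then let `pd.DataFrame` pad the ragged lists with NaN:

```python
import pandas as pd

df = pd.DataFrame({
    'foo': ['one', 'one', 'one', 'two', 'two', 'two', 'three', 'three', 'three'],
    'tak': ['dgad', 'dgad', 'dgad', 'ogfagas', 'ogfagas', 'ogfagas',
            'adgadg', 'adgadg', 'adgadg'],
    'bar': ['B', 'B', 'A', 'C', 'A', 'C', 'C', 'C', 'C'],
    'nix': ['Z', 'Z', 'Z', 'G', 'G', 'G', 'Z', 'G', 'G'],
})

value_cols = ['bar', 'nix']
grouped = df.groupby(['foo', 'tak'], sort=False)[value_cols].agg(
    lambda s: list(s.unique()))  # unique() keeps first-seen order; set() doesn't

wide = pd.concat(
    [pd.DataFrame(grouped[c].tolist(), index=grouped.index).add_prefix(f"{c}_")
     for c in value_cols],
    axis=1,
).reset_index()
```

Note `list(set(x))` in the original also loses the order of first appearance, which `Series.unique()` preserves.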
<python><pandas><dataframe><pivot><data-cleaning>
2024-03-12 16:37:52
3
803
semblable
78,148,722
2,452,562
Problems with transitive imports in Python3
<p>I'm having a problem with some (likely badly-written) Python2 code, which I'm trying to convert to Python3. I'm sure this has been answered somewhere else before, but my Google-fu is not up to this task. I'm developing this on Ubuntu 18. (I know, Python2 and Ubuntu 18 are way way EOL, but my company is very slow to do the upgrade thing)</p> <p>All of my code lives in the <code>$HOME/python</code> directory. I've added <code>$HOME/python</code> to my <code>PYTHONPATH</code> environment variable. My script is in <code>$HOME/python/scripts</code>, and is simply:</p> <pre><code>import modules.bar </code></pre> <p>In the <code>$HOME/python/modules</code> directory, I have an (empty) <code>__init__.py</code>, and modules <code>bar.py</code> and <code>baz.py</code>: bar.py:</p> <pre><code>import baz bar = 0 </code></pre> <p>baz.py:</p> <pre><code>baz = 0 </code></pre> <p>This works fine when I execute with Python2. However, when I attempt to execute using Python3, I get:</p> <pre><code>Traceback (most recent call last): File &quot;scripts/foo.py&quot;, line 1, in &lt;module&gt; import modules.bar File &quot;/home/kplatz/python/modules/bar.py&quot;, line 1, in &lt;module&gt; import baz ModuleNotFoundError: No module named 'baz' </code></pre> <p>How do I resolve this (without breaking Python2 compatibility, if possible)?</p> <p>Thank you in advance!</p>
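A hedged, self-contained reconstruction of the likely fix: Python 3 removed implicit relative imports (PEP 328), so `import baz` inside the package no longer resolves; an absolute `from modules import baz` works under both Python 2 and 3 (the explicit relative `from . import baz` is the Python-3-idiomatic alternative):

```python
import os
import sys
import tempfile

# Recreate the layout in a temp dir so the example is self-contained.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "modules")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "baz.py"), "w") as f:
    f.write("baz = 0\n")
with open(os.path.join(pkg, "bar.py"), "w") as f:
    # 'import baz' relied on Python 2's implicit relative imports;
    # this absolute form resolves under both Python 2 and Python 3.
    f.write("from modules import baz\nbar = 0\n")

sys.path.insert(0, root)  # stands in for the PYTHONPATH entry
import modules.bar
```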
<python><python-module>
2024-03-12 16:33:59
0
580
Ken P
78,148,567
6,233,888
Avoid delayed patching
<p>I'm working on a Python project where I use <code>diskcache.Cache.memoize</code> for caching. I want to write unit tests for my code, and I'm having trouble figuring out how to properly mock <code>diskcache.Cache.memoize</code> using pytest without being delayed in my mocks.</p> <p>Here's a simplified version of my module:</p> <pre class="lang-py prettyprint-override"><code># my_module.py import diskcache cache = diskcache.Cache('/path/to/my/local/cache') @cache.memoize(expire=3600) def expensive_function(argument): # Expensive calculation here return result </code></pre> <p>I want to write unit tests for <code>expensive_function</code> where I can test the caching behavior on and off via pytest.</p> <p>I have mocked diskcache.Cache in <code>conftest.py</code> like so:</p> <pre><code># conftest.py import pytest import unittest.mock import diskcache class MockCache: def __init__(self, *args, **kwargs): pass def memoize(self, *args, **kwargs): def decorator(fun): return fun return decorator @pytest.fixture(autouse=True) def default_mock_memoize(monkeypatch, pytestconfig): if not pytestconfig.getoption(&quot;--cache&quot;): monkeypatch.setattr(diskcache, &quot;Cache&quot;, MockCache) def pytest_addoption(parser): parser.addoption( &quot;--cache&quot;, action=&quot;store_true&quot;, default=False, help=( &quot;Enable caching for tests. Careful: This will disable the test coverage &quot; &quot;of the cached code if the return value is in cache.&quot; ), ) </code></pre> <p>This works technically but is too late, since it leaves the decorated <code>expensive_function</code> in the hands of the unmocked diskcache.Cache. <strong>How can I ensure that <code>diskcache.Cache</code> is mocked before collection?</strong></p> <p>Thanks in advance for your help!</p>
<python><unit-testing><caching><mocking><monkeypatching>
2024-03-12 16:10:18
0
410
Daniel BΓΆckenhoff
78,148,519
1,714,692
Type hinting a dictionary with a given set of keys and values
<p>Suppose I have an Enum class in Python:</p> <pre><code>class MyEnum(Enum): A = &quot;a&quot; B = &quot;b&quot; </code></pre> <p>I have a function that is returning for each of the possible (two in this case) enum values a given type: suppose for both of them it is returning a DataFrame. I want to type hint, and for this I was using <code>TypedDict</code> in this way:</p> <pre><code>import pandas as pd from typing import TypedDict class ReturnedType(TypedDict): MyEnum.A.value: pd.DataFrame MyEnum.B.value: pd.DataFrame </code></pre> <p>and then:</p> <pre><code>def foo(...) -&gt; ReturnedType </code></pre> <p>but apparently <code>TypedDict</code> does not accept defining field names with other variables and this mypy checks to fail.</p> <p>What is the most pythonic way to type hint such a function in this case?</p> <p>Here is a MWE:</p> <pre><code>from typing import Dict, TypedDict from enum import Enum class MyEnum(Enum): A = 'a' B = 'b' class MyClass(TypedDict): &quot;&quot;&quot;The class defines the shape of the dictionary output by any RoadCodec&quot;&quot;&quot; MyEnum.A.value: int MyEnum.B.value: float def foo() -&gt; MyClass: res: MyClass = {MyEnum.A.value: 3, MyEnum.B.value: 4.4} </code></pre> <p>By launching mypy checks I get the error</p> <blockquote> <p>Invalid statement in TypedDict definition; expected &quot;field_name: field_type&quot; [misc]</p> </blockquote> <p>Also note that I am running under Python3.8 so StrEnum is not available</p>
<python><python-typing><python-3.8><typeddict>
2024-03-12 16:01:48
1
9,606
roschach
78,147,928
10,053,485
Pycharm: Decorated function with default parameter alerted as unfilled
<p><strong>Pycharm Version: 2023.1.5</strong> (up to date at time of writing)</p> <p>I'm building a decorator for an async function, which should retry to execute the decorated function when an error is detected.</p> <p>The code itself appears to work fine, but Pycharm still suggests the parameter with a default value is 'unfilled'.</p> <img src="https://i.sstatic.net/1DNGRl.png" width="500"> <p>In my experience, Pycharm is right more often than I am, thus I'm wondering what's triggering the problem here.</p> <pre class="lang-py prettyprint-override"><code>from typing import Callable, ParamSpec, TypeVar import functools import asyncio def retry_execute(retries: int = 2, sleep_time: float = 2): &quot;&quot;&quot; Retry decorator. :param retries: max number of retries :param sleep_time: time between retries &quot;&quot;&quot; P = ParamSpec(&quot;P&quot;) T = TypeVar(&quot;T&quot;) def decorator(coroutine: Callable[P, T]) -&gt; Callable[P, T]: @functools.wraps(coroutine) async def wrapper(*args: P.args, **kwargs: P.kwargs) -&gt; T: for attempt in range(retries): try: return await coroutine(*args, **kwargs) except Exception as e: if attempt == retries - 1: raise e else: await asyncio.sleep(sleep_time) continue return wrapper return decorator @retry_execute() async def my_fn(n: int, b='hi'): await asyncio.sleep(1) # Simulate work print(b) print(n) raise ValueError # generic error to trigger async def main(): await asyncio.gather(my_fn(n=1), my_fn(n=2, b='bye')) asyncio.run(main()) </code></pre> <p>Output:</p> <pre><code>hi 1 bye 2 hi 1 bye 2 &gt; ValueError </code></pre> <p>Which would suggest that at least at execution this isn't an issue.</p> <p>More explicitly defining <code>my_fn()</code>, e.g. <code>my_fn(..., b: Optional[str] = 'hi')</code>, does not fix the issue.</p> <p>Are there any edge cases in which my code would not properly pass the default parameter? 
Or, how should I update my code to ensure PyCharm properly detects default parameters of the decorated function?</p> <p><strong>Edit:</strong> I noticed one more place within PyCharm where these default parameters were 'lost', namely viewing the function parameters with &lt;ctrl+P&gt;. If it would recognise the default parameter here the issue would likely be resolved. Interestingly it does detect the default string indirectly, as PyCharm recognises <code>b</code> should be of type string. <img src="https://i.sstatic.net/9yoUB.png" width="500">
<python><pycharm><default-arguments>
2024-03-12 14:29:18
0
408
Floriancitt
78,147,903
5,547,553
How to create a conditional incremented column in polars?
<br> I'd like to create a conditional incremented column in polars.<br> It should start from 1 and increment only if a certain condition (pl.col('code') == 'L') is met.<br> <pre><code>import polars as pl df = pl.DataFrame({'file': ['a.txt','a.txt','a.txt','a.txt','b.txt','b.txt','c.txt','c.txt','c.txt','c.txt','c.txt'], 'code': ['X','Y','Z','L','A','A','B','L','C','L','X'] }) df.with_columns(pl.int_range(start=1, end=pl.len()+1).over('file').alias('rrr') ) </code></pre> <p>This produces a simple unconditional increment. But how do I add conditions?</p>
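Setting polars aside for a moment, the underlying logic can be sketched in plain Python (assuming the counter should restart at 1 for each file and advance after each 'L' row β€” the exact semantics are a guess, since no expected output is shown):

```python
from itertools import groupby

files = ['a.txt', 'a.txt', 'a.txt', 'a.txt', 'b.txt', 'b.txt',
         'c.txt', 'c.txt', 'c.txt', 'c.txt', 'c.txt']
codes = ['X', 'Y', 'Z', 'L', 'A', 'A', 'B', 'L', 'C', 'L', 'X']

rrr = []
for _, rows in groupby(zip(files, codes), key=lambda fc: fc[0]):
    counter = 1                      # restart for every file
    for _, code in rows:
        rrr.append(counter)
        if code == 'L':              # increment only when the condition is met
            counter += 1

print(rrr)
```

In polars this usually translates to a cumulative sum of the boolean condition over 'file' (shifted by one row so the bump takes effect on the next row) plus 1 β€” the exact expression depends on the polars version.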
<python><python-polars>
2024-03-12 14:25:31
3
1,174
lmocsi
78,147,867
462,169
How can I distribute a Python application into an empty environment?
<p>I have a python3 app that I have written, which I need to install on a production, linux, server. For various reasons, this linux installation is very barebones. It will not have any version of python. It will not have Docker or anything similar. I don't know why that is the case, but those are my requirements. How can I deploy my python app to this server so it will function?</p> <p>I had a few ideas:</p> <ul> <li>Create a binary with something like pyinstaller. I have heard, however, that this does not really work very well and can be flaky.</li> <li>Package up python in my install package with instructions to install it. This is not great, as it is supposed to be a zero-install package (although this might be acceptable).</li> <li>Could this be as simple as just creating a python virtual environment? It seems like that includes the python interpreter (venv/bin/python). Someone I work with told me that will not work and python still needs to be installed on the host environment. Not sure why.</li> </ul> <p>I am going to play with this last option and see if it works.</p>
<python><pyinstaller><python-venv><python-packaging>
2024-03-12 14:19:55
2
1,647
Wanderer
78,147,737
4,096,572
python-xarray: How to create a Dataset and assign results of an iteration to the Dataset?
<p>I have a for loop which is running some analysis on some data and returning some values. For boring reasons, this loop cannot easily be vectorised. I want to create a Dataset and then assign the result of the for loop to the Dataset as I iterate through.</p> <h1><code>Dataset.update</code></h1> <p>If I write some code which uses <a href="https://docs.xarray.dev/en/stable/generated/xarray.Dataset.update.html" rel="nofollow noreferrer"><code>Dataset.update</code></a> as follows:</p> <pre><code>import numpy as np from xarray import Dataset, cftime_range, concat times = cftime_range(start=&quot;2024-01-01&quot;, end=&quot;2024-01-02&quot;, freq=&quot;H&quot;) test_xarray = Dataset(coords={&quot;time&quot;: None, &quot;mlt&quot;: np.arange(24)}) for time in times: test_for_this_time = Dataset({&quot;x&quot;: ([&quot;time&quot;, &quot;mlt&quot;], np.random.random((1, 24)))}, coords={&quot;time&quot;: np.array([time]), &quot;mlt&quot;: np.arange(24)}) test_xarray.update(test_for_this_time) print(test_xarray) </code></pre> <p>I get the following:</p> <pre><code>&lt;xarray.Dataset&gt; Dimensions: (time: 1, mlt: 24) Coordinates: * time (time) object 2024-01-01 00:00:00 * mlt (mlt) int64 0 1 2 3 4 5 6 7 8 9 ... 14 15 16 17 18 19 20 21 22 23 Data variables: x (time, mlt) float64 nan nan nan nan nan nan ... 
nan nan nan nan nan </code></pre> <h1><code>Dataset.merge</code></h1> <p>This is clearly not what I want, and so I tried using <a href="https://docs.xarray.dev/en/stable/generated/xarray.Dataset.merge.html" rel="nofollow noreferrer"><code>Dataset.merge</code></a> instead of <code>update</code>.</p> <pre><code>import numpy as np from xarray import Dataset, cftime_range, concat times = cftime_range(start=&quot;2024-01-01&quot;, end=&quot;2024-01-02&quot;, freq=&quot;H&quot;) test_xarray = Dataset(coords={&quot;time&quot;: None, &quot;mlt&quot;: np.arange(24)}) for time in times: test_for_this_time = Dataset({&quot;x&quot;: ([&quot;time&quot;, &quot;mlt&quot;], np.random.random((1, 24)))}, coords={&quot;time&quot;: np.array([time]), &quot;mlt&quot;: np.arange(24)}) test_xarray = test_xarray.merge(test_for_this_time) print(test_xarray) </code></pre> <p>I get the following:</p> <pre><code>&lt;xarray.Dataset&gt; Dimensions: (time: 25, mlt: 24) Coordinates: * time (time) object 2024-01-01 00:00:00 ... 2024-01-02 00:00:00 * mlt (mlt) int64 0 1 2 3 4 5 6 7 8 9 ... 14 15 16 17 18 19 20 21 22 23 Data variables: x (time, mlt) float64 0.6399 0.6227 0.7972 ... 0.7804 0.8763 0.7198 </code></pre> <p>This does do what I want, so hurrah, but I don't understand what I did wrong in the first method, which I would have expected to work.</p> <h1>Is this the best method?</h1> <p>I'm curious as to whether I'm using xarray in the best way here. I've looked through Stack Overflow and through the documentation and I can't see any examples of this sort of workflow. I've also tried with <a href="https://docs.xarray.dev/en/stable/generated/xarray.concat.html" rel="nofollow noreferrer"><code>xarray.concat</code></a>, but that doesn't quite seem to do what I want; it leaves the first <code>None</code> value in the <code>time</code> dimension. It might be that the method above is the best way, but if not, I would greatly appreciate any advice on how better to do it.</p>
<python><numpy><python-xarray>
2024-03-12 13:57:18
1
605
John Coxon
78,147,654
525,865
How to iterate a list from a to z - to scrape data and transform it into dataframe?
<p>Currently working on a scraper that collects the data of German insurances - here we have a comprehensive list of insurance companies from <strong>a to z</strong></p> <p>our <strong>members:</strong></p> <p><a href="https://www.gdv.de/gdv/der-gdv/unsere-mitglieder" rel="nofollow noreferrer">https://www.gdv.de/gdv/der-gdv/unsere-mitglieder</a> the overview on 478 results:</p> <p>for the <strong>letter a</strong>: <a href="https://www.gdv.de/gdv/der-gdv/unsere-mitglieder?letter=A" rel="nofollow noreferrer">https://www.gdv.de/gdv/der-gdv/unsere-mitglieder?letter=A</a> for the <strong>letter b</strong>: <a href="https://www.gdv.de/gdv/der-gdv/unsere-mitglieder?letter=B" rel="nofollow noreferrer">https://www.gdv.de/gdv/der-gdv/unsere-mitglieder?letter=B</a></p> <p>and so forth: btw: see for example one page - of a company:<br /> <a href="https://www.gdv.de/gdv/der-gdv/unsere-mitglieder/ba-die-bayerische-allgemeine-versicherung-ag-47236" rel="nofollow noreferrer">https://www.gdv.de/gdv/der-gdv/unsere-mitglieder/ba-die-bayerische-allgemeine-versicherung-ag-47236</a></p> <p>With the data, we need to have the contact-data and the address</p> <p>Well I think that this task could be done best with a tiny bs4 scraper with request and putting all the data to a dataframe: I use <code>BeautifulSoup</code> for parsing the HTML and <code>Requests</code> for making HTTP requests. Best method - yes I guess is <code>BeautifulSoup</code> and <code>Requests</code> to extract the contact data and address from the given URL (see below and also above). First of all we need to define a function <code>scrape_insurance_company</code> that takes a URL as input, sends an HTTP GET request to it, and extracts the contact data and address using <code>BeautifulSoup</code>.</p> <p>Finally, we need to return a dictionary containing the extracted data. 
well since we need to cover the characters from a to z we have to iterate here: we iterate through a list of URLs containing the insurance companies and call this function for each URL to collect the data. Subsequently we use Pandas to organize the data into a DataFrame.</p> <p><strong>note:</strong> i run this on google-colab:</p> <pre><code>import requests from bs4 import BeautifulSoup import pandas as pd def scrape_insurance_company(url): # Send a GET request to the URL response = requests.get(url) # Check if the request was successful if response.status_code == 200: # Parse the HTML content soup = BeautifulSoup(response.content, 'html.parser') # Find all the links to insurance companies company_links = soup.find_all('a', class_='entry-title') # List to store the data for all insurance companies all_data = [] # Iterate through each company link for link in company_links: company_url = link['href'] company_data = scrape_company_data(company_url) if company_data: all_data.append(company_data) return all_data else: print(&quot;Failed to fetch the page:&quot;, response.status_code) return None def scrape_company_data(url): # Send a GET request to the URL response = requests.get(url) # Check if the request was successful if response.status_code == 200: # Parse the HTML content soup = BeautifulSoup(response.content, 'html.parser') # DEBUG: Print HTML content of the page print(soup.prettify()) # Find the relevant elements containing contact data and address contact_info = soup.find('div', class_='contact') address_info = soup.find('div', class_='address') # Extract contact data and address if found contact_data = contact_info.text.strip() if contact_info else None address = address_info.text.strip() if address_info else None return {'Contact Data': contact_data, 'Address': address} else: print(&quot;Failed to fetch the page:&quot;, response.status_code) return None # now we list to store data for all insurance companies all_insurance_data = [] # and now we iterate 
through the alphabet for letter in range(ord('A'), ord('Z') + 1): letter_url = f&quot;https://www.gdv.de/gdv/der-gdv/unsere-mitglieder?letter={chr(letter)}&quot; print(&quot;Scraping page:&quot;, letter_url) data = scrape_insurance_company(letter_url) if data: all_insurance_data.extend(data) # subsequently we convert the data to a Pandas DataFrame df = pd.DataFrame(all_insurance_data) # and finally - we save the data to a CSV file df.to_csv('insurance_data.csv', index=False) print(&quot;Scraping completed and data saved to 'insurance_data.csv'.&quot;) </code></pre> <p>Well, at the moment it looks like this - I get this in the google-colab terminal:</p> <p>the insurance:</p> <pre><code>Scraping page: https://www.gdv.de/gdv/der-gdv/unsere-mitglieder?letter=A Scraping page: https://www.gdv.de/gdv/der-gdv/unsere-mitglieder?letter=B Scraping page: https://www.gdv.de/gdv/der-gdv/unsere-mitglieder?letter=C Scraping page: https://www.gdv.de/gdv/der-gdv/unsere-mitglieder?letter=D Scraping page: https://www.gdv.de/gdv/der-gdv/unsere-mitglieder?letter=Z Scraping completed and data saved to 'insurance_data.csv'. </code></pre> <p>but the list is still empty... I still struggle here a bit</p>
<python><pandas><web-scraping><beautifulsoup><python-requests>
2024-03-12 13:47:42
1
1,223
zero
78,147,561
11,462,274
How working correctly with Beautifulsoup to not generate Type Checking alerts in VSCode
<p>Page Source example:</p> <pre class="lang-python prettyprint-override"><code>from bs4 import BeautifulSoup, Tag, ResultSet from re import compile page_source = &quot;&quot;&quot; &lt;html&gt; &lt;body&gt; &lt;div class=&quot;block_general_statistics&quot;&gt; &lt;table&gt; &lt;tbody&gt; &lt;tr&gt; &lt;th&gt;Header 1&lt;/th&gt; &lt;td class=&quot;total&quot;&gt;Data 1&lt;/td&gt; &lt;/tr&gt; &lt;/tbody&gt; &lt;/table&gt; &lt;/div&gt; &lt;/body&gt; &lt;/html&gt; &quot;&quot;&quot; </code></pre> <p>This is the original version, which I use to reduce the number of lines and characters, but it generates type-checking alerts; note also that <code>find | text | strip</code> inside the list comprehensions are all shown with white font color because the checker cannot resolve them:</p> <pre class="lang-python prettyprint-override"><code>soup = BeautifulSoup(page_source, 'html.parser') table_stats = soup.find('div', class_=compile('block_general_statistics')).find('table') table_stats_body = table_stats.find('tbody').find_all('tr') thead = [th.find('th').text.strip() for th in table_stats_body] tbody = [th.find('td', class_='total').text.strip() for th in table_stats_body] </code></pre> <p><a href="https://i.sstatic.net/cKpgY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cKpgY.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/7TqfR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7TqfR.png" alt="enter image description here" /></a></p> <p>The only way I was able, with my basic knowledge, to resolve all the alerts and get all the fonts colored correctly, without any being white due to an unresolved function, is this:</p> <pre class="lang-python prettyprint-override"><code>soup = BeautifulSoup(page_source, 'html.parser') table_stats = soup.find('div', class_=compile('block_general_statistics')) if type(table_stats) == Tag: table_stats = table_stats.find('table') if type(table_stats) == Tag: table_stats_body = table_stats.find('tbody') if 
type(table_stats_body) == Tag: table_stats_body = table_stats_body.find_all('tr') if type(table_stats_body) == ResultSet: thead = [] for th in table_stats_body: if type(th) == Tag: th = th.find('th') if type(th) == Tag: thead.append(th.text.strip()) tbody = [] for th in table_stats_body: if type(th) == Tag: th = th.find('td', class_='total') if type(th) == Tag: tbody.append(th.text.strip()) </code></pre> <p>Is there a smarter way to resolve the alerts that doesn't turn simple, short code into something so large, detailed, and hard to change in the future?</p>
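One common middle ground is a small narrowing helper that raises if a lookup missed, so each step stays on one line while the type checker sees a concrete type. A generic sketch β€” the `expect` name is made up, not bs4 API:

```python
from typing import TypeVar

T = TypeVar('T')

def expect(value: object, typ: type[T]) -> T:
    """Return value narrowed to typ, failing loudly if a lookup missed."""
    if not isinstance(value, typ):
        raise TypeError(f'expected {typ.__name__}, got {type(value).__name__}')
    return value

# with bs4 this would read, e.g.:
#   table_stats = expect(soup.find('div', class_=...), Tag).find('table')
#   rows = expect(expect(table_stats, Tag).find('tbody'), Tag).find_all('tr')
print(expect(42, int))
```

The `isinstance` check inside the helper is what lets checkers like Pylance narrow the returned type, so the chained calls keep their colors and the alerts go away.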
<python><visual-studio-code><beautifulsoup><python-typing>
2024-03-12 13:34:31
1
2,222
Digital Farmer
78,147,513
9,097,114
Compare and find nearest matching string by iterating rows from same column
<p>I am new to Python and trying to get output iterating over rows from the same column</p> <pre><code># Import pandas library import pandas as pd # initialize list of lists data = [[1, 'AA,AC,BC,DE'], [2, 'AA,AD,BC,D'], [3, 'A,C,BC,E'],[4, 'AA,AC,BC,DEEE'],[5, 'KK']] # Create the pandas DataFrame df = pd.DataFrame(data, columns=['rowid', 'Col1']) </code></pre> <p><strong>Output table :</strong></p> <pre><code>data_out = [[1, 'AA,AC,BC,DE',2,'2,4'], [2, 'AA,AD,BC,D',2,'1,4'], [3, 'A,C,BC,E',1,'1,2,4'],[4, 'AA,AC,BC,DEEE',2,'1,2'],[5, 'KK',0,'NA']] # Create the pandas DataFrame df_out = pd.DataFrame(data_out, columns=['rowid', 'Col1','max_matching#','Rowid']) </code></pre> <p>max_matching# : explanation Row-1 values : 'AA,AC,BC,DE' matches 2 values('AA','BC') from row2 and matches 1 value('BC') from row-3, matches 2 values('AA','BC') from row4 and NIL from row5 so max_matching = max matching from all rows (i.e. 2,1,2,NA) = 2<br /> Rowid : Explanation Row-1 values : 'AA,AC,BC,DE' : max matching rows from above explanation i.e row2&amp;row4 (2,4)</p> <p>Thanks in advance!</p>
<python><pandas>
2024-03-12 13:26:29
1
523
san1
78,147,487
5,669,713
Assign output of expanded pandas str.extract to new columns of same dataframe when rows are filtered with .loc
<p>I have a dataframe with some registration types and some alphanumeric registration numbers for certain types. If a row has a certain type, I need to perform a regex operation on the relevant number, potentially splitting it into two, and return two columns based on that operation to the original dataframe. I am receiving a warning about setting values based on slices and an instruction to use <code>.loc</code> instead, but doing so voids my output column for reasons I cant understand.</p> <p>Using the slice method:</p> <pre class="lang-py prettyprint-override"><code>data = [['5007', 'foo'], ['5008', '1111111'], ['5008', '00222222'], ['5008', '333333'], ['5007', 'BAR'], ['5008', 'SC444444'], ['5008', 'SC5555'], ['5008', 'LP6666'], ['5008', '7777-8888'], ['5007', 'foobar']] df = pd.DataFrame(data, columns=['reg_type', 'reg_no']) filter = (df['reg_type'] == '5008') df[['code', 'number']] = df.loc[filter]['reg_no'].str.extract('([A-Za-z]+)?(.+$)', expand=True) </code></pre> <p>This gives the output</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th></th> <th>reg_type</th> <th>reg_no</th> <th>code</th> <th>number</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>5007</td> <td>foo</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>1</td> <td>5008</td> <td>1111111</td> <td>NaN</td> <td>1111111</td> </tr> <tr> <td>2</td> <td>5008</td> <td>00222222</td> <td>NaN</td> <td>00222222</td> </tr> <tr> <td>3</td> <td>5008</td> <td>333333</td> <td>NaN</td> <td>333333</td> </tr> <tr> <td>4</td> <td>5007</td> <td>BAR</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>5</td> <td>5008</td> <td>SC444444</td> <td>SC</td> <td>444444</td> </tr> <tr> <td>6</td> <td>5008</td> <td>SC5555</td> <td>SC</td> <td>5555</td> </tr> <tr> <td>7</td> <td>5008</td> <td>LP6666</td> <td>LP</td> <td>6666</td> </tr> <tr> <td>8</td> <td>5008</td> <td>7777-8888</td> <td>NaN</td> <td>7777-8888</td> </tr> <tr> <td>9</td> <td>5007</td> <td>foobar</td> <td>NaN</td> <td>NaN</td> </tr> </tbody> 
</table></div> <p>This is correct, but it also produces the warning:</p> <pre><code>A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead </code></pre> <p>Modifying to what I think the <code>.loc</code> method should be:</p> <pre class="lang-py prettyprint-override"><code>data = [['5007', 'foo'], ['5008', '1111111'], ['5008', '00222222'], ['5008', '333333'], ['5007', 'BAR'], ['5008', 'SC444444'], ['5008', 'SC5555'], ['5008', 'LP6666'], ['5008', '7777-8888'], ['5007', 'foobar']] df = pd.DataFrame(data, columns=['reg_type', 'reg_no']) filter = (df['reg_type'] == '5008') df.loc[filter, ['code', 'number']] = df.loc[filter]['reg_no'].str.extract('([A-Za-z]+)?(.+$)', expand=True) </code></pre> <p>This produces the output</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th></th> <th>reg_type</th> <th>reg_no</th> <th>code</th> <th>number</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>5007</td> <td>foo</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>1</td> <td>5008</td> <td>1111111</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>2</td> <td>5008</td> <td>00222222</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>3</td> <td>5008</td> <td>333333</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>4</td> <td>5007</td> <td>BAR</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>5</td> <td>5008</td> <td>SC444444</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>6</td> <td>5008</td> <td>SC5555</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>7</td> <td>5008</td> <td>LP6666</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>8</td> <td>5008</td> <td>7777-8888</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>9</td> <td>5007</td> <td>foobar</td> <td>NaN</td> <td>NaN</td> </tr> </tbody> </table></div> <p>The output of the <code>extract</code> has been voided. I have also tried to append <code>reset_index()</code> to the end of the extract command but that produced no change.</p> <p>How can I achieve this with <code>.loc</code>?</p>
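For what it's worth, the blanking comes from pandas label alignment: `str.extract` returns a frame with columns named 0 and 1, so assigning it to `['code', 'number']` via `.loc` aligns on those column labels, finds none, and writes NaN. One workaround, sketched on the question's data, is to rename the extracted columns and let index alignment do the filtering:

```python
import pandas as pd

data = [['5007', 'foo'], ['5008', '1111111'], ['5008', '00222222'],
        ['5008', '333333'], ['5007', 'BAR'], ['5008', 'SC444444'],
        ['5008', 'SC5555'], ['5008', 'LP6666'], ['5008', '7777-8888'],
        ['5007', 'foobar']]
df = pd.DataFrame(data, columns=['reg_type', 'reg_no'])

mask = df['reg_type'] == '5008'
extracted = df.loc[mask, 'reg_no'].str.extract(r'([A-Za-z]+)?(.+$)')
extracted.columns = ['code', 'number']   # give it the labels you want
df = df.join(extracted)                  # rows outside the mask get NaN

print(df.loc[5, ['code', 'number']].tolist())
```

Because the extracted frame keeps the filtered rows' original index, `join` places the values on the right rows without triggering the copy-of-a-slice warning.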
<python><pandas><regex><dataframe>
2024-03-12 13:22:50
0
355
Alex Howard
78,147,374
2,473,382
Python typing with explicit hint not used
<p>Why, in the following code, is <code>a</code> not seen as being of type <code>LineString</code>? Pyright does not see any issue with using <code>a</code> as an <code>int</code>.</p> <pre class="lang-py prettyprint-override"><code>from typing import cast import shapely from shapely.geometry.linestring import LineString def get_line_string() -&gt; LineString: return cast(LineString, shapely.from_wkt(&quot;LINESTRING (30 10, 10 30, 40 40)&quot;)) a: int = get_line_string() print(a + 1) </code></pre> <p>Shapely is not properly stubbed, but that's why I thought that using <code>cast</code> and an explicit type hint should do the trick.</p> <p>How can I get an error from pyright when I try to type <code>a</code> as <code>int</code>?</p> <p>I am using shapely as an example here, but this is a generic question.</p> <p>If I add stubs myself:</p> <pre><code># in typings.shapely.__init__.pyi def from_wkt(geometry:str) -&gt; LineString: ... </code></pre> <p>Then my editor (vscode/pylance) sees that <code>from_wkt</code> does return <code>LineString</code>, but the error is still not caught.</p>
<python><python-typing>
2024-03-12 13:04:26
0
3,081
Guillaume
78,147,330
12,297,666
How to create a list with only the months that have complete days in Python
<p>Consider the following DateTimeIndex:</p> <pre><code>from calendar import monthrange import pandas as pd index_h = pd.date_range(start='2022-01-04 00:00:00', end='2023-01-10 23:00:00', freq='H') </code></pre> <p>We can see that both January/2022 and January/2023 are incomplete.</p> <p>How can I create a list that contains the Month/Year that are complete in that range?</p> <p>I have been trying to use <code>monthrange</code> from <code>calendar</code> to count the values, but not really sure how to proceed from here:</p> <pre><code>years_months = index_h.to_period('M').unique() complete_month_year_list = [] for year_month in years_months: num_days = monthrange(year_month.year, year_month.month)[1] if what_goes_here??? == num_days: print(f&quot;Month {year_month.month} of year {year_month.year} is complete.&quot;) complete_month_year_list.append(?????) else: print(f&quot;Month {year_month.month} of year {year_month.year} is not complete.&quot;) </code></pre>
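A sketch of the counting step β€” the only assumption is that a month is complete when it contains `days_in_month * 24` hourly timestamps:

```python
import pandas as pd

index_h = pd.date_range(start='2022-01-04 00:00:00',
                        end='2023-01-10 23:00:00', freq='h')

# count the hourly stamps that fall in each month
periods = index_h.to_period('M')
counts = pd.Series(1, index=index_h).groupby(periods).size()

# a month is complete when every hour of every day is present
complete = [str(p) for p, n in counts.items() if n == p.days_in_month * 24]
print(complete)
```

`Period.days_in_month` replaces the `calendar.monthrange` lookup, so no separate import is needed; the partial months (January 2022 and January 2023) drop out because their hour counts fall short.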
<python><pandas>
2024-03-12 12:59:07
0
679
Murilo
78,147,108
666,066
Pyinstaller generating different executable sizes on two different laptops
<p>I have a Python project that is converted to an executable using pyinstaller.</p> <p>The same executable is generated on another laptop and the size of the executable is not the same.</p> <p>The version of python on both Windows laptops is same. The list of python plugins is the same on both Windows laptops.</p> <p>The comparison of the file, Analysis-00.toc, an output of pyinstaller, shows the same Python dlls, such as \Python311\DLLS_queue.pyd.</p> <p>In Analysis-00.toc, the versions of api-ms-win-crt-xxx.dll files are the same, but are not from the same directory. All versions are api-ms-win-crt-xxx-11-1-0.dll. One set is from Java JDK, the other is from Java JRE.</p> <p>Other than the differences of the directory of api-ms-win-crt, I cannot find any differences between the Windows laptops to explain the differences in the executable size.</p> <p>Where else should I look for differences that can explain the differences in executable size?</p>
<python><pyinstaller>
2024-03-12 12:26:17
1
774
KeithSmith
78,147,082
3,697,202
How can I type the return type of a subclass in Python?
<p>I use the service object pattern in my codebase. I have a base class for service objects:</p> <pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod from typing import Generic, TypeVar T = TypeVar(&quot;T&quot;, contravariant=True) # input type R = TypeVar(&quot;R&quot;, covariant=True) # return type class BaseService(Generic[T, R], ABC): &quot;&quot;&quot;Service object that strongly types its inputs and outputs.&quot;&quot;&quot; inputs: T def __init__(self, inputs: T): self.inputs = inputs @classmethod def execute(cls, inputs: T, **kwargs) -&gt; R: instance = cls(inputs, **kwargs) return instance.process() @abstractmethod def process(self) -&gt; R: &quot;&quot;&quot; Main method to be overridden; contains the business logic. &quot;&quot;&quot; </code></pre> <p>Its usage is e.g.</p> <pre class="lang-py prettyprint-override"><code>from typing import TypedDict class Inputs(TypedDict): foo: str class DoubleService(BaseService[Inputs, str]): def process(self): return self.inputs[&quot;foo&quot;] * 2 DoubleService.execute({&quot;foo&quot;: &quot;bar&quot;}) # returns &quot;barbar&quot; </code></pre> <p>I want to enforce correct typing of the <code>process</code> method in subclasses. For example, I want all of the following to fail in the type checker:</p> <pre class="lang-py prettyprint-override"><code>class Inputs(TypedDict): foo: str class InvalidOne(BaseService[Inputs, str]): def process(self) -&gt; int: # wrong explicit return type # ... class InvalidTwo(BaseService[Inputs, str]): def process(self): return 1 # wrong implicit return type </code></pre> <p>Is it possible to achieve this with any existing type checker? I don't care if it's Mypy / Pyright / something else.</p>
<python><mypy><python-typing><pyright>
2024-03-12 12:21:55
0
1,057
tao_oat
78,146,998
1,079,320
I want to replace the url_for() call entirely
<p>I've had tremendous success using the following tutorial to stream a sequence of JPEG images from OpenCV using Flask: <a href="https://www.aranacorp.com/en/stream-video-from-a-raspberry-pi-to-a-web-browser/" rel="nofollow noreferrer">https://www.aranacorp.com/en/stream-video-from-a-raspberry-pi-to-a-web-browser/</a></p> <p>Using that tutorial, I am able to render the JPEG image sequence processed by OpenCV in my client HTML document by using the following code:</p> <pre><code>&lt;img src=&quot;{{ url_for('video_feed') }}&quot; /&gt; </code></pre> <p>However, in my use case, I would prefer to consume the images using the following schema:</p> <pre><code>&lt;img src=&quot;ip.address.of.server/filename.jpg&quot; /&gt; </code></pre> <p>How do I provision OpenCV in Flask to accomplish this?</p> <p><strong>NOTE:</strong> I am new to Python, so please forgive me for the &quot;dumb question&quot;.</p>
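One way to get a plain `/filename.jpg` URL is to register a Flask route at that exact path and return the JPEG bytes with the right mimetype. This is a sketch with placeholder bytes; in the real app they would come from something like `cv2.imencode('.jpg', frame)`:

```python
from flask import Flask, Response

app = Flask(__name__)

# placeholder bytes standing in for an encoded OpenCV frame
FAKE_JPEG = b'\xff\xd8\xff\xe0' + b'\x00' * 32

@app.route('/filename.jpg')
def filename_jpg():
    return Response(FAKE_JPEG, mimetype='image/jpeg')

# quick check without starting a server
resp = app.test_client().get('/filename.jpg')
print(resp.status_code, resp.mimetype)
```

The client can then point an img tag at `ip.address.of.server/filename.jpg` directly; `url_for` only generates such URLs, it is not required to serve them.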
<python><flask>
2024-03-12 12:09:52
1
670
lincolnberryiii
78,146,960
7,896,849
Exclude fields from a pydantic class
<p>I have a class like this,</p> <pre><code>from pydantic import BaseModel, Field class MyClass(BaseModel): field_1: str = Field(description='Field 1') field_2: dict = Field(description='Field 2') field_3: list = Field(description='Field 3') </code></pre> <p>I want to create a child class inheriting MyClass like this,</p> <pre><code>class MyChildClass(MyClass): field_4: str = Field(description='Field 4') </code></pre> <p>While creating the child class I need to disable field_2 being inherited (alternatively field_2 should be removed from MyChildClass after inheriting).</p> <p>How can I achieve this? What are the best practices surrounding this?</p>
<python><pydantic>
2024-03-12 12:04:27
2
11,971
Sreeram TP
78,146,867
5,547,553
Element of list of tuples is truncated in python 3.9
<br> I'm trying to populate a list with tuples of key and regex expressions. Looking at the populated list looks good, but when I want to print it or write to file, then it gets truncated: <pre><code># coding: utf-8 import re regs = [] regs.append(('szerzodes_felmondas', re.compile(r'\b(szerzΓΆdΓ©sΓ©rΕ‘l|szerzΓΆdΓ‘st|szerzΓΆdΓ©semet|szerzΓΆdΓ©sΓ©st|szerzΓΆdΓ©sΓ©hez|szerzdΓ©s|szerzdΓ©sre|szerzΓΆdΓ©seddel|szerzΓΆdΓ©sek|szerzdΓ©ssel|szerzΓΆdΓ©sedet|szerzdΓ©sen|szerzΓΆdΓ©sΓ©vel|szerzΓΆdΓ©seinken|szerzΓΆdΓ©ssel|szerzΓΆdΓ©seket|szerzΓΆdΓ©st|szerzΓΆdΓ©seinek|szerzΓΆdΓ©seire|szerzΓΆdΓ©seink|szerzΓΆdΓ©sed|szerzΓΆdΓ©sein|szerzΓΆdΓ©sΓ©be|szerzΓΆzΓ©deseket)\b', re.UNICODE))) for i in regs: if i[0] == 'szerzodes_felmondas': print(i) with open('regs_test.txt','w') as out: for i in regs: out.write(str(i)+'\n') </code></pre> <p>It is on python 3.9.<br> Why is it so, and how to overcome it?</p>
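The tuples are not actually truncated β€” only the repr of a compiled pattern is (CPython caps it at roughly 200 characters for display). The full regex is still available as `.pattern`, so serialise that instead of `str(tuple)`; a minimal sketch:

```python
import re

# build a pattern well over the 200-character repr limit
long_alt = '|'.join('word%03d' % i for i in range(100))
compiled = re.compile(r'\b(%s)\b' % long_alt)

print(len(repr(compiled)))     # short: the repr is truncated for display
print(len(compiled.pattern))   # long: the underlying pattern is intact

# when writing to file, write the pattern itself rather than the repr
line = 'szerzodes_felmondas\t' + compiled.pattern
```

So iterating `regs` and writing `i[0]` plus `i[1].pattern` gives the full regex in the output file, with no change needed on the matching side.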
<python>
2024-03-12 11:49:51
1
1,174
lmocsi
78,146,684
2,706,344
Unstack only the last three columns
<p>We start with this data:</p> <pre><code>import numpy as np import pandas as pd data=pd.DataFrame(data=np.random.rand(10,5),columns=['headA','headB','tailA','tailB','tailC']) </code></pre> <p><a href="https://i.sstatic.net/Bw7Nx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Bw7Nx.png" alt="enter image description here" /></a></p> <p>Now I want to perform a certain unstack operation which unstacks only the last three columns. Hence, this should be the new index:</p> <pre><code>pd.MultiIndex.from_product([data.columns[-3:],data.index]) MultiIndex([('tailA', 0),('tailA', 1),('tailA', 2),('tailA', 3),('tailA', 4),('tailA', 5),('tailA', 6),('tailA', 7),('tailA', 8),('tailA', 9),('tailB', 0),('tailB', 1),('tailB', 2),('tailB', 3),('tailB', 4),('tailB', 5),('tailB', 6),('tailB', 7),('tailB', 8),('tailB', 9),('tailC', 0),('tailC', 1),('tailC', 2),('tailC', 3),('tailC', 4),('tailC', 5),('tailC', 6),('tailC', 7),('tailC', 8),('tailC', 9)],) </code></pre> <p>I think, to perform this, I have to put the first two columns into another level than the remaining three columns. I don't know how I can do that in an elegant way. Any suggestions?</p>
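If the goal is exactly the MultiIndex shown above, selecting the tail columns and calling `.unstack()` already produces a Series keyed by (column, row) in that order β€” a sketch:

```python
import numpy as np
import pandas as pd

data = pd.DataFrame(data=np.random.rand(10, 5),
                    columns=['headA', 'headB', 'tailA', 'tailB', 'tailC'])

# unstacking a plain-indexed frame yields a Series indexed by (column, row)
tail = data[data.columns[-3:]].unstack()
target = pd.MultiIndex.from_product([data.columns[-3:], data.index])

print(tail.index.equals(target))
```

The head columns can be kept alongside afterwards, for example by moving them into the index with `set_index` before unstacking, depending on the final shape needed.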
<python><pandas><multi-index>
2024-03-12 11:23:02
1
4,346
principal-ideal-domain
78,146,641
1,377,912
Python ThreadPoolExecutor(max_workers=MAX_PARALLEL_REQUESTS) asyncio analog
<p>When I use ThreadPoolExecutor, I can send a batch of requests with a limit on parallel requests like this:</p> <pre class="lang-py prettyprint-override"><code>with ThreadPoolExecutor(max_workers=MAX_PARALLEL_REQUESTS) as pool: results = list(pool.map(request_func, requests_input_data)) </code></pre> <p>How can I repeat this behavior with asyncio? Are there libraries for this, or should I write it myself with something like &quot;wait until the first future has completed, then add a new request&quot;?</p>
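A common asyncio analogue β€” assuming a placeholder `request_func` β€” is to bound `asyncio.gather` with a `Semaphore`; `gather` keeps results in input order, just like `pool.map`:

```python
import asyncio

MAX_PARALLEL_REQUESTS = 3

async def request_func(item):
    await asyncio.sleep(0)   # stand-in for the real async request
    return item * 2

async def run_batch(requests_input_data):
    sem = asyncio.Semaphore(MAX_PARALLEL_REQUESTS)

    async def bounded(item):
        async with sem:      # at most MAX_PARALLEL_REQUESTS run concurrently
            return await request_func(item)

    return await asyncio.gather(*(bounded(i) for i in requests_input_data))

results = asyncio.run(run_batch([1, 2, 3, 4, 5]))
print(results)
```

No extra library is needed; the semaphore releases as each coroutine finishes, which gives the "first future completes, next request starts" behavior for free.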
<python><python-asyncio><ipython-parallel>
2024-03-12 11:16:12
1
1,409
andre487
78,146,532
16,765,847
Display image using shiny in python
<p>I want to display an image using Shiny in Python, but I cannot find the error in the code. The file is not uploaded once I open the app and input the image.</p> <p>Here is the code:</p> <pre><code>from pathlib import Path from shiny import render from shiny.express import input, ui from shiny import App, Inputs, Outputs, Session, reactive, render, ui from shiny.types import FileInfo ui.page_fluid( ui.input_file(&quot;file_input&quot;, &quot;Upload Image&quot;), # Add file input ui.output_image(&quot;image&quot;) ) @render.image def image(): img = {&quot;src&quot;: input.file_input(), &quot;width&quot;: &quot;100px&quot;} return img </code></pre> <p>How can I display the image?</p>
<python><py-shiny>
2024-03-12 10:58:32
1
1,006
Isaac
78,146,531
15,519,366
Twitter API media upload fails with FileNotFoundError
<h1>What I want to do</h1> <p>I'm currently coding a Twitter bot which just creates a tweet with a couple of media with the Python wrapper &quot;Twikit&quot;. The Python code &quot;betsuaka_test.py&quot; I wrote is below:</p> <pre><code>from twikit import Client client = Client('ja') client.login( auth_info_1='xxxxxxxxx', auth_info_2='foobar@gmail.com', password='xxxxxxxx' ) TWEET_TEXT = 'γƒ†γ‚ΉγƒˆζŠ•η¨Ώ2' MEDIA_IDS = [ client.upload_media('https://i.imgur.com/ZFMaW5h.png',0) ] client.create_tweet(TWEET_TEXT, MEDIA_IDS) </code></pre> <h1>What happened</h1> <p>When I run this code with <code>python3 betsuaka_test.py</code>, my shell returns the <strong>two types of error messages</strong> alternately:</p> <p>[error message 1]</p> <pre><code>ubuntunaoki@LAPTOP-7Q7QL2PR:~$ python3 betsuaka_test.py Traceback (most recent call last): File &quot;/home/ubuntunaoki/betsuaka_test.py&quot;, line 13, in &lt;module&gt; client.upload_media('https://i.imgur.com/ZFMaW5h.png',0) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/twikit/client.py&quot;, line 568, in upload_media img_size = os.path.getsize(source) File &quot;/usr/lib/python3.10/genericpath.py&quot;, line 50, in getsize return os.stat(filename).st_size FileNotFoundError: [Errno 2] No such file or directory: 'https://i.imgur.com/ZFMaW5h.png' </code></pre> <p>[error message 2]</p> <pre><code>ubuntunaoki@LAPTOP-7Q7QL2PR:~$ python3 betsuaka_test.py Traceback (most recent call last): File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpx/_transports/default.py&quot;, line 69, in map_httpcore_exceptions yield File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpx/_transports/default.py&quot;, line 233, in handle_request resp = self._pool.handle_request(req) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py&quot;, line 216, in handle_request raise exc from None File 
&quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py&quot;, line 196, in handle_request response = connection.handle_request( File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpcore/_sync/connection.py&quot;, line 101, in handle_request return self._connection.handle_request(request) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpcore/_sync/http11.py&quot;, line 143, in handle_request raise exc File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpcore/_sync/http11.py&quot;, line 93, in handle_request self._send_request_headers(**kwargs) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpcore/_sync/http11.py&quot;, line 151, in _send_request_headers with map_exceptions({h11.LocalProtocolError: LocalProtocolError}): File &quot;/usr/lib/python3.10/contextlib.py&quot;, line 153, in __exit__ self.gen.throw(typ, value, traceback) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpcore/_exceptions.py&quot;, line 14, in map_exceptions raise to_exc(exc) from exc httpcore.LocalProtocolError: Illegal header value b'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 ' The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;/home/ubuntunaoki/betsuaka_test.py&quot;, line 5, in &lt;module&gt; client.login( File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/twikit/client.py&quot;, line 138, in login guest_token = self._get_guest_token() File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/twikit/client.py&quot;, line 62, in _get_guest_token response = self.http.post( File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/twikit/http.py&quot;, line 54, in post return self.request('POST', url, **kwargs) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/twikit/http.py&quot;, line 25, 
in request response = self.client.request(method, url, **kwargs) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpx/_client.py&quot;, line 827, in request return self.send(request, auth=auth, follow_redirects=follow_redirects) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpx/_client.py&quot;, line 914, in send response = self._send_handling_auth( File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpx/_client.py&quot;, line 942, in _send_handling_auth response = self._send_handling_redirects( File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpx/_client.py&quot;, line 979, in _send_handling_redirects response = self._send_single_request(request) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpx/_client.py&quot;, line 1015, in _send_single_request response = transport.handle_request(request) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpx/_transports/default.py&quot;, line 232, in handle_request with map_httpcore_exceptions(): File &quot;/usr/lib/python3.10/contextlib.py&quot;, line 153, in __exit__ self.gen.throw(typ, value, traceback) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpx/_transports/default.py&quot;, line 86, in map_httpcore_exceptions raise mapped_exc(message) from exc httpx.LocalProtocolError: Illegal header value b'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 ' </code></pre> <p>Why does this happen?</p> <ul> <li>&quot;Twikit&quot; can use not only local image files but also images on the web.</li> <li>The image URL <code>https://i.imgur.com/ZFMaW5h.png</code> is valid, and the image I want pops up when I enter this URL in the browser's address bar.</li> </ul> <h1>New trial</h1> <p>I tried some modifications.</p> <ul> <li>Get the image file from the image URL and put it into a variable <code>image_file</code></li> <li>And then, put the binary file directly
into the upload_media() function.</li> </ul> <pre><code>from twikit import Client client = Client('ja') client.login( auth_info_1='xxxxxxx', auth_info_2='xxxxxx@gmail.com', password='xxxxxx' ) import base64 image_file_path = 'https://i.imgur.com/ZFMaW5h.png' with open(image_file_path, mode='rb') as f: image_file = f.read() # base64-encode the binary data binary_file_b64 = base64.b64encode(image_file) # print the base64-encoded binary data to the terminal print(binary_file_b64) TWEET_TEXT = 'γƒ†γ‚ΉγƒˆζŠ•η¨Ώ2' MEDIA_IDS = [ client.upload_media(binary_file_b64,0) ] client.create_tweet(TWEET_TEXT, MEDIA_IDS) </code></pre> <p>but two more error messages came back:</p> <pre><code>Traceback (most recent call last): File &quot;/home/ubuntunaoki/betsuaka_test.py&quot;, line 15, in &lt;module&gt; with open(image_file_path, mode='rb') as f: FileNotFoundError: [Errno 2] No such file or directory: 'https://i.imgur.com/ZFMaW5h.png' </code></pre> <pre><code>Traceback (most recent call last): File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpx/_transports/default.py&quot;, line 69, in map_httpcore_exceptions yield File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpx/_transports/default.py&quot;, line 233, in handle_request resp = self._pool.handle_request(req) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py&quot;, line 216, in handle_request raise exc from None File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py&quot;, line 196, in handle_request response = connection.handle_request( File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpcore/_sync/connection.py&quot;, line 101, in handle_request return self._connection.handle_request(request) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpcore/_sync/http11.py&quot;, line 143, in
handle_request raise exc File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpcore/_sync/http11.py&quot;, line 93, in handle_request self._send_request_headers(**kwargs) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpcore/_sync/http11.py&quot;, line 151, in _send_request_headers with map_exceptions({h11.LocalProtocolError: LocalProtocolError}): File &quot;/usr/lib/python3.10/contextlib.py&quot;, line 153, in __exit__ self.gen.throw(typ, value, traceback) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpcore/_exceptions.py&quot;, line 14, in map_exceptions raise to_exc(exc) from exc httpcore.LocalProtocolError: Illegal header value b'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 ' The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;/home/ubuntunaoki/betsuaka_test.py&quot;, line 5, in &lt;module&gt; client.login( File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/twikit/client.py&quot;, line 138, in login guest_token = self._get_guest_token() File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/twikit/client.py&quot;, line 62, in _get_guest_token response = self.http.post( File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/twikit/http.py&quot;, line 54, in post return self.request('POST', url, **kwargs) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/twikit/http.py&quot;, line 25, in request response = self.client.request(method, url, **kwargs) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpx/_client.py&quot;, line 827, in request return self.send(request, auth=auth, follow_redirects=follow_redirects) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpx/_client.py&quot;, line 914, in send response = self._send_handling_auth( File 
&quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpx/_client.py&quot;, line 942, in _send_handling_auth response = self._send_handling_redirects( File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpx/_client.py&quot;, line 979, in _send_handling_redirects response = self._send_single_request(request) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpx/_client.py&quot;, line 1015, in _send_single_request response = transport.handle_request(request) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpx/_transports/default.py&quot;, line 232, in handle_request with map_httpcore_exceptions(): File &quot;/usr/lib/python3.10/contextlib.py&quot;, line 153, in __exit__ self.gen.throw(typ, value, traceback) File &quot;/home/ubuntunaoki/.local/lib/python3.10/site-packages/httpx/_transports/default.py&quot;, line 86, in map_httpcore_exceptions raise mapped_exc(message) from exc httpx.LocalProtocolError: Illegal header value b'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 ' </code></pre>
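The first traceback shows <code>upload_media</code> calling <code>os.path.getsize(source)</code>, which suggests this twikit version expects a local file path rather than a URL. (The second, alternating error looks unrelated: httpx rejects the User-Agent header because it ends with a trailing space.) One possible workaround, sketched below, is to download the image to a temporary file first and pass that path in; <code>save_image_bytes</code> is a helper invented here, not part of twikit:

```python
import tempfile
from pathlib import Path

def save_image_bytes(data: bytes, suffix: str = ".png") -> str:
    """Write raw image bytes to a temporary file and return its path."""
    with tempfile.NamedTemporaryFile(suffix=suffix, delete=False) as f:
        f.write(data)
        return f.name

# In the bot this would be used roughly as follows (not executed here,
# since it needs network access and a logged-in client):
#   from urllib.request import urlopen
#   data = urlopen('https://i.imgur.com/ZFMaW5h.png').read()
#   path = save_image_bytes(data)
#   MEDIA_IDS = [client.upload_media(path, 0)]

# Quick local check of the helper with fake bytes:
path = save_image_bytes(b"\x89PNG fake bytes")
assert Path(path).read_bytes() == b"\x89PNG fake bytes"
Path(path).unlink()
```

This sidesteps both `open()`-on-a-URL attempts in the question, which fail for the same reason: neither `open` nor `os.path.getsize` understands URLs.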
<python><twitter><twitterapi-python>
2024-03-12 10:58:19
1
827
ten
78,146,106
6,915,206
Django data getting saved in wrong model: why is Model B/C data being saved in Model A?
<p>I have 3 Django models: <code>CustomerDetail</code>, <code>CarrierForm</code> and <code>InfluencerModel</code>. When I try to save data in <code>CarrierForm</code> or <code>InfluencerModel</code> through different pages' forms, it gets saved in the <code>CustomerDetail</code> model. Why is this happening, and what am I doing wrong?</p> <p>Here are the models:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>class CustomerDetail(models.Model): full_name = models.CharField(max_length=255, null=False, blank=False) email = models.EmailField(max_length=255, null=False, blank=False) contact_number = models.CharField(max_length=10, null=False, blank=False) message = models.TextField(null=False, blank=False) visited_on = models.DateTimeField(auto_now_add=True) def __str__(self): return self.email class CarrierForm(models.Model): full_name = models.CharField(max_length=255, null=False, blank=False) email = models.EmailField(max_length=255, null=False, blank=False) contact_number = models.CharField(max_length=10, null=False, blank=False) upload_resume = models.FileField(null=False, blank=False) message = models.TextField(null=True, blank=True) visited_on = models.DateTimeField(auto_now_add=True) def __str__(self): return self.email class InfluencerModel(models.Model): full_name = models.CharField(max_length=255, null=False, blank=False) email = models.EmailField(max_length=255, null=False, blank=False) contact_number = models.CharField(max_length=10, null=False, blank=False) instagram_id = models.CharField(max_length=50, null=False, blank=False) message = models.TextField(null=True, blank=True) visited_on = models.DateTimeField(auto_now_add=True) def __str__(self): return self.email def get_absolute_url(self): return reverse("influencers", kwargs={'slug': self.slug})</code></pre> </div> </div> </p> <p>Rendering the forms like
this</p> <pre><code>&lt;form action=&quot;{% url 'home' %}&quot; role=&quot;form&quot; class=&quot;php-email-form&quot; method=&quot;post&quot;&gt; {% csrf_token %} &lt;div class=&quot;row&quot;&gt; {{ form.as_table }} &lt;div class=&quot;col-md-6 form-group&quot;&gt; {{ form.full_name.errors }} {{form.full_name|as_crispy_field}} &lt;/div&gt; &lt;div class=&quot;col-md-6 form-group mt-3 mt-md-0&quot;&gt; {{ form.email.errors }} {{form.email|as_crispy_field}} &lt;!-- &lt;input type=&quot;email&quot; class=&quot;form-control&quot; name=&quot;email&quot; id=&quot;email&quot; placeholder=&quot;Your Email&quot; required&gt;--&gt; &lt;/div&gt; &lt;div class=&quot;form-group col-md-6&quot;&gt; {{ form.contact_number.errors }} {{form.contact_number|as_crispy_field}} &lt;!-- &lt;input type=&quot;text&quot; class=&quot;form-control&quot; name=&quot;subject&quot; id=&quot;subject&quot; placeholder=&quot;Subject&quot; required&gt;--&gt; &lt;/div&gt; &lt;div class=&quot;form-group col-md-6&quot;&gt; {{ form.instagram_id.errors }} {{form.instagram_id|as_crispy_field}} &lt;!-- &lt;input type=&quot;text&quot; class=&quot;form-control&quot; name=&quot;subject&quot; id=&quot;subject&quot; placeholder=&quot;Subject&quot; required&gt;--&gt; &lt;/div&gt; &lt;/div&gt; &lt;div class=&quot;form-group mt-3&quot; rows=&quot;7&quot;&gt; {{ form.message.errors }} {{form.message|as_crispy_field}} &lt;!-- &lt;textarea class=&quot;form-control&quot; name=&quot;message&quot; rows=&quot;7&quot; placeholder=&quot;Message&quot; required&gt;&lt;/textarea&gt;--&gt; &lt;/div&gt; &lt;!-- &lt;div class=&quot;my-3&quot;&gt;--&gt; &lt;!-- &lt;div class=&quot;loading&quot;&gt;Loading&lt;/div&gt;--&gt; &lt;!-- &lt;div class=&quot;error-message&quot;&gt;&lt;/div&gt;--&gt; &lt;!-- &lt;div class=&quot;sent-message&quot;&gt;Your message has been sent. 
Thank you!&lt;/div&gt;--&gt; &lt;!-- &lt;/div&gt;--&gt; &lt;div class=&quot;text-center&quot;&gt; &lt;button type=&quot;submit&quot; class=&quot;btn btn-outline-secondary&quot; style=&quot;background-color:#FF512F; color: white&quot;&gt;Send Message&lt;/button&gt; &lt;/div&gt; &lt;!-- &lt;div class=&quot;text-center&quot;&gt;&lt;button type=&quot;submit&quot;&gt;Send Message&lt;/button&gt;&lt;/div&gt;--&gt; &lt;/form&gt; </code></pre> <p>Urls</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>urlpatterns = [ path('admin/', admin.site.urls), path('', HomeView.as_view(), name='home'), path('influencers/', InfluencersPageView.as_view(), name='influencers'), path('carrier/', CarrierFormView.as_view(), name='carrier'), ]</code></pre> </div> </div> </p>
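A likely cause, given the template above: every form posts to <code>action=&quot;{% url 'home' %}&quot;</code>, so no matter which page rendered it, the submission is handled by <code>HomeView</code>, which presumably saves a <code>CustomerDetail</code>. A sketch of the fix (URL names taken from the question's <code>urlpatterns</code>) is to point each page's form at its own view:

```html
<!-- influencers page: post back to the influencers view -->
<form action="{% url 'influencers' %}" role="form" class="php-email-form" method="post">

<!-- carrier page: post back to the carrier view
     (enctype is needed for the resume FileField upload) -->
<form action="{% url 'carrier' %}" role="form" class="php-email-form"
      method="post" enctype="multipart/form-data">
```

With each form targeting its own URL, each view saves its own model.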
<python><html><django><django-models><django-forms>
2024-03-12 09:52:40
1
563
Rahul Verma
78,145,973
4,095,235
Bug in large sparse CSR binary matrices multiplication result
<p>This boggles my mind. Is this a known bug, or am I missing something? If it is a bug, is there a way to circumvent it?</p> <hr /> <p>Suppose I have a relatively small binary (0/1) n x q <code>scipy.sparse.csr_matrix</code>, as in:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from scipy import sparse def get_dummies(vec, vec_max): vec_size = vec.size Z = sparse.csr_matrix((np.ones(vec_size), (np.arange(vec_size), vec)), shape=(vec_size, vec_max), dtype=np.uint8) return Z q = 100 ns = np.round(np.random.random(q)*100).astype(np.int16) Z_idx = np.repeat(np.arange(q), ns) Z = get_dummies(Z_idx, q) Z </code></pre> <pre><code>&lt;5171x100 sparse matrix of type '&lt;class 'numpy.uint8'&gt;' with 5171 stored elements in Compressed Sparse Row format&gt; </code></pre> <p>Here <code>Z</code> is a standard dummy variables matrix, with n=5171 observations and q=100 variables:</p> <pre class="lang-py prettyprint-override"><code>Z[:5, :5].toarray() </code></pre> <pre><code>array([[1, 0, 0, 0, 0], [1, 0, 0, 0, 0], [1, 0, 0, 0, 0], [1, 0, 0, 0, 0], [1, 0, 0, 0, 0]], dtype=uint8) </code></pre> <p>E.g., if the first 5 variables have...</p> <pre class="lang-py prettyprint-override"><code>ns[:5] </code></pre> <pre><code>array([21, 22, 37, 24, 99], dtype=int16) </code></pre> <p>frequencies, we would also see this in <code>Z</code>'s column sums:</p> <pre class="lang-py prettyprint-override"><code>Z[:, :5].sum(axis=0) </code></pre> <pre><code>matrix([[21, 22, 37, 24, 99]], dtype=uint64) </code></pre> <p>Now, as expected, if I multiply <code>Z.T @ Z</code> I should get a q x q diagonal matrix, with the frequencies of the q variables on the diagonal:</p> <pre class="lang-py prettyprint-override"><code>print((Z.T @ Z).shape) print((Z.T @ Z)[:5, :5].toarray()) </code></pre> <pre><code>(100, 100) [[21 0 0 0 0] [ 0 22 0 0 0] [ 0 0 37 0 0] [ 0 0 0 24 0] [ 0 0 0 0 99]] </code></pre> <p><strong>Now for the bug</strong>: If n is really large (for me it happens around n = 100K
already):</p> <pre class="lang-py prettyprint-override"><code>q = 1000 ns = np.round(np.random.random(q)*1000).astype(np.int16) Z_idx = np.repeat(np.arange(q), ns) Z = get_dummies(Z_idx, q) Z </code></pre> <pre><code>&lt;495509x1000 sparse matrix of type '&lt;class 'numpy.uint8'&gt;' with 495509 stored elements in Compressed Sparse Row format&gt; </code></pre> <p>The frequencies are large, the sum of <code>Z</code>'s columns is as expected:</p> <pre class="lang-py prettyprint-override"><code>print(ns[:5]) Z[:, :5].sum(axis=0) </code></pre> <pre><code>array([485, 756, 380, 87, 454], dtype=int16) matrix([[485, 756, 380, 87, 454]], dtype=uint64) </code></pre> <p>But the <code>Z.T @ Z</code> is messed up! In the sense that I'm not getting the right frequencies on the diagonal:</p> <pre class="lang-py prettyprint-override"><code>print((Z.T @ Z).shape) print((Z.T @ Z)[:5, :5].toarray()) </code></pre> <pre><code>(1000, 1000) [[229 0 0 0 0] [ 0 244 0 0 0] [ 0 0 124 0 0] [ 0 0 0 87 0] [ 0 0 0 0 198]] </code></pre> <p>Amazingly, there is some relation to the true frequencies:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt plt.scatter(ns, (Z.T @ Z).diagonal()) plt.xlabel('real frequencies') plt.ylabel('values on ZZ diagonal') plt.show() </code></pre> <p><a href="https://i.sstatic.net/LygSX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LygSX.png" alt="enter image description here" /></a></p> <p>What is going on?</p> <p>I'm using standard colab:</p> <pre class="lang-py prettyprint-override"><code>import scipy as sc print(np.__version__) print(sc.__version__) </code></pre> <pre><code>1.25.2 1.11.4 </code></pre> <p>PS: Obviously if I wanted just <code>Z.T @ Z</code>'s output matrix there are easier ways to get it, this is a very simplified reduced problem, thanks.</p>
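This looks less like a scipy bug than like integer overflow: `get_dummies` builds `Z` with `dtype=np.uint8`, and the sparse product appears to accumulate in that dtype, so counts above 255 wrap around. The check below, plain arithmetic on the numbers quoted in the question, shows that each "wrong" diagonal value equals the true frequency modulo 256:

```python
# True frequencies (ns[:5]) vs. the diagonal of Z.T @ Z from the question.
true_freqs = [485, 756, 380, 87, 454]
observed = [229, 244, 124, 87, 198]

# uint8 wraparound: every observed value is the true count mod 2**8.
for t, o in zip(true_freqs, observed):
    assert t % 256 == o
print("diagonal matches true frequencies mod 256")
```

If that is what's happening, the fix would be a wider dtype, e.g. `dtype=np.int64` inside `get_dummies`, or widening before multiplying: `Z.astype(np.int64).T @ Z.astype(np.int64)`.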
<python><scipy><sparse-matrix>
2024-03-12 09:31:41
1
3,709
Giora Simchoni
78,145,819
9,785,875
What's the best practice of configuring logging in Python?
<p>It seems that <code>logging</code> is not that easy to use in Python. For example, I need to use <code>logging</code> for logs, and I also need to import other stuff from some other packages.</p> <p>If I write my code like below:</p> <pre><code>import logging import foo from bar import baz logging.basicConfig(...) </code></pre> <p>the logs will not output as I configured in <code>basicConfig()</code>, they will be output in the default format. So the <code>basicConfig()</code> doesn't work in this case.</p> <p>If I write my code like below:</p> <pre><code>import logging logging.basicConfig(...) import foo from bar import baz </code></pre> <p>The logs will be output as I configured in <code>basicConfig()</code>, but the code style does not obey PEP8, because <code>logging.basicConfig()</code> appears among the import statements.</p> <p>Is there any solution for it so that I can have it both ways?</p>
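One way to have it both ways (assuming Python 3.8+) is `basicConfig(..., force=True)`. `basicConfig` is normally a no-op once the root logger already has handlers, which can happen when an imported package configures logging at import time; `force=True` removes any existing root handlers first, so the call still takes effect after the imports:

```python
import logging

# Imports that might configure logging at import time would go here:
# import foo
# from bar import baz

# force=True (Python 3.8+) tears down any existing root handlers before
# applying this configuration, so it works even if an imported module
# already called basicConfig or attached a handler.
logging.basicConfig(
    level=logging.INFO,
    format="%(levelname)s:%(name)s:%(message)s",
    force=True,
)

assert logging.getLogger().level == logging.INFO
```

This keeps the imports at the top, as PEP 8 expects, while still getting the configured format.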
<python><logging><pep8>
2024-03-12 09:07:18
0
644
Hank Chow
78,145,755
5,767,535
Unexplained behaviour from vectorized partial function using `numpy` and `functools`
<p>I am trying to vectorize a partial function which takes two arguments, both of them lists, then does something to the pairwise elements from the lists (using <code>zip</code>). However, I am finding some unexpected behaviour.</p> <p>Consider the following code:</p> <pre class="lang-py prettyprint-override"><code>import functools import numpy as np def f(l1,l2): l1 = l1 if isinstance(l1,list) or isinstance(l1,np.ndarray) else [l1] l2 = l2 if isinstance(l2,list) or isinstance(l2,np.ndarray) else [l2] for e1,e2 in zip(l1,l2): print(e1,e2) f(['a','b'],[1,2]) fp = functools.partial(f,l1=['a','b']) fp(l2=[1,2]) fv = np.vectorize(fp) fv(l2=np.array([1,2])) </code></pre> <p>The output from the Jupyter notebook is as follows:</p> <pre class="lang-py prettyprint-override"><code>a 1 b 2 a 1 b 2 a 1 a 1 a 2 array([None, None], dtype=object) </code></pre> <p>I have two questions:</p> <ul> <li>First, the type check at the beginning of <code>f</code> is necessary because <code>np.vectorize</code> seems to automatically fully flatten any input (I get a <code>int32 not iterable</code> exception otherwise). <strong>Is there a way to avoid this?</strong></li> <li>Secondly, when the partial function <code>fp</code> is vectorized, clearly the output is not the expected one - I am not sure I understand what NumPy is doing here, including the final empty array output. No matter how much I nest <code>[1,2]</code> within a list, tuple or array the output seems to be always the same. <strong>How can I fix my code so that the vectorized function <code>fv</code> behave as expected - that is the same as <code>fp</code>?</strong></li> </ul> <p><strong>Edit</strong><br /> Another try I have done is:</p> <pre class="lang-py prettyprint-override"><code>fpv(l2=[np.array([1,2]), np.array([3,4])]) </code></pre> <p>whose output is:</p> <pre class="lang-py prettyprint-override"><code>a 1 a 1 a 2 a 3 a 4 </code></pre>
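What seems to be happening: `np.vectorize` calls the wrapped function once per broadcast *element*, so `f` receives scalars, never lists (hence the type check firing), and the partial's fixed `l1=['a','b']` is broadcast against `l2`, which produces the `a 1 / a 1 / a 2` output. A sketch of the element-wise style, where the pairing is left to broadcasting instead of `zip`:

```python
import numpy as np

def g(e1, e2):
    # vectorize hands us one element from each input per call
    return (e1, e2)

gv = np.vectorize(g, otypes=[object])
out = gv(np.array(['a', 'b']), np.array([1, 2]))
pairs = [tuple(p) for p in out]
assert pairs == [('a', 1), ('b', 2)]
```

If a fixed argument must stay out of broadcasting entirely, `np.vectorize` also accepts an `excluded` set of argument names; and for functions that genuinely operate on whole arrays, the `signature` parameter is the generalized-ufunc route.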
<python><numpy><vectorization><functools>
2024-03-12 08:56:14
2
2,343
Daneel Olivaw
78,145,667
2,508,672
Python comparing data in Pandas format mismatch
<p>I have the input below with data (test.csv)</p> <pre><code>date,time 26-02-2024,8:01 26-02-2024,8:01 26-02-2024,7:59 26-02-2024,7:59 26-02-2024,7:56 26-02-2024,7:55 26-02-2024,7:55 26-02-2024,7:53 26-02-2024,7:52 26-02-2024,7:52 </code></pre> <p>Now I have the code below</p> <pre><code>import pandas as pd from datetime import datetime df = pd.read_csv('test.csv') df.columns = df.columns.str.strip() df = df[(datetime.strptime(str(df['date']),'%Y-%m-%d') &lt;= datetime.now())] </code></pre> <p>I get the error</p> <blockquote> <p>ValueError: time data '0 26-02-2024\n1 26-02-2024\n2<br /> 26-02-2024\n3 26-02-2024\n4 26-02-2024\n5 26-02-2024\n6<br /> 26-02-2024\n7 26-02-2024\n8 26-02-2024\n9 26-02-2024\nName: date, dtype: object' does not match format '%Y-%m-%d'</p> </blockquote> <p>When I try</p> <pre><code>df['date'] = pd.to_datetime(df['date'],format='%Y-%m-%d') </code></pre> <p>I get the following error:</p> <blockquote> <p>ValueError: time data &quot;26-02-2024&quot; doesn't match format &quot;%Y-%m-%d&quot;, at position 0. You might want to try:</p> </blockquote> <p>When I try</p> <pre><code>df['date'] = pd.to_datetime(df['date'],format='%Y-%m-%d',errors='coerce') </code></pre> <p>I get nulls in the date column of the output, which suggests some error in the date column values, but I could not figure it out.</p> <p>I am unable to find why this is happening.</p> <p><strong>Update</strong></p> <p>Thanks to Panda Kim: changing the date format to %d-%m-%Y works, but when I apply the same thing in a condition it throws an error</p> <pre><code>df = df[ pd.to_datetime(start_date,format='%d-%m-%Y') &lt;= pd.to_datetime(df['date'], format='%d-%m-%Y') &lt;= pd.to_datetime(end_date,format='%d-%m-%Y')] </code></pre> <p>which now throws:</p> <p>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p> <p>Any guidance would be welcome.</p>
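On the update's error: `a <= x <= b` is Python's chained comparison, which needs a single truth value from the middle Series, hence the "truth value of a Series is ambiguous" message. A sketch combining the two sides element-wise instead (dates and format follow the question's day-first data; start/end values are made up here):

```python
import pandas as pd

df = pd.DataFrame({"date": ["26-02-2024", "27-02-2024"]})
d = pd.to_datetime(df["date"], format="%d-%m-%Y")

start = pd.to_datetime("01-02-2024", format="%d-%m-%Y")
end = pd.to_datetime("29-02-2024", format="%d-%m-%Y")

mask = (start <= d) & (d <= end)   # element-wise AND instead of chaining
between = d.between(start, end)    # equivalent shortcut

assert mask.all()
assert between.all()
```

Filtering is then `df[mask]` (or `df[d.between(start, end)]`).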
<python><pandas>
2024-03-12 08:39:30
1
4,608
Md. Parvez Alam
78,145,559
19,565,276
How to combine members of flags when values are tuples
<p>I have an enum Flag defined like this:</p> <pre><code>@unique @verify(CONTINUOUS) class DaysEnum(IntFlag): ONE = (1, &quot;One Day&quot;) TWO = (2, &quot;Two Days&quot;) THREE = (3, &quot;Three Days&quot;) FOUR = (4, &quot;Four Days&quot;) FIVE = (5, &quot;Five Days&quot;) SIX = (6, &quot;Six Days&quot;) SEVEN = (7, &quot;Seven Days&quot;) EVEN_DAYS = (TWO | FOUR | SIX, &quot;Even Days&quot;) def __init__(self, value, label) -&gt; None: self.label = label def __new__(cls, value: int, label: str): obj = int.__new__(cls, value) obj._value_ = value obj.label = label return obj </code></pre> <p>According to <a href="https://docs.python.org/3/howto/enum.html#combining-members-of-flag" rel="nofollow noreferrer">the Python docs</a> I can combine members of enums when inheriting from the class <code>enum.Flag</code>. When running this code, I get the error <code>TypeError: unsupported operand type(s) for |: 'tuple' and 'tuple'.</code> The syntax of how I defined the member <code>EVEN_DAYS</code> seems to be wrong.</p> <p>What is the proper syntax to combine the members of <code>EVEN_DAYS</code> considering that the values of <code>DaysEnum</code> are tuples?</p>
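The direct cause of the `TypeError`: inside an enum class body, the earlier names are still the plain tuples they were assigned, not members yet, so `TWO | FOUR | SIX` tries `tuple | tuple`. Combining the raw integers works instead. Two caveats worth noting: with the question's values (1..7), `2 | 4 | 6` is bitwise-or and equals just `6`, so it would collide with `SEVEN`-style members; flag members that are meant to be combined normally use powers of two. A sketch under that assumption:

```python
from enum import IntFlag

class Days(IntFlag):
    MON = (1, "Monday")
    TUE = (2, "Tuesday")
    WED = (4, "Wednesday")
    MON_TUE = (1 | 2, "Monday and Tuesday")  # combine the raw ints

    def __new__(cls, value, label):
        obj = int.__new__(cls, value)
        obj._value_ = value
        obj.label = label
        return obj

assert Days.MON_TUE.value == 3
assert (Days.MON | Days.TUE) == Days.MON_TUE
```

With power-of-two values, `Days.MON | Days.TUE` resolves to the named composite member, so the label is available on the combination as well.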
<python><enums>
2024-03-12 08:18:59
1
311
Lucien Chardon
78,145,511
15,915,172
How to retrain the existing vertex AI model using more data? Is there any process with python?
<p>I am learning Google's Vertex AI and AutoML with Python. I have created a single-label dataset, pipelines and an endpoint, and deployed the model using Google's UI.</p> <p>Now that I have more data, how do I append it to the existing dataset, or retrain the existing model?</p>
<python><google-cloud-vertex-ai><automl><google-prediction>
2024-03-12 08:12:16
1
784
Rakesh Saini
78,145,418
13,734,451
Looping through list of lists then create a dataframe
<p>I have a list containing sublists. I want to loop through all the sublist and extract the data, save them in individual lists then create a dataframe. When I try doing so the values get mixed up....</p> <pre><code> fulllist = [ [ {'Variable': 'First_Name', 'Answer': 'Anne'}, {'Variable': 'Middle_Name', 'Answer': 'Wanjohi'}, {'Variable': 'Age', 'Answer': '50'}, {'Variable': 'Country', 'Answer': 'Uganda'}], [ {'Variable': 'First_Name', 'Answer': 'John'}, {'Variable': 'Middle_Name', 'Answer': 'Wagwara'}, {'Variable': 'Country', 'Answer': 'Kenya'} ], [ {'Variable': 'First_Name', 'Answer': 'Jeff'}, {'Variable': 'Middle_Name', 'Answer': 'Simboyi'}, {'Variable': 'Age', 'Answer': '20'}, {'Variable': 'Country', 'Answer': 'UK'}], [ {'Variable': 'First_Name', 'Answer': 'Ken'}, {'Variable': 'Middle_Name', 'Answer': 'Kumbua'}, {'Variable': 'Country', 'Answer': 'Tanzania'} ] ] First_Name = [] Middle_Name = [] Age = [] Country = [] for i in range(len(fulllist)): try: First_Name.append(fulllist[i][0]['Answer']) Middle_Name.append(fulllist[i][1]['Answer']) Age.append(fulllist[i][2]['Answer']) Country.append(fulllist[i][3]['Answer']) except IndexError: print(i) print(Age) print(Country) </code></pre>
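The positional indexing (`fulllist[i][2]['Answer']`) is what mixes the values up: as soon as one record is missing a field (e.g. no `Age`), `Country` lands in the `Age` slot or an `IndexError` silently drops the row. Building one dict per sublist keyed by `'Variable'` keeps every value under the right column, and pandas fills the gaps with NaN. A sketch with a trimmed version of the question's data:

```python
import pandas as pd

fulllist = [
    [{'Variable': 'First_Name', 'Answer': 'Anne'},
     {'Variable': 'Age', 'Answer': '50'}],
    [{'Variable': 'First_Name', 'Answer': 'John'}],   # no Age here
]

# One dict per person: {'First_Name': 'Anne', 'Age': '50'}, ...
records = [{d['Variable']: d['Answer'] for d in sub} for sub in fulllist]
df = pd.DataFrame(records)

assert list(df['First_Name']) == ['Anne', 'John']
assert df.loc[0, 'Age'] == '50'
assert pd.isna(df.loc[1, 'Age'])   # missing field becomes NaN, not a shifted value
```

This replaces the four parallel lists and the `try/except IndexError` entirely.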
<python><pandas><dataframe>
2024-03-12 07:56:28
3
1,516
Moses
78,145,023
9,394,465
file with quotes in any of the fields to be retained as is in pandas
<p>I have tried many options but am not able to retain the quotes present in the input file in my output file.</p> <p>Reproducible code:</p> <pre><code>import pandas as pd from io import StringIO # Input file csv_data = '''A,B,C,D,E 234,mno,C22,U, 567,pqr,&quot;C3&quot;&quot;&quot;,U,5555 999,abc,&quot;C99&quot;,D,9999 ''' # Load CSV data into dataframes df = pd.read_csv(StringIO(csv_data), header=0, dtype=str, keep_default_na=False, engine='python', sep=',') df.to_csv('output.txt', sep=',', index=False, header=True) </code></pre> <p>Now, the output.txt looks like:</p> <pre><code>A,B,C,D,E 234,mno,C22,U, 567,pqr,&quot;C3&quot;&quot;&quot;,U,5555 999,abc,C99,D,9999 </code></pre> <p>Expected output:</p> <pre><code>A,B,C,D,E 234,mno,C22,U, 567,pqr,&quot;C3&quot;&quot;&quot;,U,5555 999,abc,&quot;C99&quot;,D,9999 </code></pre> <p>I just don't want to lose anything present in my input data while saving (including the quotes).</p>
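A note on why this is hard: CSV parsers unescape quotes when reading, so the DataFrame only holds the field *values*; whether a field was quoted in the input is not stored anywhere and cannot be restored on write. `"C3"""` survives only because its value still contains a literal `"` that forces re-quoting. A small stdlib demonstration of why `"C99"` comes back bare:

```python
import csv
import io

# "C99" was quoted in the input, but after parsing only the value remains.
rows = [["A", "B"], ["C99", "D"]]

buf = io.StringIO()
# QUOTE_MINIMAL (the csv default, and effectively what to_csv does) only
# re-quotes fields that need it, so plain C99 comes back unquoted:
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerows(rows)
assert buf.getvalue().splitlines() == ["A,B", "C99,D"]
```

If the file must survive byte-for-byte, one option is to pass it through as plain text and parse a separate copy for analysis; `quoting=csv.QUOTE_ALL` in `to_csv` would instead quote every field, which also differs from the input.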
<python><pandas>
2024-03-12 06:27:37
1
513
SpaceyBot
78,144,868
13,801,302
How to add langchain docs to LCEL chain?
<p>I want to create a chain in langchain. And get the following error in short</p> <pre><code>TypeError: Expected a Runnable, callable or dict.Instead got an unsupported type: &lt;class 'str'&gt; </code></pre> <p>You find the complete error message at the end.</p> <p><strong>What I want to do?</strong></p> <p>First I pass my query through several retrievers and then use RRF to rerank. The result is <code>result_context</code>. It's a list of tuples <code>(&lt;document&gt;,&lt;score&gt;)</code>. In the code below, I define my prompt template and join my document's page_content to one string. In the chain I want to pass my joined context and the query to the prompt. Further the final prompt should pass to the generation pipeline <code>llm</code>.</p> <pre class="lang-py prettyprint-override"><code>if self.prompt_template == None: template = &quot;&quot;&quot;Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. Use three sentences maximum and keep the answer as concise as possible. Always say &quot;thanks for asking!&quot; at the end of the answer. {context} Question: {question} Helpful Answer:&quot;&quot;&quot; self.prompt_template = PromptTemplate.from_template(template) prompt = ChatPromptTemplate.from_template(template) context = &quot;\n&quot;.join([doc.page_content for doc, score in result_context]) chain = ( {&quot;context&quot;:context, &quot;question&quot;: RunnablePassthrough()} |prompt |llm |StrOutParser() ) inference = chain.invoke(query) print(str(inference)) </code></pre> <p>Now I get the following error and I don't know how to work around. 
I hope you can help me to fix the problem, that is possible to invoke the query to the chain.</p> <p>Thanks in advance.</p> <pre class="lang-py prettyprint-override"><code> 169 prompt = ChatPromptTemplate.from_template(template) 170 context = &quot;\n&quot;.join([doc.page_content for doc, score in result_context]) 171 chain = ( --&gt; 172 {&quot;context&quot;:context, &quot;question&quot;: RunnablePassthrough()} 173 |prompt 174 |llm 175 |StrOutParser() 176 ) 178 inference = chain.invoke(final_prompt) 179 print(str(inference)) File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:433, in Runnable.__ror__(self, other) 423 def __ror__( 424 self, 425 other: Union[ (...) 430 ], 431 ) -&gt; RunnableSerializable[Other, Output]: 432 &quot;&quot;&quot;Compose this runnable with another object to create a RunnableSequence.&quot;&quot;&quot; --&gt; 433 return RunnableSequence(coerce_to_runnable(other), self) File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:4373, in coerce_to_runnable(thing) 4371 return RunnableLambda(cast(Callable[[Input], Output], thing)) 4372 elif isinstance(thing, dict): -&gt; 4373 return cast(Runnable[Input, Output], RunnableParallel(thing)) 4374 else: 4375 raise TypeError( 4376 f&quot;Expected a Runnable, callable or dict.&quot; 4377 f&quot;Instead got an unsupported type: {type(thing)}&quot; 4378 ) File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:2578, in RunnableParallel.__init__(self, _RunnableParallel__steps, **kwargs) 2575 merged = {**__steps} if __steps is not None else {} 2576 merged.update(kwargs) 2577 super().__init__( -&gt; 2578 steps={key: coerce_to_runnable(r) for key, r in merged.items()} 2579 ) File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:2578, in &lt;dictcomp&gt;(.0) 2575 merged = {**__steps} if __steps is not None else {} 2576 merged.update(kwargs) 2577 super().__init__( -&gt; 2578 steps={key: coerce_to_runnable(r) for key, r 
in merged.items()} 2579 ) File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:4375, in coerce_to_runnable(thing) 4373 return cast(Runnable[Input, Output], RunnableParallel(thing)) 4374 else: -&gt; 4375 raise TypeError( 4376 f&quot;Expected a Runnable, callable or dict.&quot; 4377 f&quot;Instead got an unsupported type: {type(thing)}&quot; 4378 ) TypeError: Expected a Runnable, callable or dict.Instead got an unsupported type: &lt;class 'str'&gt; </code></pre>
<python><langchain><information-retrieval><data-retrieval>
2024-03-12 05:45:27
1
621
Christian01
78,144,776
10,483,893
python: With GIL, is it still possible to have deadlocks with threading? (Not multi-processing)
<p>python: With GIL, is it still possible to have deadlocks with threading? (Not multi-processing)</p> <p>Below code will produce deadlock situation (Updated per Amadan, added sleep):</p> <pre><code>import threading import time # Two resources resource1 = threading.Lock() resource2 = threading.Lock() def function1(): with resource1: print(&quot;Thread 1 acquired resource 1&quot;) time.sleep(3) with resource2: print(&quot;Thread 1 acquired resource 2&quot;) time.sleep(3) def function2(): with resource2: print(&quot;Thread 2 acquired resource 2&quot;) time.sleep(3) with resource1: print(&quot;Thread 2 acquired resource 1&quot;) time.sleep(3) # Create two threads thread1 = threading.Thread(target=function1) thread2 = threading.Thread(target=function2) # Start the threads thread1.start() thread2.start() # Wait for both threads to finish thread1.join() thread2.join() print(&quot;Both threads finished execution&quot;) </code></pre> <p>But conceptually, even with GIL, the two threads can still trying to acquire a lock/resource that's already been acquired by the other thread.</p>
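Yes: the GIL only serialises bytecode execution, and it is released while a thread blocks waiting on a lock, so lock-ordering deadlocks like the one above are entirely possible. The usual remedy, sketched below, is to make every thread acquire the locks in one global order:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def worker(name):
    # Every thread takes lock_a first, then lock_b -- with a single global
    # ordering, no thread can hold one lock while waiting on the other
    # in the opposite order, so the cycle that causes deadlock cannot form.
    with lock_a:
        with lock_b:
            results.append(name)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join(timeout=5)

assert sorted(results) == [0, 1]   # both threads completed, no deadlock
```

Compare this with the question's code, where `function2` takes the locks in the opposite order to `function1`; the `sleep` calls simply widen the window in which each thread grabs its first lock before the other is free.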
<python><multithreading><gil>
2024-03-12 05:13:40
1
1,404
user3761555
78,144,558
6,803,114
Open folder button in streamlit
<p>I have built a streamlit application which reads a file and processes it and stores it in a file folder which is already defined.</p> <p>Now I want to create a button in streamlit where after file is processed it should display a button, which on click opens the respective folder. Screenshot for reference: <a href="https://i.sstatic.net/Met83.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Met83.png" alt="enter image description here" /></a></p> <p>I tried using <code>st.markdown</code> and <code>st.link_button</code> but it is not opening the folder. Here is what I tried:</p> <pre><code>###method 1: st.link_button(&quot;Go to Folder&quot;, r&quot;C:\path\to\Output_file&quot;) ###method 2: url = r&quot;C:\path\to\Output_file&quot; st.markdown(f''' &lt;a href={url}&gt;&lt;button style=&quot;background-color:GreenYellow;&quot;&gt;Go To Folder&lt;/button&gt;&lt;/a&gt; ''',unsafe_allow_html=True) </code></pre> <p>It isn't working. Is there any workaround for this?</p> <p>I saw a few articles where people are suggesting to use Tkinter, but how to implement that to open folder. or are there any other suitable options.</p> <p>if not streamlit then is there any other application which can do it?</p>
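A link or markdown button cannot do this: browsers refuse `file://` navigation from a served page, which is why both attempts above fail. When the Streamlit app runs on the same machine as the user, one workaround is to launch the OS file manager server-side from a regular `st.button`. The helper below only builds the command (the folder path is a placeholder), so the platform logic can be checked without opening anything:

```python
import sys

def open_folder_command(path, platform=sys.platform):
    """Return the OS command that opens `path` in the file manager."""
    if platform.startswith("win"):
        return ["explorer", path]
    if platform == "darwin":
        return ["open", path]
    return ["xdg-open", path]          # most Linux desktops

# In the app this would be wired up roughly as:
#   import subprocess, streamlit as st
#   if st.button("Go to Folder"):
#       subprocess.Popen(open_folder_command(r"C:\path\to\Output_file"))

assert open_folder_command("/tmp", "linux") == ["xdg-open", "/tmp"]
assert open_folder_command("C:\\out", "win32") == ["explorer", "C:\\out"]
```

Note this opens the folder on the *server*; for a remotely deployed app there is no way to open a folder on the visitor's machine, and offering the files via `st.download_button` is the usual fallback.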
<python><python-3.x><streamlit>
2024-03-12 03:47:58
0
7,676
Shubham R
78,144,557
303,863
Efficient modular arithmetic in Z3
<p>I want to work with integers modulo 3 under addition in BitVec, so basically (a+b)%3. Note that BitVec is much faster than Int, so I want to make sure all operations stay inside BitVec.</p> <p>I would need to create bit vectors of size 3. This is because computing a+b needs 3 bits, although after taking the modulus the result fits in 2 bits.</p> <p>For example, I would need to do the following.</p> <pre><code>a = BitVecVal(3, 3) b = BitVecVal(3, 3) simplify(URem(a+b,3)) </code></pre> <p>However, it would be nice if I only had to use BitVec(2) throughout.</p> <p>Is there a way to do this?</p>
<python><z3><z3py>
2024-03-12 03:47:22
1
2,206
Chao Xu
78,144,401
2,694,184
How could I handle superclass and subclass cases
<p><strong>Problem statement:</strong></p> <p>I have 100s of py files which defines pydantic schema. Suddenly I need to treat empty string as None. I am expecting minimal changes across all the files.</p> <p><strong>Approach I implemented:</strong></p> <p>I created an inherited class such as</p> <pre><code>class ConstrainedStr(str): @classmethod def __get_validators__(cls): yield cls.validate @classmethod def validate(cls, v: str, field: Field) -&gt; Optional[str]: v = v.strip() if v == &quot;&quot;: return None return v </code></pre> <p>Then, in all the py files I just added an import statement</p> <pre><code>from package.module import ConstrainedStr as str </code></pre> <p>Luckily, It worked at the first sight. But I ended up with an issue where I have a function</p> <p><strong>User file:</strong></p> <pre><code>from package.module import ConstrainedStr as str def validate(cls, value:str): //sample value 'asd' if isinstance(value, str): validation_rule() </code></pre> <p>Here, this conditional statement failed.</p> <p><strong>Question</strong></p> <ol> <li>How could I avoid major changes to achieve this? Is there a way?</li> <li>Why that isinstance check failed. Constrainedstr is also a str right? Is my understanding wrong?</li> </ol> <p><strong>When I digged into this further,</strong> just to understand the behaviour for the question #2, I found the below.</p> <pre><code>type(ConstrainedStr) </code></pre> <p>return <code>type&lt;class&gt;</code>.</p> <pre><code>`type('123')` </code></pre> <p>returns <code>type&lt;str&gt;</code></p> <p>But when I check the <code>builtins</code> package, <code>str</code> is also a class. But type function returns the type for <code>str</code> as <code>str</code> but for <code>ConstrainedStr</code> as <code>class</code>. Hence, My question #2 popped up.</p> <p>Another example:</p> <pre><code>class A(int): pass isinstance(2, A) # False </code></pre> <p>This is my #2 question.</p>
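On question 2: `isinstance` only works in one direction. Every `ConstrainedStr` instance is a `str`, but a plain `str` object is not an instance of the subclass, which is exactly why `isinstance(value, str)` fails once the name `str` has been rebound to `ConstrainedStr` by the import alias. A stdlib-only demonstration of the asymmetry (sketch; the pydantic hooks are omitted):

```python
class ConstrainedStr(str):
    pass

s = ConstrainedStr("asd")
print(isinstance(s, str))                 # True: subclass instances are strs
print(isinstance("asd", ConstrainedStr))  # False: plain strs are not subclass instances
print(type(ConstrainedStr))               # <class 'type'>: classes are instances of type
print(type("asd"))                        # <class 'str'>
```

For question 1, a less invasive route than rebinding `str` is usually a shared base model or a validator applied via a common parent class, so `isinstance` checks keep their ordinary meaning.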
<python>
2024-03-12 02:42:47
1
23,098
Gibbs
78,144,376
11,188,140
Applying a specified rule to build an array
<p>Consider array <code>a</code> whose columns hold random values taken from 1, 2, 3:</p> <pre><code>a = np.array([[2, 3, 1, 3], [3, 2, 1, 3], [1, 1, 1, 2], [1, 3, 2, 3], [3, 3, 1, 3], [2, 1, 3, 2]]) </code></pre> <p>Now, consider array <code>b</code> whose first 2 columns hold the 9 possible pairs of values taken from 1, 2, 3 (the order of the pair elements is important). The 3rd column of <code>b</code> associates a non-negative integer with each pairing.</p> <pre><code>b = np.array([[1, 1, 6], [1, 2, 0], [1, 3, 9], [2, 1, 6], [2, 2, 0], [2, 3, 4], [3, 1, 1], [3, 2, 0], [3, 3, 8]]) </code></pre> <p>I need help with code that produces array <code>c</code> where vertically adjacent elements in <code>a</code> are replaced with matching values from the 3rd column of <code>b</code>. For example, the first column of 'a' moves down from 2 to 3 to 1 to 1 to 3 to 2. So, the first column of <code>c</code> would hold values 4, 1, 6, 9, 0. The same idea applies to every column of <code>a</code>. We see that pair order is important (moving from 3 to 1 produces value 1, while moving from 1 to 3 produces value 9.</p> <p>The output of this small example would be:</p> <pre><code>c = np.array([[4, 0, 6, 8], [1, 6, 6, 0], [6, 9, 0, 4], [9, 8, 6, 8], [0, 1, 9, 0]]) </code></pre> <p>Because this code will be executed a vast number of times, I'm hoping there is a speedy vectorized solution.</p>
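One vectorized approach, assuming (as here) the symbols are small positive integers: scatter column 3 of `b` into a dense `(max+1, max+1)` lookup table indexed by `(from, to)`, then index that table with the two vertically shifted views of `a` in one shot:

```python
import numpy as np

a = np.array([[2, 3, 1, 3],
              [3, 2, 1, 3],
              [1, 1, 1, 2],
              [1, 3, 2, 3],
              [3, 3, 1, 3],
              [2, 1, 3, 2]])
b = np.array([[1, 1, 6], [1, 2, 0], [1, 3, 9],
              [2, 1, 6], [2, 2, 0], [2, 3, 4],
              [3, 1, 1], [3, 2, 0], [3, 3, 8]])

# Dense table: table[from, to] -> value
table = np.zeros((b[:, :2].max() + 1,) * 2, dtype=b.dtype)
table[b[:, 0], b[:, 1]] = b[:, 2]

# Each vertically adjacent pair (a[i, j], a[i+1, j]) looked up at once
c = table[a[:-1], a[1:]]
print(c)
```

Both the table build and the fancy-indexed lookup are O(1) Python-level operations, so this scales well to large `a`.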
<python><arrays><numpy>
2024-03-12 02:34:12
1
746
user109387
78,144,199
6,654,904
Error BadGzipFile when read gz file via python gzip
<p>The source file is json and we zipped it to gz format. The file is good. I am able to open the file with notepad++</p> <pre><code>import gzip # Open the GZIP file in text mode ('rt') with gzip.open('example.gz', 'rt') as f: file_content = f.read() print(file_content) </code></pre> <p>The error I get is</p> <blockquote> <p>BadGzipFile: Not a gzipped file (b'{\n')</p> </blockquote> <p>I also try to read line by line and get the same error</p> <pre><code>import gzip with gzip.open('example.gz', 'r') as fin: for line in fin: print('got line:', line) </code></pre> <p>This is my sample json data:</p> <pre><code>{ &quot;metadata_version&quot;: 1, &quot;created&quot;: &quot;2024-01-31T16:02:11.400125+00:00&quot;, &quot;domain&quot;: { &quot;name&quot;: &quot;myname&quot;, &quot;version&quot;: 1, &quot;type&quot;: &quot;core&quot; }, &quot;id1&quot;: &quot;01HNG439A8M7395MB9CWC4XSKC&quot;, &quot;id2&quot;: { &quot;id3&quot;: &quot;efbc9315-6a27-455b-9050-02ea08eb1b69&quot;, &quot;id4&quot;: &quot;05933069-eeb5-4801-8801-fdd9819d08bf&quot;, &quot;id5&quot;: &quot;8b642da5-e954-402c-bcb9-a196d594ed62&quot; }, &quot;data&quot;: &quot;AAAAAAAA22RzW7CMBCEXyXymVQJNOHnVgGlHIoikvbQ2+IsYMnYdNemQlXfvQ4Q4MB1ZvebWftXVASGQTplzcyrWozEWub9LM8HsUyyNE5TxBjSvoyTJEuyfPUMvXUqOmKJ3x7ZTcChGBmvdUeMNQIps3mznnE&quot; } </code></pre>
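The error message is the giveaway: real gzip data always starts with the magic bytes `0x1f 0x8b`, but this file starts with `{\n`, i.e. it is plain JSON that merely carries a `.gz` name (which also explains why Notepad++ shows it as text). A defensive reader can sniff the magic bytes and fall back to a plain open (sketch, hypothetical file names):

```python
import gzip
import json
import os
import tempfile

def read_maybe_gzip(path):
    # Real gzip data always begins with the two magic bytes 0x1f 0x8b
    with open(path, "rb") as f:
        magic = f.read(2)
    opener = gzip.open if magic == b"\x1f\x8b" else open
    with opener(path, "rt") as f:
        return f.read()

# Demo: a real gzip file, and a plain JSON file misleadingly named .gz
tmp = tempfile.mkdtemp()
real, fake = os.path.join(tmp, "real.gz"), os.path.join(tmp, "fake.gz")
with gzip.open(real, "wt") as f:
    f.write('{"ok": true}')
with open(fake, "w") as f:
    f.write('{"ok": true}')

print(json.loads(read_maybe_gzip(real)))  # {'ok': True}
print(json.loads(read_maybe_gzip(fake)))  # {'ok': True} -- no BadGzipFile
```

The lasting fix, of course, is to make sure whatever produced the file actually gzip-compressed it rather than just renaming it.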
<python><json><gzip>
2024-03-12 01:16:45
1
353
Anson
78,144,080
4,555,858
Update parent class variable from child class in Python
<p>The code below has two classes, and I am trying to update the parent class variable from the <code>Child1</code> class.</p> <pre><code>class Parent: def __init__(self): self.shared_variable = &quot;I am a shared variable&quot; class Child1(Parent): def __init__(self): super().__init__() def change_variable(self, new_value): self.shared_variable = new_value class Child2(Parent): def __init__(self): super().__init__() # Example usage: child1_instance = Child1() child2_instance = Child2() print(child1_instance.shared_variable) # Initial value child1_instance.change_variable(&quot;New value from Child1&quot;) print(child2_instance.shared_variable) # Updated value accessed by Child2 </code></pre> <p>However, after updating the value of the <code>shared_variable</code> using the <code>child1_instance</code>, the output shown by <code>child2_instance</code> is still the same. How can I rectify this?</p> <p>The desired output is:</p> <pre class="lang-none prettyprint-override"><code>I am a shared variable New value from Child1 </code></pre>
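`self.shared_variable = ...` inside `__init__` creates a separate instance attribute on every object, so the two instances never share state. Storing the value on the class instead gives the desired output (a sketch; whether truly shared mutable state is a good design is a separate question):

```python
class Parent:
    shared_variable = "I am a shared variable"  # class attribute: one copy for all instances

class Child1(Parent):
    def change_variable(self, new_value):
        # Assign on the class; `self.shared_variable = ...` would instead
        # create a per-instance attribute that shadows the shared one.
        Parent.shared_variable = new_value

class Child2(Parent):
    pass

child1_instance = Child1()
child2_instance = Child2()

print(child1_instance.shared_variable)  # I am a shared variable
child1_instance.change_variable("New value from Child1")
print(child2_instance.shared_variable)  # New value from Child1
```

Attribute lookup on an instance falls through to the class when no instance attribute exists, which is why both instances see the update.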
<python><inheritance>
2024-03-12 00:36:05
1
483
Prateek Daniels
78,144,060
1,159,783
Type-checking dynamic attribute creation using Mypy plugins
<p>I have a codebase that uses a deeply weird pattern for defining command-line options. It looks like this:</p> <pre class="lang-py prettyprint-override"><code># opts.py def group(): o = OptionsGroup() return o, o.define options = _SomeOptionsSingletonClass() def define(name: str, type_: Type, default: bool, ...): # Create and set attribute on `options` singleton object to track this option. ... class OptionsGroup: def define(name: str, type_: Type, default: bool, ...): define(*args) </code></pre> <p>Then, in various files across the codebase, either Pattern 1</p> <pre class="lang-py prettyprint-override"><code># some_module.py from opts import group options, define = group() define(&quot;foo&quot;, type=&quot;str&quot;, ...) # ... print(options.foo) # How the heck do I typecheck this? </code></pre> <p>or Pattern 2:</p> <pre class="lang-py prettyprint-override"><code># some_module.py from opts import options, define define(&quot;foo&quot;, type=&quot;str&quot;, ...) # ... print(options.foo) # How the heck do I typecheck this? </code></pre> <p>We use <code>mypy</code> extensively to type-check code, and many of these options constructions are in critical paths that need coverage.</p> <h2>Potential Solutions</h2> <ul> <li>I cannot delete or refactor this pattern. It is deeply embedded in a large, production codebase.</li> <li>I am not aware of way to shoehorn this construction into the Python static typing system.</li> <li>The current solution is a very fragile script that attempts to parse these structures and rewrite them into a checkable type before executing <code>mypy</code>. I dislike this solution because it means <code>mypy</code> can't run against the <em>actual</em> codebase, and because the script's logic is very fragile. (I could fix the fragility).</li> <li>I've explored using a <code>mypy</code> plugin to infer and add types to the <code>options</code> value. 
<ul> <li>What I've found is that I cannot get the <code>get_function_signature_hook()</code> or <code>get_function_hook()</code> or <code>get_method_signature_hook()</code> or <code>get_method_hook()</code> hooks to fire on any of the calls to <code>define()</code> in Pattern 1.</li> <li>I cannot easily use the Analyze Type hook because the classes involved don't implement <code>__getattr__()</code>, and changing the structure of these classes is pretty risky.</li> </ul> </li> </ul> <p>I would strongly prefer to use a <code>mypy</code> plugin or other strategy that allows typechecking to understand these structures in my actual source files, rather than a munged source tree.</p> <h2>My Question</h2> <p>Broadly: is there a better solution to typecheck this pattern?</p> <p>Narrowly: Is there any way I can get <code>mypy</code> to call my plugin while evaluating calls to <code>define()</code> and/or references to <code>options</code> attributes, so that I can synthesize types for these attribute references?</p>
<python><mypy><python-typing>
2024-03-12 00:27:00
1
2,769
David Reed
78,143,969
6,597,296
Is it possible to output emojis in a CMD.EXE window with Python?
<p>If I open a Command window tab in the Windows Terminal, either of the following methods would work correctly:</p> <pre class="lang-py prettyprint-override"><code>print(&quot;\U0001F600&quot;) </code></pre> <pre class="lang-py prettyprint-override"><code>print(&quot;\N{grinning face}&quot;) </code></pre> <pre class="lang-py prettyprint-override"><code>from emoji import emojize print(emojize(&quot;:grinning_face:&quot;)) </code></pre> <pre class="lang-py prettyprint-override"><code>from rich import print as rich_print rich_print(&quot;:grinning_face:&quot;) </code></pre> <p>However, if I open a command window via Win+R CMD.EXE, neither of these methods works - a question mark in a square is displayed every time. Changing the code page to 65001 does not help, either. Importing <code>just_fix_windows_console</code> from <code>colorama</code> and running it before the printing also doesn't help - although it fixes the ability of printing ANSI escape sequences.</p> <p>Both the CMD.EXE window and the Command window in Terminal use the same font - <code>Consolas</code>.</p> <p>Is displaying emojis in a CMD.EXE window simply not possible or am I missing some trick that I don't know?</p>
<python><console><emoji>
2024-03-11 23:53:52
1
578
bontchev
78,143,771
15,412,256
Polars replacing the first n rows with certain values
<p>In Pandas we can replace the first <code>n</code> rows with certain values:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame({&quot;test&quot;: range(10)}) df_pd = df.to_pandas() df_pd.iloc[0:5] = float(&quot;nan&quot;) df_pd </code></pre> <pre><code> test 0 NaN 1 NaN 2 NaN 3 NaN 4 NaN 5 5.0 6 6.0 7 7.0 8 8.0 9 9.0 </code></pre> <p>What would be the equivalent operations in Polars?</p>
<python><dataframe><python-polars>
2024-03-11 22:46:40
3
649
Kevin Li
78,143,604
1,422,096
How does this embedded Python packaging find its lib files?
<p>Blender(*) for Windows is shipped with an embedded Python, like this:</p> <pre><code>blender-2.79b-windows64\ 2.79\ python\ bin\ python.exe python35.dll # only these 2 files lib\ asyncio\ collections\ ... + many .py and .pyd files </code></pre> <p>Launching <code>python.exe</code> works even if there is no system global Pyton install. Also, there is no <code>.pth</code> or <code>._pth</code> file anywhere.</p> <p><strong>Question: how does <code>python.exe</code> know that the libs are in <code>..\lib</code>?</strong></p> <p>Note (*): this is <em>not</em> specific to Blender (I used Blender just as an example), this is common for many software that ship Python embedded with them.</p>
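A hedged summary of the mechanism: at startup CPython's path initialization (`Modules/getpath.py` in current releases, `getpath.c`/`PC/getpathp.c` historically) searches relative to the directory of the running executable for a "landmark", roughly `os.py` or `pythonXY.zip` under a `lib`/`Lib` subtree, and derives `sys.prefix` and `sys.path` from wherever that landmark is found; no registry entry or `.pth` file is required. The result of that search is visible from any install:

```python
import sys

print(sys.executable)  # the binary that was launched
print(sys.prefix)      # root directory inferred from the landmark search
print(sys.path[:4])    # module search path derived from it
```

In the Blender layout above, `python.exe` sits in `bin\` and the landmark search finds `..\lib`, so `sys.prefix` resolves to the `python\` directory.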
<python><blender><python-embedding>
2024-03-11 21:57:28
1
47,388
Basj
78,143,576
1,018,826
Python and PCRE regex that are the same give different outputs for the same input
<p>I am trying to implement the <code>minbpe</code> library in <a href="https://github.com/GoWind/tokenizig/blob/main/src/main.zig#L33" rel="nofollow noreferrer">zig</a>, using a wrapper over <a href="https://github.com/liyu1981/jstring.zig" rel="nofollow noreferrer">PCRE library</a>.</p> <p>The pattern in Python is <code>r&quot;&quot;&quot;'(?:[sdmt]|ll|ve|re)| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+&quot;&quot;&quot;</code></p> <p>When I use the pattern with a UTF-8 encoded text like <code>abcdeparallel ΰ₯§ΰ₯¨ΰ₯ͺ</code>, I get the following output:</p> <pre><code>&gt;&gt;&gt; import regex as re &gt;&gt;&gt; p = re.compile(r&quot;&quot;&quot;'(?:[sdmt]|ll|ve|re)| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+&quot;&quot;&quot;) &gt;&gt;&gt; p regex.Regex(&quot;'(?:[sdmt]|ll|ve|re)| ?\\p{L}+| ?\\p{N}+| ?[^\\s\\p{L}\\p{N}]+|\\s+(?!\\S)|\\s+&quot;, flags=regex.V0) &gt;&gt;&gt; p.findall(&quot;abcdeparallel ΰ₯§ΰ₯¨ΰ₯ͺ&quot;) ['abcdeparallel', ' ΰ₯§ΰ₯¨ΰ₯ͺ'] </code></pre> <p>It looks like this is more or less the same in PCRE flavored regex as well, with me just having to add a <code>/g</code> flag in the end for UTF-8 matching</p> <p>However when I try to use the pattern with pcre via the pcre2test tool on macOS, I get a much different output</p> <pre><code>$ pcre2test -8 PCRE2 version 10.42 2022-12-11 re&gt; /'(?:[sdmt]|ll|ve|re)| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+/g data&gt; abcdeparallel ΰ₯§ΰ₯¨ΰ₯ͺ 0: abcdeparallel 0: \xe0 0: \xa5\xa7 0: \xe0 0: \xa5\xa8 0: \xe0 0: \xa5 0: \xaa </code></pre> <p>Somehow it looks like the code points for the Hindi numerals (1, 2 4) are interpreted differently and the output is matched as a totally different set of characters</p> <pre><code>&gt;&gt;&gt; &quot;\xe0\xa5\xa7\xe0\xa5\xa8&quot; 'Γ Β₯Β§Γ Β₯Β¨' </code></pre> <p>Is there a flag or something that I am missing that must be passed to have the same behaviour as the the <code>regex</code> Package/module from Python ? 
When UTF-8 code points are encoded into bytes, wouldn't the library know how to put them back together into the same code points?</p>
<python><regex><pcre><pcre2>
2024-03-11 21:48:55
2
483
draklor40
78,143,403
1,035,279
How can I eliminate the segfaults that occur in working code only when debugging with ipdb?
<p>I am using:</p> <ul> <li>Python: 3.11.0</li> <li>Django: 4.2.8</li> <li>djangorestframework 3.14.0</li> <li>sqlite: 3.38.5</li> </ul> <p>When I'm debugging and I use 'n' to go over a method, I sometimes get a segfault where there is no problem running the code normally. I can move my 'ipdb.set_trace()' to immediately after the call that caused the segfault, rerun the test and continue debugging but this is tedious.</p> <p>One cause I've tracked, is under the django reverse function. Here, the <code>&lt;URLResolver &lt;module 'rest_framework.urls' from '/home/paul/wk/cliosoft/sosmgrweb/venv/lib/python3.11/site-packages/rest_framework/urls.py'&gt; (rest_framework:rest_framework) 'api-auth/'&gt;</code> causes the segfault when its _populate method is invoked.</p> <p>I <em>could</em> start upgrading everything but this is a large application with many dependencies and I'd like to be confident that this problem will be eliminated if I go down that road.</p> <p>Does anyone know what the cause of this is and how I can resolve it?</p>
<python><django><django-rest-framework><ipdb>
2024-03-11 21:03:21
1
16,671
Paul Whipp
78,143,337
2,248,721
Generate smoother colormap for contour plot
<p>My data set contains approximately thousands of values in four columns <em>X, Y, Z, error of Z</em> -</p> <pre><code>1.2351e+00 -6.3115e-02 0.0000e+00 0.0000e+00 1.2417e+00 -6.3115e-02 0.0000e+00 0.0000e+00 1.2483e+00 -6.3115e-02 0.0000e+00 0.0000e+00 1.2549e+00 -6.3115e-02 0.0000e+00 0.0000e+00 1.2615e+00 -6.3115e-02 0.0000e+00 0.0000e+00 1.2681e+00 -6.3115e-02 0.0000e+00 0.0000e+00 1.2746e+00 -6.3115e-02 0.0000e+00 0.0000e+00 1.2812e+00 -6.3115e-02 0.0000e+00 0.0000e+00 1.2878e+00 -6.3115e-02 1.8186e+01 1.8186e+01 1.2944e+00 -6.3115e-02 0.0000e+00 0.0000e+00 1.3010e+00 -6.3115e-02 0.0000e+00 0.0000e+00 1.3075e+00 -6.3115e-02 0.0000e+00 0.0000e+00 1.3141e+00 -6.3115e-02 0.0000e+00 0.0000e+00 1.3207e+00 -6.3115e-02 0.0000e+00 0.0000e+00 ... .... </code></pre> <p>I want to plot these data using a contour plot, something like this- (reference image) <a href="https://i.sstatic.net/XFhjn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XFhjn.png" alt="reference image" /></a></p> <p>I am using the following code -</p> <pre><code>import numpy as np import matplotlib.pyplot as plt x, y, z, err = np.genfromtxt(r'dataset_gid.txt', unpack=True) x1, y1 = np.meshgrid(x, y) cont1 = plt.tricontourf(x, y, z, cmap='seismic') plt.colorbar(cont1) plt.show() </code></pre> <p>I am getting the following plot -</p> <p><a href="https://i.sstatic.net/zYWYX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zYWYX.png" alt="enter image description here" /></a></p> <p>How can I generate a colorbar similar to the above image? 
I have created a gist for the dataset I am using <a href="https://gist.github.com/frdayeen/b92027462675407288d18bbdc23e9c71#file-dataset_gid-txt" rel="nofollow noreferrer">here</a>.</p> <p><strong>Update</strong></p> <p>Using <code>scat=plt.scatter(x, y, c=z, s=3);plt.colorbar(scat)</code> gives me weird lines -</p> <p><a href="https://i.sstatic.net/vcU3w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vcU3w.png" alt="enter image description here" /></a></p> <p>Using <code>plt.tricontourf(x, y, z, levels=50)</code> looks better but the spots are not bright enough.</p> <p><a href="https://i.sstatic.net/zeDkn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zeDkn.png" alt="enter image description here" /></a></p> <p>Is there a way to make the spots brighter like the reference image?</p>
<python><matplotlib><contour><colormap><contourf>
2024-03-11 20:46:39
1
433
aries0152
78,143,281
12,820,205
App to store and view multiple instances of itself
<p>My Toga-based app has the user enter information in a series of boxes. In the end, the information is used to start a number of processes governed by a scheduler. Once the processes are started, progress should be displayed to the user through continuous updates of a status box.</p> <p>The user is to run several instances of the app at the same time. The user should be able to make a new instance of the app and view the status box of all the previous instances that have ongoing processes.</p> <p>The current approach (with <code>multiprocessing</code> added for operational stability) essentially looks as follows:</p> <pre><code>import toga import multiprocessing from toga.style import Pack class Test(toga.App): instances = [] def startup(self): self.main_box = toga.Box(style=Pack(direction='column')) self.main_window = toga.MainWindow(title=self.formal_name) self.main_window.content = self.main_box self.main_window.show() self.work() def work(self): add_but = toga.Button('Add instance to list', on_press=self.add_but_hdl) ins_num = len(Test.instances) + 1 ins_lbl = toga.Label('Current instance number is {}'.format(ins_num)) if Test.instances: prv_ins_idx = len(Test.instances) - 1 self.main_box.add(Test.instances[prv_ins_idx].main_window.show(), ins_lbl, add_but) else: self.main_box.add(ins_lbl, add_but) def add_but_hdl(self, widget): Test.instances.append(self) self.init_proc() def init_proc(self): proc = multiprocessing.Process(target=self.mk_app_ins) proc.start() def mk_app_ins(self): new_ins = Test() new_ins.main_loop() def main(): return Test() </code></pre> <p>When running the above through <code>briefcase dev</code> and pressing <em>Add instance to list</em>, the above leads to the error:</p> <blockquote> <p>[tender] Starting in dev mode... 
=========================================================================== Failed to register: An object is already exported for the interface org.gtk.Application at ...</p> </blockquote> <p>Can the setup be made to work, or should it be disregarded in favor of an entirely different approach? What could a viable solution look like?</p>
<python><beeware>
2024-03-11 20:31:51
1
1,994
rjen
78,143,259
8,740,854
How to create the @iterator_cache decorator?
<p><strong>Question:</strong> How to create the <code>iterator_cache</code> decorator?</p> <pre class="lang-py prettyprint-override"><code>@iterator_cache def cheap_eval(numbers: Tuple[int]) -&gt; Tuple[int]: print(f&quot;cheap eval of {numbers}&quot;) return expensive_eval(numbers) def expensive_eval(numbers: Tuple[int]) -&gt; Tuple[int]: time.sleep(5 + len(numbers)) # Example of consuming time print(f&quot;expensive eval of {numbers}&quot;) return tuple(numb**2 for numb in numbers) </code></pre> <p><strong>Description:</strong> I have the <code>expensive_eval</code> function from a library and I need to call it many times.</p> <p>I want to create the <code>cheap_eval</code> that uses a cache and gives the same result as <code>expensive_eval</code> to reduce the total time.</p> <p>The first idea I had was to use an auxiliary function <code>eval_one</code> with <code>@lru_cache</code>, but calling <code>expensive_eval</code> for each value is not good because there's the 5-second constant term for each call of <code>expensive_eval</code>.</p> <p>How can I create the decorator such that it memoizes all the iterable inputs instead of the entire tuple?</p> <p>A code with input and desired output is:</p> <pre class="lang-py prettyprint-override"><code># Input print(cheap_eval([1, 2])) print(cheap_eval([1, 3])) print(cheap_eval([1, 2, 3])) # Output without iterator_cache # total time = 22 seconds cheap eval of [1, 2] expensive eval of [1, 2] [1, 4] cheap eval of [1, 3] expensive eval of [1, 3] [1, 9] cheap eval of [1, 2, 3] expensive eval of [1, 2, 3] [1, 4, 9] # Output with iterator_cache # total time = 13 seconds cheap eval of [1, 2] expensive eval of [1, 2] [1, 4] cheap eval of [1, 3] expensive eval of [3] [1, 9] cheap eval of [1, 2, 3] [1, 4, 9] </code></pre>
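A sketch of one way to write it: keep a per-function dict from element to result, call the wrapped function only on the elements not yet cached, then reassemble the output in input order. This assumes, as `expensive_eval` does, that element i of the output corresponds to element i of the input:

```python
import functools

def iterator_cache(func):
    cache = {}

    @functools.wraps(func)
    def wrapper(numbers):
        # Deduplicated elements we have not evaluated yet, in first-seen order
        missing = list(dict.fromkeys(n for n in numbers if n not in cache))
        if missing:
            cache.update(zip(missing, func(tuple(missing))))
        return tuple(cache[n] for n in numbers)

    return wrapper

@iterator_cache
def cheap_eval(numbers):
    print(f"expensive eval of {numbers}")
    return tuple(n ** 2 for n in numbers)

print(cheap_eval((1, 2)))     # evaluates (1, 2)
print(cheap_eval((1, 3)))     # evaluates only (3,)
print(cheap_eval((1, 2, 3)))  # fully cached: no expensive call at all
```

If unhashable inputs, keyword arguments, or unbounded cache growth are concerns, the key handling needs more care than this sketch shows (e.g. an eviction policy, as `lru_cache` has).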
<python><caching>
2024-03-11 20:29:25
2
472
Carlos Adir
78,143,186
13,181,599
How to run juggernaut model in local
<p>I want to run fine-tuned stable diffusion models in my local pc using python. For example juggernaut: <a href="https://huggingface.co/RunDiffusion/Juggernaut-XL-v9" rel="nofollow noreferrer">https://huggingface.co/RunDiffusion/Juggernaut-XL-v9</a></p> <p>This is my code (it works with stable-diffusion-xl-base-1.0):</p> <pre><code>import random from diffusers import DiffusionPipeline, StableDiffusionXLImg2ImgPipeline import torch import gc import time # for cleaning memory gc.collect() torch.cuda.empty_cache() start_time = time.time() model = &quot;RunDiffusion/Juggernaut-XL-v9&quot; pipe = DiffusionPipeline.from_pretrained( model, torch_dtype=torch.float16, ) pipe.to(&quot;cuda&quot;) prompt = (&quot;a portrait of male as a knight in middle ages, masculine looking, battle in the background, sharp focus, highly detailed, movie-style lighting, shadows&quot;) seed = random.randint(0, 2**32 - 1) generator = torch.Generator(&quot;cuda&quot;).manual_seed(seed) image = pipe(prompt=prompt, generator=generator, num_inference_steps=1) image = image.images[0] image.save(f&quot;output_images/{seed}.png&quot;) end_time = time.time() total_time = end_time - start_time minutes = int(total_time // 60) seconds = int(total_time % 60) print(f&quot;Took: {minutes} min {seconds} sec&quot;) print(f&quot;Saved to output_images/{seed}.png&quot;) </code></pre> <p>But I am getting:</p> <blockquote> <p>OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory</p> </blockquote> <p>Maybe because of python, cuda versions. I'm dropping down my libraries versions:</p> <p>Python 3.9.0</p> <p>PyTorch: 2.2.0+cu118</p> <p>CUDA : 11.8</p> <p>Diffusers: 0.26.3</p> <p>Transformers: 4.38.1</p>
<python><machine-learning><pytorch><stable-diffusion><fine-tuning>
2024-03-11 20:12:01
2
301
Mahammad Yusifov
78,143,178
3,288,004
How to check multiple items for nullity in an AND fashion in Python
<p>I was wondering if there's a Pythonic/hacky way to check for all (or none) items and dependencies in a set to be null.</p> <p>Suppose I have a variable <code>foo</code> that is a dict, but can be nullable:</p> <pre class="lang-py prettyprint-override"><code>foo: Optional[dict] = None </code></pre> <p>And I want to check that both foo is not None or one of its keys:</p> <pre class="lang-py prettyprint-override"><code>if foo is not None and foo.get('bar') is not None: pass </code></pre> <p>Using AND, it will &quot;short-circuit&quot; before anything else, so I'm safe checking for other keys on a nullable item.</p> <p>I was thinking about ways of doing that more neatly, like we can do with <code>any()</code> or <code>all()</code>.</p> <p>I thought about doing it in a list comprehension, but it would fail:</p> <pre class="lang-py prettyprint-override"><code> if not all(item is not None for item in [foo, foo.get('bar'), foo.get('baz')]: # Value Error! :sad </code></pre> <p>One way could be using a try/catch, but that is ugly and won't make it look fun...</p> <p>We can't go for the walrus operator as it is not assignable in a list comprehension:</p> <pre class="lang-py prettyprint-override"><code> if not all (item for item in [foo := foo or {}, foo.get('bar'), foo.get('baz')]: # SyntaxError, can't use walrus in list comprehension </code></pre> <p>I was also wondering if there's not a very well-maintained tool that serves this purpose. Any better ideas?</p>
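One stdlib-only idiom that keeps `all()` usable: coalesce the nullable dict once with `foo or {}`, after which every `.get()` is safe and no short-circuit ordering is needed:

```python
from typing import Optional

foo: Optional[dict] = {"bar": 1, "baz": None}

safe = foo or {}  # {} when foo is None, so .get() never explodes
ok = all(safe.get(k) is not None for k in ("bar", "baz"))
print(ok)   # False: "baz" is None

foo = {"bar": 1, "baz": 2}
safe = foo or {}
ok2 = all(safe.get(k) is not None for k in ("bar", "baz"))
print(ok2)  # True
```

Note `foo or {}` also treats an empty dict the same as `None`; if that distinction matters, use `{} if foo is None else foo` instead.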
<python><python-3.x><conditional-statements>
2024-03-11 20:10:17
3
2,069
Tiago Duque
78,143,080
6,273,533
FileNotFoundError: [WinError 2] The system cannot find the file specified: find extensions in os.listdir
<p>Trying to delete all but one file in a given directory. Looked at other related posts but my question seems specific to getting names from os.listdir and needing full path with extensions to use os.remove:</p> <pre><code># delete files from the save path but preserve the zip file if os.path.isfile(zip_path): for clean_up in os.listdir(data_path): if not clean_up.endswith(tStamp+'.zip'): os.remove(clean_up) </code></pre> <p>Gives this error:</p> <pre><code>Line 5: os.remove(clean_up) FileNotFoundError: [WinError 2] The system cannot find the file specified: 'firstFileInListName' </code></pre> <p>I think this is because os.listdir is not capturing the file extension of each file (printed os.listdir(data_path) and only got names of the files without extensions)</p> <p>What can I do to delete all files from the data_path except for the one that ends with tStamp+'.zip' ?</p>
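`os.listdir` does return full file names, extensions included (the names here apparently just have none); the actual crash is that `os.remove` resolves a bare name against the current working directory, not against `data_path`. Joining each name back onto the directory fixes it (sketch with a throwaway directory; `pathlib.Path.iterdir()` is an alternative that yields full paths directly):

```python
import os
import tempfile

def clean_up_dir(data_path, keep_suffix):
    # Delete every file in data_path except those ending with keep_suffix
    for name in os.listdir(data_path):
        full = os.path.join(data_path, name)  # the crucial step
        if os.path.isfile(full) and not name.endswith(keep_suffix):
            os.remove(full)

tmp = tempfile.mkdtemp()
for name in ("a.txt", "b.log", "report_2024.zip"):
    open(os.path.join(tmp, name), "w").close()

clean_up_dir(tmp, "2024.zip")
print(sorted(os.listdir(tmp)))  # ['report_2024.zip']
```

The `report_2024.zip` name stands in for your `tStamp + '.zip'` file; only the suffix check matters.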
<python><listdir>
2024-03-11 19:47:00
1
709
geoJshaun
78,143,064
159,072
How can I compute this?
<p>Suppose I have the following numpy array:</p> <pre><code>import numpy as np my_data = np.array( [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4], [5, 5, 5]]) # obtain the length n = len(my_data) # 5 </code></pre> <p>I want to do this computation for t=0, 1, 2, 3, 4.</p> <ol> <li>take the t-th row</li> <li>compute dot product of the t-th row and the entire array</li> <li>compute sum</li> <li>divide by (n-t)</li> <li>retun all the results as an array</li> </ol> <p>I.e.</p> <pre><code>t = 0 v0 = my_data[t:,] my_data_dot = np.dot(my_data, v0) my_data_sum = np.sum(my_data_dot)/(n-t) </code></pre> <p>How can I do this without using looping statements?</p> <p>Please show me the simplest way.</p>
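Reading "the t-th row" literally (the posted snippet slices `my_data[t:]`, whose shapes do not line up for t &gt; 0), the whole loop collapses into one Gram-matrix product: entry (i, t) of `my_data @ my_data.T` is the dot product of row i with row t, so summing each column and dividing elementwise by `n - t` yields every result at once. A sketch under that interpretation:

```python
import numpy as np

my_data = np.array([[1, 1, 1],
                    [2, 2, 2],
                    [3, 3, 3],
                    [4, 4, 4],
                    [5, 5, 5]])
n = len(my_data)

gram = my_data @ my_data.T                      # gram[i, t] = row_i . row_t
results = gram.sum(axis=0) / (n - np.arange(n))  # one value per t, no Python loop
print(results)  # [  9.   22.5  45.   90.  225. ]
```

If the intended `v0` really is the trailing block `my_data[t:]` rather than a single row, the denominator stays the same but the numerator becomes `gram[:, t:].sum()`, which is just as easy to vectorize from the same `gram` matrix.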
<python><numpy>
2024-03-11 19:42:56
1
17,446
user366312
78,142,890
17,800,932
Passing additional arguments for Python's `websockets` `serve` function
<p>I am prototyping a WebSocket server using Python's <a href="https://websockets.readthedocs.io/en/stable/index.html" rel="nofollow noreferrer"><code>websockets</code> library</a>.</p> <pre class="lang-py prettyprint-override"><code>async def receive_websocket_message(websocket, send_queue: asyncio.Queue, receive_queue: asyncio.Queue): async for message in websocket: # Send a message to an internal client send_queue.put_nowait(message) # Wait for a response response = await receive_queue.get() # Send the response back to the WebSocket sender await websocket.send(response) async def websocket_server(send_queue: asyncio.Queue, receive_queue: asyncio.Queue): # How do I pass the queues into the websocket server handler? async with serve(receive_websocket_message, &quot;localhost&quot;, 8765, send_queue=send_queue, receive_queue=receive_queue): await asyncio.Future() # run forever </code></pre> <p>How do I pass additional arguments into the <a href="https://websockets.readthedocs.io/en/stable/reference/asyncio/server.html" rel="nofollow noreferrer"><code>serve</code></a> function, in this case a queue to be shared by the WebSocket server and another internal coroutine?</p> <p>I get an error like <code>TypeError: BaseEventLoop.create_server() got an unexpected keyword argument 'send_queue'</code>.</p>
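`serve()` forwards unknown keyword arguments to `loop.create_server()`, hence the `TypeError`. The usual pattern is to bind your own arguments into the handler before registering it, with `functools.partial` (a closure or a small callable class also works). The sketch below substitutes a fake socket object so it runs without the `websockets` package; with the real library you would pass `functools.partial(receive_websocket_message, send_queue=..., receive_queue=...)` straight to `serve()`:

```python
import asyncio
import functools

async def handler(websocket, send_queue, receive_queue):
    async for message in websocket:
        send_queue.put_nowait(message)
        response = await receive_queue.get()
        await websocket.send(response)

# With websockets, this pre-bound callable is what you hand to serve():
#   serve(functools.partial(handler, send_queue=sq, receive_queue=rq), "localhost", 8765)

class FakeSocket:  # stand-in so the sketch is runnable without websockets
    def __init__(self, msgs):
        self.msgs, self.sent = list(msgs), []
    def __aiter__(self):
        return self
    async def __anext__(self):
        if not self.msgs:
            raise StopAsyncIteration
        return self.msgs.pop(0)
    async def send(self, m):
        self.sent.append(m)

async def main():
    sq, rq = asyncio.Queue(), asyncio.Queue()
    rq.put_nowait("pong")
    bound = functools.partial(handler, send_queue=sq, receive_queue=rq)
    ws = FakeSocket(["ping"])
    await bound(ws)  # websockets would call this once per connection
    return sq.get_nowait(), ws.sent

result = asyncio.run(main())
print(result)  # ('ping', ['pong'])
```

`partial` works here because `serve()` calls the handler with the connection as the sole positional argument; the pre-bound keywords ride along untouched.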
<python><websocket><python-asyncio>
2024-03-11 19:02:24
1
908
bmitc
78,142,676
7,179,546
How can I process a request to AzureChatOpenAI that is too big?
<p>I'm trying to send a very large question to AzureChatOpenAI. As a result, I'm getting the following error:</p> <pre><code>openai.BadRequestError: Error code: 400 - {'error': {'message': &quot;This model's maximum context length is 8192 tokens. However, your messages resulted in 37571 tokens (37332 in the messages, 239 in the functions). Please reduce the length of the messages or functions </code></pre> <p>I'm working in Python. How can I get rid of this error? Is there a way to send this data in chunks, so that all the info can be sent but, at the same time, the whole message is processed as a single one?</p>
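There is no flag that enlarges the model's context window, so the input itself has to shrink. A common workaround is map-reduce summarization: split the text into chunks that fit, query the model per chunk, then ask it to merge the partial answers (LangChain calls this the `map_reduce` chain type); note the model never truly processes the whole text as a single message. A minimal, model-agnostic splitter using a rough 4-characters-per-token heuristic (the ratio is an approximation, not real tokenization):

```python
def split_for_context(text, max_tokens=6000, chars_per_token=4):
    """Greedily pack paragraphs into chunks under an approximate token budget.

    Assumes no single paragraph exceeds the budget on its own.
    """
    budget = max_tokens * chars_per_token
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        if current and len(current) + len(paragraph) + 2 > budget:
            chunks.append(current)
            current = ""
        current += ("\n\n" if current else "") + paragraph
    if current:
        chunks.append(current)
    return chunks

doc = "\n\n".join(f"paragraph {i} " + "x" * 400 for i in range(100))
chunks = split_for_context(doc, max_tokens=500)
print(len(chunks), max(len(c) for c in chunks))  # every chunk fits the budget
```

For exact token counts against OpenAI models, the `tiktoken` package is the usual tool; the character heuristic above is just a dependency-free stand-in.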
<python><azure-openai>
2024-03-11 18:14:06
1
737
Carabes
78,142,403
13,578,682
Why are these two datetimes not equal?
<pre><code>from datetime import datetime, timedelta from zoneinfo import ZoneInfo chi = ZoneInfo(&quot;America/Chicago&quot;) nyc = ZoneInfo(&quot;America/New_York&quot;) dt1 = datetime(2024, 3, 10, 3, 30, tzinfo=chi) dt2 = datetime(2024, 3, 10, 2, 30, tzinfo=chi) print(dt1 == dt1.astimezone(nyc)) print(dt2 == dt2.astimezone(nyc)) </code></pre> <p>Actual result:</p> <pre class="lang-none prettyprint-override"><code>True False </code></pre> <p>Expected result:</p> <pre class="lang-none prettyprint-override"><code>True True </code></pre> <p>Why is <code>False</code> returned in one of those cases? In both cases it's comparing the same datetime, only adjusted to a different zone.</p> <p>The same thing also happens in the fold:</p> <pre><code>from datetime import datetime, timedelta from zoneinfo import ZoneInfo chi = ZoneInfo(&quot;America/Chicago&quot;) nyc = ZoneInfo(&quot;America/New_York&quot;) dt1 = datetime(2024, 11, 3, 2, 30, tzinfo=chi) dt2 = datetime(2024, 11, 3, 1, 30, tzinfo=chi) print(dt1 == dt1.astimezone(nyc)) print(dt2 == dt2.astimezone(nyc)) </code></pre>
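This is PEP 495's inter-zone comparison rule rather than a bug: 2:30 on 2024-03-10 does not exist in America/Chicago (clocks jump from 2:00 to 3:00), and 1:30 on 2024-11-03 exists twice. To keep `==` consistent with hashing, an aware datetime sitting in such a gap or fold compares unequal to any datetime carrying a different `tzinfo`, even when both denote the same UTC instant. Converting both sides to UTC first restores instant-based comparison:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

chi, nyc = ZoneInfo("America/Chicago"), ZoneInfo("America/New_York")

dt1 = datetime(2024, 3, 10, 3, 30, tzinfo=chi)  # a real wall-clock time
dt2 = datetime(2024, 3, 10, 2, 30, tzinfo=chi)  # imaginary: inside the DST gap

print(dt1 == dt1.astimezone(nyc))  # True
print(dt2 == dt2.astimezone(nyc))  # False: PEP 495 inter-zone rule

# Comparing the underlying UTC instants behaves as expected
print(dt2.astimezone(timezone.utc) == dt2.astimezone(nyc).astimezone(timezone.utc))  # True
```

The November example hits the fold (ambiguous time) branch of the same rule; setting `fold=1` selects the second occurrence but does not change the unequal inter-zone comparison.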
<python><datetime><timezone><dst>
2024-03-11 17:20:05
1
665
no step on snek
78,142,241
14,244,437
Annotations returning pure PK value instead of Django HashID
<p>In my application the user can assume one of 3 different roles.</p> <p>These users can be assigned to programs, and they can be assigned to 3 different fields, each of those exclusively to a role.</p> <p>In my API I'm trying to query all my users and annotate the programs they are in:</p> <pre><code>def get_queryset(self): queryset = ( User.objects .select_related('profile') .prefetch_related('managed_programs', 'supervised_programs', 'case_managed_programs') .annotate( programs=Case( When( role=RoleChoices.PROGRAM_MANAGER, then=ArraySubquery( Program.objects.filter(program_managers=OuterRef('pk'), is_active=True) .values('id', 'name') .annotate( data=Func( Value('{&quot;id&quot;: '), F('id'), Value(', &quot;name&quot;: &quot;'), F('name'), Value('&quot;}'), function='concat', output_field=CharField(), ) ) .values('data') ), ), When( role=RoleChoices.SUPERVISOR, then=ArraySubquery( Program.objects.filter(supervisors=OuterRef('pk'), is_active=True) .values('id', 'name') .annotate( data=Func( Value('{&quot;id&quot;: '), F('id'), Value(', &quot;name&quot;: &quot;'), F('name'), Value('&quot;}'), function='concat', output_field=CharField(), ) ) .values('data') ), ), When( role=RoleChoices.CASE_MANAGER, then=ArraySubquery( Program.objects.filter(case_managers=OuterRef('pk'), is_active=True) .values('id', 'name') .annotate( data=Func( Value('{&quot;id&quot;: '), F('id'), Value(', &quot;name&quot;: &quot;'), F('name'), Value('&quot;}'), function='concat', output_field=CharField(), ) ) .values('data') ), ), default=Value(list()), output_field=ArrayField(base_field=JSONField()), ) ) .order_by('id') ) return queryset </code></pre> <p>This works (almost) flawlessly and gives me only 5 DB hits, perfect, or not...</p> <p>The problem is that I'm using Django HashID fields for the Program PK, and this query returns the pure integer value for each Program.</p> <p>I've tried a more &quot;normal&quot; approach, by getting the data using a <code>SerializerMethodField</code>:</p>
<pre><code>@staticmethod def get_programs(obj): role_attr = { RoleChoices.PROGRAM_MANAGER: 'managed_programs', RoleChoices.SUPERVISOR: 'supervised_programs', RoleChoices.CASE_MANAGER: 'case_managed_programs', } try: programs = getattr(obj, role_attr[obj.role], None).values_list('id', 'name') return [{'id': str(id), 'name': name} for id, name in programs] except (AttributeError, KeyError): return [] </code></pre> <p>This gives me the result I need, but the query quantity skyrockets. It seems that it's not taking advantage of the <code>prefetch_related</code>, but I don't understand how is this possible, considering I'm using the same queryset.</p> <p>So, I have two options here:</p> <ul> <li>Use the annotations but having the HashID returning, instead of integer PK</li> <li>Have the SerializerMethodField reuse the prefetched data, instead of requerying</li> </ul> <p>Is there a way to accomplish any of those?</p> <p>EDIT:</p> <p>A small heads-up, I've decided to use the first approach and Hash the ID manually inside the serializer</p> <pre><code>programs = serializers.SerializerMethodField() @staticmethod def get_programs(obj): return [ {&quot;id&quot;: str(Hashid(value=program['id'], salt=settings.SECRET_KEY, min_length=13)), &quot;name&quot;: program['name']} for program in obj.programs ] </code></pre> <p>For now it works, but I'd be more satisfied if there's a more direct way to accomplish this.</p>
<python><django><postgresql><django-orm><hashids>
2024-03-11 16:45:11
0
481
andrepz
78,142,224
4,168,794
Finetuning BERT with masking and multiple correct labels
<p>I aim to fine-tune a BERT model for a specific task involving simple arithmetic operations like &quot;5 + 3 = 8&quot; or &quot;7 plus 2 equals 9&quot;. My dataset comprises thousands of examples where one operand, operator, or result is masked. For instance:</p> <ul> <li>Masked: &quot;1 added to [MASK] equals 7&quot;, Label: &quot;1 added to 6 is equal to 7&quot;</li> <li>Masked: &quot;6 plus 5 [MASK] 11&quot;, Label: &quot;6 plus 5 gives 11&quot;</li> </ul> <p>The challenge lies in ensuring that multiple correct labels are accepted for a masked sample during training. For instance, if the model predicts &quot;equals&quot; instead of the masked token in the second sample, it should be considered correct.</p>
<python><nlp><huggingface-transformers><bert-language-model>
2024-03-11 16:43:12
1
655
abbassix
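One option, with the caveat that this is a sketch of the idea rather than a drop-in Trainer change: replace the usual single-target cross-entropy at the masked position with a loss over the whole set of acceptable token ids, i.e. -log(sum of their softmax probabilities), so predicting any accepted token ("equals", "gives", "is equal to" as separate candidates) drives the loss toward zero. In practice you would apply this to the MLM logits at each [MASK] position inside a custom `compute_loss` in PyTorch; the NumPy version below only shows the arithmetic, and the token ids are made up:

```python
import numpy as np

def multi_label_mask_loss(logits, acceptable_ids):
    """Cross-entropy where any token in `acceptable_ids` counts as correct.

    logits: 1-D array of vocabulary scores at the masked position.
    Returns -log(sum of softmax probabilities over the acceptable ids).
    """
    logits = logits - logits.max()            # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[sorted(acceptable_ids)].sum())
```

Note that multi-word alternatives like "is equal to" span several tokens, so they would need either a single-token vocabulary entry or a sequence-level treatment; the set-based loss above handles the single-token case.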
78,142,209
18,494,333
Read from audio output in PyAudio through loopbacks [Python record system output]
<p>I'm writing a program that records from my speaker output using <code>pyaudio</code>. I am on a Raspberry Pi. I built the program while using the audio jack to play audio through some speakers, but recently have switched to using the speakers in my monitor, through HDMI. Suddenly, the program records silence.</p> <pre><code>from pyaudio import PyAudio p = PyAudio() print(p.get_default_input_device_info()['index'], '\n') print(*[p.get_device_info_by_index(i) for i in range(p.get_device_count())], sep='\n\n') </code></pre> <p>The above code outputs first the index of the default input device of <code>pyaudio</code>, then the available devices. See the results below.</p> <p><strong>Case A:</strong></p> <pre><code>2 {'index': 0, 'structVersion': 2, 'name': 'bcm2835 Headphones: - (hw:2,0)', 'hostApi': 0, 'maxInputChannels': 0, 'maxOutputChannels': 8, 'defaultLowInputLatency': -1.0, 'defaultLowOutputLatency': 0.0016099773242630386, 'defaultHighInputLatency': -1.0, 'defaultHighOutputLatency': 0.034829931972789115, 'defaultSampleRate': 44100.0} {'index': 1, 'structVersion': 2, 'name': 'pulse', 'hostApi': 0, 'maxInputChannels': 32, 'maxOutputChannels': 32, 'defaultLowInputLatency': 0.008684807256235827, 'defaultLowOutputLatency': 0.008684807256235827, 'defaultHighInputLatency': 0.034807256235827665, 'defaultHighOutputLatency': 0.034807256235827665, 'defaultSampleRate': 44100.0} {'index': 2, 'structVersion': 2, 'name': 'default', 'hostApi': 0, 'maxInputChannels': 32, 'maxOutputChannels': 32, 'defaultLowInputLatency': 0.008684807256235827, 'defaultLowOutputLatency': 0.008684807256235827, 'defaultHighInputLatency': 0.034807256235827665, 'defaultHighOutputLatency': 0.034807256235827665, 'defaultSampleRate': 44100.0} </code></pre> <p>If I then go into to terminal, enter <code>sudo raspi-config</code> and change the audio output to the headphone jack, I get an actual recording, not silence, and receive a different output to the above code.</p> <p><strong>Case B:</strong></p> 
<pre><code>5 {'index': 0, 'structVersion': 2, 'name': 'vc4-hdmi-0: MAI PCM i2s-hifi-0 (hw:0,0)', 'hostApi': 0, 'maxInputChannels': 0, 'maxOutputChannels': 2, 'defaultLowInputLatency': -1.0, 'defaultLowOutputLatency': 0.005804988662131519, 'defaultHighInputLatency': -1.0, 'defaultHighOutputLatency': 0.034829931972789115, 'defaultSampleRate': 44100.0} {'index': 1, 'structVersion': 2, 'name': 'bcm2835 Headphones: - (hw:2,0)', 'hostApi': 0, 'maxInputChannels': 0, 'maxOutputChannels': 8, 'defaultLowInputLatency': -1.0, 'defaultLowOutputLatency': 0.0016099773242630386, 'defaultHighInputLatency': -1.0, 'defaultHighOutputLatency': 0.034829931972789115, 'defaultSampleRate': 44100.0} {'index': 2, 'structVersion': 2, 'name': 'sysdefault', 'hostApi': 0, 'maxInputChannels': 0, 'maxOutputChannels': 128, 'defaultLowInputLatency': -1.0, 'defaultLowOutputLatency': 0.005804988662131519, 'defaultHighInputLatency': -1.0, 'defaultHighOutputLatency': 0.034829931972789115, 'defaultSampleRate': 44100.0} {'index': 3, 'structVersion': 2, 'name': 'hdmi', 'hostApi': 0, 'maxInputChannels': 0, 'maxOutputChannels': 2, 'defaultLowInputLatency': -1.0, 'defaultLowOutputLatency': 0.005804988662131519, 'defaultHighInputLatency': -1.0, 'defaultHighOutputLatency': 0.034829931972789115, 'defaultSampleRate': 44100.0} {'index': 4, 'structVersion': 2, 'name': 'pulse', 'hostApi': 0, 'maxInputChannels': 32, 'maxOutputChannels': 32, 'defaultLowInputLatency': 0.008684807256235827, 'defaultLowOutputLatency': 0.008684807256235827, 'defaultHighInputLatency': 0.034807256235827665, 'defaultHighOutputLatency': 0.034807256235827665, 'defaultSampleRate': 44100.0} {'index': 5, 'structVersion': 2, 'name': 'default', 'hostApi': 0, 'maxInputChannels': 32, 'maxOutputChannels': 32, 'defaultLowInputLatency': 0.008684807256235827, 'defaultLowOutputLatency': 0.008684807256235827, 'defaultHighInputLatency': 0.034807256235827665, 'defaultHighOutputLatency': 0.034807256235827665, 'defaultSampleRate': 44100.0} </code></pre> <p>You 
can see in case B that I now have access to many different devices. I've attempted recording from all three available inputs in case A, and both #0 and #1 fail. #1 also records silence, and #0 returns <code>OSError: [Errno -9998] Invalid number of channels</code>. If you look closely at case A, you'll see that #0 has <code>['maxInputChannels'] = 0</code>, so that's why.</p> <p>I've attempted to create loopback devices that read from the sound output and introduce another input to pass the audio back in. I would then record from that input, as it would have input channels. I've researched this thread <a href="https://stackoverflow.com/questions/23295920/loopback-what-u-hear-recording-in-python-using-pyaudio">here</a>, but the only solution is for Windows.</p> <p>I have also attempted to create a loopback device using the <code>pulseaudio</code> utility <code>pactl</code>. This link <a href="https://askubuntu.com/questions/1295430/how-do-i-mix-together-a-real-microphone-input-and-a-virtual-microphone-using-pul">here</a> demonstrates what I have tried. Upon successfully creating a loopback, I'm unable to plug into it using <code>pyaudio</code>; it doesn't show up in the list of devices.</p> <p>Does anybody know...</p> <ul> <li>How to record from a <code>pulseaudio</code> loopback using <code>pyaudio</code>?</li> <li>An alternative way of creating a loopback on Linux?</li> <li>An alternative way of using <code>pyaudio</code> to solve my problem?</li> </ul> <p>Thanks very much.</p>
<python><pyaudio><loopback><portaudio><pulseaudio>
2024-03-11 16:40:00
1
387
MillerTime
78,142,190
6,145,729
Python 3 DataFrame string manipulation to extract numbers between 8 and 12 characters
<p>I'm not sure where to start with this one.</p> <p>I have a list of obsolete items with a new item_code listed somewhere in the description column. Item codes are always between 8 &amp; 12 characters so all other numbers in the description should be ignored.</p> <pre><code>import pandas as pd df1 = pd.DataFrame({'Item_Code': ['00001234', '00012345', '00123456', '01234567'], 'Desc': ['Widget1 - Obsolete Use Alternative 56789100', 'Obsolete Widget 2 - Use Alternative 56789100 - Blah Blah Blah', 'Alternative Use 9999999910 - Blah Blah Blah', 'Obsolete use 99999999911']}, index=[0, 1, 3, 4]) print(df1.head(10)) </code></pre> <p><a href="https://i.sstatic.net/tvW0t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tvW0t.png" alt="enter image description here" /></a></p> <p>So ideally I'm looking to have the alternative codes in a new column.</p> <p><a href="https://i.sstatic.net/UBBJx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UBBJx.png" alt="enter image description here" /></a></p>
<python><python-3.x><pandas>
2024-03-11 16:36:36
1
575
Lee Murray
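If the codes are always 8 to 12 consecutive digits, as in the sample, one sketch is `str.extract` with lookarounds, so that a longer digit run can never contribute a partial match (the column name `Alt_Code` is my invention):

```python
import pandas as pd

df1 = pd.DataFrame({
    "Item_Code": ["00001234", "00012345", "00123456", "01234567"],
    "Desc": [
        "Widget1 - Obsolete Use Alternative 56789100",
        "Obsolete Widget 2 - Use Alternative 56789100 - Blah Blah Blah",
        "Alternative Use 9999999910 - Blah Blah Blah",
        "Obsolete use 99999999911",
    ],
})

# (?<!\d) and (?!\d) make sure the 8-12 digit run is not embedded in a
# longer number; shorter digit runs (like the 1 in "Widget1") never match.
df1["Alt_Code"] = df1["Desc"].str.extract(r"(?<!\d)(\d{8,12})(?!\d)", expand=False)
```

Rows with no qualifying number come back as NaN, which `str.extract` returns when the pattern does not match.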
78,142,171
4,547,189
Pandas select rows where values are present for all elements in array
<p>Firstly the title of the post may not do justice to the question, so my humble apologies for this.</p> <p>Here is the question:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>Date</th> <th>Type</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>2024-03-11</td> <td>3</td> <td>3</td> </tr> <tr> <td>2024-3-11</td> <td>4</td> <td>5</td> </tr> <tr> <td>2024-03-12</td> <td>3</td> <td>3</td> </tr> <tr> <td>2024-3-12</td> <td>4</td> <td>5</td> </tr> <tr> <td>2024-3-12</td> <td>5</td> <td>5</td> </tr> <tr> <td>2024-03-13</td> <td>3</td> <td>3</td> </tr> <tr> <td>2024-3-13</td> <td>4</td> <td>5</td> </tr> <tr> <td>2024-3-13</td> <td>5</td> <td>2</td> </tr> <tr> <td>2024-3-14</td> <td>5</td> <td>5</td> </tr> </tbody> </table></div> <p>Type = [3,4,5]</p> <p>Is there a simple way for me to, in Pandas, create a new DF from the above one where a date's rows are kept only if the date has values for all the elements in the list? Meaning the resultant DF should only contain data for dates 12 and 13, since those dates have values for all elements in the Type array. Thanks</p>
<python><pandas>
2024-03-11 16:31:07
1
648
tkansara
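One sketch: since '2024-03-11' and '2024-3-11' are the same day written two ways, parse the column to real dates first, then keep only the groups whose set of types covers the whole required list, using `groupby` plus `transform`:

```python
import pandas as pd

df = pd.DataFrame({
    "Date": ["2024-03-11", "2024-3-11", "2024-03-12", "2024-3-12", "2024-3-12",
             "2024-03-13", "2024-3-13", "2024-3-13", "2024-3-14"],
    "Type": [3, 4, 3, 4, 5, 3, 4, 5, 5],
    "Value": [3, 5, 3, 5, 5, 3, 5, 2, 5],
})
required = {3, 4, 5}

# Normalize the mixed date strings so both spellings group together.
df["Date"] = pd.to_datetime(df["Date"])

# True for every row whose date has all required types present.
complete = df.groupby("Date")["Type"].transform(lambda s: required.issubset(s))
out = df[complete]
```

`required.issubset(s)` returns one boolean per group, and `transform` broadcasts it back to every row of that group, so `out` keeps only the fully covered dates (the 12th and 13th here).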
78,142,075
14,775,478
What is a vectorized way to detect feature drift in pandas columns?
<p>I'm working on very large pandas dataframes that hold time series with significant feature drift. The drift is often sudden (e.g., the features would be 1.5-2.0x larger than a few periods earlier).</p> <p>I found several solutions to detect 'concept drift'. One convenient option is <a href="https://riverml.xyz/0.9.0/examples/concept-drift-detection/" rel="nofollow noreferrer">river</a>. However, the solution is not vectorized.</p> <p>Clearly, vectorized approaches are much, much faster; the easiest, for example, uses the pandas built-ins to take moving averages and check whether those jump: <code>df.groupby().mean().rolling()</code>.</p> <p>What are vectorized ways to handle the above task?</p>
<python><pandas><filtering><feature-engineering><drift>
2024-03-11 16:14:42
1
1,690
KingOtto
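As a sketch of the rolling-mean idea mentioned in the question: compare the recent rolling mean with the rolling mean one full window earlier and flag ratios outside a band. The window size and threshold are illustrative and would need tuning; for panel data you would run this per column or inside a `groupby`:

```python
import numpy as np
import pandas as pd

def flag_level_shifts(s, window=5, ratio=1.5):
    """Flag points where the rolling mean jumps versus the prior window.

    Fully vectorized: two rolling means, one shift, one comparison.
    Flags both upward (> ratio) and downward (< 1/ratio) shifts.
    """
    recent = s.rolling(window).mean()
    previous = recent.shift(window)
    jump = recent / previous
    return (jump > ratio) | (jump < 1 / ratio)

# Synthetic series with a sudden 2x level shift at index 20.
s = pd.Series(np.r_[np.ones(20), 2 * np.ones(20)])
flags = flag_level_shifts(s)
```

NaNs from the warm-up period compare as False, so the head of the series is never flagged; the flag lights up in the window straddling the shift.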
78,142,072
2,611,009
Square root function in Python using Spyder
<p>I'm just starting out using Python under Spyder. I was about to do a simple square root calculation. I see that in Python, you need to do an &quot;import math&quot;. I did that in Spyder and got an error: 'math' imported but unused. I tried using sqrt() but that threw an error. There's something fundamental I'm missing here. I did a bunch of searching and still have not been able to figure this issue out. Any help would be greatly appreciated.</p> <p>thanks! clem</p>
<python><spyder>
2024-03-11 16:14:13
3
413
Clem
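For what it's worth, "'math' imported but unused" is Spyder's linter warning rather than an error: it only means no `math.` name had been used yet. After the import, `sqrt` has to be qualified with the module name (or imported directly). A minimal sketch:

```python
import math

print(math.sqrt(16))   # qualified with the module name

# Alternatively, import the name itself:
from math import sqrt
print(sqrt(2))

# Exponentiation also works without any import:
print(16 ** 0.5)
```

A bare `sqrt(16)` after only `import math` raises `NameError`, which is the error the question describes.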
78,141,853
1,422,096
Folder structure inside Python "embedded" packaging
<p>I distribute ready-to-run software for Windows written in Python by:</p> <ul> <li>shipping the content of an embedded version of Python, say <code>python-3.8.10-embed-amd64.zip</code></li> <li>adding a <code>myprogram\</code> package folder (= the program itself)</li> <li>simply running <code>pythonw.exe -m myprogram</code> to start the program</li> </ul> <p>It works well (and is a simple alternative to cxfreeze and co., but this part is off-topic here).</p> <p>The tree structure is:</p> <pre><code>main\ _asyncio.pyd _bz2.pyd ... + other .pyd files libcrypto-1_1.dll ... + other .dll files python.exe pythonw.exe python38._pth python38.zip ... myprogram\ # here my main program as a module PIL\ # dependencies win32\ numpy\ ... # many other folders for dependencies </code></pre> <p><strong>Is there a way to move all the dependency folders to a subfolder and still have Python (embedded version) be able to locate them? How to do so?</strong> More precisely, like this:</p> <pre><code>main\ python.exe pythonw.exe python38._pth python38.zip ... myprogram\ # here my main program as a module dependencies\ PIL\ # all required modules win32\ numpy\ ... _asyncio.pyd # and also .pyd files ... </code></pre> <p>NB: the goal here is to use an embedded Python which is <em>totally independent</em> of the system's global Python install. So this is independent of any environment variable such as <code>PYTHONPATH</code> etc.</p>
<python><windows><python-packaging><python-embedding><pth>
2024-03-11 15:38:55
1
47,388
Basj
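If I understand the embedded distribution correctly, `sys.path` is controlled entirely by the `python38._pth` file sitting next to the executable, so adding a relative `dependencies` line there should make the subfolder importable. An untested sketch of the `._pth` contents matching the desired tree above:

```
python38.zip
.
dependencies
```

Each line is a path relative to `python.exe`. One caveat: `.pyd` extension modules are located via `sys.path`, so they should import fine from `dependencies`, but plain `.dll` files are resolved by the Windows loader rather than by `sys.path`, so they may still need to stay beside `python.exe` (or be registered at startup with `os.add_dll_directory`).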
78,141,847
2,812,625
Pandas remove strings after character occurrence
<p>I am trying to remove the last hyphen (-) and everything after it, leaving alone all the other strings that do not meet that criteria:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: center;">Name1</th> <th style="text-align: right;">ID</th> <th style="text-align: right;">ID_SPLIT</th> <th>ID_SPLIT2</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">FUH-V2</td> <td style="text-align: right;">FUH07V2NUM</td> <td style="text-align: right;"></td> <td></td> </tr> <tr> <td style="text-align: center;">FUH-V2</td> <td style="text-align: right;">FUHV2-DEN</td> <td style="text-align: right;">FUHV2</td> <td>FUHV2</td> </tr> <tr> <td style="text-align: center;">FUH-V2</td> <td style="text-align: right;">FUH30V2NUM</td> <td style="text-align: right;"></td> <td></td> </tr> </tbody> </table></div> <pre><code>df['ID'].str.split('-').str[:-1].str.join('-') </code></pre>
<python><pandas>
2024-03-11 15:38:21
1
446
Tinkinc
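A regex replace may be simpler than split/join here, because a pattern anchored at the end simply fails to match, and therefore leaves untouched, any ID without a hyphen. A sketch (if the blank cells in the table mean hyphen-less IDs should instead become empty, they could be masked afterwards with `Series.where`):

```python
import pandas as pd

df = pd.DataFrame({
    "Name1": ["FUH-V2", "FUH-V2", "FUH-V2"],
    "ID": ["FUH07V2NUM", "FUHV2-DEN", "FUH30V2NUM"],
})

# '-[^-]*$' matches the last hyphen plus everything after it; strings
# without a hyphen don't match and pass through unchanged.
df["ID_SPLIT"] = df["ID"].str.replace(r"-[^-]*$", "", regex=True)
```

By contrast, `str.split('-').str[:-1].str.join('-')` drops the only element for hyphen-less strings, which is why those rows came back empty.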
78,141,829
163,536
How do I avoid circular imports when using sqlalchemy relationship() with type annotations?
<p>I'm declaring my models like so:</p> <pre class="lang-py prettyprint-override"><code>from __future__ import annotations class Parent(Base): ... children: Mapped[list[Child]] = relationship(&quot;Child&quot;, back_populates=&quot;parent&quot;) class Child(Base): ... parent_id: Mapped[Optional[str]] = mapped_column(String, ForeignKey(&quot;parent.id&quot;, name=&quot;parent_id_fk&quot;, ondelete=&quot;CASCADE&quot;), nullable=True, default=None) parent: Mapped[Parent] = relationship(&quot;Parent&quot;, back_populates=&quot;children&quot;) </code></pre> <p>This works fine; the problem starts when I want to separate these models into different files. How can I achieve this while still maintaining type annotations and the same capabilities, and avoiding circular imports?</p>
<python><sqlalchemy><python-typing>
2024-03-11 15:35:58
2
1,180
lorg